
Q&A: Preventing privacy breaches and strengthening trust in AI

Shagufta Mehnaz, assistant professor of computer science and engineering, was awarded an NSF CAREER Award to support her work

Shagufta Mehnaz, assistant professor of computer science and engineering in the School of Electrical Engineering and Computer Science at Penn State. Credit: Poornima Tomy/Penn State. All Rights Reserved.

UNIVERSITY PARK, Pa. — Shagufta Mehnaz, assistant professor of computer science and engineering in the School of Electrical Engineering and Computer Science at Penn State, received a five-year, $632,430 U.S. National Science Foundation (NSF) Faculty Early Career Development Program (CAREER) award for her project, “Privacy Auditing Frameworks and Defenses for Machine Learning Models Trained on Tabular Data.”

Mehnaz discussed her goals for the project in this Q&A.  

Q: What do you want to understand or solve through this project? 

Mehnaz: The goal of this project is to better understand and prevent privacy risks that arise when machine learning (ML) systems are trained on sensitive personal data, such as medical or financial records. Specifically, we aim to study a type of privacy breach called model inversion attacks, where an adversary can infer private details about individuals by strategically querying a trained ML model. 

While such attacks are widely studied in image-based ML systems, little is understood about how they affect the more common tabular data used in real-world applications. Tabular data refers to structured datasets organized in rows and columns, such as databases, where each row represents an individual record — for example, a patient, customer or transaction — and each column corresponds to a specific attribute — for example, age, income, diagnosis or account balance. 
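To make the idea of a model inversion attack concrete, here is a toy, purely illustrative sketch. The "model" below is an invented scoring function standing in for a trained classifier, and all attribute names and values are hypothetical; none of this is drawn from Mehnaz's actual methods. It shows how an adversary who knows a target's non-sensitive attributes and can observe the model's output might recover a sensitive attribute by trying each candidate value:

```python
# Toy illustration of an attribute-inference-style model inversion attack
# on tabular data. Everything here is invented for illustration.

def model_predict(age, income, smoker):
    """Stand-in for a trained model: returns a probability-like score.
    In a real attack this would be a black-box ML model queried via an API."""
    score = 0.01 * (age - 30) + 0.9 * smoker - 0.000005 * income
    return max(0.0, min(1.0, 0.5 + score))

def invert_sensitive_attribute(known_age, known_income, observed_output):
    """The adversary knows the target's non-sensitive attributes (age, income)
    and the model's output for the target. It tries each candidate value of
    the sensitive attribute ('smoker') and picks the one whose prediction
    best matches the observed output."""
    candidates = [0, 1]
    return min(
        candidates,
        key=lambda s: abs(model_predict(known_age, known_income, s)
                          - observed_output),
    )

# The adversary observed the model's prediction for a target individual
# whose true sensitive attribute is smoker = 1:
target_output = model_predict(45, 60000, 1)
inferred = invert_sensitive_attribute(45, 60000, target_output)
print(inferred)  # the attack recovers the sensitive attribute: 1
```

Real attacks target far more complex models and query them strategically rather than exhaustively, but the underlying risk is the same: the model's outputs leak information about the sensitive training attributes, which is what privacy auditing frameworks aim to measure.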

This project seeks to fill that knowledge gap by developing new frameworks to systematically audit ML models for privacy risks, identify which individuals or groups are most vulnerable and design robust defenses that reduce these risks. Ultimately, we aim to ensure that ML models can be used safely and fairly without compromising the privacy of the individuals whose data make them possible. 

Q: How could advances in this area impact society? 

Mehnaz: Advances from this work will strengthen public trust in artificial intelligence (AI) systems by making them more transparent, accountable and privacy-preserving. As ML models are increasingly used in sensitive areas like health care, education and finance, preventing privacy breaches is crucial for protecting individuals’ personal data and maintaining confidence in data-driven technologies. 

This project will contribute practical tools for auditing ML privacy risks, as well as a public dashboard that tracks known vulnerabilities. By identifying and addressing disparities in privacy risks — where certain groups may face higher exposure — the project will also help promote fairness and equity in the use of AI. In the long term, these outcomes will help ensure that the societal benefits of machine learning are achieved in a responsible and ethical manner. 

Q: Will undergraduate or graduate students contribute to this research? How? 

Mehnaz: Yes, both undergraduate and graduate students will play vital roles in this project’s research and educational components. Graduate students will conduct in-depth studies of machine learning vulnerabilities, develop new auditing algorithms and design privacy-preserving defenses that strike a balance between data protection and model performance. Undergraduate students will be engaged through research collaborations, capstone projects and hands-on data analysis tasks. 

Additionally, students will participate in creating open-source tools and educational materials for broader use. The project’s planned competitions and workshops will further provide experiential learning opportunities, inspiring students to pursue careers in trustworthy and secure AI research. 

Q: The NSF CAREER award not only funds a research project but also recognizes the potential of the recipient as a researcher, educator and leader in their field. How do you hope to fulfill that potential? 

Mehnaz: This award will allow me to develop my potential as a researcher, educator and leader by integrating rigorous scientific investigation with innovative teaching and mentorship. As a researcher, I aim to establish a leading program in machine learning privacy and fairness, developing systematic frameworks to understand, measure and mitigate privacy risks in real-world datasets. My work will push the boundaries of knowledge in this area and create tools and methodologies that can be widely adopted.

As an educator, I plan to design and teach both undergraduate and graduate courses on ML security and privacy, making the materials publicly available to expand access to these emerging topics.

As a leader, I aim to inspire and coordinate research collaborations across institutions and disciplines, fostering a community that advances privacy-aware machine learning. Through organizing competitions, public dashboards and outreach activities, I will help set research standards, disseminate best practices and shape the broader research agenda in this critical field. 

I am deeply grateful to NSF for this CAREER award, which offers the sustained support necessary to build a long-term, integrated research and education program. This stability will allow me to pursue a coherent vision — advancing the theory and practice of privacy auditing in machine learning while training students to think critically about data ethics and security. 

The award will help me establish a dedicated research group and develop publicly available tools that will serve both academia and industry. It will also enable me to expand educational and outreach activities that inspire younger students and broaden participation in computing. I sincerely appreciate NSF’s support, which provides the foundation I need to become a leader in developing responsible, privacy-aware AI technologies that safeguard individuals and strengthen public trust. 


Last Updated December 8, 2025
