UNIVERSITY PARK, Pa. — Digital content generated by artificial intelligence (AI) has become increasingly realistic and difficult to differentiate from human-created content. In response, some online platforms recommend that AI-generated content be self-disclosed so that users know where the content comes from. But these disclosures often rely on visual cues that are ineffective for blind and low-vision users.
Tanusree Sharma, assistant professor in the Penn State College of Information Sciences and Technology (IST), received a $60,000 Google Research Scholar Award for a project titled “Designing Accessible Tools for Blind and Low-Vision People to Navigate Deepfake Media.”
The Google Research Scholar Program supports early-career professors who have held their doctoral degrees for fewer than seven years and are conducting research in fields relevant to Google, according to the program’s website.
Sharma earned her doctoral degree in informatics from the University of Illinois at Urbana-Champaign and joined the IST faculty in 2024. Her work lies at the intersection of security, artificial intelligence and human-computer interaction, addressing the question: How do we build tools that verify both the authenticity of people on digital platforms and the trustworthiness of the data they provide? She aims to design secure systems that help people interact safely and effectively in increasingly AI-driven digital ecosystems.
In the Q&A below, Sharma spoke about the award and the work it will support.
Q: What do you want to understand or solve through this project?
Sharma: This proposal builds on our early-stage research exploring how people with diverse abilities face unique challenges in navigating deepfake content. Through this project, we want to understand how people interact with AI-generated content. We have seen extensive work in automated classification — using technology to categorize and label data based on predefined criteria — but these efforts have turned into a constant cat-and-mouse game with rapidly advancing AI, with no end in sight.
Instead of detection — that is, identifying AI-generated content — we are shifting toward data provenance, which focuses on the origin, historical context and authenticity of content. Platforms like Meta and YouTube are starting to adopt AI labels to provide some degree of protection from fake content; however, these systems may unintentionally assume that all users are sighted. The broader goal is to design and develop tools that better support people with different abilities in navigating AI-generated content.
Q: How will advances in this area impact society?
Sharma: People interact with large amounts of AI-generated content daily, some intentionally deceptive and some self-disclosed by content creators. AI-generated content has become increasingly realistic, making it difficult for people to distinguish deepfakes from human-generated images, videos, text and audio. An expected impact of our work is usable, accessible provenance tools for AI-generated content, along with the critical infrastructure to support them. We envision that this infrastructure, with minimal adaptation, could be integrated into online platforms, such as search, video streaming, social media and news media. We are looking forward to this collaboration with Google, which could also inform broader policy recommendations for enhancing accessible AI indicators and provenance across media platforms.
Q: How will undergraduate and/or graduate students contribute to this research?
Sharma: My advisee, Ayae Ide, a graduate student pursuing a doctoral degree in informatics in the College of IST, will lead this project moving forward. Ayae conducted the early research that motivated this project’s direction. Tory Park, a third-year undergraduate student majoring in cybersecurity, is also a member of our team.
Q: You submitted your project in the privacy, safety and security research category. How might your work align with Google’s research interests?
Sharma: Google has been actively promoting transparency in AI-generated content, especially through initiatives like the Coalition for Content Provenance and Authenticity (C2PA) AI labels. However, today’s authenticity cues are still overwhelmingly visual, which excludes people with different needs, such as people with visual impairments. Our project sets out to tackle these challenges by ensuring accessibility is not an afterthought but a core part of security and safety design.
Jaron Mink, assistant professor at Arizona State University, is also contributing to this research.