AI Tools Combat Online Exploitation: Notre Dame’s Privacy Sandbox and Dark Pita Unveiled

Researchers at the University of Notre Dame, led by Assistant Professor Toby Li, are developing AI tools to help consumers recognize online exploitation and improve their digital literacy. They created a Chrome browser plug-in, Privacy Sandbox, which uses GPT-4, a large language model from OpenAI, to replace user data with AI-generated personas. Another plug-in, Dark Pita, flags manipulative design features on popular platforms such as Amazon, YouTube, Netflix, and Facebook. Both tools aim to empower users to make informed decisions about their online interactions. The research was funded by Google and the National Science Foundation.

AI Tools for Enhancing Digital Privacy Literacy

Researchers at the University of Notre Dame are developing artificial intelligence (AI) tools aimed at enhancing the digital literacy of consumers. The objective is to help users understand how their data is used and manipulated on online platforms, thereby enabling them to make informed decisions about their interactions with these websites.

In a recent study, the researchers created a Chrome browser plug-in called Privacy Sandbox. The tool replaced user data with personas generated by GPT-4, a large language model from OpenAI, allowing participants to experiment with online privacy settings without real-world consequences. As participants navigated various websites, the plug-in substituted the AI-generated persona's data for their own, making it easier for them to see how they were targeted based on factors such as age, race, location, income, and household size.
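To illustrate the persona-substitution idea, here is a minimal TypeScript sketch of how a browser extension might ask GPT-4 for a fictitious profile covering those same targeting factors. This is not the Privacy Sandbox source code; the Persona fields, the prompt, and the generateSyntheticPersona helper are illustrative assumptions.

```typescript
// Hypothetical sketch: generate a synthetic browsing persona with GPT-4.
// Not the actual Privacy Sandbox implementation; field names and prompt are assumed.

interface Persona {
  age: number;
  race: string;
  location: string;
  income: string;
  householdSize: number;
}

async function generateSyntheticPersona(apiKey: string): Promise<Persona> {
  // Ask GPT-4 to invent a plausible but fictitious profile covering the
  // attributes that ad targeting typically relies on.
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        {
          role: "user",
          content:
            "Invent a fictitious consumer profile as JSON with keys: " +
            "age (number), race, location, income, householdSize (number). " +
            "Reply with JSON only.",
        },
      ],
    }),
  });

  const data = await response.json();
  // The model's reply is a JSON string inside the first choice's message content.
  return JSON.parse(data.choices[0].message.content) as Persona;
}

// Example use: generate the synthetic persona once per session, then have the
// extension supply these values wherever the page would otherwise read real data.
generateSyntheticPersona(process.env.OPENAI_API_KEY ?? "").then((persona) => {
  console.log("Browsing this session as:", persona);
});
```

In a real extension, the substitution step would run in a content script before page scripts read profile-like fields, so the site only ever sees the synthetic values.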

Toby Li, assistant professor of computer science and engineering at Notre Dame, who led the research, emphasized the importance of understanding the implications of giving platforms access to private data. He noted that while users might find it appealing to share data in exchange for better content, they should be aware that once shared, this data cannot be taken back.

Dark Patterns in Digital Platforms

The research team also conducted a study on dark patterns, which are design features on digital platforms that subtly influence users to perform specific actions. These patterns are often used on websites to manipulate customers into consuming more content or making impulsive purchasing decisions.

To identify these dark patterns, the researchers developed another Chrome browser plug-in called Dark Pita, which they tested on five popular online platforms, including Amazon, YouTube, Netflix, and Facebook. Using machine learning, Dark Pita notified users when it detected a dark pattern, gauged how susceptible the user was to it, and explained its potential impact, such as financial loss, invasion of privacy, or cognitive burden.
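As a rough illustration of that detect-and-explain workflow, the following TypeScript content-script sketch flags a few well-known dark-pattern phrasings and attaches a warning label next to each. The keyword heuristic merely stands in for Dark Pita's machine-learning classifier, and the classify and annotatePage helpers, pattern names, and styling are hypothetical.

```typescript
// Hypothetical content-script sketch of a dark-pattern annotator.
// The keyword heuristic below stands in for Dark Pita's ML classifier;
// the threat categories mirror those named in the article.

type Threat = "financial loss" | "invasion of privacy" | "cognitive burden";

interface Detection {
  element: Element;
  pattern: string;
  threat: Threat;
}

// Placeholder classifier: flags a few common dark-pattern phrasings.
// A real detector would use a trained model over text and layout features.
function classify(element: Element): Detection | null {
  const text = (element.textContent ?? "").toLowerCase();
  if (text.includes("only 1 left in stock")) {
    return { element, pattern: "scarcity pressure", threat: "financial loss" };
  }
  if (text.includes("autoplay") || text.includes("up next")) {
    return { element, pattern: "forced continuity", threat: "cognitive burden" };
  }
  if (text.includes("accept all cookies")) {
    return { element, pattern: "privacy nudge", threat: "invasion of privacy" };
  }
  return null;
}

// Scan the page and attach a visible warning badge after each flagged element.
function annotatePage(): void {
  for (const el of Array.from(document.querySelectorAll("button, a, span, div"))) {
    const hit = classify(el);
    if (!hit) continue;
    const badge = document.createElement("div");
    badge.textContent = `Warning: ${hit.pattern} - possible ${hit.threat}`;
    badge.style.cssText =
      "background:#fff3cd;border:1px solid #d39e00;padding:2px 6px;font-size:12px;";
    hit.element.insertAdjacentElement("afterend", badge);
  }
}

annotatePage();
```

Running a script like this as a browser-extension content script lets the warnings appear in place on the page the user is already viewing, which is the interaction style the study describes.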

Empowering Users with AI Tools

The researchers plan to make both browser plug-ins, Privacy Sandbox and Dark Pita, publicly available. Li believes these tools exemplify how AI can be democratized for regular users to benefit society. He argues that as companies increasingly use AI to their advantage, the power gap between them and users widens. Therefore, providing the public with AI tools to counteract oppressive algorithms can help level the playing field.

Research Presentations and Publications

The study titled “An empathy-based sandbox approach to bridge the privacy gap among attitudes, goals, knowledge, and behaviors” was presented at the 2024 Association for Computing Machinery (ACM) CHI Conference. The research team included Chaoran Chen and Yanfang (Fanny) Ye from Notre Dame, Weijun Li from Zhejiang University, Wenxin Song from the Chinese University of Hong Kong, and Yaxing Yao from Virginia Tech.

Another study led by Li, “From awareness to action: Exploring end-user empowerment interventions for dark patterns in UX,” has been published in the Proceedings of the ACM on Human-Computer Interaction (CSCW 2024). The co-authors of this study include Yuwen Lu from Notre Dame, Chao Zhang from Cornell University, Yuewen Yang from Cornell Tech, and Yao from Virginia Tech.

Funding and Support

The research was funded by a Google Research Scholar Award, a Google PSS Privacy Research Award, and the National Science Foundation. These awards highlight the importance and relevance of the research in the current digital landscape, where privacy concerns and manipulative design are increasingly prevalent.
