Abstract

Practitioners are increasingly integrating privacy principles into contemporary systems through privacy review processes such as Privacy Impact Assessments and expert audits. However, these processes are labor-intensive, retrospective, and expensive, making them impractical for small or rapidly iterating teams. We propose an agentic AI system that helps practitioners observe, document, and analyze how their privacy designs compare to industry standards. Rather than remediating privacy pitfalls after the fact, our application aims to help practitioners design systems that promote user privacy from the ground up. The agent can complement existing human-centered privacy design methodologies by offering continuous, data-driven visibility into evolving platform practices. Our application leverages agentic AI to compile information on digital platform structures and service usage as they relate to user privacy, facilitating a deeper understanding of effective privacy design principles. We discuss the implications for the future of automated privacy auditing and propose directions for integrating agentic feedback loops into participatory design workflows.

View the Artifact Directory for more details

Introduction

Privacy by Design has become a central principle in both regulatory and design discourse, emphasizing the integration of privacy considerations throughout the system development process. Yet practitioners often struggle to operationalize these principles due to unclear lines of responsibility, limited institutional support, and the high costs associated with expert-led reviews. Recent advances in agentic AI—autonomous systems capable of multi-step reasoning and direct interaction with user interfaces—present new opportunities to scale and systematize privacy evaluation. In this work, we explore how such agents can function as design intermediaries, bridging the gap between the practical realities of interface implementation and the normative goals of privacy frameworks. Specifically, we examine whether agentic AI can systematically capture and interpret representations of privacy interfaces to help designers identify recurring interaction patterns that shape user understanding and control.

Implementation of privacy by design can pose a challenge due to confusion of roles in development, limitations of tools, and the high cost of formal reviews. Li et al. (2024) explored how user feedback shapes the redesign of privacy-related features in Redesigning Privacy with User Feedback: The Case of Zoom Attendee Attention Tracking, drawing on an analysis of public forums and interviews with engineers. Their paper identified challenges including polarized feedback, confirmation bias, and blurred responsibility boundaries, and its findings show a need for structured, repeatable mechanisms to gather and integrate user perspectives. Wen et al. (2023) advanced this investigation in Teaching Data Science Students to Sketch Privacy Designs through Heuristics, investigating how designers integrate privacy reasoning through heuristic-based sketching techniques. Their study introduced three heuristics—device-based data flow, stakeholder interaction, and multi-layered representation—which improved participants' ability to interpret privacy sketches. This work demonstrates the value of lightweight scaffolds that guide designers in representing data relationships and accountability flows, a process that our agentic system seeks to automate at the interface level. Schaub et al. (2015) proposed A Design Space for Effective Privacy Notices, articulating a taxonomy of notice characteristics spanning timing, channel, modality, and control. By reframing privacy notices as dynamic, contextual design components rather than static policy artifacts, their framework offers a perspective for evaluating how users interpret privacy information. We take inspiration from this taxonomy for analyzing the interaction traces and screenshots collected by the agentic AI system. Finally, Jin et al. (2021) introduced Lean Privacy Review: Collecting Users’ Privacy Concerns of Data Practices at a Low Cost, presenting a scalable, crowdsourced alternative to formal privacy audits; they found that eliciting privacy concerns through structured free-text responses was both efficient and comprehensive. Our approach extends this trajectory by introducing automation into the review pipeline: instead of relying on crowdworkers to surface privacy concerns, agentic AI autonomously collects empirical data about privacy interfaces, generating a corpus to be annotated or analyzed. These works outline a research landscape that values human-centered interpretation but lacks scalable, data-driven mechanisms for continuous privacy evaluation.

Previous work on automation and privacy has primarily focused on textual artifacts, particularly privacy policies. For instance, Polisis: Automated Analysis and Presentation of Privacy Policies Using Deep Learning (Harkous et al., 2018) leverages large-scale policy text and deep learning techniques to automatically extract meaningful insights and interpretations from privacy policies. Our approach differs in its focus on real user capability within interfaces, examining how accessible privacy settings are through a website’s UI. More recent research, such as A Privacy-Driven UI Exploration Framework for Mobile App Settings (PDUE) (2025), takes a step in this direction by investigating how mobile app interface elements relate to data collection behavior. Building on these ideas, our project aims to develop an automated method through agentic AI for discovering and analyzing privacy-related information embedded within user interfaces — such as settings pages and menus — and examining how these design choices compare across different online platforms.

User privacy is increasingly recognized as a critical dimension of digital service design, yet the mechanisms by which it is implemented in practice remain inconsistent. Although regulations such as the GDPR and CCPA define formal requirements for data protection and consent, the translation of these frameworks into concrete software design often depends on the judgment of individual developers. Many software engineers lack formal training in privacy principles and security design, resulting in implementations that fall short in supporting meaningful user control. As a result, the technical and interaction layers of privacy remain fragmented, varying widely in visibility, accessibility, and effectiveness across websites.

This project aims to address this gap by developing an AI-driven framework for crawling and classifying privacy-related design elements across online platforms. Leveraging agentic AI, our system explores website structures, identifies privacy-related components, and classifies them based on their functionality and design characteristics. By aggregating these findings, we seek to uncover patterns that define effective, user-centered privacy design and highlight inconsistencies or usability issues across platforms.
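To illustrate the crawling stage, candidate privacy-related links can be flagged with a simple keyword heuristic before the agent explores them in depth. The sketch below is a minimal, hypothetical version of such a filter; the keyword list and function names are our assumptions, not the system's actual classifier.

```python
# Hypothetical keyword heuristic for flagging privacy-related links
# encountered during a crawl. The keyword list is illustrative only.
PRIVACY_KEYWORDS = [
    "privacy", "data", "tracking", "consent", "cookies",
    "ad preferences", "visibility", "security", "permissions",
]

def is_privacy_related(link_text: str) -> bool:
    """Return True if the link text likely leads to privacy settings."""
    text = link_text.lower()
    return any(kw in text for kw in PRIVACY_KEYWORDS)

def filter_privacy_links(links: list[str]) -> list[str]:
    """Keep only links whose visible text matches the heuristic."""
    return [link for link in links if is_privacy_related(link)]
```

In practice a heuristic like this would only shortlist candidates; the agent itself decides which shortlisted pages actually contain settings worth logging.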

We envision this AI-driven framework as a developer-facing tool that continuously updates to reflect the latest privacy setting changes across platforms. It allows software engineers to examine how privacy designs evolve across services and over time. By surfacing representative design patterns and historical trends, the system can help guide the development of more transparent and user-centered privacy mechanisms.

Our dataset consists of screenshots and structured interaction logs generated by autonomous agents engaging with privacy and security settings across multiple platforms. The inclusion of screenshots serves to approximate real user behavior, capturing not only textual content but also the visual and spatial context through which privacy controls are presented. Our screenshot corpus provides concrete evidence of how privacy features are represented and accessed within diverse interface designs across platforms. Through iterative deployment across applications, the agent constructs a cross-platform privacy map that reveals how design conventions vary across ecosystems. These data form the empirical basis for our classification and comparative analysis of privacy mechanisms, enabling us to examine how privacy is operationalized in practice and how automated observation can inform future privacy-aware design.
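To make the dataset description concrete, each agent step can be stored as a record pairing a screenshot with the action taken on that screen. The schema below is a hypothetical sketch of one such record; the field names and example values are ours, not the exact format of our logs.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InteractionLogEntry:
    """One step of an agent session: what was on screen and what was done.
    Field names are illustrative, not the system's actual schema."""
    platform: str          # e.g. "reddit"
    url: str               # page where the action occurred
    screenshot_path: str   # full-page capture taken before the action
    action: str            # e.g. "click", "toggle", "scroll"
    target: str            # descriptor of the UI element acted on
    success: bool          # whether the action applied cleanly

# Example record (values hypothetical).
entry = InteractionLogEntry(
    platform="reddit",
    url="https://www.reddit.com/settings/privacy",
    screenshot_path="shots/reddit_privacy_001.png",
    action="toggle",
    target="allow_people_to_follow_you",
    success=True,
)

# Entries serialize to JSON lines for later annotation and analysis.
line = json.dumps(asdict(entry))
```

Flat records like this make the cross-platform comparison straightforward: grouping by `platform` and `target` surfaces how the same setting is reached on different services.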

Methods

Our prototype system operationalizes an agentic AI workflow by enabling privacy setting modifications across real web platforms. It begins by loading a structured database of privacy sections and setting descriptors harvested by our automated web crawler. These settings are accessible through a Chainlit command interface, where users enter a command prompt (e.g., "change Reddit allow\_people\_to\_follow\_you to off"). The system then searches the screenshots in our database for the relevant setting to find the exact URL, and opens a Playwright browser session using pre-saved storage states. Once the page has loaded, the agent sends Gemini a full-page screenshot, a DOM text map, a DOM outline, and an execution state describing the target setting and any prior failures. From this information, the planner generates a sequence of actionable UI operations (clicks, text selections, and coordinate interactions) encoded as strict JSON. These actions are applied in the Playwright browser, which returns structured feedback. When Gemini finishes its operation, a separate verifier model analyzes the UI screenshot to confirm whether the setting has reached the target state. This methodology integrates structured data, browser automation, and iterative agentic reasoning to achieve end-to-end, model-driven UI manipulation.
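Because the planner's output is strict JSON, it can be validated before any browser command runs, so malformed plans never reach the page. The sketch below shows one way to parse and check such a plan; the action vocabulary and field names are our assumptions, not the exact schema the planner emits.

```python
import json

# Hypothetical action vocabulary with required fields per action type.
# The real planner schema may differ.
ALLOWED_ACTIONS = {
    "click": {"x", "y"},
    "select_text": {"selector"},
    "type_text": {"selector", "text"},
    "scroll": {"dy"},
}

def parse_action_plan(raw: str) -> list[dict]:
    """Parse the planner's strict-JSON output into a validated action list.
    Raises ValueError on unknown actions or missing fields."""
    plan = json.loads(raw)
    if not isinstance(plan, list):
        raise ValueError("plan must be a JSON array of actions")
    for step in plan:
        kind = step.get("action")
        if kind not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {kind!r}")
        missing = ALLOWED_ACTIONS[kind] - step.keys()
        if missing:
            raise ValueError(f"{kind} missing fields: {sorted(missing)}")
    return plan

# Validated steps would then be dispatched to Playwright calls such as
# page.mouse.click(x, y) or page.fill(selector, text).
raw = '[{"action": "click", "x": 412, "y": 288}, {"action": "scroll", "dy": 600}]'
plan = parse_action_plan(raw)
```

Rejecting a plan at this stage, rather than mid-execution, also gives the agent a clean failure to record in the execution state it sends back to the planner on retry.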

Results & Conclusion

Our prototype system demonstrated that agentic, model-driven UI interaction is a viable method for automatically navigating and modifying privacy controls across different web platforms, such as Reddit and Instagram. The agent successfully located and changed specific settings requested by the user, and confirmed that each setting was saved in the desired state, indicating a successful change. One strength of our system is its ability to navigate noisy webpages and change settings buried in scrollable menus and behind pop-up confirmations. It also adds a second layer of reliability when dealing with confirmation popups by sending Gemini an inspection screenshot to check whether the setting was successfully changed. That the system could navigate platforms with completely different UI layouts suggests that agentic AI is viable for scalable privacy-setting evaluation, allowing the agent to adapt to unfamiliar interfaces in a generalizable manner.