© 2026 TrustArc Inc. Proprietary and Confidential Information.
From Trends to Action:
Fitting AI Governance
into Privacy Ops
2
LEGAL DISCLAIMER
The information provided during this webinar does not,
and is not intended to, constitute legal advice.
Instead, all information, content, and materials presented during this
webinar are for general informational purposes only.
3
Speakers
Ridhi Varma
Senior Global Privacy Manager
TrustArc
Daniel Berrick
Senior Policy Counsel for Artificial Intelligence
Future of Privacy Forum
Lindsay Palmer
Privacy Knowledge Principal
TrustArc
4
Agenda
1. AI Trends & the Evolving Risk Landscape
2. Spotlight on Agentic AI
3. From Governance to Operations: Embedding AI
into Privacy Ops
4. What AI Governance Looks Like in Practice
5. Integrating AI Governance Without Slowing
Innovation
6. Why Privacy Ops Is the Foundation for Scalable
AI Governance
5
AI Trends and an Evolving Risk Landscape
Trends
● Shift from deployment to development
● Agentic AI
● Increased use of AI in real-time interactions:
○ More automation of tasks
○ More interactions with AI “helpers”
● Focus on transparency and fairness
(e.g., AI hiring tools taken to task in the courts)
Risk Landscape
● Governance and Accountability Pressures
● Complex Compliance Requirements:
○ Creep of AI-related provisions embedded into legislation
○ Consistency issues (e.g., chatbot rules)
○ Data-driven pricing laws
● Focus on security - adopting a “Security by Design” approach
6
Characteristics of Agentic AI
In the broadest sense, we have long had
“automated decision-making systems” that act
on our behalf - raising issues of oversight, data
protection rights for “legal or similarly significant”
effects (Art. 22), and liability
What’s changed?
○ Rise of LLMs: provide the natural language understanding that enables
complex instructions, breaking down problems, and communicating
○ Reasoning Models: enhance the ability to plan, evaluate options, and
execute complex workflows
○ Retrieval-Augmented Generation: add dynamic knowledge access, to
incorporate real-time, domain-specific, or new information not in training
data (including private databases)
7
Agentic Describes A Trend
Moving towards….
➔ More complex problems: planning, task
assignment, and orchestration
➔ Greater autonomy: deciding how to solve the
problem and what data and systems are needed
➔ Greater adaptability: e.g., using different data if
sought-after information is unavailable
➔ Greater access to systems and real-world
ability to do things that may have economic impact
(book tickets, make reservations)
8
Examples and Use Cases
● OpenAI Operator
● Amazon’s Q
● Microsoft Copilot
● Google’s Project Astra
● Anthropic Claude Computer Use, Claude Code,
Claude for Chrome
Growing Agentic Use Cases:
a. Enterprise and Industry: Coding, Building,
Security/Threat Detection, Financial Trading,
Supply Chain
b. Consumer Day to Day: Shopping and
Commerce, Personal Productivity, Health
and Wellness, Travel/Logistics
9
1/ Similarities to LLMs
● Fundamental data protection issues of model
memorization, accuracy, ethical training of models,
and access to or transmission of data to third
parties, guardrails
● Operationalizing data subject rights
● Anthropomorphization and safety risks
10
2/ Data Collection, Disclosure, and Security Vulnerabilities
● Tool usage (e.g., application programming
interfaces, data stores, and extensions) enables
access to external systems and data
● Data categories that agent may access grows with
diversifying use cases (e.g., browser screenshots,
telemetry data)
● Design features and characteristics make agents
susceptible to new kinds of security threats (e.g.,
injection attacks tailored to browser-use agents)
11
3/ Accuracy of Outputs
● Hallucinations may have different implications than
those raised by LLMs (e.g., misrepresenting a user’s
characteristics and preferences when it fills out a
consequential form)
● Compounding errors, where the agent’s accuracy
decreases the more steps a task takes
● Unpredictable behavior due to dynamic operational
environments and agents’ non-deterministic nature
12
4/ Barriers to “Alignment”
● AI alignment: Designing AI models and systems to
pursue a designer’s goals, such as prioritizing human
well-being and conforming to ethical values
● Consumer protection alignment for commerce
● Alignment faking: Strategically mimicking training
objectives to avoid undergoing behavioral
modifications
● Data privacy implications of agentic systems
autonomously making decisions (e.g., “accepting all
cookies” or sharing sensitive data with a third party
despite not being in user’s best interests
13
5/ Explainability and Human Oversight
● Users ability to understand an agent’s decisions,
even if these decisions are correct
● Speed and complexity of AI agents’ decision-making
processes may create heightened roadblocks to
realizing meaningful explainability and human
oversight
● The ability to provide system reasoning in natural
language are becoming more complicated and are
not always indicative of the agent’s actual reasoning
14
Current and Future Challenges
● Negotiation of responsibilities: How will liability
be determined when systems make mistakes or
harm people?
● Effective inter-system API communications and
secure financial transactions (developments in
open source protocols like MCP, A2A, and AP2)
● Addressing the sheer scale of data sensitivity:
Can our legal systems adapt to the scale,
sensitivity, and intimacy of data collected?
● “AI Privilege”? Will more advances in technology
be the answer (Local vs. Cloud-based LLMs?
Privacy-enhancing tech?)
15
Resources
● Daniel Berrick, “Minding Mindful Machines:
AI Agents and Data Protection
Considerations,” (Apr. 2025)
● Daniel Berrick and Stacey Gray, “Concepts
in AI Governance: Personality vs.
Personalization” (Sept. 2025)
● Daniel Berrick, “From Chatbot to Checkout:
How Pays When Transactional Agents
Play?” (Feb. 2026)
16
From Governance to Operations: Embedding AI into Privacy Ops
● Identifying the role in AI ecosystem:
Developer vs Deployer vs Other/Hybrid
● Clarifying vision regarding intended purpose and
deployment context of the AI system
● Determining risk level (low / medium / high) and
required mitigation (EU AI Act Arts. 6–14):
○ Human oversight
○ Escalation and override mechanisms
● Setting ownership and accountability across AI system
lifecycle (GDPR Art. 5)
● Identifying data origin and use of sensitive personal
information
● Ongoing Monitoring of outputs
17
What AI governance looks like in practice - 1. Extending DPIAs
● Include AI-specific assessment elements that address purpose and deployment
context, impacts on individuals, and bias, fairness, and explainability considerations.
● Assess risks arising from:
○ Training data selection
○ Deployment context changes
○ Output use beyond intended purpose
● Reuse existing DPIA tooling, workflows,
and approval gates to ensure consistent,
repeatable AI risk assessments.
18
2. Evolving ROPA to Manage AI Data Provenance
● Leverage and extend existing data maps to provide deeper visibility into data used by AI
systems.
● Document and maintain records of training data sources, and re-calibrate datasets as
needed.
● Identify, update records and manage:
○ Personal vs non-personal data
○ Sensitive categories of personal data
○ Known or potential bias in datasets
● Data maps evolve into model-aware data
lineage to:
○ Support lifecycle controls
○ Data Retention and Deletion
19
● AI expands the vendor ecosystem beyond traditional service providers, creating
hidden dependencies across the AI lifecycle, embedded AI in SaaS tools and
platforms increases indirect AI risk exposure, often outside direct product or privacy
team visibility
● Due diligence must assess how models are trained, what data is used, and who
controls the data and the model (joint controller, processor, or sub-processor).
What organizations must operationalize:
● Contracts must clearly define permitted data use, reuse restrictions, onward
transfers, and responsibility allocation.
● Output-level controls must be implemented to prevent misuse, detect model drift,
and manage unintended or harmful outputs.
● End-to-end transparency and auditability across the AI supply chain are required to
maintain oversight and regulatory defensibility.
3. Expanding Vendor Risk Management to AI Supply Chain Risks
20
● Establish and communicate an AI Governance Policy, covering:
○ Intended use and restrictions
○ High-level system explainability
○ Risk management and escalation
● Reuse Privacy Ops infrastructure:
○ ROPA covering personal and non-personal AI data
○ DPIAs and vendor assessments as primary risk tools
● Risk-tier AI systems:
○ Lighter controls for low-risk use cases
○ Deeper review and approvals for high-impact AI
● Intake questionnaires at ideation stage
● Privacy & AI checks before procurement or deployment
Integrating AI Governance Without Slowing Innovation
21
Why Privacy Ops Is the Foundation for Scalable AI Governance
● Privacy and Security by design is already embedded in Privacy Ops and extends naturally to AI
system development, including data minimization, purpose limitation, and prevention of
re-identification (EU AI Act Art. 10; GDPR Art. 5(1)(b), (c))
● Security in processing is operationalized through existing technical and organizational measures
such as encryption, anonymization or pseudonymization, access controls, secure deletion, and
periodic reviews.
● Awareness and training - personnel operating, designing and using are aware of AI policy and
procedures, role based duties and responsibilities, context, relevant regulations, biases and
processes to challenge AI outcomes
● Incident detection, escalation, response and management
● Privacy Ops already delivers: Accountability, Transparency and Audit readiness
● AI governance adds new risks, not necessarily a new operating model
22
Key Takeaways
Risk-Based Governance Framework
● Clear role identification to better understand legal requirements
● Risk tiering: “Lighter touch” for low-risk systems and “deep dives” for high-impact AI
● Human-in-the-Loop: Clear escalation and override mechanisms for high-risk systems
Data Lineage and “Model Aware” Mapping
● Expanded data maps
● Bias detection: Proactively identify “sensitive categories” and potential biases within datasets
before
● Unintended use trap: Ensure risk assessments specifically account for model drift
Vendor Management
● Supply chain transparency
● Tightened contracts
Operationalize the AI Lifecycle
● Early intake questionnaires
● Privacy and security by design
● Output monitoring
23
Summary: Privacy Ops vs. AI Governance
Feature Privacy Ops (Traditional) AI Governance (Expanded)
Primary Tool DPIA / ROPA “Model Aware” DPIA
Data Focus Storage and Access Training, Fine-tuning, &
Inference
Risk Focus Data Breaches / Privacy
Loss
Bias, Explainability, & Model
Drift
Human Role Data Subjects Rights Oversight & Override
Mechanisms
Thank You!

TrustArc Webinar - From Trends to Action: Fitting AI Governance into Privacy Ops

  • 1.
    © 2026 TrustArcInc. Proprietary and Confidential Information. From Trends to Action: Fitting AI Governance into Privacy Ops
  • 2.
    2 LEGAL DISCLAIMER The informationprovided during this webinar does not, and is not intended to, constitute legal advice. Instead, all information, content, and materials presented during this webinar are for general informational purposes only.
  • 3.
    3 Speakers Ridhi Varma Senior GlobalPrivacy Manager TrustArc Daniel Berrick Senior Policy Counsel for Artificial Intelligence Future of Privacy Forum Lindsay Palmer Privacy Knowledge Principal TrustArc
  • 4.
    4 Agenda 1. AI Trends& the Evolving Risk Landscape 2. Spotlight on Agentic AI 3. From Governance to Operations: Embedding AI into Privacy Ops 4. What AI Governance Looks Like in Practice 5. Integrating AI Governance Without Slowing Innovation 6. Why Privacy Ops Is the Foundation for Scalable AI Governance
  • 5.
    5 AI Trends andan Evolving Risk Landscape Trends ● Shift from deployment to development ● Agentic AI ● Increased use of AI in real-time interactions: ○ More automation of tasks ○ More interactions with AI “helpers” ● Focus on transparency and fairness (e.g., AI hiring tools taken to task in the courts) Risk Landscape ● Governance and Accountability Pressures ● Complex Compliance Requirements: ○ Creep of AI-related provisions embedded into legislation ○ Consistency issues (e.g., chatbot rules) ○ Data-driven pricing laws ● Focus on security - adopting a “Security by Design” approach
  • 6.
    6 Characteristics of AgenticAI In the broadest sense, we have long had “automated decision-making systems” that act on our behalf - raising issues of oversight, data protection rights for “legal or similarly significant” effects (Art. 22), and liability What’s changed? ○ Rise of LLMs: provide the natural language understanding that enables complex instructions, breaking down problems, and communicating ○ Reasoning Models: enhance the ability to plan, evaluate options, and execute complex workflows ○ Retrieval-Augmented Generation: add dynamic knowledge access, to incorporate real-time, domain-specific, or new information not in training data (including private databases)
  • 7.
    7 Agentic Describes ATrend Moving towards…. ➔ More complex problems: planning, task assignment, and orchestration ➔ Greater autonomy: deciding how to solve the problem and what data and systems are needed ➔ Greater adaptability: e.g., using different data if sought-after information is unavailable ➔ Greater access to systems and real-world ability to do things that may have economic impact (book tickets, make reservations)
  • 8.
    8 Examples and UseCases ● OpenAI Operator ● Amazon’s Q ● Microsoft Copilot ● Google’s Project Astra ● Anthropic Claude Computer Use, Claude Code, Claude for Chrome Growing Agentic Use Cases: a. Enterprise and Industry: Coding, Building, Security/Threat Detection, Financial Trading, Supply Chain b. Consumer Day to Day: Shopping and Commerce, Personal Productivity, Health and Wellness, Travel/Logistics
  • 9.
    9 1/ Similarities toLLMs ● Fundamental data protection issues of model memorization, accuracy, ethical training of models, and access to or transmission of data to third parties, guardrails ● Operationalizing data subject rights ● Anthropomorphization and safety risks
  • 10.
    10 2/ Data Collection,Disclosure, and Security Vulnerabilities ● Tool usage (e.g., application programming interfaces, data stores, and extensions) enables access to external systems and data ● Data categories that agent may access grows with diversifying use cases (e.g., browser screenshots, telemetry data) ● Design features and characteristics make agents susceptible to new kinds of security threats (e.g., injection attacks tailored to browser-use agents)
  • 11.
    11 3/ Accuracy ofOutputs ● Hallucinations may have different implications than those raised by LLMs (e.g., misrepresenting a user’s characteristics and preferences when it fills out a consequential form) ● Compounding errors, where the agent’s accuracy decreases the more steps a task takes ● Unpredictable behavior due to dynamic operational environments and agents’ non-deterministic nature
  • 12.
    12 4/ Barriers to“Alignment” ● AI alignment: Designing AI models and systems to pursue a designer’s goals, such as prioritizing human well-being and conforming to ethical values ● Consumer protection alignment for commerce ● Alignment faking: Strategically mimicking training objectives to avoid undergoing behavioral modifications ● Data privacy implications of agentic systems autonomously making decisions (e.g., “accepting all cookies” or sharing sensitive data with a third party despite not being in user’s best interests
  • 13.
    13 5/ Explainability andHuman Oversight ● Users ability to understand an agent’s decisions, even if these decisions are correct ● Speed and complexity of AI agents’ decision-making processes may create heightened roadblocks to realizing meaningful explainability and human oversight ● The ability to provide system reasoning in natural language are becoming more complicated and are not always indicative of the agent’s actual reasoning
  • 14.
    14 Current and FutureChallenges ● Negotiation of responsibilities: How will liability be determined when systems make mistakes or harm people? ● Effective inter-system API communications and secure financial transactions (developments in open source protocols like MCP, A2A, and AP2) ● Addressing the sheer scale of data sensitivity: Can our legal systems adapt to the scale, sensitivity, and intimacy of data collected? ● “AI Privilege”? Will more advances in technology be the answer (Local vs. Cloud-based LLMs? Privacy-enhancing tech?)
  • 15.
    15 Resources ● Daniel Berrick,“Minding Mindful Machines: AI Agents and Data Protection Considerations,” (Apr. 2025) ● Daniel Berrick and Stacey Gray, “Concepts in AI Governance: Personality vs. Personalization” (Sept. 2025) ● Daniel Berrick, “From Chatbot to Checkout: How Pays When Transactional Agents Play?” (Feb. 2026)
  • 16.
    16 From Governance toOperations: Embedding AI into Privacy Ops ● Identifying the role in AI ecosystem: Developer vs Deployer vs Other/Hybrid ● Clarifying vision regarding intended purpose and deployment context of the AI system ● Determining risk level (low / medium / high) and required mitigation (EU AI Act Arts. 6–14): ○ Human oversight ○ Escalation and override mechanisms ● Setting ownership and accountability across AI system lifecycle (GDPR Art. 5) ● Identifying data origin and use of sensitive personal information ● Ongoing Monitoring of outputs
  • 17.
    17 What AI governancelooks like in practice - 1. Extending DPIAs ● Include AI-specific assessment elements that address purpose and deployment context, impacts on individuals, and bias, fairness, and explainability considerations. ● Assess risks arising from: ○ Training data selection ○ Deployment context changes ○ Output use beyond intended purpose ● Reuse existing DPIA tooling, workflows, and approval gates to ensure consistent, repeatable AI risk assessments.
  • 18.
    18 2. Evolving ROPAto Manage AI Data Provenance ● Leverage and extend existing data maps to provide deeper visibility into data used by AI systems. ● Document and maintain records of training data sources, and re-calibrate datasets as needed. ● Identify, update records and manage: ○ Personal vs non-personal data ○ Sensitive categories of personal data ○ Known or potential bias in datasets ● Data maps evolve into model-aware data lineage to: ○ Support lifecycle controls ○ Data Retention and Deletion
  • 19.
    19 ● AI expandsthe vendor ecosystem beyond traditional service providers, creating hidden dependencies across the AI lifecycle, embedded AI in SaaS tools and platforms increases indirect AI risk exposure, often outside direct product or privacy team visibility ● Due diligence must assess how models are trained, what data is used, and who controls the data and the model (joint controller, processor, or sub-processor). What organizations must operationalize: ● Contracts must clearly define permitted data use, reuse restrictions, onward transfers, and responsibility allocation. ● Output-level controls must be implemented to prevent misuse, detect model drift, and manage unintended or harmful outputs. ● End-to-end transparency and auditability across the AI supply chain are required to maintain oversight and regulatory defensibility. 3. Expanding Vendor Risk Management to AI Supply Chain Risks
  • 20.
    20 ● Establish andcommunicate an AI Governance Policy, covering: ○ Intended use and restrictions ○ High-level system explainability ○ Risk management and escalation ● Reuse Privacy Ops infrastructure: ○ ROPA covering personal and non-personal AI data ○ DPIAs and vendor assessments as primary risk tools ● Risk-tier AI systems: ○ Lighter controls for low-risk use cases ○ Deeper review and approvals for high-impact AI ● Intake questionnaires at ideation stage ● Privacy & AI checks before procurement or deployment Integrating AI Governance Without Slowing Innovation
  • 21.
    21 Why Privacy OpsIs the Foundation for Scalable AI Governance ● Privacy and Security by design is already embedded in Privacy Ops and extends naturally to AI system development, including data minimization, purpose limitation, and prevention of re-identification (EU AI Act Art. 10; GDPR Art. 5(1)(b), (c)) ● Security in processing is operationalized through existing technical and organizational measures such as encryption, anonymization or pseudonymization, access controls, secure deletion, and periodic reviews. ● Awareness and training - personnel operating, designing and using are aware of AI policy and procedures, role based duties and responsibilities, context, relevant regulations, biases and processes to challenge AI outcomes ● Incident detection, escalation, response and management ● Privacy Ops already delivers: Accountability, Transparency and Audit readiness ● AI governance adds new risks, not necessarily a new operating model
  • 22.
    22 Key Takeaways Risk-Based GovernanceFramework ● Clear role identification to better understand legal requirements ● Risk tiering: “Lighter touch” for low-risk systems and “deep dives” for high-impact AI ● Human-in-the-Loop: Clear escalation and override mechanisms for high-risk systems Data Lineage and “Model Aware” Mapping ● Expanded data maps ● Bias detection: Proactively identify “sensitive categories” and potential biases within datasets before ● Unintended use trap: Ensure risk assessments specifically account for model drift Vendor Management ● Supply chain transparency ● Tightened contracts Operationalize the AI Lifecycle ● Early intake questionnaires ● Privacy and security by design ● Output monitoring
  • 23.
    23 Summary: Privacy Opsvs. AI Governance Feature Privacy Ops (Traditional) AI Governance (Expanded) Primary Tool DPIA / ROPA “Model Aware” DPIA Data Focus Storage and Access Training, Fine-tuning, & Inference Risk Focus Data Breaches / Privacy Loss Bias, Explainability, & Model Drift Human Role Data Subjects Rights Oversight & Override Mechanisms
  • 24.