What I Learned at P.S.R. 2025: The Overlooked Data Risk Behind AI

Scotty Hagen
Partner & VP, Data & Analytics, Northern
A few weeks ago, I had the chance to attend the International Association of Privacy Professionals’ P.S.R. conference (Privacy. Security. Risk.) down in San Diego. When the opportunity came up, I took one look at my calendar, cleared my schedule, and booked my flight. Let’s be honest: there was no way I was missing that one.
When I arrived, the conversations quickly surfaced the real questions teams are wrestling with right now. People were talking about new AI behaviours, emerging agent-like systems, and how quickly this whole space is evolving. Data privacy, shadow AI, and AI governance dominated the halls. It was exciting to see how critically organizations are thinking about AI, privacy, and compliance, and how eager they are for actionable solutions.
When all was said and done, one thing stood out to me: Many teams understand the risks of AI, but they’re struggling to get their entire organization aligned on how to handle this newer challenge.
The notion that I can collect any data I want and then use it in AI models downstream is simply no longer true. As I talked with other attendees at P.S.R., the same point came up again and again: if you can’t explain how your model uses that data and how it was collected, you’re setting yourself up for trouble.
Why I’m Concerned
AI is still a relatively new capability, yet even in these first few years I’ve repeatedly seen organizations collect data for a specific purpose, such as customer experience, and then repurpose that same dataset for AI decisioning without re-evaluating the legal basis or user consent.
At P.S.R., one of the hot button topics of the show was this exact question:
Do you actually have the right to use the data you collected in your AI models for the purpose that you’re using it?
I shared a simple example with our team: if your website asks, “Do you require wheelchair accessibility?” and the user answers “yes,” you’ve just collected sensitive personal information, which places that data in a much higher protection category. If you then feed that answer into an AI model for scoring or personalization, you may have just used it in a way the user never agreed to.
Once an AI model is trained, you can’t remove that data. It’s already in there, like eggs baked into a cake.
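To make that concrete, here is a minimal, hypothetical sketch (in Python, with made-up field names and categories) of catching this at the point of collection: tag each answer with a sensitivity classification as it is stored, so downstream teams can see exactly which records should never reach a scoring or personalization model without explicit consent for that use.

```python
# Hypothetical sketch, not production code: field names and categories are
# illustrative. The idea is to classify sensitivity at intake so later
# pipelines can enforce "no sensitive data in models without explicit consent".

SENSITIVE_FIELDS = {
    # field name -> why it is treated as sensitive personal information
    "wheelchair_accessibility": "health / disability information",
    "dietary_restrictions": "may reveal religion or health status",
}

def classify_submission(record: dict) -> dict:
    """Tag each answered field in a form submission with a sensitivity flag."""
    tagged = {}
    for field_name, value in record.items():
        tagged[field_name] = {
            "value": value,
            "sensitive": field_name in SENSITIVE_FIELDS,
            "reason": SENSITIVE_FIELDS.get(field_name),
        }
    return tagged

submission = {"wheelchair_accessibility": "yes", "newsletter_opt_in": "no"}
for name, meta in classify_submission(submission).items():
    print(name, meta)
# wheelchair_accessibility {'value': 'yes', 'sensitive': True, 'reason': 'health / disability information'}
# newsletter_opt_in {'value': 'no', 'sensitive': False, 'reason': None}
```

The specifics will vary by stack; the point is that the classification has to happen before the data ever reaches a training pipeline, because afterward there is no taking it back.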
The Stakes Are Higher Than Many Realize
Another theme that came up again and again at P.S.R. was this: regulators aren’t just reading your privacy policy anymore. They’re hiring technologists who can dig into network traffic and see what’s actually happening under the hood. That’s right — they’re looking directly at how your systems behave to see if what you say matches what you do.
That means they’re checking things like:
- Tag and tracking behaviour on websites
- Whether consent signals are truly being honoured in your technical systems
- Whether your AI models contain data users never agreed could be used for modelling
We’re entering what I’d call a phase of proof-based privacy. Promises are no longer enough; you have to be able to demonstrate compliance with evidence.
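What does that evidence look like in practice? One simple form of it is the same kind of check regulators can run themselves: load your site in an automated browser, decline consent, and record which third-party requests still fire. The sketch below uses Playwright; the URL, the reject-button selector, and the tracker domain list are all placeholders you would swap for your own site and CMP.

```python
# Illustrative only: the URL, the reject-button selector, and the tracker
# domains are placeholders. Requires `pip install playwright` and
# `playwright install chromium`.

from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

TRACKER_DOMAINS = {"doubleclick.net", "google-analytics.com", "facebook.com"}

def tracker_requests_after_opt_out(url: str, reject_selector: str) -> list[str]:
    """Load a page, decline consent, and return tracker requests that still fired."""
    hits: list[str] = []

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("request", lambda req: hits.append(req.url))  # record every outbound request
        page.goto(url, wait_until="networkidle")

        hits.clear()                    # only count traffic after the opt-out
        page.click(reject_selector)     # hypothetical CMP "Reject all" button
        page.wait_for_timeout(5000)     # give tags a chance to (not) fire
        browser.close()

    return [u for u in hits if any(d in urlparse(u).netloc for d in TRACKER_DOMAINS)]

if __name__ == "__main__":
    leaks = tracker_requests_after_opt_out("https://www.example.com", "#reject-all")
    print("Tracker requests after opt-out:", leaks or "none")
```

Run something like this on a schedule and you have a paper trail showing that opt-outs actually hold, rather than a policy that merely says they do.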
Here’s Where Teams Usually Get Stuck
- Consent to Collect does not mean Consent to Model
Just because data was collected does not automatically mean it can be used for training or decisioning purposes. That gap is where many organizations unintentionally step into trouble.
- Legacy Data in New AI Models
If older data was collected before today’s higher expectations around consent, simply feeding it into a new AI system may now run afoul of current regulations.
- The Fallacy of “Anonymous” Models
It’s no longer enough for a model to avoid outputting personal information. According to Canadian guidance, data used as part of the training process may still be considered personal data even if it doesn’t appear in the final model (Source: Government of Canada – Responsible Use of Generative AI Guidance).
- Your Website Is Where Enforcement Usually Starts
If the consent signals on your website don’t match what your systems actually do, that’s the first thread regulators pull. And once they see a gap there, they’ll dig deeper into the rest of your systems.
What Organizations Should Do Now
| Step | Action | Why It Matters |
|---|---|---|
| 1 | Validate your Consent Management Platform (CMP) | A CMP is necessary, but it’s not sufficient on its own. |
| 2 | Run ongoing consent monitoring or scanning | Confirms that tags, pixels, and data flows actually stop when users opt out. |
| 3 | Map your AI model data sources back to consent basis | You need to prove you have permission to use the data, not just collect it. |
| 4 | Be transparent about how AI influences decisioning | Users, regulators, and internal teams expect clarity and accountability. |
Teams often discover gaps here only when they test their own systems end-to-end. Small inconsistencies on a website are usually where issues surface first.
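As a rough illustration of step 3, here is what a minimal data-source registry could look like: each training dataset tied to the purposes disclosed at collection, the lawful basis, and a pointer to the consent evidence. Everything below (dataset names, purposes, URIs) is hypothetical.

```python
# Hypothetical registry: dataset names, purposes, and URIs are made up.
# The point is that "can we use this in a model?" becomes a lookup with
# evidence attached, not a judgment call made inside a training script.

from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    disclosed_purposes: set[str]   # what users were told at collection time
    lawful_basis: str              # e.g. "consent", "contract", "legitimate_interest"
    consent_record_uri: str = ""   # where the supporting evidence lives

REGISTRY = [
    DataSource("crm_profiles", {"customer_experience", "ai_model_training"},
               "consent", "s3://consent-logs/crm/"),
    DataSource("support_tickets_2019", {"customer_support"}, "contract"),
]

def audit_for_training(registry: list[DataSource], purpose: str = "ai_model_training") -> None:
    """Print a go/no-go line per source for the given modelling purpose."""
    for src in registry:
        ok = purpose in src.disclosed_purposes and bool(src.consent_record_uri)
        print(f"{'OK   ' if ok else 'BLOCK'} {src.name}: basis={src.lawful_basis}, "
              f"purposes={sorted(src.disclosed_purposes)}")

audit_for_training(REGISTRY)
# OK    crm_profiles: basis=consent, purposes=['ai_model_training', 'customer_experience']
# BLOCK support_tickets_2019: basis=contract, purposes=['customer_support']
```

A spreadsheet can get you started here; the value is in forcing the question to be answered, and documented, before a dataset ever reaches a training pipeline.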
Why This Matters for Northern’s Clients
In the rush to adopt AI, many organizations are moving faster with their AI initiatives than with their data governance strategies. It’s understandable.
On one hand, building an AI program and gathering the data for it can feel like a quicker path to value, since these initiatives often sit outside the day-to-day operational grind.
On the other hand, if your data strategy lags behind your AI efforts, you could be exposed to legal, financial, and reputational risk before you know it.
If you’re building or scaling AI, you need to make sure your consent strategy, data collection patterns, training datasets, and model governance controls are aligned before the model goes live, not after.
Because when regulators come knocking, enforcement moves fast, and retrofitting consent into an AI use case that is already live is, at best, extremely difficult.
Final Thought
AI can absolutely drive value. It’s become a Swiss Army knife for marketing teams, helping with everything from personalization to content. But responsible data use isn’t just a compliance checkbox — it’s a trust signal.
The real question moving forward isn’t:
“Can we collect this data?”
but rather:
“Should we use this data in AI, and can we prove we have the right to?”
If you’d like a second set of eyes on your consent controls, data flows, or AI readiness, my team and I at Northern are happy to help. You can get in touch through our contact page.
Stay informed: sign up for our newsletter.