AI risk is not new. The velocity is. What leaders must understand about AI safety, governance, and trust before moving faster.
To our community of data and AI leaders,
This week, a major AI safety report made the rounds: the recently released International AI Safety Report 2026, which examined topics ranging from deepfakes to AI companions to increasingly autonomous systems. Coverage quickly spread across mainstream media, including a detailed breakdown in The Guardian on the risks of deepfakes and emotional reliance on AI (read the article here).
And honestly, my first reaction was simple.
What didn’t stand out?
There is a lot in that report that should give leaders pause. But not because it introduces entirely new risks. What makes this moment different is the speed, scale, and emotional weight attached to the technology.
When you put “AI” in front of something like a phishing scam or a deepfake, it suddenly feels more ominous. The reality is that the underlying behavior hasn’t changed as much as the delivery mechanism has.
The end goal is still manipulation. The stakes are just higher now.
The risk is not new. The velocity is.
Deepfakes did not suddenly appear last year. Neither did social engineering. We have been training employees for years to question suspicious emails, unexpected requests, and unusual behavior.
What AI has done is increase the realism and the reach.
That means awareness matters more than ever. Not just awareness of tools, but awareness of intent. Organizations have a responsibility to train their people. Employees also have a responsibility to recognize that the world has changed, both at work and at home.
This is no longer just a corporate issue. It is a societal one.
One of the most concerning themes in the report is how quickly AI capabilities are advancing compared to our ability to understand and govern them.
We see this every week.
New models. New features. New announcements. Sometimes even the organizations building these systems are surprised by what they can do.
That is exactly why narrowing scope matters.
AI readiness is not about deploying everything that is possible. It is about being clear on what problem you are trying to solve, what opportunity you are pursuing, and how AI actually helps. When you define that clearly, you can put meaningful guardrails in place. Not perfect ones, but better ones.
Speed without clarity creates exposure.
Too often, governance shows up after the technology decision has already been made.
That is backwards.
Smart leaders think about governance alongside business objectives and technology choices. Not to slow things down, but to protect the business when things inevitably get complicated.
Policies on paper are not enough. People have to understand them. They have to recognize when something feels off. And they have to feel empowered to speak up when it does.
That only happens when awareness and trust are built into the system from the start.
One of the more unexpected themes in the report is emotional reliance on AI companions.
There is a meaningful difference between conversational AI that helps you do your job and AI that becomes a substitute for human connection. Leaders need to be thoughtful about where that line is drawn, especially as conversational AI becomes more present in customer and employee experiences.
This again comes back to awareness.
What feels appropriate? What feels unhealthy? What should be encouraged and what should be constrained?
These are not purely technical questions. They are leadership questions.
At Data Society, we talk a lot about readiness over hype.
Real AI readiness does not come from moving faster. It comes from slowing down long enough to understand what you are doing, why you are doing it, and how people and technology are meant to work together.
If you invest the time upfront, you reduce risk later. And when issues do arise, you are far more prepared to handle them.
This applies well beyond technical teams.
For decades, poor data understanding has led to bad decisions. Think about spreadsheets passed around organizations where no one really understands the numbers inside them.
Now add AI to that equation.
The consequences are faster, broader, and harder to reverse. Data and AI fluency across non-technical roles is critical if organizations want to spot problems early and avoid compounding mistakes.
This is a conversation we explore often across the broader Data Society Group ecosystem, including our work at Data Society, CDO Magazine, and The Data Lodge, where leaders regularly share how trust, literacy, and governance show up in real-world AI adoption.
If there is one takeaway I would leave leaders with, it is this.
AI should strengthen trust, not erode it.
That only happens when leaders partner with their employees. When they equip them, educate them, and trust them to engage responsibly with these tools. And when leaders trust themselves enough to slow down, ask better questions, and resist the pressure to adopt technology without purpose.
Trust is not built by technology alone. It is built by people making thoughtful decisions together.
Until next time,
Doug Llewellyn
P.S. If you missed the last Friday Feature on why connection is the missing link in AI transformation, it is worth a read.