In this week’s Friday Feature, Doug Llewellyn reflects on why many organizations are overcomplicating AI strategy and what data leaders should focus on instead: clarity, workforce readiness, and disciplined experimentation.
To our community of data leaders and learners,
Over the past year, I have had many conversations with executives about AI strategy, and one theme continues to surface: many organizations are making AI more complicated than it needs to be.
That complexity usually does not come from the technology itself. More often, it comes from the pressure leaders feel to act quickly and demonstrate that their organization has a comprehensive plan in place.
Boards are asking about AI. Investors are asking about AI. Employees are experimenting with AI tools every day. In that environment, it is easy for companies to jump straight into building long roadmaps, launching multiple initiatives, and introducing new technologies all at once.
But the better starting point is usually much simpler.
Start with the problem.
Instead of asking, “How do we use AI?” leaders should begin by asking, “What are we trying to improve in our business?”
AI can help organizations increase revenue, improve efficiency, and support better decision making. But it rarely needs to solve everything at once. In fact, when companies attempt to apply AI everywhere immediately, that is often when the complexity begins to slow them down.
Another pattern I see is the belief that a longer roadmap equals a stronger strategy.
It is easy to fall into the assumption that strategic work must look expansive and long term. A large plan can feel more thoughtful than a smaller, focused effort. But complexity does not always produce better outcomes.
In practice, overly complex plans often delay execution. Teams spend months coordinating initiatives, evaluating technologies, and aligning stakeholders before meaningful work begins.
Meanwhile, the organizations that are making the most progress tend to start smaller. They focus on specific problems, run disciplined experiments, and build from what they learn.
Just as importantly, they stay focused on results.
One challenge I see across organizations today is the tendency to confuse activity with progress. AI initiatives generate a lot of motion. There are pilots running, working groups forming, vendors presenting capabilities, and dashboards tracking experiments.
From the outside, that activity can look like momentum.
Inside the organization, however, the real question is whether those efforts are producing meaningful outcomes. As leaders, we are not measured by how much activity we generate. We are measured by results.
Another place complexity shows up is when organizations begin with technology instead of the business problem they are trying to solve.
Teams spend time exploring tools and platforms before clearly defining how those tools will improve the business. Without that clarity, AI initiatives can grow quickly into large efforts that employees do not fully understand and cannot easily connect to their daily work.
That is why workforce readiness matters just as much as technology readiness.
AI is not only a technology shift. It is also a workforce shift. Some employees are excited about what these tools can do. Others are uncertain or even concerned about how AI might affect their roles.
If leaders introduce new systems without preparing people to use them effectively, organizations will struggle to capture the value of those investments.
This is where change management becomes essential.
Throughout history, major shifts in how companies operate have required thoughtful leadership around how people adopt new ways of working. AI is no different. If organizations want their teams moving in the same direction, they need clear communication, practical training, and a shared understanding of how these tools support the mission of the business.
Governance also plays an important role in simplifying AI strategy.
Some leaders worry that guardrails will slow innovation. In practice, clear governance often has the opposite effect. Guardrails help teams stay focused. They reduce scope creep and prevent too many initiatives from competing for attention at the same time.
When people understand what they can do, how they should do it, and how decisions are made, it becomes much easier to move forward with confidence.
Across my conversations with organizations, I also see many companies experimenting with multiple AI pilots at once. Sometimes this reflects a healthy culture of innovation. In other cases, it signals a lack of focus.
The difference comes down to alignment.
Disciplined experimentation begins with executive buy-in. From there, organizations bring together technology, people, and change management to ensure that innovation translates into real business results.
For leaders who feel their AI strategy has become overly complex, the next step may simply be to return to a few foundational questions.
What problem are we solving?
How will we measure success?
How will our workforce adopt these tools?
And how will we keep our efforts focused as we move forward?
In many cases, simplifying the strategy is what allows organizations to move the fastest.
Thank you for being part of this community and for the work you are doing to advance data, AI, and responsible innovation in your organizations.
Doug
P.S. Did you miss the last feature? Check it out here: Doug’s Friday Feature: AI is already influencing decisions inside your organization.