Generative AI has moved from curiosity to board-level priority. Yet most organisations remain stuck in pilot mode. A strategic pause can help executives reset priorities, remove structural friction, and convert experimentation into measurable value.
In a world where Generative AI dominates executive agendas, many organisations are trying to drive value from it. Despite billions in enterprise investment, the vast majority of organisations report no measurable profit-and-loss impact, and only a small fraction of initiatives scale. Tools often do not retain feedback, adapt to context, or integrate into workflows, creating a persistent learning gap.
On paper, adoption appears high. In practice, transformation remains limited. This leaves executives in organisations of all sizes asking a tricky question: Why do so many proofs of concept fail to translate into scaled outcomes?
To explore this, we conducted interviews with executives from international organisations across retail, insurance, engineering, manufacturing, and IT. The conversations reveal the structural frictions that keep AI initiatives stuck in pilot purgatory. They also show how a strategic pause can help executives realign ambition with operational reality and move from experiments to outcomes.
Many organisations now use tools such as ChatGPT and AI-assisted development environments such as Copilot. Yet individual adoption is rarely the limiting factor. Instead, a persistent learning gap prevents tools from adapting to an organisation’s context and from being embedded into end-to-end processes, which is why most initiatives stall before measurable value is realised.
Worldwide AI spending continues to accelerate, and most large organisations plan to increase GenAI funding despite unresolved issues around data readiness, governance, and operating-model maturity. These pressures are expected to drive stronger data governance and the wider use of agentic AI in customer-facing interactions over the next several years.
So, what is driving the widening gap between expectations and reality? Across our interviews, one theme was consistent. Enthusiasm for AI is high, but strategic direction, operating-model readiness, and decision velocity are often missing.
The Head of Global Marketing and E-commerce at a global insurance company noted that the organisation runs many experiments but lacks a clear “dot on the horizon” to enable prioritisation and scale.
That lack of focus is not unique. The Innovation and Ecosystem Partnerships Director at a global engineering and services organisation observed that many traditional sectors are “playing it safe”. Projects stay small and, so far, add little customer value.
The SLS Director of AI at a global engineering services company warned that this pattern creates a material risk of never realising value. He described it as pilot purgatory: an endless loop of demos and prototypes that never reach production.
Even in technology-driven environments, similar dynamics persist. A senior leader at a collaboration-technology company noted that while large firms often claim broad AI adoption, actual usage is frequently confined to small experimental teams. Meanwhile, the rest of the organisation hesitates, partly due to unclear direction and unclear boundaries for acceptable use.
The outcome is recognisable: considerable hype, but limited business impact. AI is powerful, yet most pilots deliver no financial return. The constraint is rarely the technology itself; more often, it is the organisational approach. Executives are beginning to recognise that the core bottleneck is strategic, not technical.
Across the interviews, four recurring frictions consistently blocked AI initiatives from scaling:

1. Lack of strategic clarity and executive alignment
2. Cultural frictions and unclear ownership
3. Governance, risk, and compliance constraints
4. Legacy technology and integration challenges

In the following paragraphs, we unpack each friction and show how they interact to keep pilots from becoming enterprise capabilities.
Launching AI initiatives without a clear strategy or end state is rarely effective. “There’s often a gap between ambition and reality”, said the SLS Director of AI at a global engineering services company. Without a strategic approach, organisations act opportunistically. Teams spin up use cases ad hoc, chase trends, and struggle to justify the investment required for integration and scale.
The Head of Global Marketing and E-commerce at a global insurance company described the absence of a “north star” to guide decisions. Without alignment to business objectives and a clear rationale for prioritisation, pilots fail to earn the sponsorship needed to move forward. Even where individual adoption is high, most initiatives stall because tools and operating models do not learn, adapt, and integrate quickly enough to turn experiments into enterprise outcomes.
The Innovation and Ecosystem Partnerships Director at the global engineering and services organisation echoed this, noting a lack of daring among executives. Challenging or reinventing a successful legacy business model is rarely attractive until competitive pressure forces the issue.
Some caution is rational. Return on investment is not always clear at the outset, and the technology and geopolitical context remain volatile. But caution often becomes fragmentation. As one interviewee explained, “decision-making is too slow for fast-evolving technology. Projects stay small and add little customer value.”
Where strategic clarity is missing, culture quickly becomes the next barrier. A co-author of the Agile Manifesto, whom we interviewed, noted that management styles are often poorly suited to innovation because they remain too rule-bound. He argued that leaders need to empower rather than control, particularly in AI, where learning and iteration are essential.
A second cultural friction appears even where experimentation is encouraged. Innovation still requires clear ownership and accountability. Several executives described uncertainty about who should own and drive AI initiatives. Organisations often treat AI as an IT responsibility, yet IT may lack the business context to define valuable outcomes. The result is an ownership gap. Initiatives become stranded between departments.
Finally, adoption demands a shift in how people work. “AI isn’t just a tech issue. It’s a human one”, said the Manager of Emerging Technologies at a global manufacturing organisation. Employees need to adapt workflows, build trust in AI outputs, and unlearn entrenched habits. As the Head of Global Marketing and E-commerce at the insurance company put it, the challenge is “using what we have differently”. That requires capability-building, not just tooling.
The third friction centres on governance, risk, and compliance. AI raises legitimate concerns about privacy, security, intellectual property, and brand reputation. Many organisations respond with rigid controls that slow experimentation. “Our business operates in eight countries with three different legal systems, so everything new in AI raises a multitude of questions”, said the Head of Global Marketing and E-commerce at a global insurance company.
In consumer-facing sectors, risk aversion can be even stronger. The Director of Technology for Supply Chain Execution at a global retailer explained that the brand name is on the line when something goes wrong. Because retail can still thrive without AI, executive leadership often sees limited urgency to take bold steps.
Overly restrictive governance can trigger a counterproductive side-effect. Employees turn to unsanctioned tools, creating a growing shadow AI economy. This introduces additional security and compliance exposure, intensifying the tension between innovation and control.
To manage risk, many organisations use sandbox environments. The SLS Director of AI at a global engineering services company described these as a helpful first step but also noted a scalability challenge. If AI becomes a separate layer on top of legacy platforms, pilots may remain isolated, integration becomes harder, and the architecture fragments.
For many organisations, legacy infrastructure remains a fundamental obstacle to scaling AI. As one executive explained, “You can have the nicest dashboard and steering wheel, but if the engine is missing and the tyres are flat, you have no car.” Many enterprises lack modern data foundations. Even those who have digitised often struggle with cross-platform integration.
AI increasingly cuts across ERP, CRM, and operational systems. The Manager of Emerging Technologies noted that scaling requires an architecture that unites data, security, and compliance across the organisation.
In Europe, emerging regulations such as the EU AI Act raise the bar for governance and, in many use cases, require stronger controls over data lineage, risk management, and accountability. Executives also noted that large-scale transformation programmes frequently cost more than the underlying AI tools. As initiatives move from pilot to production, both complexity and caution rise. Many organisations stall before achieving scale, limiting the value AI can create.
A strategic pause is not a retreat from innovation. It is an intentional reset to align ambition, governance, and delivery capacity. In practice, the pause should culminate in a small set of concrete decisions:
Douglas Voeten. Atradius, Head of Global Marketing and E-commerce.
Tom Oostens. Equans, SLS Director of AI.
Ignacio Bonetto. Damen Shipyards, Manager of Emerging Technologies.
Prasad Hegde. Albert Heijn, Director of Technology, Supply Chain Execution.
Sid van Wijk. Miro, Global Head, EBC and CAB Program.
Arie van Bennekum. Co-author of the Agile Manifesto and thought leader.
Anonymous. Global Innovation and Ecosystem Partnerships Director.