“Harnessing machine learning can be transformational, but for it to be successful, enterprises need leadership from the top. This means understanding that when AI changes one part of the business, other parts must also change.” – Erik Brynjolfsson, Stanford Institute for Human-Centered AI
Brynjolfsson is one of the world’s most cited economists on technology and productivity, a Stanford professor who has spent three decades studying what separates the few organisations that extract real value from transformative technology – which we will call the 6% club – from those that do not. His conclusion is that the divide is organisational: failure to make the structural, governance and cultural changes needed to lead through AI transformation inevitably leads to under-achievement and disillusionment.
Eighty-eight per cent of organisations globally now use AI in at least one business function, yet only around 6% qualify as genuine AI high performers – businesses attributing more than 5% of EBIT directly to AI and reporting significant value across the enterprise. The remaining 94% are somewhere between enthusiastic experimenter and quietly disillusioned pilot operator. Most have the tools. Very few have the results.
These high performers do not have access to better technology. What distinguishes them is organisational. McKinsey found that high performers are 3.6 times more likely to be pursuing transformational, enterprise-level change through AI and nearly three times more likely to have fundamentally redesigned their workflows in the process. Bolting AI onto existing processes is a false economy that leads to wasted resources, lost opportunities and competitive drag. The 6% rebuild those processes around what AI can actually do.
They are also three times more likely to have senior leaders who actively own and champion AI, genuinely modelling its use and driving its integration into strategic decision-making. This is the strongest single predictor of enterprise-level AI impact in the data. When senior leadership treats AI as a technology upgrade, the organisation stalls. When they treat it as a strategic shift that requires them personally to change how they work, the organisation moves.
The high performers apply the same capital discipline to AI investment as they would to a major acquisition: clear strategy aligned with organisational objectives, defined milestones and criteria for adjusting or closing underperforming initiatives. They manage AI investment across three horizons: foundational infrastructure (two to four year payback), near-term productivity (six to twelve months) and longer-term transformation (ongoing). They do not allow short-term return pressure to collapse everything into the second horizon at the expense of the first and third.
The Kyndryl Readiness Report, drawing on 3,700 senior leaders, found that 61% of CEOs now face intensified pressure to demonstrate AI returns compared with the prior year, while 53% of investors expect positive returns within six months or less. Responding to that pressure by sacrificing infrastructure and transformation investment to feed short-term results is one of the primary reasons organisations get trapped in pilot purgatory. Honest, clear communication from the outset – managing expectations, helping stakeholders understand realistic timescales and reimagining how success is measured – is itself a leadership responsibility. So, equally, is recognising when to kill a pilot that is not working, and explaining why.
Two-thirds of organisations remain in the experimentation or piloting phase, lacking the operating model maturity to convert deployment into value. The most common single failure is the absence of clearly named executive ownership for AI outcomes across product, legal, risk and compliance. When nobody is explicitly accountable for what AI is doing across the organisation – which McKinsey found to be the norm – innovation slows, risk accumulates and resources are wasted.
Most organisations view governance as a constraint. The 6% experience it as a competitive advantage: the mechanism that builds stakeholder trust, enables faster decision-making within defined boundaries and provides the audit trail that allows boards to demonstrate responsible operation to regulators, investors and customers.
Regional AI regulatory frameworks add further complexity. The EU AI Act is now in phased application, with penalties reaching 7% of global annual turnover for high-risk non-compliance. The UK places the burden of interpretation directly on boards, making personal executive accountability the operative principle. In the US, enforcement is arriving through litigation rather than legislation, making documentation, testing and explainability the primary risk mitigation tools. Working across different regions demands flexible compliance models, but across all three regimes AI governance is a board-level responsibility and the expectation that it can be delegated to IT or legal functions is no longer sustainable.
Moving from the 94% to the 6% requires coordinated evolution across five interconnected dimensions. Here are five questions your board should be able to answer:
Who in your organisation is accountable if your AI produces a wrong outcome? In most organisations, nobody can answer that. Executive accountability means designating named individuals responsible for AI outcomes across every relevant function – product, legal, risk, compliance and people – with those owners demonstrating AI literacy in capital allocation decisions.
Are you asking how AI could transform how this work is done, or just how to make existing processes faster? Workflow redesign is the single most powerful lever in the McKinsey data. High performers decompose roles into task sets, identify which activities are best automated, which augmented and which require human judgement, and rebuild performance metrics around value delivered rather than activity completed. (See our previous insight, Redefining Work in a Human/Machine Era.)
Is your AI training a one-off event or embedded into how people work every day? McKinsey’s data shows that high performers embed at least 81 hours of annual AI training per employee into operations. Sixty-three per cent of employers globally identify capability gaps as their primary barrier to AI scaling, yet most continue to look externally for capabilities that reskilling could develop internally at lower cost and with less disruption.
Have you defined what failure looks like before you start? Capital discipline with kill-switch criteria means defining in advance, at the point of approving any AI initiative, when a pilot gets shut down rather than scaled. The organisations accumulating the most expensive AI failures are those that never established what insufficient progress looked like.
Can you explain to every stakeholder – employees, customers, regulators, investors – exactly how AI is influencing decisions that affect them? Stakeholder trust architecture is an operational requirement, not a PR exercise. In an environment where 51% of organisations report AI-related incidents, eroded trust is difficult to rebuild. High performers are more than twice as likely to have defined human-in-the-loop validation processes – 65% versus 23%.
McKinsey found that function-level returns in software engineering, manufacturing and IT regularly reach 10-20% cost reductions, with marketing and product development seeing revenue uplift above 10% in leading deployments. But the ROI conversation in most boardrooms is still too narrow. Organisations measuring only financial return are missing both the value and the risk.
Two-thirds of organisations in McKinsey’s survey report AI-driven improvements in innovation capacity, while 45% report improved customer satisfaction and 36% see strengthened competitive differentiation. These are leading indicators of future financial performance. Organisations tracking only EBIT impact miss the earlier signals that tell them whether their AI investment is building the capabilities that will compound into revenue.
Stakeholder trust is measurable and its erosion is one of the most expensive and least discussed AI risks. Customer trust in AI-mediated decisions, employee confidence in the organisation’s approach to workforce impact and investor trust in governance quality all affect the cost of capital, talent retention and customer lifetime value in ways that do not appear in short-term financial metrics. Regulatory standing carries an implicit financial value that almost no organisation currently quantifies, and boards that require AI investment proposals to include a regulatory exposure assessment alongside the financial case are making a sound capital allocation decision, not an over-cautious one.
Leadership seeking to help their organisations break into the top 6% can learn much from the earlier pioneers – both what to do, and what not to do.
JPMorgan Chase is the most thoroughly documented example of an organisation in the 6%. Its AI programme has more than 450 live use cases delivering between $1.5 billion and $2 billion in annual value. More than 200,000 employees use its proprietary LLM Suite platform daily and AI-attributed benefits have grown 30-40% year-on-year. AI coding assistants have lifted developer productivity by 10-20% across a technology workforce of 63,000, its Coach AI advisory tool contributed to a 20% increase in gross sales in asset and wealth management between 2023 and 2024, while fraud prevention and operational efficiencies saved a further $1.5 billion.
What explains it? Not the technology. JPMorgan uses many of the same foundation models available to every competitor. What distinguishes the bank is its governance architecture: a firmwide Chief Data Officer mandate aligning data platforms with model risk management, legal and security functions across every business line; rigorous ROI measurement at the individual initiative level; and a board-level treatment of AI as a core operating function. As JPMorgan’s own Chief Analytics Officer put it: “There is a value gap between what the technology is capable of and the ability to fully capture that in an enterprise.” Their answer to that gap has been structural and the returns reflect it.
The bank also acknowledges the risks candidly: recouping the $18 billion investment will take time, and the technology comes at human cost, with a projected 10% reduction in operations headcount. Organisations carry an ethical and societal responsibility to mitigate those potentially significant losses.
In 2012, MD Anderson partnered with IBM to build an AI clinical decision support tool for oncologists. The goal was to democratise world-class cancer care, giving any oncologist anywhere access to the diagnostic intelligence of one of the world’s leading cancer institutions. Five years and $62 million later, the contract expired before the system had been used on a single real patient. Inquests found the failure organisational rather than technological: the system was incompatible with existing platforms, scope had ballooned, the original six-month delivery timeline had been extended twelve times and no one with clear authority had been accountable for keeping the project within workable boundaries. It failed where JPMorgan succeeded – in governance, data foundation, accountability and the integration of human and technical design.
The gap between the 6% and the 94% continues to widen because AI advantage compounds. The organisations that have redesigned their workflows, built their people’s capabilities and embedded governance into their operating models are iterating faster and learning more with every cycle. Their data gets richer, their models improve and the distance between them and the organisations still running disconnected pilots increases.
The structural work needed – governance architecture, operating model redesign, talent investment, cross-functional accountability – is neither glamorous nor fast. The 6% understood this earlier than most. They made different choices, at the leadership level, about what kind of organisation they were building. That, ultimately, is the only gap that matters.
This insight is edited from a section of the first Rialto AI Business Leaders Circle Strategic Briefing of 2026, a biannual benefit of membership, which also includes the opportunity to help shape the future of AI in UK business with a seat at the table of the All-Party Parliamentary Group for AI (APPG AI) alongside MPs and other leading figures across government, academia and investment.
You can find out more about joining here.