
Read this article on our Substack - The Impossible Brief


MIT’s 2025 study found that 95% of enterprise AI initiatives deliver zero measurable ROI.[1] The market’s response has been predictable: blame the vendors, demand better promises, negotiate harder contracts.


Now the correction is arriving—and it’s more sophisticated than just shifting blame. Enterprise buyers are demanding contractual outcome guarantees, not just vendor promises. But that shift only works if both parties understand what they’re actually signing up for.


Enterprise AI has run on a comfortable fiction: vendors sold outcomes in the pitch, delivered working systems at go-live, and everyone quietly agreed not to look too closely at the gap between demo and deployment. That arrangement is ending.


Enterprise buyers, burned by the failure gap, are starting to demand contractual outcome guarantees. Not “we believe this will reduce costs” but: we commit to forecast accuracy above 85%, cycle-time reductions of 20% within six months, and we share financial risk if we miss.[2] By 2026, Gartner forecasts that 40% of enterprise SaaS contracts will include outcome-based elements, up from roughly 15% just a few years earlier.[3] In professional services, McKinsey reported that approximately 25% of its global client fees in 2025 came from outcome-based contracts—a significant shift for firms historically built on billable hours.[4] The vendors capable of offering these guarantees are winning the largest strategic accounts precisely because they’ve accepted that deployment success is their problem, not just the client’s.
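For a guarantee like this to be enforceable, the thresholds and the risk-share mechanism have to be written down unambiguously before signing. A minimal sketch of what that evaluation might look like, using the 85% accuracy and 20% cycle-time figures quoted above; the fee-at-risk percentage is an invented illustration, not a real contract term:

```python
# Hypothetical outcome-guarantee evaluation. The accuracy and
# cycle-time floors mirror the figures quoted in the text; the
# fee-at-risk share is assumed purely for illustration.

FORECAST_ACCURACY_FLOOR = 0.85      # vendor commits to >= 85% forecast accuracy
CYCLE_TIME_REDUCTION_FLOOR = 0.20   # >= 20% cycle-time reduction in six months
FEE_AT_RISK = 0.30                  # assumed share of fees refunded on a miss

def fee_due(base_fee: float, accuracy: float, cycle_reduction: float) -> float:
    """Return the fee payable after applying the outcome guarantee."""
    targets_met = (accuracy >= FORECAST_ACCURACY_FLOOR
                   and cycle_reduction >= CYCLE_TIME_REDUCTION_FLOOR)
    # On a miss, the vendor forfeits the at-risk portion of the fee.
    return base_fee if targets_met else base_fee * (1 - FEE_AT_RISK)

print(fee_due(100_000, accuracy=0.88, cycle_reduction=0.23))  # targets met
print(fee_due(100_000, accuracy=0.88, cycle_reduction=0.12))  # cycle-time miss
```

The point of writing it this way is that every input must be measurable: if "cycle-time reduction" can't be computed from agreed data, the clause is decorative.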


The logic is sound. When 88% of AI pilots fail to reach production[5] and 42% of companies abandon the majority of their AI initiatives before deployment,[6] theoretical ROI stops being credible. Outcome guarantees are the market’s correction.

But here’s what doesn’t get said in the coverage of this shift: guarantees are bilateral. If the vendor commits to measurable outcomes, the buyer commits to the organisational conditions that make those outcomes possible. Clean enough data. Workflows willing to change. Leadership willing to define what “success” actually means in specific, measurable terms—not the comfortable vagueness of “improved efficiency.”


And the data on readiness is brutal.


Only 7% of enterprises say their data is completely ready for AI adoption, according to a 2025 study by Cloudera and Harvard Business Review Analytic Services.[7] Gartner warns that 63% of organisations either lack, or are unsure whether they have, the right data management practices for AI, and predicts that organisations will abandon 60% of AI projects through 2026 due to insufficient data quality.[8] Meanwhile, Data Society’s 2025 AI Readiness Report found that 65% of leaders don’t know when or where to apply AI, and 52% lack a foundational understanding of how AI actually works.[9]


The organisational readiness picture is just as stark. Deloitte’s 2026 survey of 3,235 enterprise leaders found that only 34% of organisations are using AI to deeply transform their businesses by creating new products and services or reinventing core processes.[10] Another third are redesigning key processes, while 37% remain at surface-level optimisation. And McKinsey’s research shows that only 21% of companies have actually redesigned workflows to integrate AI effectively.[11]

This is the gap outcome-based contracts are supposed to bridge—but most organisations aren’t ready to cross it.


When you sign a contract with teeth, both parties commit. In outcome-based contracting, mutual accountability means buyer and vendor share responsibility for creating the conditions that lead to success, from setting clear expectations to tracking outcomes and establishing remedies when challenges arise. That means defining success with specificity before the contract is signed. It means an honest assessment of whether your data, processes, and culture can support what you’re promising. It means treating the vendor’s embedded team as a genuine partner rather than a delivery resource to be managed at arm’s length.


Gartner predicts that by 2027, the cost-to-value gap in process-centric service contracts will be reduced by at least 50% due to agentic AI—but only when contracts evolve from rigid service-level agreements based on work hours into genuine outcome-based models.[12] The vendors making this shift aren’t just changing pricing structures. They’re embedding engineers for months, not weeks, to understand the operational reality of each client—not just the technical architecture, but the workarounds, the politics, the resistance.[13] They have to build for the organisation as it exists, not as the slide deck imagined it.


That work is hard and expensive, which is why most vendors can’t genuinely offer outcome guarantees yet. Zendesk’s per-resolution pricing of $1.50 per AI-resolved ticket and ServiceNow’s efficiency guarantees represent the direction, but adoption remains selective.[14] The market is still working out how to measure outcomes fairly, how to contract for them, and how to account for factors outside the AI’s control.
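Per-resolution pricing makes the arithmetic of risk transfer concrete: the vendor’s revenue rises and falls with what the AI actually resolves. A toy comparison, using Zendesk’s published $1.50 per-resolution figure but an invented seat count and seat price:

```python
# Toy comparison of seat-based vs. per-resolution pricing.
# $1.50 per AI-resolved ticket is Zendesk's published rate;
# the seat count and seat price below are assumptions for illustration.

SEATS = 40
SEAT_PRICE_PER_MONTH = 115.0   # assumed list price, illustrative only
PER_RESOLUTION_FEE = 1.50      # per-AI-resolution rate

def monthly_cost_seat_based() -> float:
    """Traditional licensing: cost is fixed regardless of outcomes."""
    return SEATS * SEAT_PRICE_PER_MONTH

def monthly_cost_outcome_based(ai_resolved_tickets: int) -> float:
    """Outcome-based: the vendor is paid only for tickets the AI resolves."""
    return ai_resolved_tickets * PER_RESOLUTION_FEE

for volume in (1_000, 3_000, 5_000):
    print(f"{volume} resolutions: seat ${monthly_cost_seat_based():,.0f} "
          f"vs outcome ${monthly_cost_outcome_based(volume):,.0f}")
```

At low resolution volumes the outcome model is cheaper for the buyer; at high volumes the vendor is rewarded for deployments that actually get used, which is exactly the incentive the shift is meant to create.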


The uncomfortable implication for buyers: outcome guarantees are only valuable if you’re willing to be accountable to the other side of them.


Most organisations aren’t. In 2025, global enterprises invested $684 billion in AI initiatives; by year-end, more than $547 billion of that investment, over 80%, had failed to deliver its intended business value.[15] The failures weren’t primarily technical. MIT’s AI Incident Database analysis found that the biggest AI failures of 2025 were organisational: weak controls, unclear ownership, and misplaced trust.[16] MIT researchers reviewed more than 300 publicly disclosed AI implementations in 2025 and found that most had yet to deliver measurable profit-and-loss impact, with just 5% generating millions in value. Projects stall in proof-of-concept with no clear accountability, economic model, or scaling plan. Data problems range from harmful bias to broken pipelines.


Signing an outcome-based contract when you’re not ready doesn’t protect you—it just makes the failure contractual.


So what should organisations actually look for before signing one of these?


Start with the vendor’s deployment track record, not capability demonstrations. Anyone can demo a working system. Far fewer can show you deployments where people adopted the solution, workflows changed, and business impact remained measurable six months later. Ask for those stories specifically. If they can’t provide them, that tells you what you’re actually buying.


Then examine how they staff for outcome guarantees. A vendor who commits to outcomes but plans to hand you documentation and a support email isn’t serious about the guarantee—they’re serious about the sale. The vendors who mean it embed people with you. Not project managers checking in weekly. Engineers and strategists who sit in your environment long enough to understand it properly. Ask who specifically will be onsite, for how long, and what happens when things don’t go to plan.


Watch how they talk about your organisation’s readiness. A vendor confident enough to guarantee outcomes should also be confident enough to tell you when you’re not ready for them. If they’re willing to sign anything without asking hard questions about your data quality, your change appetite, or how clearly you’ve defined success—that’s not confidence. It’s the old sales script with a new clause.


And critically, before any contract conversation, do the internal work. Outcome-based contracting requires you to identify the specific users and processes the solution will serve, define the desired outcomes, establish metrics to measure progress, and create a mutual success plan covering training requirements, usage commitments, and a regular monitoring cadence. Get specific about what you’re trying to achieve and what you’d need to see in twelve months to call this a success. And be honest about adoption: slow uptake and incorrect usage will diminish the benefit no matter what the contract guarantees, and under an outcome-based model they hit the vendor’s earnings too.


If you can’t answer that clearly, if 65% of your leadership doesn’t know where to apply AI, if your data, like that of 93% of enterprises, isn’t fully ready, if you haven’t redesigned a single workflow to accommodate new systems, then no outcome guarantee will save you. It’ll just give the post-mortem something concrete to point at.


The shift to outcome-based contracting is real, and it’s the right direction. But the buyers who will benefit from it are the ones who treat it as a forcing function for their own organisational readiness—not a mechanism to transfer all the risk to the vendor while keeping the ambiguity for themselves.


Because when the contract has teeth, everyone gets honest. And honesty, in enterprise AI, is still rarer than it should be.



References:

[1]: MIT NANDA, “The GenAI Divide: State of AI in Business 2025,” reported in “AI Readiness & Implementation Guide 2026,” Svitla Systems, November 2025.

[2]: “The 7 Agentic AI Trends Shaping Enterprise Supply Chains in 2026,” PRNewswire, February 3, 2026.

[3]: Gartner forecast reported in “AI and the SaaS industry in 2026,” BetterCloud, January 21, 2026; “The 2026 Guide to SaaS, AI, and Agentic Pricing Models,” Monetizely, January 1, 2026.

[4]: McKinsey reported in “2026 Consulting’s AI Revolution Update: Billions Spent, But the Old Pyramid Persists,” Future of Consulting, January 25, 2026.

[5]: IDC research in partnership with Lenovo, reported in “88% of AI pilots fail to reach production — but that’s not all on IT,” CIO, March 25, 2025.

[6]: S&P Global, reported in “What Changed in Q4 2025 and Why Enterprises are afraid of 2026–2027,” Medium, December 22, 2025.

[7]: “Only 7% of Enterprises Say Their Data Is Completely Ready for AI,” Cloudera and Harvard Business Review Analytic Services, March 5, 2026.

[8]: Gartner, “Lack of AI-Ready Data Puts AI Projects at Risk,” February 26, 2025; “Enterprise AI Procurement In 2026: The Shift From Pilot Experiments To Outcome Driven Buying,” AI Spectrum India, 2026.

[9]: “2025 AI Readiness Report: Key Insights to Build Your 2026 AI Strategy,” Data Society, November 19, 2025.

[10]: “The State of AI in the Enterprise - 2026 AI report,” Deloitte, survey of 3,235 leaders conducted August-September 2025.

[11]: McKinsey data reported in “Data Management Trends in 2026: Moving Beyond Awareness to Action,” Dataversity, February 2026.

[12]: Gartner prediction reported in “Gartner Top 10 Predictions for 2026: Enterprise AI Trends,” Thoughtminds AI, 2026.

[13]: “AI and Enterprise Technology Predictions from Industry Experts for 2026,” Solutions Review, January 2026.

[14]: Zendesk and ServiceNow examples from “AI and the SaaS industry in 2026,” BetterCloud, January 21, 2026.

[15]: “AI Project Failure Statistics 2026: The Complete Picture,” Pertama Partners, February 2026; synthesizing data from RAND Corporation, MIT Sloan, McKinsey, Deloitte, and Gartner.

[16]: “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” ISACA Now Blog, December 15, 2025, citing MIT AI Incident Database analysis.
