
Read this article on our Substack - The Impossible Brief


“Nearly a third of workers admit to sabotaging their company’s AI strategy.” That was the headline of a short article in Fast Company earlier this week, drawn from a survey of 2,400 employees and C-suite leaders by the AI company Writer. The behaviours logged: refusing training, ignoring guidelines, feeding sensitive data to unapproved tools, tampering with metrics to make AI look less effective than it is.


Read past the headline, though, and the more significant number isn’t 29%. It’s 75% — three quarters of executives in the same survey admitting that their company’s AI strategy is “more for show” (i.e. PR and investor relations) than actual internal guidance.


What this data points to is something more structurally awkward: both sides of the organisation are struggling with the same problem and talking about it in ways that don’t reach each other. Executives are under pressure to demonstrate strategic control over a technology that is genuinely difficult to evaluate — 61% of C-suite leaders in the survey fear losing their jobs if they fail to lead the transition well. The people they lead are being asked to adopt tools whose purpose, within their specific work, hasn’t been clearly articulated to them. And the primary metric being offered to both groups — productivity, efficiency, time saved — doesn’t capture what either side is actually trying to navigate. It measures output. It says nothing about whether the organisation is building something worth building, or whether the people inside it are developing in ways that will hold up as the technology continues to shift. The gap between what’s being measured and what’s at stake is where most of the tension lives, and it isn’t resolved by reporting usage numbers upward.


In our work with organisations in regulated industries, the kind that take compliance seriously, run on distributed teams, and have seen enough technology rollouts to be appropriately cautious about them, we see something different from the picture the report describes, though not without its own complications.


The organisations making meaningful progress are designing around people first and tools second. That sounds straightforward, but in practice it requires a deliberate act of restraint in environments that are under pressure to show results. For example, one client built a tool that lets any member of staff bring an idea to a working prototype within minutes. The design intent wasn’t to accelerate output — it was to shift who gets to build things, and to give people a form of agency they didn’t previously have in the organisation.


None of this is without friction. These are corporate environments with governance requirements, budget cycles, and competing priorities. The tools that get built have to survive procurement, security review, and the scrutiny of teams who have seen well-funded initiatives fail before. What keeps them alive is that the people they were designed for can feel that they were designed for them — not to make them more measurable, but to make their work more possible. That distinction matters more than it might initially appear. When adoption is built on usefulness rather than mandate, the relationship between the person and the tool is different. Resistance looks different too — it tends to be specific and informative rather than generalised and defensive.


The harder problem, consistently, is what happens once something is working. Translating ground-level progress into language that travels upward — demonstrating value to the people who control budgets, in terms they can act on — is where many of these efforts stall. The work is ongoing, embedded, and genuinely difficult to reduce to a metric that fits a board presentation. The organisations that navigate this well are building the measurement frameworks alongside the tools, not retrofitting them afterward. It remains, for most, the least resolved part of the challenge.


The executives whose strategies are “more for show” are not, for the most part, acting in bad faith. They are navigating a specific kind of pressure that produces a specific kind of behaviour. Leading an AI transformation when the technology is moving faster than your organisation’s ability to evaluate it — when the tools you’re being asked to champion are genuinely uncertain in their implications, when strategic clarity is scarce even amongst people who do this full time — is an uncomfortable position. Under that pressure, organisations reach for what is legible. “Are our people using the tools?” is a question you can answer in a board deck. “Are we building the right capabilities, in the right sequence, in a way that will hold up as the technology develops?” is much harder to answer — and harder still to present as evidence that leadership has a grip on things. So adoption becomes a proxy for strategy, because adoption is countable.


The 29% of workers pushing back — and the 44% of Gen Z who report some form of resistance — are often reading this dynamic accurately. They can see that the urgency is driven by anxiety rather than organisational vision, and that an adoption mandate measures compliance rather than capability. Where there is resistance, there is frequently specific knowledge about where the tool fits the actual work and where it doesn’t. That knowledge is exactly what organisations need to be curious about.


What AI transformation asks of leaders, more than anything else, is a tolerance for visible uncertainty — and an understanding of how much that is to ask when someone’s job is contingent on the outcome.


The organisations navigating this honestly are the ones able to say — internally, and sometimes externally — that they are still working out what this means for them. That they are building from what they observe rather than from what they’ve announced. That the people closest to the work are a source of information rather than a compliance problem. And that putting humans at the centre of the design isn’t a values statement — it’s the practical condition for building something that lasts beyond the first rollout.


The question most organisations are asking is whether their people are using AI. The more useful question is whether they’ve built the conditions in which their people can tell them honestly what they’re finding. That distinction is harder to report upward. It tends to determine everything else.
