Mohamad Omran · 5 min read

Why the second AI pilot stalls

The first AI pilot earns applause. The second one almost always stalls - and the reason is rarely the model. It's the operating model around it.

The first AI pilot in an organisation almost always succeeds. Someone sponsors it, someone champions it, the demo lands well, and a slide goes up the chain explaining what the technology can do.

The second pilot is where it gets interesting. The novelty has worn off. The C-suite has seen the demo. Now the question is whether AI is going to be a standing capability - or a recurring novelty act with a six-figure invoice attached.

In the engagements we've seen across the GCC over the past two years, the second pilot stalls in roughly three ways. None of them are technical.

The first stall: the team that runs it doesn't exist yet

The first pilot ran on borrowed energy - a curious analyst, a sympathetic engineer, an executive sponsor who cleared the calendar. That's not a team. It's a coalition.

The second pilot needs a team with a name on the org chart, a mandate, and a budget that survives a quarterly review. Most organisations don't build that team until after the third pilot has already fallen apart. By then the executive sponsor has lost patience and the curious analyst has taken a job somewhere else.

If you're scoping a second pilot, the question isn't "what should we automate next?" The question is "who runs this six months from now?" If you can't answer it, you're not ready.

The second stall: the rest of the business hasn't caught up

The first pilot ran inside a bubble. The team that built it understood what it did, what it didn't do, and where the sharp edges were. Everyone else got the demo and the executive summary.

The second pilot has to live in the actual business. Now you need:

  • A way to explain the system's outputs to people who didn't build it
  • A pattern for when humans should override the model and when they shouldn't
  • A definition of "wrong" that operations can use without an engineer in the room
  • Training that gives people enough literacy to disagree with the model intelligently

This isn't an AI problem. It's a soft-skills problem dressed in a technical jacket. Most organisations underbudget it by an order of magnitude, then express genuine surprise when adoption flatlines.

The third stall: the strategy never updated

The first pilot was a proof. The second pilot is supposed to be a commitment. And commitments that aren't reflected in the strategy don't get the institutional cover they need to survive contact with the budget cycle.

We've seen organisations run a successful pilot, declare it a strategic priority on a town hall slide, and then submit a five-year plan that doesn't reference it once. When the plan and the practice diverge, the practice loses. Always.

If your AI work isn't visible in your strategic plan - with named outcomes, owners, and a number - it isn't strategic. It's an experiment. That's fine, but you should call it that and budget it accordingly.

What the successful second pilots had in common

The teams that got to a third, fourth, and fifth pilot did three things differently:

  1. They named a small permanent team before they ran the second pilot. Two or three people, half-time minimum, with a clear mandate. The mandate did not say "explore AI." It said "deliver one production-grade automation per quarter." Constrained mandates outlast curious ones.

  2. They invested in literacy before they invested in capacity. Before training a single new model, they ran short workshops with the operators who would live with the outputs - covering what models are confident about, what they hallucinate about, and how to disagree without breaking the workflow. Six hours of workshop saved six months of slow rollback.

  3. They wrote the second pilot into the strategy, not just the budget. The budget tells you what something costs. The strategy tells you what you're willing to defend when the cost is questioned. The successful pilots had both.

There's a pattern here, and it's the one we keep finding: the technical part is increasingly the easy part. The hard part is the operating model around it - the people, the literacy, the strategic cover. That's the work that doesn't get demoed at the steering committee, and it's the work that decides whether the next pilot lands or stalls.

If the second pilot is what you're working on right now, we'd love to compare notes.

  • Mohamad Omran, Founding Partner