By Paul Curtis, CTO & E-commerce Director, easyJet
Look, AI is exciting. The innovation happening right now is genuinely remarkable. But here's what I've learned at easyJet: actual AI success depends on the boring stuff. Data. Integration. The unglamorous foundation work that nobody wants to talk about at conferences.
Everyone's focused on innovation. And I get it. But integration is more important than innovation when it comes to AI. That's not a sexy message, but it's the truth.
Let me walk you through three things we've learned about making AI actually work.

Let me give you an example we see all the time.
You're in a meeting. Someone's pitching you this new AI-powered helpdesk solution. It's got all the bells and whistles. Machine learning, natural language processing, predictive analytics. The demo looks fantastic. Everyone in the room is getting excited about the possibilities.
But here's the question nobody's asking: we already run ServiceNow. How do we integrate a model that automates the triage of the tickets flowing into that system?
That's the right starting point. Not the shiny new thing. Not innovation for innovation's sake. Distilling it down to the business problem you're trying to solve, that's what matters.
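To make that integration-first framing concrete, here is a minimal sketch of automated ticket triage. Everything in it is illustrative, not easyJet's actual setup: a simple keyword classifier stands in for a real model, the routing rules are invented, and the ServiceNow write-back is only indicated in a comment (ServiceNow exposes tickets through its Table API, e.g. `PATCH /api/now/table/incident/{sys_id}`).

```python
# Illustrative triage sketch. A keyword lookup stands in for a real ML
# model; in production the chosen group would be written back to the
# ticket via the ServiceNow Table API (PATCH /api/now/table/incident/...).
# All names and rules here are hypothetical.

ROUTING_RULES = {
    "password": "identity-team",
    "vpn": "network-team",
    "invoice": "finance-team",
}

def triage(ticket: dict) -> str:
    """Pick an assignment group from the ticket's short description."""
    text = ticket["short_description"].lower()
    for keyword, group in ROUTING_RULES.items():
        if keyword in text:
            return group
    return "service-desk"  # default queue for anything unmatched

tickets = [
    {"sys_id": "a1", "short_description": "Password reset needed"},
    {"sys_id": "b2", "short_description": "VPN drops every hour"},
]
for t in tickets:
    print(t["sys_id"], triage(t))
```

The point is where the effort goes: the "AI" part is one replaceable function, while the durable work is the plumbing around your existing system of record.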
And avoiding the moonshot? That's another lesson we learned the hard way. We explored about 50 different potential AI solutions, each one promising to generate incremental tens of millions for the business. There was nothing wrong with the tools themselves. But we were starting from the wrong question, "what can this AI solution do?" instead of "what problem are we actually trying to solve?" Most of those initiatives went nowhere because we hadn't done the foundation work first.
The other key area, often dismissed as a bit dull, is data governance. And I'm going to be blunt about this: there's no point building anything, models, apps, AI solutions, anything, on bad data. Because you're not going to get the outcomes you want as a business. The AI solution you're building on top of an LLM is only going to be as good as the data you're feeding the model.
A lot of businesses have dived into AI, particularly in the customer space, and are finding that it's not delivering the outcomes they expected. The underlying data is still a problem. And that's the thing the AI is learning from.
Here's what we're seeing at easyJet. We use a number of different vendors across our ecosystem. The ones that are, from our perspective, the most capable, the ones with the best engineers, are claiming about a 5 to 10% efficiency gain through the use of Copilot and test automation. The less capable ones? They're stating 30, 40% gains.
There's this real mismatch between what people think they should be hearing and what's actually happening. Most CEOs and C-suite members think we're behind the curve because there's this assumption that everyone else is saving 20, 30, 40% in terms of efficiency gains. That's the expectation versus reality challenge. You only have to spend 10 minutes on LinkedIn to have a massive sense of FOMO that’s based on this fictitious reality.
And when you look at the whole engineering cycle, the focus is very much on the code generation side of things. But the actual writing of code is probably 20, 30% of the overall job. So even if AI is increasing the efficiency of engineers in that one area, there's quite a low ceiling in terms of the overall efficiency gains you can actually achieve.
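The ceiling argument is really just Amdahl's law applied to an engineer's week. A quick worked example with illustrative numbers (the 25% and 30% figures are assumptions for the sake of the arithmetic, not measurements):

```python
# Amdahl's-law style ceiling on end-to-end efficiency gains.
# Illustrative assumption: writing code is ~25% of an engineer's job,
# and an AI assistant makes that one slice 30% faster.

def overall_gain(fraction_of_job: float, gain_in_slice: float) -> float:
    """Fraction of total time saved when only one slice of the job speeds up."""
    return fraction_of_job * gain_in_slice

print(f"{overall_gain(0.25, 0.30):.1%}")  # prints 7.5%
```

So a headline-grabbing 30% boost in code generation translates to single-digit gains across the whole engineering cycle, which is exactly the mismatch between LinkedIn expectations and what capable teams actually report.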
So what is the relationship between MACH and AI?
A recent survey interviewed 100 senior people from large retailers in the UK and the US. Among those who had fully embraced MACH or were well along on their journey, 95% were very confident in their ability to implement AI initiatives, and 94% were already positioned to move far quicker on implementation.
That's not a coincidence.
Composable architecture is a natural fit for the agentic world we're moving into. But it's not just about enabling agent-to-agent communication; it's about how the architecture is structured to make that communication work reliably at scale.
Agent-to-agent communication needs exactly what MACH already provides: solid data at the core, all functionality approachable through APIs, and the flexibility to put it together in whatever way makes sense for the business.
This is how enterprises should be architecting for the agentic future. Not bolting AI onto monolithic systems and hoping for the best. Not creating isolated AI experiments that can't talk to anything else. The ability to have agent-to-agent communication is absolutely key in terms of how systems are defined. And MACH gives organizations that foundation. Not as something that needs to be retrofitted or rearchitected, but as something that's already there, built into how the systems work.
If you consider how long it takes large enterprises to deliver change in their ecosystem, you've got to start now. Not with the moonshots. Not with the shiny new AI tools that promise to transform everything overnight.
Start with integration. Start with data governance. Start with the composable architecture that makes AI actually work in practice, not just in demos.
Because when the agentic future arrives, and it's arriving faster than most people think, you'll either have the foundations in place to adapt quickly, or you'll be scrambling to rebuild everything from scratch.
The organizations that are winning at AI aren't the ones chasing innovation. They're the ones who did the unglamorous work of getting their data right and their architecture ready. That's where AI success actually comes from.