What an operating model is
An operating model is the set of structural decisions about how your organisation actually works. Who is accountable for what? Who has authority to make decisions? How does information flow between parts of the organisation? What metrics drive behaviour? It's how you're actually organised — not how the org chart says you're organised.
An operating model is rarely written down. It lives in practice. It's the result of years of habit, decisions, power structures, and cultural norms. When a new technology arrives, most organisations try to run it inside the existing operating model. That works for some technologies. It doesn't work for AI.
Why AI changes the operating model
AI shifts when decisions need to be made. A task that took three weeks now takes an hour. That seems good. But it means the decision point moves. The person who previously had weeks to think about a decision now has an hour. Or the decision used to happen at the end of a three-week process; now it happens at the beginning. That changes who needs to be involved.
AI changes what information is needed before a decision. A process that used to require three weeks of back-and-forth with stakeholders to gather information now needs all the information upfront so the AI can act on it. That changes information flows.
AI creates new role boundaries. "This thing used to take three weeks and a team of five. Now it takes an hour and one person. But that one person needs visibility into information they never saw before, carries responsibilities they didn't have before, and needs to understand what the AI is doing well enough to catch problems." Those aren't technology questions. They're operating model questions.
What usually goes wrong
Organisations add AI tools to an operating model designed for human decision-making. They don't change the approval chains. They don't change the metrics. They don't change the frequency of review. So people use the AI tool inside a workflow that was designed to route things through hierarchies and gate-keep decisions.
The result is predictable. The AI accelerates work up to a bottleneck, and the person at that bottleneck becomes the constraint. If the bottleneck was a senior person before (because decisions took time and required experience), it gets worse, because everything upstream is now moving faster. People lose faith in the tool. "This accelerated our work to the point where it made things worse." They're right.
A second failure mode is the opposite: decentralising decisions too far, too fast. Some organisations push decision-making down to front-line people without building the oversight structure or the information infrastructure to support it. You give a junior person access to an AI that makes important decisions, but there's no monitoring in place, no feedback loop, no clear escalation path. Then someone makes a bad decision, and the organisation either pulls everything back or builds governance after the fact (which usually means rebuilding the workflow).
What effective operating models have in common
Clear role definition for who operates AI and when. Not "everyone can use it if they want." Specific roles have responsibility for deploying AI in specific contexts. That doesn't mean it's centralised. It means it's clear. If you're a department head and you want to deploy AI in your team's workflow, here's the charter, here's what you're responsible for, here's the oversight structure.
Faster feedback loops. Because when AI is involved, things go wrong faster. If a human took a week to process a claim and made an error, you might not find out for weeks, but the damage also accrued at a weekly pace. If an AI processes claims in an hour and makes the same error, the mistake can repeat every hour until someone notices. So the feedback loop needs to be tighter. You need monitoring. You need a process for surfacing problems quickly.
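To make "tighter feedback loop" concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Decision record, the four-hour window, and the 5% threshold are invented for illustration, not a recommended configuration. The point is the shape: outcomes get checked on a rolling window, at the pace the AI works, not the pace of the review meeting.

```python
# A minimal sketch of a tighter feedback loop (all names and thresholds
# are illustrative). Decisions are checked on a rolling window so that
# a problem surfaces within hours rather than at the next weekly review.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Decision:
    made_at: datetime
    flagged_as_error: bool  # set later by spot-checks, complaints, or downstream checks

def error_rate(decisions: list[Decision], window: timedelta, now: datetime) -> float:
    """Share of decisions inside the window that were flagged as errors."""
    recent = [d for d in decisions if now - d.made_at <= window]
    return sum(d.flagged_as_error for d in recent) / len(recent) if recent else 0.0

def check_feedback_loop(decisions: list[Decision], threshold: float = 0.05) -> None:
    """Run hourly. Surfaces problems at the pace the AI works."""
    rate = error_rate(decisions, window=timedelta(hours=4), now=datetime.now())
    if rate > threshold:
        print(f"ALERT: error rate {rate:.1%} over the last 4h exceeds {threshold:.0%}")
```

The mechanism can be this simple. What matters is that someone owns the alert and there is a pre-agreed response when it fires.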
Distributed decision rights. If every AI decision has to escalate to a board, scaling is impossible. Effective operating models distribute decision rights — but with clarity about how far they can go and what happens if they drift. A department can deploy AI in their process, but if it touches a certain threshold of customers or financial value, it escalates for review.
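As a sketch of what pre-agreed thresholds could look like, here is a hypothetical charter check in Python. The field names and limits are invented; each organisation would set its own. The shape is what matters: act freely inside the boundary, escalate outside it.

```python
# A minimal sketch of distributed decision rights with explicit escalation
# thresholds. The fields and limits are illustrative, not a real policy.
from dataclasses import dataclass

@dataclass
class DecisionCharter:
    max_customers_affected: int   # above this, escalate for review
    max_financial_value: float    # above this, escalate for review

@dataclass
class ProposedAction:
    customers_affected: int
    financial_value: float

def within_charter(action: ProposedAction, charter: DecisionCharter) -> bool:
    """True: the department acts on its own authority. False: escalate."""
    return (action.customers_affected <= charter.max_customers_affected
            and action.financial_value <= charter.max_financial_value)

# Usage: a department's AI acts autonomously within its charter.
claims_charter = DecisionCharter(max_customers_affected=100, max_financial_value=10_000.0)
action = ProposedAction(customers_affected=250, financial_value=4_000.0)
print("act" if within_charter(action, claims_charter) else "escalate")  # -> escalate
```

Writing the boundary down this explicitly is the point: the department knows exactly how far its authority runs, and escalation is a rule agreed in advance rather than an argument after the fact.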
Transparency about what AI is doing, so stakeholders can see it. Not because they need to approve every decision, but because they need to understand what's happening and trust that it's being done well. Transparency isn't the opposite of speed. It's the foundation of trust at scale.
The design question
Organisations that get this right ask: "If this decision is made faster, what else has to change so that the outcome is still right?" Not "how do we make this faster with AI," but "if we make this faster, what are the second and third order consequences, and are we ready for them?"
Faster decisions might mean you need real-time monitoring instead of daily reviews. That means building dashboards. Faster decisions might mean you need escalation pathways that are clear and pre-agreed. That means mapping where decisions go wrong and where they have to be reviewed. Faster decisions might mean you need to change the skill profile of the role — less time on data collection and processing, more time on judgement and oversight. That might mean retraining people, or hiring different people.
The organisations that scale AI successfully don't just deploy the AI. They redesign the operating model to support it. That's what takes time. That's what's hard. That's also what actually works.
The leadership skill
This is about adaptive capacity: the ability to change how you operate as circumstances change, not just the ability to deploy new tools. Most organisations can deploy new tools. Few can keep adapting, because sustaining it requires that senior people give up power, or accept less direct visibility into decisions, or accept that the chain of command is no longer the only chain of information.
Those are hard things. If decisions used to flow up the hierarchy and you were the person at the top who saw them all, and now decisions are distributed across the organisation with different people making them in different parts of the system, you have to get comfortable with that. You have to trust the oversight structures you've built. You have to be able to spot-check without micromanaging. You have to learn to lead by principle rather than by visibility.
That's the actual hard part of AI adoption. Not the technology. Not even the workflows. The hard part is changing how human beings relate to authority, decision-making, and accountability. That's an operating model question. And it's why AI adoption is usually harder than AI implementation.