In early 2024, I co-authored an AI policy for Cotswold District Council and led a rollout of Copilot to over 250 officers and members. The project was successful by most measures: adoption was strong, productivity gains were visible, and the council had a governance framework in place. But the shape of that success taught me things that don't show up in typical tech rollouts. This is what happened, and what it meant.

Policy first, tools second

We started with policy. Not with procurement. Not with a pilot group testing the tool. We started with the question: what is AI for in a council context? What kinds of decisions can it inform? Where must it not be used? What does accountability look like?

We wrote those answers before we rolled out the tool. It took six weeks with the right people in the room — legal, information security, operations, service heads, someone who actually does the frontline work. The output was a document that ran to twenty pages, not two hundred. It was clarity about principles, not a compliance checklist.

Most organisations do it backwards. They roll out the tool and write policy afterwards. The advantage of our approach was that people knew, before they started using Copilot, what they were supposed to use it for and what they weren't. The governance wasn't retrofitted. It was baked in from the start.

The tool doesn't matter as much as the discipline. We succeeded not because Copilot is magic — it's good, but it's not magic. We succeeded because someone made governance a first-class concern and someone made it clear that this was a change programme, not a software deployment.

The adoption curve is steep, then flat

In the first month, there was hunger. People heard about the tool. They wanted to use it. They found ways to integrate it into their work. They'd ask "can I use it for this?" The answer was usually yes. There was energy and genuine excitement about time saved and tasks made easier.

By month two, usage had stabilised. By month three, it had reached its plateau. The people who were going to adopt it had adopted it. The rest hadn't moved much. Some people use it daily. Some tried it once and didn't see the point. Most use it occasionally, when they remember it or when the task matches its capabilities.

This isn't a failure. It's reality. You're not going to get to 100% adoption. You'll get the people for whom the tool genuinely fits the work they actually do. That cohort is often smaller than organisations expect. We aimed for 40% regular usage. We got there. The other 60% were either using it occasionally or not at all. That's not disappointing. That's realistic.

Where the real adoption gaps are

Most people in a council do work that is heavily regulated, involves direct citizen contact, or requires local knowledge and judgement. Copilot is brilliant at generating drafts of written communication, summarising policy, or helping people think through a problem. It's less useful when the constraint is not "I don't have a first draft" but "I don't have the information I need" or "someone has to physically inspect this property" or "the decision is actually political and I need someone to sign up for it."

You can't automate your way out of those constraints. A planning officer can't use Copilot to make a site visit. A licensing officer can't use Copilot to talk to a business owner and understand their circumstances. A councillor can use Copilot to draft a response to a constituent, but the decision about what to do is theirs, and it's usually based on judgement and relationships that the tool can't help with.

Adoption is bounded by the nature of the work. In departments where the work is document-heavy and drafting time is the constraint, Copilot made a real difference. In departments where the constraint is information-gathering, access, or relationships, the difference was smaller. That's not a failure of the tool. That's the real shape of adoption.

Why the operating model matters

The biggest change wasn't in what people could do. It was in how work flowed. Some services started moving work around because they realised that a task that used to take a senior person an hour could be done by a junior person plus Copilot in fifteen minutes. They didn't just deploy a tool. They reorganised who did what.

A planning service started using Copilot to draft policy summaries that had previously been written by senior planners. That freed up the senior planners to spend more time on complex cases. A junior planner could now do the summary work in a quarter of the time. It sounds like we just displaced seniority. What actually happened was that we rearranged the work so that the expensive brain was doing expensive thinking and the less expensive person was doing routine work — plus a tool.

That requires cultural change. Senior people have to be comfortable that the work they do is moving. Junior people have to be trusted to use a tool responsibly. Managers have to think differently about supervision — not "is the junior person doing it the same way I would" but "is the junior person using the tool responsibly and catching when the output is wrong?" These are operating model changes. That's where the actual value lives.

What governance actually protects

We built into the governance framework a requirement that the information security team review any use of Copilot that involved personal data. There were moments where this slowed things down. Someone wanted to use Copilot to draft a response to a constituent, and we had to check whether the use case involved personal data and what the security implications were.

But it also meant that when someone wanted to use Copilot in a way that would expose citizen data — feeding a database of addresses and phone numbers into Copilot to generate something — the framework caught it. Governance wasn't a throttle. It was a firewall. It made a bad thing impossible rather than making a good thing slow.

Most governance conversations in the public sector feel like they're about risk and restriction. This one felt different because it was about permission: yes to these uses, no to those, and here's why. That clarity meant people could move quickly in the green zone and didn't waste energy trying to work around the red zone.

Why the moment matters

The timing of the policy and rollout was important. It was early enough that we weren't racing to catch up with a wave of unsanctioned use. A lot of councils already had people using Copilot on personal logins, through their own Microsoft accounts, with no governance. We started governance before we had that problem.

It was late enough that we had good evidence about what Copilot could actually do. By early 2024, the wildest expectations about AI had faded. People had seen it in action. They had a realistic sense of what it was good for.

And it was a moment when the council leadership understood that this wasn't a nice-to-have. It was going to be a fact of life. Councils are under resource pressure. If a tool can reduce routine work, someone is going to want it. Better to lead that with policy and governance than to let it happen ad hoc. That clarity at the top made everything else possible.

What I'd do differently

We got the policy and governance right. If I could change one thing, it would be training. We did basic training — how to use Copilot, what it's good for, what it's not. But the people who got real value out of it were the ones who figured out how to use it well. They discovered prompting techniques. They understood when to trust the output and when to check it. They built that into their work.

If we did it again, I'd build in intermediate training. Not "how to use the tool" but "how to use the tool for your specific job in a way that actually saves you time." Some people would get there on their own. Many wouldn't. That's a missed opportunity.

The closing thought

AI adoption in local government teaches something that's probably true everywhere: the tool matters less than the discipline. We didn't succeed because Copilot is magic. We succeeded because governance was a first-class concern from the start and because this was run as a change programme, not a software deployment. Those are mundane things. But they're what matters. The AI is useful, but it's not the constraint. It never has been.