I spent twenty-four years at Cisco, working my way from Product Manager to Senior Director running a £430M EMEA business. Before and after Cisco, I led business transformation at organisations from Fortune 100 scale to fifty-person startups, across technology, cybersecurity, media, and the public sector. One phrase Cisco's CEO used constantly was "eat your own dog food." Use what you sell. Live what you advise. After that long, it stops being a principle and starts being instinct.

So when I decided to build an AI transformation consultancy — inspired by the conviction that AI's real potential lies in how organisations work, not just what tools they buy — the idea of not using AI to build and run it felt like a contradiction I couldn't live with. The question wasn't whether. It was how far to take it.

I'd built a consulting business once before, a few years ago, pre-AI. That experience gave me useful perspective. I remembered what it took to get a new venture off the ground when every deliverable, every email, every piece of positioning had to come from me or from people I was paying. The contrast with what's possible now is striking, and I'll come back to that.

Two years of deepening use — and hitting the ceiling

I've been using ChatGPT since it launched publicly. By the end of 2025, I was in the top 0.1% of users — thousands of conversations across a wide range of work. But the volume isn't the interesting part. What matters is how the way I worked with it kept evolving.

At Verimatrix, I was appointed by the CEO to lead enterprise AI transformation across a nine-country SaaS organisation. I built a board-approved Responsible AI governance framework, stood up an AI Steering Group across engineering, sales, support, and compliance, and started embedding AI into how the company actually operated — not as experiments, but as governed capability. I was building AI assistants for sales enablement, competitive intelligence, compliance workflows, and customer support using a mix of ChatGPT Projects, Custom GPTs, Microsoft Copilot Studio, and Claude on AWS Bedrock.

In parallel, as Cabinet Member for Digital Services at Cotswold District Council — a role equivalent to a non-executive director, with democratic accountability — I co-authored the council's AI Policy and Strategy, oversaw a Microsoft Copilot rollout to around 250 officers and members, and started building AI assistants for planning validation, resident advisory services, and climate policy.

I was also coaching cabinet members from councils across the UK through the UK100 Climate Leadership Academy, and mentoring on the public sector Stepping Up programme. I noticed the same thing in every conversation: AI had moved from a peripheral curiosity to a central question for organisations of every size.

Through all of this, my own use of AI was deepening. I'd moved well past the early stage of drafting emails and researching topics. I was building structured assistants with persistent context, creating reusable frameworks, designing systems I could run repeatedly. AI had become a genuine thinking partner. But each assistant sat in its own silo. Each conversation started without the context of the last. I was getting real value, but I could feel the ceiling.

Designing the workforce before the tools

When I started designing Human–AI Systems, the question sharpened. Could the business be built from day one with AI not as a productivity tool, but as part of the workforce? AI filling roles that would normally require people — not a philosophical claim about replacement, but AI occupying functional positions in an organisation that didn't yet have anyone in them.

I'd developed frameworks for how organisations should adopt AI: a capability progression from Tool to Assistant to Worker, and a methodology I call Radar, Pilot, Scale. These weren't just things I planned to sell to clients. They became the design principles for the business itself.

I was already pushing towards worker-level AI with my Custom GPTs and Projects. They gave me something close — persistent context, memory across conversations, specialised behaviour. But there were limits. Each one operated in isolation. I couldn't connect them into a coherent operating model — a shared workspace where every role draws from the same organisational context and builds on what the others have done.

The timing was fortunate. A new generation of AI platforms was arriving — tools that could maintain a persistent workspace, execute code, integrate with external services, and operate across interconnected workflows. I evaluated several, including Claude Cowork and ClawBot. I chose Claude Cowork, partly because the integrated workspace matched the operating model I was designing, and partly because it's the kind of enterprise platform I'd expect clients to adopt. I kept ChatGPT for what it does well — I still use it daily for personal work, council responsibilities, and community energy projects. Different platforms for different operating contexts. That's not brand loyalty. That's the same fit-for-purpose thinking I'd advise any organisation to apply.

Building it

I designed the business the way you'd design any organisation: roles first, then recruitment. A Chief Marketing Officer, a Chief Revenue Officer, a Chief Operating Officer, a CFO, a CTO. A delivery team for client engagements. A storyteller for the narrative library. A business coach. Each role had defined responsibilities, quality standards, and access to the organisational context they needed.

I named them after computing pioneers. Not to pretend they're people — they're not — but because I've found it's a surprisingly effective mental model for applying good leadership practice. Clear roles and responsibilities. Defined competencies that get developed over time. Context, guidelines, and goals that shape behaviour. Critical work gets checked. Feedback improves performance. The parallels with managing a real team are closer than you'd expect.

Grace Hopper runs operations and delivers my morning briefing. Ada Lovelace handles marketing and content. Vint Cerf manages the pipeline. Blaise Pascal does the invoicing. The website — fifteen pages, designed and deployed within a week — was built through Linus Torvalds, without a developer or agency.
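The roles-first mental model described above can be sketched as a simple data structure. This is a hypothetical illustration only; the class, field names, and values are mine, not the actual configuration behind these roles.

```python
from dataclasses import dataclass, field


@dataclass
class AIRole:
    """One AI worker role: who it is, what it owns, how its output is judged."""
    name: str                      # the pioneer the role is named after
    function: str                  # the position it fills in the organisation
    responsibilities: list[str]    # defined scope, as with any hire
    quality_checks: list[str]      # standards every output must pass
    context: list[str] = field(default_factory=list)  # shared organisational context it draws on


# Illustrative definition in the spirit of the roles described above
grace = AIRole(
    name="Grace Hopper",
    function="Operations",
    responsibilities=["morning briefing", "action tracking"],
    quality_checks=["no generic consulting language", "reflects real experience"],
    context=["brand voice guidelines", "service architecture"],
)

print(grace.function)  # → Operations
```

The point of the structure, not the syntax: each role carries its own scope and its own quality bar, and all roles draw on the same shared context rather than operating in silos.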

I used AI to research the market, stress-test the commercial model, develop the service architecture, and create the brand identity and voice guidelines that keep every output consistent. Compare that to my previous consultancy, where each of those steps took weeks of my own time or a paid specialist. The difference isn't incremental. It's structural.

And I found I didn't need the usual stack of SaaS subscriptions. The AI handles CRM, action tracking, pipeline management, content planning, financial tracking, and website maintenance. Things that would normally require five or six separate platforms, each with its own cost and learning curve, are part of the operating model.

The build was fast but not smooth. Early versions of the AI workers needed debugging. Context has limits. Outputs defaulted to generic consulting language until I built quality controls into every role. The AI team needed managing — setting expectations, correcting course, refining how things worked — in ways that felt surprisingly familiar. The imperfections were part of the learning, and they taught me things about AI adoption that I wouldn't have understood from advising on it.

Early client engagements were the real test. Discovery preparation, structured meeting notes, follow-up communications, and invoicing — all managed through the AI workforce. Not perfectly. But it worked, and it improved with each iteration.

What it actually feels like

My day starts with a briefing — what's outstanding, what's due, what happened yesterday. I'll review the pipeline, check content, sometimes have a story drafted for a prospect meeting or an article outlined for LinkedIn. If there's an invoice to raise, it gets handled.

But I want to be honest. This isn't autonomous. The AI generates first drafts and I provide direction. Some outputs need two or three rounds. Some miss the mark and I redirect from scratch. The constant work is quality control — making sure what goes out reflects real experience and sharp thinking, not fluent-sounding generality. Human judgement isn't optional. It's the thing that makes the system produce work worth using.

There's an observation I keep coming back to. I don't prompt — I brief. That distinction sounds small but it captures the entire shift. Prompting is commanding a tool. Briefing is directing a colleague. The difference between those two things is the difference between using AI and operating with AI.

I notice it most clearly because I still use ChatGPT every day for other work. It's a capable, valued thinking partner. Claude Cowork feels different — I give it a goal, and it works through multiple steps to get there, using tools and integrations, building and maintaining the business infrastructure. The daily business brief arrives without me asking. Market research gets flagged. LinkedIn posts are suggested. My business coach checks in. Both platforms are useful. They feel qualitatively different to work with.

What I've learned that I couldn't have learned from advising

The technology wasn't the hard part. Designing how the work should flow — which roles, which responsibilities, what quality standards, where human oversight sits — that was the real work. The same work, it turns out, that I help client organisations do.

AI doesn't replace judgement. It changes what you spend your judgement on. I spend less time drafting and more time directing. Less time on administration and more time deciding whether the output is good enough. That shift is more significant than it sounds.

The progression from using AI as a tool, to working with it as an assistant, to operating it as part of the workforce happened over two years. But the jump from assistant to worker didn't happen gradually. It required a deliberate design decision — thinking about the organisation first and the technology second. Better tools alone wouldn't have got me there.

That's what most organisations are missing. They're experimenting with AI. Many are getting real value. But very few have redesigned how work actually happens to make AI part of the operating model. That's where the real shift is, and it's not a technology problem.

I built this practice the way I believe organisations should adopt AI. Not by buying better tools, but by redesigning how the work gets done.