A Target Operating Model for AI

Organisations that do not define an AI TOM do not avoid having one; they inherit one by default, shaped by ad hoc adoption rather than deliberate design.

Introduction

AI often finds its way into an organisation gradually and in an unstructured way. One team starts using it for productivity, another experiments with customer workflows, someone else connects it to sensitive data, and before long, the business has real AI exposure without any shared design for how it should be governed, secured, or supported. What looked like fast progress turns into confusion: nobody is fully sure which tools are approved, who owns the risks, how outputs should be validated, what data can be used, or how incidents would be handled if something goes wrong. The result is scattered adoption, where value is inconsistent, risk is harder to see, and the organisation ends up trying to retrofit structure after AI is already embedded in day-to-day work.

Sound familiar? We have seen this pattern before with the rise of cloud platforms, SaaS services, and the dot-com boom. Each time, organisations moved quickly to capture the benefits, often before the right structures, controls, and ownership were in place, with security added later rather than designed in from the start. In some cases, that seemed manageable at the time, but AI raises the stakes: the pace is faster, the usage is more dispersed, and the risks can affect data, decisions, operations, and trust all at once.

This is the problem a Target Operating Model is there to solve: giving the organisation a clear structure for how AI is adopted, governed, secured, and supported before disorder becomes embedded.

A Practical Blueprint

A Target Operating Model, or TOM, is the practical blueprint for how a capability will work across an organisation. It defines how people, processes, technology, governance, and controls come together to deliver value consistently and sustainably. Its purpose is to turn intent into something operational: clear ownership, consistent decision-making, reliable delivery, and a controlled way to scale the capability over time.

Deploying solutions before defining the Target Operating Model usually means implementing technology before deciding how it will be owned, governed, supported, secured, and used. A TOM gives you the structure first: who is responsible, how processes will work, what controls are needed, how teams interact, and how the capability will operate at scale. Without that, solutions often end up fragmented, inconsistently used, harder to secure, and difficult to support, leaving the organisation to retrofit structure later at greater cost and with more risk.

In the context of AI, it is used to move beyond ambition and into execution, giving the business a clear structure for how AI will be introduced, governed, secured, supported, and managed over time. Without that, AI remains a set of disconnected initiatives rather than a capability the organisation can operate with confidence.

It’s All About The Layers

A good TOM usually works in layers. At the organisational level, it should stay high enough to define the overall model: principles, governance, roles, control points, decision rights, operating processes, and how value is delivered. But it also needs enough detail below that level to show how those things will actually be applied in practice.

This is where many organisations get stuck. They define the high-level model but stop short of translating it into the patterns, controls, and operational detail needed to make it real. The same is true of many consultancy-led TOMs: strong on structure and governance, but weaker on the implementation detail that determines whether the model can actually be delivered, secured, and supported.

That does not mean every application needs its own full TOM. More often, an organisation has an enterprise-level TOM, supported by more detailed patterns, standards, reference architectures, control requirements, and operating procedures that can be applied at the platform, product, or application level.

The Case For An AI Target Operating Model

An AI Target Operating Model is needed because AI does not just introduce a new technology choice; it changes how decisions are made, how data is used, how services are delivered, and how risk must be managed across the organisation.

Without a clear operating model, AI adoption often becomes fragmented, with different teams using tools, data, and processes inconsistently, creating security gaps, governance confusion, duplicated effort, and poor accountability. A good AI TOM provides the organisation with a practical structure for enabling, governing, implementing, securing, monitoring, and supporting AI, so that innovation can move forward in a controlled and sustainable way rather than becoming a collection of disconnected experiments.

When an organisation uses multiple AI models for different purposes, the TOM should provide a single operating structure at the top, with sufficient flexibility below to accommodate differences across use cases, models, platforms, and risk levels. This matters because different AI models can bring very different requirements around data sensitivity, explainability, integration, resilience, validation, and security controls. A general-purpose chatbot used for low-risk internal tasks does not need to be operated in exactly the same way as a model embedded into business workflows, connected to sensitive data, or given the ability to trigger downstream actions.

Without accounting for these differences, however, organisations often end up with fragmented governance, duplicated controls, inconsistent risk treatment, and a growing mix of AI services that is hard to manage as a whole.
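To make the idea of "a single structure at the top, flexibility below" concrete, here is a minimal sketch of risk-tiered control requirements. The tier names, criteria, and control lists are hypothetical examples, not a standard; a real TOM would define its own.

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers and the baseline controls each tier requires.
# One structure at the top; the control set varies with the tier.
TIER_CONTROLS = {
    "low": {"usage_policy", "output_review_guidance"},
    "medium": {"usage_policy", "output_review_guidance",
               "data_classification_check", "access_controls", "logging"},
    "high": {"usage_policy", "output_review_guidance",
             "data_classification_check", "access_controls", "logging",
             "human_in_the_loop", "model_validation", "incident_runbook"},
}

@dataclass
class AIUseCase:
    name: str
    handles_sensitive_data: bool
    triggers_actions: bool
    controls_in_place: set = field(default_factory=set)

def risk_tier(uc: AIUseCase) -> str:
    """Assign a tier using simple, illustrative criteria."""
    if uc.triggers_actions:
        return "high"
    if uc.handles_sensitive_data:
        return "medium"
    return "low"

def missing_controls(uc: AIUseCase) -> set:
    """Controls the tier requires that the use case has not yet implemented."""
    return TIER_CONTROLS[risk_tier(uc)] - uc.controls_in_place

chatbot = AIUseCase("internal chatbot", False, False, {"usage_policy"})
agent = AIUseCase("workflow agent", True, True, {"usage_policy", "logging"})
print(risk_tier(chatbot), missing_controls(chatbot))
print(risk_tier(agent), sorted(missing_controls(agent)))
```

The low-risk chatbot needs only lightweight guidance, while the agent that touches sensitive data and triggers actions inherits the full high-tier control set, exactly the distinction the text draws.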

Components of an AI Target Operating Model

An AI Target Operating Model should define the core components needed to enable AI in a controlled, usable, and sustainable way.

Strategy & Governance

This sets the direction for how AI will be used, governed, and overseen across the organisation.

What: Defines the organisation’s AI direction, objectives, governance structure, decision-making, accountability, approval routes, policy ownership, and oversight.

Why: To ensure AI is adopted with a clear purpose, aligned to business goals, and governed in a way that avoids fragmented decisions and unclear ownership.

How: By setting out the outcomes the organisation wants from AI, the principles that guide its use, the forums and roles that make decisions, and the governance mechanisms used to oversee adoption and control risk.

Risk & Compliance

This ensures AI is used in a way that meets legal, regulatory, ethical, and organisational requirements.

What: Defines how legal, regulatory, ethical, and organisational requirements are applied to AI use, alongside the support arrangements needed to sustain it.

Why: To ensure AI is used in a way that is compliant, defensible, and manageable in practice, rather than creating unmanaged exposure.

How: By embedding risk assessment, compliance requirements, service support, maintenance, troubleshooting, change management, and governance activities into the operating model.

People & Process

This defines who is responsible for AI and how it will be managed in practice.

What: Defines the roles, responsibilities, and operational processes needed to manage AI across its lifecycle.

Why: To ensure AI is not just introduced but properly owned, operated, supported, and managed over time.

How: By clarifying who owns AI at the enterprise level, who manages risk, who implements and supports solutions, and by defining the processes for identifying, assessing, approving, deploying, monitoring, reviewing, and retiring AI use cases.
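The lifecycle stages above can be sketched as a simple state machine. The allowed transitions here are an illustrative assumption; a real organisation would define its own stages and gates.

```python
# Illustrative AI use-case lifecycle, following the stages named in the text.
# Transition rules are hypothetical and organisation-specific.
LIFECYCLE = {
    "identified": {"assessed"},
    "assessed": {"approved", "retired"},   # rejected cases go straight to retired
    "approved": {"deployed"},
    "deployed": {"monitored"},
    "monitored": {"reviewed"},
    "reviewed": {"monitored", "retired"},  # review loops back or retires
    "retired": set(),
}

def advance(state: str, target: str) -> str:
    """Move a use case to the next stage, enforcing the lifecycle model."""
    if target not in LIFECYCLE[state]:
        raise ValueError(f"cannot move from {state!r} to {target!r}")
    return target

state = "identified"
for step in ["assessed", "approved", "deployed", "monitored", "reviewed"]:
    state = advance(state, step)
print(state)  # reviewed
```

The point of writing the process down this explicitly is that a use case cannot quietly skip assessment or approval: any transition the model does not allow is rejected.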

Technical & Security

This explains how AI will be built, integrated, secured, and supported within the wider technology environment.

What: Defines the technical architecture, platform design, integration patterns, and security controls needed to support AI safely and effectively.

Why: To ensure AI solutions can be implemented in a way that is secure, scalable, supportable, and aligned with the wider enterprise environment.

How: By setting out where models are hosted, how they connect to systems and data, how environments are separated, and which controls are required for identity, access, data protection, logging, monitoring, validation, guardrails, resilience, third-party risk, and incident response.
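One way the control requirements listed above become operational is as a pre-deployment gate. This is a minimal sketch under assumed control names; the checklist itself would come from the organisation's own TOM.

```python
# Hypothetical go-live gate: an AI solution must show evidence for every
# technical and security control the TOM requires before deployment.
REQUIRED_CONTROLS = [
    "identity_and_access", "data_protection", "logging", "monitoring",
    "output_validation", "guardrails", "resilience_plan",
    "third_party_risk_review", "incident_response_runbook",
]

def deployment_gate(evidence: dict) -> list:
    """Return the controls still missing evidence; an empty list means go."""
    return [c for c in REQUIRED_CONTROLS if not evidence.get(c)]

evidence = {c: True for c in REQUIRED_CONTROLS}
evidence["guardrails"] = False
print(deployment_gate(evidence))  # ['guardrails']
```

A gate like this is deliberately boring: its value is that the same checklist applies to every solution, so gaps surface before go-live rather than in an incident review.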

Measurement & Improvement

This shows how the organisation will track AI performance, control effectiveness, and maturity over time.

What: Defines how the organisation will measure AI value, performance, risk, control effectiveness, and maturity over time.

Why: To ensure AI remains effective, controlled, and able to improve as adoption grows.

How: By establishing metrics, review points, feedback loops, and continuous improvement mechanisms that track outcomes, monitor control effectiveness, and guide future refinement.
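As a rough illustration of such a review point, the sketch below compares a few metrics against targets over recent review periods. The metric names, targets, and figures are invented for the example.

```python
from statistics import mean

# Hypothetical targets for a handful of AI operating metrics.
TARGETS = {"control_pass_rate": 0.95,
           "incident_free_rate": 0.99,
           "adoption_rate": 0.50}

def review(history: dict) -> dict:
    """Compare each metric's recent average against its target."""
    return {m: {"average": round(mean(vals), 3),
                "meets_target": mean(vals) >= TARGETS[m]}
            for m, vals in history.items()}

# Invented figures for three review periods.
history = {"control_pass_rate": [0.97, 0.96, 0.90],
           "incident_free_rate": [1.0, 1.0, 0.99],
           "adoption_rate": [0.30, 0.42, 0.55]}
result = review(history)
print(result)
```

The output flags where a metric has drifted below target, which is the feedback loop the text describes: measurement feeding refinement rather than sitting in a dashboard.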

Final Thoughts

AI strategy matters, but an operating model only has value if it can survive contact with implementation. That is the real test. Not whether it looks coherent in a deck, but whether it can be built, run, governed, secured, and supported in the real world.

If an AI Target Operating Model stops at the high level and avoids the practical detail, it leaves the hardest questions unanswered. If nobody can make it real, it is not a target operating model. It is just ambition in a box.

The organisations that will get the most from AI will not be the ones with the most polished strategy slides. They will be the ones that put structure around ambition, translate intent into operation, and make AI something they can use with confidence over time.

And organisations that do not consider an AI Target Operating Model at all are unlikely to avoid one; they will simply end up with an accidental one shaped by fragmented decisions, inconsistent controls, and whatever gets adopted first.

Next step

How are you operating AI?

If your organisation is seriously exploring AI, now is the time to ask not just what you want AI to do, but also how you expect it to operate, securely and sustainably, in the real world.