The CTO, the COO and the Coming Leadership Reckoning
As AI reshapes the operating model, are we asking the wrong people to lead the transition, and do we even have enough time to find out?
Something quietly significant is happening in boardrooms and leadership teams across industries. The job descriptions have not changed. The org charts look largely the same. But the ground beneath two of the most critical executive roles in any technology-enabled business is shifting in ways that most organisations have not yet fully reckoned with.
The question I keep returning to is this: when AI agents, human employees, and context files all sit within the same operational framework, who is actually responsible for managing that team? And critically, do the people currently in those roles have what it takes to do it?
When the Tools Become the Workforce
For years, the division between the CTO and the COO made intuitive sense. The CTO managed the technology. The COO managed the people and processes that used it. Clean. Logical. Organisationally convenient.
Agentic AI dismantles that logic entirely.
EY's recent work on agentic AI describes what is coming with striking clarity. Enterprises will need to build what they call "enterprise knowledge graphs, structured maps of business logic, data relationships and context to guide agent reasoning, enable memory and ensure explainability." Their conclusion? These maps become the cognitive backbone of the enterprise.
Let that sit for a moment.
The cognitive backbone of the enterprise will be a structured file. Something that needs to be authored, governed, maintained, and updated as the business evolves. It will encode organisational values, decision-making logic, escalation thresholds, and operational constraints. It will, in effect, be the standing instructions for a significant portion of your workforce.
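To make the abstraction tangible, here is a minimal sketch of what such a standing-instructions file might encode and how an agent might apply it. Every field name, value, and the helper function are illustrative assumptions, not any vendor's schema:

```python
# Hypothetical sketch of an agent "context file": the standing
# instructions a business might author for its agentic workforce.
# All field names, values, and thresholds are illustrative assumptions.
agent_context = {
    "values": ["customer data never leaves approved regions"],
    "decision_logic": {
        "refunds": "resolve routine cases autonomously; escalate disputes",
    },
    "escalation_thresholds": {
        "spend_per_action_gbp": 500,   # above this, a human signs off
        "confidence_floor": 0.8,       # below this, ask for human review
    },
    "operational_constraints": ["no external API calls outside the allow-list"],
}

def requires_human_signoff(action_cost_gbp: float, confidence: float) -> bool:
    """Apply the escalation thresholds the way a manager would apply policy."""
    t = agent_context["escalation_thresholds"]
    return (action_cost_gbp > t["spend_per_action_gbp"]
            or confidence < t["confidence_floor"])

print(requires_human_signoff(120.0, 0.95))  # within thresholds -> False
print(requires_human_signoff(750.0, 0.95))  # spend too high -> True
```

Notice that every line of this file is a management decision: who may spend what, when to escalate, what is out of bounds. The syntax is technical; the content is policy.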
That is not a technology asset. That is a management responsibility dressed in technical clothing.
AWS, reflecting on the architectural shift required for agentic systems, described it in terms that are unmistakably managerial: instead of micromanaging agents, we orchestrate autonomous team members that make intelligent decisions within their areas of responsibility.
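That shift from micromanagement to delegation can be sketched in a few lines. This is a minimal illustration of the pattern, not AWS's architecture; the agent names and routing rules are invented for the example:

```python
# Minimal sketch of "orchestration as delegation": work is routed to
# autonomous agents by area of responsibility, and the orchestrator does
# not script their steps. Agent names and lanes are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    responsibility: str            # the "lane" this agent owns
    handle: Callable[[str], str]   # the agent decides *how* within its lane

agents = [
    Agent("billing-agent", "billing", lambda task: f"billing-agent resolved: {task}"),
    Agent("support-agent", "support", lambda task: f"support-agent resolved: {task}"),
]

def orchestrate(area: str, task: str) -> str:
    """Delegate by responsibility; escalate when no agent owns the lane."""
    for agent in agents:
        if agent.responsibility == area:
            return agent.handle(task)
    return f"escalated to human: no agent owns '{area}'"

print(orchestrate("billing", "duplicate invoice"))
print(orchestrate("legal", "contract review"))
```

The design choice worth noticing is where the human judgement lives: not inside each task, but in defining the lanes and the escalation path. That is delegation, and it is a management act.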
So we are already speaking the language of management. Of delegation, trust, oversight, and accountability. The question is whether the people holding the CTO title are equipped to operate in that register.
The People Management Problem Nobody Is Talking About Loudly Enough
Here is the uncomfortable truth that tends to get softened in polite strategy conversations.
A significant proportion of CTOs reached their positions through deep technical excellence. They were brilliant engineers, gifted architects, or exceptional product builders. They were promoted because of what they could build, not because of how they led. And in many organisations, that has worked well enough because the human workforce was managed by others, and the technology simply needed to be sound.
That arrangement is running out of road.
In an AI-native operating model, the CTO can no longer comfortably delegate the human dimension to engineering managers and focus on the architecture. The architecture is the workforce. The decisions about what agents know, how they are instructed, where their autonomy ends, and how they interact with human colleagues are simultaneously technical and organisational decisions. They cannot be separated.
Reskilling from deep technical orientation to genuine people and organisational leadership is not a training course. It is an identity shift. It requires someone to move from deriving satisfaction from building things to deriving satisfaction from enabling others, from solving technical problems to navigating human complexity. For some, that transition is natural. For others, it runs directly against the grain of why they went into technology in the first place.
The Google Cloud Office of the CTO observed that deploying agents has become less a software problem and more a governance challenge. Governance is, at its core, a human discipline. It requires judgement, communication, and the ability to set and enforce cultural expectations. These are not skills that emerge automatically from technical mastery.
This Is Not a Universal Argument
Before going further, it is worth being direct about something a tech founder raised with me recently, and it is a fair challenge.
The convergence of CTO and COO is not an argument that applies equally across every sector. In some industries it would be not just premature but genuinely dangerous.
In highly regulated environments (financial services, pharmaceuticals, healthcare, defence), regulators frequently require clear, named accountability at leadership level. The lines between who owns technology decisions and who owns operational ones are not an organisational convenience. They are a governance requirement. Blurring them creates audit risk, compliance exposure, and in some cases personal liability for the individuals involved.
In safety-critical industries (aerospace, energy, critical national infrastructure), the consequences of ambiguous leadership accountability are too severe to tolerate. These sectors need clarity precisely because the cost of getting it wrong is catastrophic.
In deep R&D businesses, the CTO role is often genuinely about scientific and engineering innovation rather than operational technology deployment. The two functions are not converging there because they were never really adjacent in the first place.
What the convergence thesis applies to most directly are technology-native businesses: SaaS, digital services, media, and any organisation where the product is increasingly delivered by or through AI agents. In those contexts the operational and technical decisions are already inseparable. The governance question is not whether to maintain distinct lines but whether the people holding those titles understand that the nature of what they are governing has fundamentally changed.
The sectors where the debate is most alive are also, interestingly, the ones where the governance question is most complex. Not because regulation demands separation, but because the stakes of getting the human and agent oversight model wrong are high enough that clear accountability structures become a competitive necessity rather than a bureaucratic one.
The COO's Moment — But Only If They Move Fast Enough
If the CTO role is being pulled towards operational leadership, the inverse question is equally important. Can the COO develop sufficient AI literacy to credibly govern the hybrid workforce that is emerging?
McKinsey's analysis of how COOs are responding to generative AI describes a tripartite responsibility: defining the operating structure for AI deployment, shaping data governance, and overseeing change management so that people learn, use, and continuously improve the tools AI enables. That is a formidable brief. And notably, it places the COO at the centre of the AI transformation story, not the periphery.
There is a version of the future where operationally gifted COOs become the natural owners of the hybrid human and agent operating model. Not because they out-engineer the CTO, but because they understand that management is management, whether the team member is a person or a process.
But this raises its own questions. Is the average COO genuinely engaging with agentic infrastructure? Do they understand context files, orchestration layers, and agent governance deeply enough to own them accountably? Or are they still thinking about AI primarily as an efficiency tool sitting within existing operational frameworks?
History Repeating — But This Time for the Tech Industry Itself
There is a broader disruption story unfolding here that deserves more attention than it currently receives.
Technology and SaaS businesses spent the last two decades disrupting traditional industries. The advantage was clear: they were nimble, with less bureaucracy, faster decision cycles, and native fluency with the tools reshaping markets. They moved before incumbents could. And incumbents, weighed down by legacy systems, legacy thinking, and legacy org structures, struggled to respond quickly enough.
That story is now turning on itself.
Native AI businesses are emerging with precisely the characteristics that made tech and SaaS companies so disruptive in the first place. They have no legacy architecture to migrate. No established operating model to unpick. No cultural debt accumulated across decades of doing things a particular way. They are building their teams, their processes, and their cognitive infrastructure from scratch with agents, context files, and human oversight baked in from day one rather than retrofitted around existing structures.
The established tech giants and SaaS businesses now face the same structural disadvantage they once exploited. Their scale, which was an asset, creates organisational inertia. Their experienced workforces, which were a competitive advantage, now carry assumptions about how work gets done that are difficult to unlearn. And their leadership teams, forged in a world where competitive advantage came from engineering talent and product velocity, may not be naturally configured to lead the kind of organisational transformation that the AI-native moment demands.
This is not a comfortable parallel for anyone in a mid-to-large technology business to sit with. But change in these businesses is already happening: headcount and efficiency are becoming the focus.
The Problem That Sits Beneath All of This: Time
And here is where the conversation needs to get more honest than it typically does because there is a sequencing problem at the heart of all of this that very few organisations are addressing directly.
The capability gap around AI in businesses is not narrowing. It is widening. Just in the past week, Anthropic shipped Claude Opus 4.7 with meaningfully stronger agentic coding and reasoning capabilities, alongside Claude Managed Agents entering public beta — a fully managed agent harness for running autonomous workloads at scale. OpenAI simultaneously released a major Codex expansion, with the tool now capable of operating your computer independently, running persistent multi-day workflows, and proactively suggesting what to tackle next based on accumulated memory of your working patterns.
These are not incremental updates. They are step changes in what the technology can do. And they are arriving faster than most organisations are moving.
The pace of technology adoption is measured in months. Organisational change is measured in years. Leadership transformation takes longer still. And critically, it can only begin in earnest once the people who need to lead it have developed sufficient understanding of what they are leading.
You cannot redesign your operating model around AI agents if your senior leadership team is still developing foundational literacy about what agents are and how they behave. You cannot have an honest conversation about whether your CTO is the right person to lead a hybrid workforce if the business has not yet experienced what managing that workforce actually involves. And you cannot begin the cultural change required to integrate AI as a genuine team member if the leadership team is still treating it as a productivity feature.
EY's framing of a structured, cross-functional, multi-stage playbook for bringing agentic workforces to life is instructive, not because the stages are revelatory, but because the very need for such an approach reveals how much organisational work sits behind the technology deployment. The agents can be stood up relatively quickly. Building the leadership capability, the governance structures, and the cultural readiness to manage them well is a different order of challenge entirely.
The gap between what the technology is already capable of and what organisations are structurally ready to do with it is where real competitive divergence will happen. And that gap is growing, not shrinking.
The Questions Worth Sitting With
So where does this leave us?
Are your senior leaders actively developing the literacy they will need, not just about AI capabilities, but about what it means to manage a team that includes non-human members? Is your CTO energised by the prospect of becoming a workforce architect, or would they be more honest that what they really want is to stay close to the engineering? Is your COO building genuine technical fluency, or treating AI as something that sits within existing operational frameworks rather than something that reshapes them entirely?
And perhaps most pressingly: is your organisation at risk of being the next incumbent — not the tech giant disrupting others, but the established player being outmanoeuvred by a native AI business that started with none of your legacy and all of your market opportunity?
The roles of CTO and COO as we have understood them were built for a world where technology and people occupied separate lanes. That world is ending. The harder, slower, more human work of leadership and organisational transformation cannot begin until the capability foundation is in place, and the clock on that work is already running faster than most organisations realise.
What are you seeing in your own organisations? Are the right leadership conversations happening, or is the focus still almost entirely on the technology itself?