Developing a Shared Framework for Agentic Automation

Gilles Mazars, Group Chief AI Officer

August 13, 2025


Agentic AI is dominating industry conversations. While it may seem like a new buzzword, it’s rooted in a rich historical field of AI research: autonomous agent systems.


Today, AI technology has pushed these agents to unprecedented levels of autonomy, allowing them to plan and to interact with unstructured data. In an enterprise context, AI agents are systems capable of managing certain business operations with minimal or no human intervention.


These agents integrate with an organization's data infrastructure and possess an interface to receive user inputs and deliver outputs. Consider a chatbot for financial services customer support, which can answer questions about the institution's offerings, retrieve customer information or documents, and even execute financial transactions.


AI agents display various levels of autonomy based on their ability to interact with their environment (accessing data and executing actions), their planning capabilities and the degree of human control. As it stands today, AI agents in the enterprise often have limited autonomy, but with the technology advancing rapidly, fully autonomous agents are likely in the future.


As the technology evolves, it’s clear that we need a common terminology for discussing autonomy. A shared language and understanding is crucial in enabling leaders to effectively assess the maturity of agentic AI systems, as well as the associated benefits, risks and trade-offs at each level.


A good starting point for this discussion is the widely adopted — although not formally standardized — framework for autonomous driving capability. Adapting this scale to reflect the levels of autonomy displayed by an AI agent is a useful way to begin benchmarking maturity.

Mapping agentic AI maturity

“We're announcing today that we are going to have fully autonomous vehicles in commercial operation by 2021.”
— Mark Fields, Ford CEO, 2016


Enterprise is currently having its self-driving car moment. There is huge buzz about what agentic AI will be able to do, but, as with cars, enterprises are not going to jump from no autonomy to full autonomy overnight. It's worth noting that some AI automation already existed in both cases: Advanced Driver Assistance Systems (ADAS) for cars and Optical Character Recognition (OCR) for the enterprise. Like cars, enterprises will change one step at a time, iteratively.


Below is an adaptation of the driving-automation framework for AI agents. Each level captures an agent’s task scope, planning sophistication and required human oversight.

| Level | Driving Automation | AI Agent Autonomy | Overview |
|---|---|---|---|
| 0 | No automation | Manual | Tasks are fully manual and human-driven. No autonomy. |
| 1 | Driver assistance | Assisted | AI agents assist humans on isolated workflow tasks, handling a single processing responsibility. |
| 2 | Partial automation | Partial | Partial workflow automation using programmatic coordination of multiple AI agents. No action on the environment is possible without human validation. |
| 3 | Conditional automation | Selective | Full process automation, for some use cases, using programmatic or AI coordination of multiple AI agents. No action on the environment is possible without human validation. |
| 4 | High automation | Advanced | Full process automation, for most use cases, using programmatic or AI coordination of multiple AI agents. The AI can act on the environment without human validation for simple use cases but needs human validation for complex ones. |
| 5 | Full automation | Full | Full process automation using programmatic or AI-driven coordination of multiple automated tasks. The system can act on the environment without human validation. |

What are the criteria for AI autonomy?  

AI agent autonomy is assessed by examining how systems coordinate tasks, sequence their operations and involve humans in decision making. We use three dimensions to define levels of maturity:

●      Automation: The scope of tasks a system can coordinate

●      Planning: How tasks are sequenced, either programmatic (human-designed) or AI-driven

●      Human control: The level of oversight required for any action on the environment

By mapping these criteria onto the proposed levels of agent autonomy, we can see how each level delivers against these key areas.

| Level | AI Agent Autonomy | Automation | Planning | Human control |
|---|---|---|---|---|
| 0 | Manual | None | None | Full human control for every action |
| 1 | Assisted | Isolated tasks | None | Full human control |
| 2 | Partial | Part of workflow | Programmatic sequencing | Validation of any impactful action |
| 3 | Selective | Full workflow for some cases | Programmatic and/or AI planning | Validation of any impactful action |
| 4 | Advanced | Full workflow for most cases | Programmatic and/or AI planning | Validation of complex or high-risk actions |
| 5 | Full | End-to-end workflows | Programmatic and/or AI planning | No human intervention required |
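To make the mapping concrete, the three dimensions and six levels can be sketched in code. The sketch below is purely illustrative — all names (`Automation`, `HumanControl`, `REQUIRED_CONTROL`, `needs_validation`) are hypothetical, not part of any published framework — and it simply encodes the table above: automation scope determines the level, and each level carries a required degree of human oversight.

```python
from enum import Enum, IntEnum, auto

class Automation(IntEnum):
    """Scope of tasks the system can coordinate (one value per level)."""
    NONE = 0
    ISOLATED_TASKS = 1
    PARTIAL_WORKFLOW = 2
    FULL_SOME_CASES = 3
    FULL_MOST_CASES = 4
    END_TO_END = 5

class HumanControl(Enum):
    FULL = auto()              # a human drives or approves every action
    VALIDATE_ALL = auto()      # any impactful action needs human sign-off
    VALIDATE_COMPLEX = auto()  # only complex or high-risk actions need sign-off
    NONE = auto()              # no human intervention required

# Required oversight per autonomy level, per the table above.
REQUIRED_CONTROL = {
    0: HumanControl.FULL,
    1: HumanControl.FULL,
    2: HumanControl.VALIDATE_ALL,
    3: HumanControl.VALIDATE_ALL,
    4: HumanControl.VALIDATE_COMPLEX,
    5: HumanControl.NONE,
}

def autonomy_level(automation: Automation) -> int:
    """In the table, automation scope uniquely determines the level."""
    return int(automation)

def needs_validation(level: int, high_risk: bool) -> bool:
    """Would an action taken at this maturity level require human sign-off?"""
    control = REQUIRED_CONTROL[level]
    if control in (HumanControl.FULL, HumanControl.VALIDATE_ALL):
        return True
    if control is HumanControl.VALIDATE_COMPLEX:
        return high_risk
    return False
```

Encoding the framework as data like this is one way an organization could wire maturity levels directly into governance checks, for example gating an agent's actions on `needs_validation` before execution.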

Governing agentic AI by maturity level

This proposed structure doesn't just provide a clearer way to talk about and assess agentic automation. It is also the foundation of an objective, tailored AI governance framework.


By linking each autonomy level to specific governance requirements, we can calibrate controls to match actual capabilities, balance innovation with oversight and build trust with stakeholders. This maturity-based approach creates a living framework that evolves alongside AI agents, ensuring compliance and strategic alignment.
