🎓🎓 Giving "Agentic" a More Precise Definition: Three Levels of Intelligent Execution

Chris Jones

on Apr 21, 2025

Giving "Agentic" a more precise definition
Giving "Agentic" a more precise definition

Technical content rating: 🎓🎓 very technical

The words "agent", "agentic", and "reasoning" get thrown around a lot these days, and you probably have an intuitive idea of what they mean. But it can be difficult to formulate precise answers to basic questions such as:

  • Is a reasoning model, like OpenAI's o3, an Agent? Is it Agentic?

  • What's the difference between an Agent and an Assistant? Between an Agent and a Workflow? Between an Agent and a Copilot?

  • Can I describe the AI application I just built as an "Agent", or is it something else?

Clearly, we as software developers are building new systems today that execute autonomously in ways that weren't possible just a few years ago. The raw new capabilities that we're harnessing are:

  • AI models can make high-quality human-like decisions, autonomously

  • AI models can creatively propose human-like plans, autonomously

So by combining these capabilities, we can implement three levels of intelligent execution, where AI autonomously drives control flow:

  • (Level 0: AI doesn't participate in control flow decisions)

  • Level 1: AI decides which among a set of potential branches to continue executing

  • Level 2: AI proposes a set of branches that could be executed next

  • Level 3: AI both proposes branches and decides which ones to execute

As an aside: it's tempting to think that Level 3 is all you need, right? Let's just let the AI models figure out what to do all the time. Au contraire! When you're building a reliable, explainable, observable system, you should strive to use the least amount of power needed to accomplish your task. But with these new powers now available to us, the horizon of possible tasks has expanded further than anyone would have guessed 10 years ago.

Let's apply this taxonomy to some common patterns to sniff out their level of agentic-ness.

Routing pattern

In the routing pattern, execution will proceed along one of k branches. The choice may be, for example, which of a set of LLMs to throw a prompt at, where the LLMs have different tradeoffs of cost vs. quality vs. latency. Or the routing may be among different algorithms, for example in a multi-armed bandit system.

So here, we would be using an AI model to make a high-quality autonomous decision:

Level 1 intelligent execution.
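To make the routing pattern concrete, here's a minimal sketch in TypeScript. The function pickRoute stands in for the real AI decision (in practice, a classification prompt to a small model); here it's a trivial heuristic so the sketch runs. The route names and handlers are hypothetical.

```typescript
// Level 1 sketch: an AI model picks one of k fixed branches.
type Route = "cheapModel" | "qualityModel";

function pickRoute(prompt: string): Route {
  // Placeholder for the AI decision: a real system would ask a small
  // classifier model "which branch should handle this prompt?"
  return prompt.length > 100 ? "qualityModel" : "cheapModel";
}

const handlers: Record<Route, (prompt: string) => string> = {
  cheapModel: (p) => `cheap: ${p}`,     // low cost, lower quality
  qualityModel: (p) => `quality: ${p}`, // high cost, higher quality
};

function route(prompt: string): string {
  // The set of branches is fixed by the developer; only the *choice*
  // of branch is delegated to the model. That's Level 1.
  return handlers[pickRoute(prompt)](prompt);
}
```

The key structural point: the developer enumerates the branches up front, and the model contributes only the selection.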

Tool use

In the tool use pattern, an AI model is given k different tools and chooses which, if any, to invoke based on a given prompt. The AI model is free to invoke zero or more tools. And in full generality, tools may be used iteratively in a multi-step generation.

So the AI model is dynamically formulating a plan, intelligently proposing branches to execute in the form of tool calls which may happen in parallel. This would be:

Level 2 intelligent execution. This is somewhat interesting — tool use itself is agentic!
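A minimal sketch of why tool use lands at Level 2: the model proposes the branches (tool calls), and the surrounding runtime executes whatever was proposed. Here proposeToolCalls is a placeholder for a real LLM tool-calling response; the tools and prompts are hypothetical.

```typescript
// Level 2 sketch: the model *proposes* branches (tool calls) that the
// surrounding runtime then executes.
type ToolCall = { name: string; args: Record<string, unknown> };

const tools: Record<string, (args: any) => unknown> = {
  add: ({ a, b }: { a: number; b: number }) => a + b,
  upper: ({ s }: { s: string }) => s.toUpperCase(),
};

// Placeholder for the model: given a prompt, it decides which of the
// available tools to invoke — possibly several, possibly none.
function proposeToolCalls(prompt: string): ToolCall[] {
  const calls: ToolCall[] = [];
  if (prompt.includes("sum")) calls.push({ name: "add", args: { a: 2, b: 3 } });
  if (prompt.includes("shout")) calls.push({ name: "upper", args: { s: "hi" } });
  return calls;
}

function runStep(prompt: string): unknown[] {
  // The runtime executes whatever branches the model proposed.
  return proposeToolCalls(prompt).map((c) => tools[c.name](c.args));
}
```

Unlike routing, the set of executed branches is not fixed ahead of time: it's whatever subset (including the empty set) the model decides to propose.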

Generate-and-validate pattern

In the generate-and-validate pattern, there is a series of iterated steps that look like the following:

let artifact = null
do {
  // Generation step: raw creative capability, not control flow
  artifact = generate(context)
  // Validation drives control flow: retry, or terminate the loop
} while (!validate(artifact))

This might take the form of an AI model prompted to generate a SQL query, and then a validator that checks the syntactic and semantic correctness of the generated query. Or you might have an "author" model that generates a blog post, and an "editor" model that validates the post against a style and branding guide.

So in this case, the set of future branches to take is fixed — either generate() again or terminate the loop. And an AI model may be making the validate() decision. That means we have:

Level 1 intelligent execution, if the validator is an AI model. This is perhaps surprising, as generate-and-validate is an incredibly powerful pattern. It underscores the difference between the raw generative creativity of the generate() step, still an amazing new capability, and the intelligent steering of control flow itself, which is the agentic behavior we're trying to describe more precisely.
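Here's a runnable sketch of the SQL variant of the loop, with a retry cap added for safety. Both generateSql and aiValidate are placeholders for real model calls; the "self-correcting" retry behavior is simulated so the sketch runs.

```typescript
// Generate-and-validate sketch: the validator makes the only
// control-flow decision (retry or terminate), which is Level 1.
function generateSql(attempt: number): string {
  // Placeholder generator: pretend the model fixes its query on retry.
  return attempt === 0 ? "SELCT * FROM users" : "SELECT * FROM users";
}

function aiValidate(query: string): boolean {
  // Placeholder for the validating model. However sophisticated its
  // judgment, its control-flow power is a single yes/no branch.
  return query.startsWith("SELECT");
}

function generateAndValidate(maxAttempts = 5): string {
  let artifact = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    artifact = generateSql(attempt);
    if (aiValidate(artifact)) return artifact;
  }
  throw new Error("no valid artifact produced");
}
```

Note that a production loop should bound its retries as above; the bare do/while form can spin forever if the generator never satisfies the validator.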

Reasoning model

A reasoning model is explicitly trained to externalize its "thinking" process, as opposed to relying on prompt engineering to tickle a latent model capability. So a reasoning model is:

  • Dynamically, creatively formulating the next steps of a plan

  • Dynamically deciding how to execute the plan, including tool use etc.

So a reasoning model itself is:

Level 3 intelligent execution. Just by prompting a reasoning model, you've already assembled an agentic system!
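To illustrate what Level 3 looks like structurally, here's a toy loop in which the same driver both proposes candidate next steps and decides which one to execute. The proposeSteps and chooseStep functions are placeholders for a reasoning model's externalized "thinking"; the arithmetic goal is purely illustrative.

```typescript
// Level 3 sketch: one model-driven loop both proposes branches
// (Level 2 power) and decides among them (Level 1 power).
type Step = { description: string; run: () => number };

function proposeSteps(total: number): Step[] {
  // Placeholder: the model creatively proposes candidate branches.
  return [
    { description: "add 1", run: () => total + 1 },
    { description: "double", run: () => total * 2 },
  ];
}

function chooseStep(steps: Step[], total: number): Step {
  // Placeholder: the model decides which proposed branch to take.
  return total < 4 ? steps[1] : steps[0];
}

function reason(goal: number): number {
  let total = 1;
  while (total < goal) {
    const steps = proposeSteps(total);
    total = chooseStep(steps, total).run();
  }
  return total;
}
```

In a real reasoning model, both roles are played by the same generation process, interleaved with any tool calls it chooses to make.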

In conclusion

We hope this taxonomy of three levels of intelligent execution helps provide a more precise way to discuss agentic systems. The next time you build an AI application, we hope that you're able to ask yourself questions like "do I need Level 2 intelligent execution to accomplish this task?"

And is your AI application agentic? Well, if you're routing, using tools, using AI as a validator, or prompting a reasoning model, then the answer is already, "yes"!