AI Agents Explained (at 3 Levels of Agency)
Tools, Workflows, and LLMs in a Loop
This is the first article in a larger series on AI agents. Although 2025 is said to be the “year of AI agents”, for many it’s still unclear what makes an AI system an “agent” and why we should care. In this post, I describe the key features of these systems and give concrete examples at 3 levels of agency.

Companies are making big bets on AI agents. OpenAI is shipping agentic products like Operator and Deep Research. YC says vertical AI agents could be 10X bigger than SaaS [1]. And AI apps like Cursor and Windsurf have replaced their chat interfaces with agentic ones.
This has created lots of excitement around agents even beyond AI companies. However, for the uninitiated, it may not be clear what an AI agent (actually) is.
What are AI agents?
One cause of the confusion is that no one agrees on a single definition of an AI agent. To illustrate, here are a few definitions from leading organizations.
- OpenAI: a large language model (LLM), configured with instructions and tools [2]
- Hugging Face: a system in which a large language model (LLM) can execute more complex tasks through planning and tool use [3]
- Anthropic: systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks [4]
While I won’t make matters worse by proposing yet another definition, I will discuss a few key features that span all of these definitions.
- LLM — Large language models play a central role in agentic systems due to features I discuss in the next section.
- Tool use — Tools allow agents to go beyond an LLM’s basic text generation and interact with the outside world (e.g. code interpreter, API calls, RAG, memory).
- Autonomy — Agents (to varying degrees) decide how to accomplish a given task, which can involve planning, reasoning…
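The three features above fit together as an LLM running in a loop: the model picks an action, a tool executes it, and the result feeds back into the model until the task is done. Here is a minimal sketch of that loop in Python. All names (`call_llm`, `TOOLS`, `run_agent`) are hypothetical, and the LLM is stubbed out; a real agent would call a model API at that step.

```python
def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

# Tool use: a registry the agent can draw on to act on the world
TOOLS = {"calculator": calculator}

def call_llm(messages):
    """Stub standing in for a real LLM call. Here it 'decides' to use
    the calculator once, then writes a final answer from the result."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool": "calculator", "input": "6 * 7"}
    return {"final": f"The answer is {last['content']}."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    # Autonomy: the loop (not the programmer) picks the next step
    for _ in range(max_steps):
        action = call_llm(messages)
        if "final" in action:
            return action["final"]
        result = TOOLS[action["tool"]](action["input"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."

print(run_agent("What is 6 times 7?"))  # → The answer is 42.
```

The step limit is a common safeguard in real agent frameworks: since the model controls the loop, you bound how long it can run.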