
How to Improve LLMs with Tools

Building Your First AI Agent (with OpenAI’s Agents SDK)

Shaw Talebi
7 min read · 4 days ago

This is the 2nd article in a series on AI agents. In the previous post, we discussed how agents combine three essential features: LLMs, tools, and reasoning. Here, we will discuss the simplest example of such a system: an LLM + tools. I’ll start with a high-level overview of how these systems work, then share example code for how to build one using OpenAI’s Agents SDK.

Image from Canva.

Getting computers to solve problems (typically) requires carefully deconstructing a task into distinct steps and then translating those steps into computer code. This works well when inputs and workflows are predictable, but that is not always the case.

For example, a customer support bot might capture user input, match it to a known issue, and return a pre-defined solution. However, as you may have experienced, such rule-based systems leave much to be desired.
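To make the rule-based approach concrete, here is a minimal sketch of such a bot (the issue keywords and canned answers are illustrative, not from any real system):

```python
# A minimal rule-based support bot: user input is matched against known
# issues, and a pre-defined solution is returned. Keywords and answers
# here are made up for illustration.
KNOWN_ISSUES = {
    "reset password": "Visit the account page and click 'Forgot password'.",
    "billing error": "Check your invoice history under Settings > Billing.",
}

def rule_based_support(message: str) -> str:
    for keyword, solution in KNOWN_ISSUES.items():
        if keyword in message.lower():
            return solution
    # Any phrasing the rules don't anticipate falls through to this:
    return "Sorry, I don't recognize that issue."
```

A user who types "How do I reset password?" gets a useful answer, but "I'm locked out of my account" hits the fallback, which is exactly the brittleness described above.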

The problem is that users describe issues in unpredictable ways, and not all possible troubleshooting scenarios can be anticipated at the outset. But what if there were another way?

AI Agents

Agents present a new way of thinking about software. Rather than explicitly defining rules and business logic, agents involve giving LLMs the tools they need to solve problems.

Returning to the customer support example, rather than defining a single rigid workflow for all cases, you might give an LLM context about your company, a search tool for support docs, and the ability to escalate issues.
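Here is a toy sketch of that idea: an LLM chooses which tool to call, and a dispatch loop executes it. Everything below is illustrative: the tool names, the doc contents, and especially the stub function standing in for a real model call (the full example later in this series uses OpenAI's Agents SDK instead):

```python
# Toy LLM-plus-tools loop. A hard-coded stub stands in for the model;
# all tool names and document contents are made up for illustration.
def search_support_docs(query: str) -> str:
    """Hypothetical search over company support docs."""
    docs = {"login": "To fix login issues, clear your cookies and retry."}
    for topic, answer in docs.items():
        if topic in query.lower():
            return answer
    return "No matching doc found."

def escalate_to_human(summary: str) -> str:
    """Hypothetical escalation tool: open a ticket for a human agent."""
    return f"Ticket created: {summary}"

TOOLS = {
    "search_support_docs": search_support_docs,
    "escalate_to_human": escalate_to_human,
}

def stub_llm(message: str) -> tuple[str, str]:
    """Stand-in for a real model call: returns (tool name, tool argument)."""
    if "refund" in message.lower():
        return "escalate_to_human", message
    return "search_support_docs", message

def agent_step(message: str) -> str:
    tool_name, arg = stub_llm(message)  # the "LLM" picks a tool
    return TOOLS[tool_name](arg)        # the loop executes the choice
```

The key shift is that the branching logic lives in the model's tool choice rather than in hand-written rules; swapping the stub for a real LLM (and adding more tools) is what the Agents SDK handles for you.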

Teach an LLM to Fish

This reminds me of the old adage: “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for life.” The point is that if you give someone the tools they need to be successful, they can thrive on their own — this is how I like to think about agents.
