If you strip away the hype, what actually is an AI agent?
At its core, an agent is just one loop running inside a request-response model.
The flow looks like this:
User Request -> [loop: LLM call <-> tool call] -> User Response
The logic is straightforward. You read input from the user, create a message history, and then enter a tool-calling loop. Inside each iteration of that loop, the system asks one question:
- Did the LLM's response request any tool calls?
- If not, break from the loop and deliver the final result to the user.
- If yes, run the tool with the provided arguments and append the result to the message history.
We usually append everything to the message history so that each iteration builds on the previous context. That is the standard pattern, even if it is not the only possible implementation.
The Code
Here is what that looks like in pseudo-code:
messages = [user_request]

while True:
    llm_response = call_llm(messages)
    messages.append(llm_response)

    if llm_response.has_tool_calls:
        for tool_call in llm_response.tool_calls:
            result = execute_tool(tool_call)
            messages.append(result)
    else:
        return llm_response.text  # Done, deliver to the user
Pretty simple, right?
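To see the loop actually run, here is a minimal self-contained sketch in Python. The LLM is mocked with a canned script of responses and the tool just echoes its input; `call_llm`, `execute_tool`, and the response shape mirror the pseudo-code above, not any real provider's API.

```python
# Minimal agent loop with a mocked LLM (no real API calls).
# LLMResponse, call_llm, and execute_tool are illustrative stand-ins.

from dataclasses import dataclass, field

@dataclass
class LLMResponse:
    text: str
    tool_calls: list = field(default_factory=list)

    @property
    def has_tool_calls(self):
        return bool(self.tool_calls)

# Canned script: the first turn requests a tool, the second turn answers.
SCRIPT = [
    LLMResponse(text="", tool_calls=[{"name": "echo", "args": {"msg": "hi"}}]),
    LLMResponse(text="The tool said: hi"),
]

def call_llm(messages):
    # A real implementation would send `messages` to an LLM API.
    turn = sum(1 for m in messages if isinstance(m, LLMResponse))
    return SCRIPT[turn]

def execute_tool(tool_call):
    # A real implementation would dispatch on tool_call["name"].
    return {"role": "tool", "content": tool_call["args"]["msg"]}

def run_agent(user_request):
    messages = [{"role": "user", "content": user_request}]
    while True:
        llm_response = call_llm(messages)
        messages.append(llm_response)
        if llm_response.has_tool_calls:
            for tool_call in llm_response.tool_calls:
                messages.append(execute_tool(tool_call))
        else:
            return llm_response.text  # Done, deliver to the user

print(run_agent("Say hi via the echo tool"))  # The tool said: hi
```

Swap the mocked `call_llm` for a real API client and `execute_tool` for real tool dispatch, and this is the whole engine.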
From Chatbots to Agents
Think of it this way: when ChatGPT first came along, it was a linear interaction. The user sent a message, and ChatGPT provided the answer. Then the user sent a message, and ChatGPT sent one back.
Then came tools, often called function calling or structured outputs. This let us say to the LLM, whether Claude, ChatGPT, or Gemini:
I have these tools available. You tell me what to run and with what data, and I will do it and give you the results.
Suddenly, the LLM is not just talking. It is acting.
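In practice, "I have these tools available" means sending a list of tool definitions alongside the messages. The exact field names vary by provider; the sketch below follows the common JSON-Schema style and should be read as illustrative, not as any one vendor's exact API.

```python
# A tool definition in the common JSON-Schema style.
# Field names vary between providers; this shape is illustrative.

bash_tool = {
    "type": "function",
    "function": {
        "name": "bash",
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The shell command to execute.",
                }
            },
            "required": ["command"],
        },
    },
}

# This list goes along with the messages on every LLM call.
# The model then replies with either plain text or a tool call
# naming "bash" plus the arguments it wants run.
tools = [bash_tool]
```

The description fields are not decoration: they are the only documentation the model sees, so they decide how well it uses the tool.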
The Aha Moment
Let us look at a concrete example. Imagine we give an agent a Bash tool that can execute terminal commands.
The user says:
Write me a poem in my home directory.
The LLM thinks through the problem. It needs to generate text and save it to a file, so it might produce a tool call like this:
cat << 'EOF' > ~/poem.txt
The terminal glows soft at night,
Commands flow swift, the cursor bright,
Through pipes and streams the data flows,
Where it will stop, nobody knows.
EOF
Just like that, the LLM has written the command to save the file. But the model itself cannot touch your hard drive. It effectively hands a note to the loop saying, "run this."
The loop sees that note, grabs the command, runs it on your computer, and reports the result back into the conversation.
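The loop's side of that handoff can be sketched with Python's standard subprocess module. The tool-call shape (`name` plus `args`) is an assumption carried over from the earlier sketch, not any specific provider's format.

```python
import subprocess

def execute_bash_tool(tool_call):
    # Assumed shape, e.g.:
    # {"name": "bash", "args": {"command": "echo hello"}}
    result = subprocess.run(
        tool_call["args"]["command"],
        shell=True,          # the model sends a full shell command line
        capture_output=True,
        text=True,
        timeout=30,          # never trust an LLM with an unbounded command
    )
    # The output goes back into the message history for the next LLM call.
    return {
        "role": "tool",
        "content": result.stdout + result.stderr,
        "exit_code": result.returncode,
    }

note = execute_bash_tool({"name": "bash", "args": {"command": "echo hello"}})
print(note["content"].strip())  # hello
```

Real agents wrap this in sandboxing or a confirmation prompt, because the model is composing arbitrary shell commands that run with your permissions.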
Building the Car Around the Engine
The exciting part is that LLMs keep getting better at solving problems. Give them access to tools, and they will use those tools to work the problem through to completion.
This is where most people have their "aha" moment. There are some great tutorials out there showing how easy it is to build the agentic loop, and I highly suggest you try it yourself. Once you see how easy it is to build the car around the engine, you start thinking of novel ways to play with it.
I am currently building a tool where I am stretching what I think I understand about these agents and guiding them through building it, learning as I go. More on that in a follow-up post.