Building agents with LangGraph

An agent is a graph where an LLM decides what happens next. It reasons about the task, calls tools when needed, and loops until done. LangGraph gives you prebuilt agents and the building blocks to create your own.

What makes something an agent?

In LangGraph, an agent is any graph where an LLM controls the flow of execution. The LLM examines the current state, decides which tool to call (or whether to call one at all), and determines what node to visit next.

The difference between an agent and a simple chain is the loop. A chain runs straight through: input in, output out. An agent loops. It reasons, acts, looks at the result, and reasons again until it has enough information to answer.

LangGraph makes this explicit. You can see the loop in the graph structure. The LLM node connects to a tools node, and the tools node connects back to the LLM node. Conditional edges decide when to break out of the loop and go to the END node.
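That loop can be sketched in plain Python. This is an illustration with stubbed functions, not LangGraph's actual implementation:

```python
# The agent loop, sketched with stubs (illustration only)
def run_agent(llm_step, tools, messages):
    while True:
        action = llm_step(messages)                     # LLM node examines state
        if action["type"] == "final":                   # conditional edge -> END
            return action["answer"]
        result = tools[action["tool"]](action["args"])  # tools node executes
        messages.append(result)                         # edge back to the LLM node
```

The `llm_step` and `tools` arguments here are placeholders; in LangGraph, the equivalents are graph nodes wired together with edges.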

Agent patterns

LangGraph supports several agent architectures. Each fits different use cases.

ReAct (Reasoning + Acting)

The most common pattern. The agent reasons about the task, decides whether to call a tool, executes the tool, and loops until it has a final answer. LangGraph provides this out of the box with create_react_agent.

Supervisor

A coordinator agent delegates work to specialist agents. The supervisor decides which agent to invoke next based on the current state. Each specialist handles a specific domain.

Tool-calling

The LLM decides which tools to call and with what arguments. LangGraph executes the tools and feeds results back. The LLM decides when it has enough information to respond.

Self-correcting

The agent checks its own output, detects errors, and retries. Useful for code generation, data validation, and any task where output quality can be verified programmatically.
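The core of this pattern is a generate-check-retry loop. A minimal sketch with a hypothetical generator and programmatic validator (both stand-ins, not LangGraph APIs):

```python
def self_correct(generate, validate, max_attempts=3):
    """Retry generation until the validator passes or attempts run out."""
    feedback = None
    for _ in range(max_attempts):
        output = generate(feedback)      # regenerate, using prior feedback
        ok, feedback = validate(output)  # programmatic check of the output
        if ok:
            return output
    raise RuntimeError("could not produce valid output")
```

In a LangGraph version, generation and validation would each be a node, with a conditional edge looping back on failure.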

Quick start with create_react_agent

The fastest way to get an agent running. Pass in a model and a list of tools, and LangGraph builds the ReAct loop for you. The agent will call tools as needed and return a final response.

agent.py
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only: eval is unsafe on untrusted input

# Create a ReAct agent with tools
model = ChatOpenAI(model="gpt-4o")
agent = create_react_agent(
    model,
    tools=[search, calculator],
    prompt="You are a helpful research assistant.",
)

# Run the agent
result = agent.invoke({
    "messages": [("user", "What is 15% of 847?")]
})
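For that question, the model would likely call the calculator tool with an expression such as "847 * 0.15" (the exact argument is the model's choice). The arithmetic the tool performs:

```python
expression = "847 * 0.15"       # argument the model might choose (assumption)
tool_result = eval(expression)  # what the calculator tool computes
print(round(tool_result, 2))    # 15% of 847 is 127.05
```

The returned `result["messages"]` holds the full conversation; the final answer is the content of the last message.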

Building a custom agent

When you need more control, build the agent graph yourself. This example creates the same ReAct loop manually: an agent node calls the LLM with bound tools, tools_condition routes to either the tools node or END, and the tools node loops back to the agent.

custom_agent.py
from typing import Annotated, Literal
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search for information on a topic."""
    return f"Results for: {query}"

class State(TypedDict):
    messages: Annotated[list, add_messages]

# Bind tools to the model
model = ChatAnthropic(model="claude-sonnet-4-20250514")
model_with_tools = model.bind_tools([search])

def agent_node(state: State):
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}

# Build the graph
graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.add_node("tools", ToolNode([search]))

graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")

app = graph.compile()
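tools_condition routes on whether the model's last message requested any tool calls. A stubbed version of that check (the real function inspects LangChain message objects; this sketch uses plain dicts for illustration):

```python
def tools_condition_sketch(state):
    # Route to the "tools" node if the last message requested a tool call,
    # otherwise end the loop.
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else "__end__"
```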

Multi-agent systems

For complex tasks, you can build graphs where multiple agents collaborate. The supervisor pattern is the most common approach: a coordinator agent decides which specialist agent to invoke next. Each specialist focuses on a specific skill.

multi_agent.py
from typing import Annotated, Literal
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_anthropic import ChatAnthropic

class State(TypedDict):
    messages: Annotated[list, add_messages]
    next_agent: str

model = ChatAnthropic(model="claude-sonnet-4-20250514")

def supervisor(state: State):
    response = model.invoke([
        ("system", "Route to 'researcher' or 'writer'. "
         "Reply 'FINISH' when done."),
        *state["messages"],
    ])
    content = response.content
    if "FINISH" in content:
        return {"next_agent": "FINISH"}
    elif "researcher" in content.lower():
        return {"next_agent": "researcher"}
    return {"next_agent": "writer"}

def researcher(state: State):
    response = model.invoke([
        ("system", "You are a researcher. Find facts."),
        *state["messages"],
    ])
    return {"messages": [response]}

def writer(state: State):
    response = model.invoke([
        ("system", "You are a writer. Write content."),
        *state["messages"],
    ])
    return {"messages": [response]}

def route(state: State) -> Literal["researcher", "writer", "__end__"]:
    if state["next_agent"] == "FINISH":
        return "__end__"
    return state["next_agent"]

graph = StateGraph(State)
graph.add_node("supervisor", supervisor)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)

graph.add_edge(START, "supervisor")
graph.add_conditional_edges("supervisor", route)
graph.add_edge("researcher", "supervisor")
graph.add_edge("writer", "supervisor")

app = graph.compile()
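The route function above is plain Python, so its behavior can be verified without calling a model:

```python
def route(state):
    # Same routing logic as in the graph above
    if state["next_agent"] == "FINISH":
        return "__end__"
    return state["next_agent"]

assert route({"next_agent": "FINISH"}) == "__end__"
assert route({"next_agent": "researcher"}) == "researcher"
```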

Best practices

Start with create_react_agent

For most use cases, the prebuilt ReAct agent handles tool calling, message management, and the reasoning loop. Only build a custom graph when you need behavior the prebuilt agent cannot express.

Limit tool count per agent

Each tool the agent can call adds complexity. Keep the tool list focused on what the agent actually needs. If an agent needs many tools, consider splitting it into multiple specialized agents.

Add a system prompt with clear instructions

The system prompt shapes the agent's behavior. Be specific about what it should do, what tools to prefer, and when to stop. Vague prompts lead to unpredictable behavior.

Set iteration limits

Agents can get stuck in loops. Cap the number of steps by passing a recursion_limit in the config dict when you invoke the graph. The default is 25, which works for most cases.
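For example, assuming a compiled graph named app:

```python
config = {"recursion_limit": 50}  # cap graph steps for this run
# result = app.invoke({"messages": [("user", "...")]}, config=config)
```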

Use streaming for long-running agents

Agents that make many tool calls can take a while. Stream results with app.stream() so users see progress in real time rather than waiting for the entire execution to finish.
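app.stream() yields one update per executed node. A stubbed illustration of consuming such a stream (fake_stream is a stand-in for the real call, which requires a running model):

```python
def fake_stream(inputs):
    # Stand-in for app.stream(inputs): yields one dict per graph step,
    # keyed by the name of the node that just ran.
    yield {"agent": {"messages": ["calling search..."]}}
    yield {"tools": {"messages": ["search results"]}}
    yield {"agent": {"messages": ["final answer"]}}

steps = []
for step in fake_stream({"messages": []}):
    steps.append(next(iter(step)))  # node name for this update
print(steps)  # ['agent', 'tools', 'agent']
```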

Test with deterministic inputs first

LLM-based agents are non-deterministic. Start by testing with inputs that have predictable tool call patterns. This makes it easier to verify that your graph logic is correct.
