What makes something an agent?
In LangGraph.js, an agent is any graph where an LLM controls the flow of execution. The LLM examines the current state, decides which tool to call (or whether to call one at all), and determines what node to visit next.
The difference between an agent and a simple chain is the loop. A chain runs straight through: input in, output out. An agent loops. It reasons, acts, looks at the result, and reasons again until it has enough information to answer.
LangGraph.js makes this explicit. You can see the loop in the graph structure. The LLM node connects to a tools node, and the tools node connects back to the LLM node. Conditional edges decide when to break out of the loop and go to the END node.
Agent patterns
LangGraph.js supports several agent architectures. Each fits different use cases.
ReAct (Reasoning + Acting)
The agent reasons about the task, decides whether to call a tool, executes the tool, and loops until it has a final answer. Use createReactAgent for a ready-made implementation.
Supervisor
A coordinator agent routes work to specialists. The supervisor reads the current state and picks which specialist to run next. Each specialist handles one domain.
Tool-calling
The LLM picks which tools to call and with what arguments. LangGraph.js runs the tools and feeds results back. The LLM stops when it has enough information to respond.
Self-correcting
The agent checks its own output, detects errors, and retries. Useful for code generation, data validation, and any task where output quality can be verified programmatically.
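The check-and-retry control flow is independent of any particular framework. Here is a minimal sketch in plain TypeScript, where `generate` and `verify` are hypothetical stand-ins for an LLM call and a programmatic check:

```typescript
// Hypothetical self-correcting loop: generate, verify, retry.
// generate() receives the previous error (if any) so it can correct itself;
// verify() returns an error message, or null when the output passes.
type Attempt = { output: string; attempt: number };

async function selfCorrect(
  generate: (feedback: string | null) => Promise<string>,
  verify: (output: string) => string | null,
  maxRetries = 3
): Promise<Attempt> {
  let feedback: string | null = null;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const output = await generate(feedback);
    const error = verify(output);
    if (error === null) return { output, attempt };
    feedback = error; // feed the error back so the next attempt can fix it
  }
  throw new Error("max retries exceeded");
}
```

In a LangGraph.js graph, the same shape becomes a generate node, a verify node, and a conditional edge that loops back on failure.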
Quick start with createReactAgent
The fastest way to get an agent running. Pass in a model and a list of tools, and LangGraph.js builds the ReAct loop for you. The agent calls tools as needed and returns a final response.
import { createReactAgent } from "@langchain/langgraph/prebuilt"
import { ChatOpenAI } from "@langchain/openai"
import { tool } from "@langchain/core/tools"
import { z } from "zod"

const search = tool(
  async ({ query }) => {
    return `Results for: ${query}`
  },
  {
    name: "search",
    description: "Search the web for information.",
    schema: z.object({ query: z.string() }),
  }
)

const calculator = tool(
  async ({ expression }) => {
    // Demo only: eval is unsafe on untrusted input. Use a math parser in production.
    return String(eval(expression))
  },
  {
    name: "calculator",
    description: "Evaluate a math expression.",
    schema: z.object({ expression: z.string() }),
  }
)

// Create a ReAct agent with tools
const model = new ChatOpenAI({ model: "gpt-4o" })
const agent = createReactAgent({
  llm: model,
  tools: [search, calculator],
})

// Run the agent; result.messages holds the full conversation, with the
// final answer as the last message
const result = await agent.invoke({
  messages: [{ role: "user", content: "What is 15% of 847?" }],
})

Building a custom agent
When you need more control, build the agent graph yourself. This example creates the same ReAct loop manually: an agent node calls the LLM with bound tools, toolsCondition routes to either the ToolNode or END, and the tools node loops back to the agent.
import { StateGraph, Annotation, START, MessagesAnnotation } from "@langchain/langgraph"
import { ToolNode, toolsCondition } from "@langchain/langgraph/prebuilt"
import { ChatAnthropic } from "@langchain/anthropic"
import { tool } from "@langchain/core/tools"
import { z } from "zod"

const search = tool(
  async ({ query }) => `Results for: ${query}`,
  {
    name: "search",
    description: "Search for information on a topic.",
    schema: z.object({ query: z.string() }),
  }
)

const StateAnnotation = Annotation.Root({
  ...MessagesAnnotation.spec,
})

// Bind tools to the model so it can emit tool calls
const model = new ChatAnthropic({ model: "claude-sonnet-4-20250514" })
const modelWithTools = model.bindTools([search])

async function agentNode(state: typeof StateAnnotation.State) {
  const response = await modelWithTools.invoke(state.messages)
  return { messages: [response] }
}

// Build the graph: agent -> tools -> agent until there are no more tool calls
const graph = new StateGraph(StateAnnotation)
  .addNode("agent", agentNode)
  .addNode("tools", new ToolNode([search]))
  .addEdge(START, "agent")
  .addConditionalEdges("agent", toolsCondition)
  .addEdge("tools", "agent")

const app = graph.compile()

Multi-agent systems
For complex tasks, build graphs where multiple agents work together. In the supervisor pattern, a coordinator agent decides which specialist to invoke next. Each specialist handles one skill.
import { StateGraph, Annotation, START, END, MessagesAnnotation } from "@langchain/langgraph"
import { ChatAnthropic } from "@langchain/anthropic"

const StateAnnotation = Annotation.Root({
  ...MessagesAnnotation.spec,
  nextAgent: Annotation<string>,
})

const model = new ChatAnthropic({ model: "claude-sonnet-4-20250514" })

// The supervisor reads the conversation and decides which specialist runs next
async function supervisor(state: typeof StateAnnotation.State) {
  const response = await model.invoke([
    { role: "system", content: "Route to 'researcher' or 'writer'. Reply 'FINISH' when done." },
    ...state.messages,
  ])
  const content = response.content as string
  if (content.includes("FINISH")) return { nextAgent: "FINISH" }
  if (content.toLowerCase().includes("researcher")) return { nextAgent: "researcher" }
  return { nextAgent: "writer" }
}

async function researcher(state: typeof StateAnnotation.State) {
  const response = await model.invoke([
    { role: "system", content: "You are a researcher. Find facts." },
    ...state.messages,
  ])
  return { messages: [response] }
}

async function writer(state: typeof StateAnnotation.State) {
  const response = await model.invoke([
    { role: "system", content: "You are a writer. Write content." },
    ...state.messages,
  ])
  return { messages: [response] }
}

// Route based on the supervisor's decision
function route(state: typeof StateAnnotation.State) {
  if (state.nextAgent === "FINISH") return END
  return state.nextAgent
}

const graph = new StateGraph(StateAnnotation)
  .addNode("supervisor", supervisor)
  .addNode("researcher", researcher)
  .addNode("writer", writer)
  .addEdge(START, "supervisor")
  .addConditionalEdges("supervisor", route)
  .addEdge("researcher", "supervisor")
  .addEdge("writer", "supervisor")

const app = graph.compile()

Best practices
Start with createReactAgent
The prebuilt ReAct agent handles tool calling, message management, and the reasoning loop. Build a custom graph only when you need control flow that the prebuilt does not support.
Limit tool count per agent
Each tool adds complexity for the LLM. Keep the tool list focused on what the agent actually needs. If an agent needs many tools, consider splitting it into multiple specialized agents.
Add a system prompt with clear instructions
The system prompt shapes what the agent does. Be specific about which tools to prefer and when to stop. Vague prompts lead to unpredictable results.
Set recursion limits
Agents can get stuck in loops. Pass recursionLimit in the config argument to invoke() or stream(), for example app.invoke(input, { recursionLimit: 50 }), to cap the number of steps. The default is 25, which works for most cases.
Use streaming for long-running agents
Agents that make many tool calls can take a while. Stream results with app.stream() so users see progress in real time rather than waiting for the entire execution to finish.
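app.stream() yields state updates as an async iterable, one chunk per node execution, keyed by the node that produced it. A sketch of the consumption pattern, using a hypothetical fakeStream generator in place of a compiled graph:

```typescript
// Hypothetical async iterable standing in for app.stream(); each chunk is a
// per-node state update, keyed by the node that produced it.
async function* fakeStream(): AsyncGenerator<Record<string, { status: string }>> {
  yield { agent: { status: "calling tool" } };
  yield { tools: { status: "tool finished" } };
  yield { agent: { status: "final answer" } };
}

// Consume updates as they arrive instead of waiting for the final result
async function consume(stream: AsyncIterable<Record<string, { status: string }>>) {
  const seen: string[] = [];
  for await (const chunk of stream) {
    for (const [node, update] of Object.entries(chunk)) {
      seen.push(`${node}: ${update.status}`); // surface progress to the user
    }
  }
  return seen;
}
```

With a real compiled graph, replace fakeStream() with app.stream(input) and the loop body stays the same.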
Test with deterministic inputs first
LLM-based agents are non-deterministic. Start by testing with inputs that have predictable tool call patterns. This makes it easier to verify that your graph logic is correct.
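Pure routing functions like route in the supervisor example are fully deterministic, so they can be unit-tested directly with fixed states. Re-declared here standalone so it runs without the graph:

```typescript
// Unit-testing the supervisor's routing logic with fixed states.
// LangGraph's END sentinel is the string "__end__" under the hood.
const END = "__end__";

function route(state: { nextAgent: string }): string {
  if (state.nextAgent === "FINISH") return END;
  return state.nextAgent;
}
```

Once the deterministic routing is verified, move on to end-to-end runs with the live model.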