How tool calling works
Tool calling in LangGraph.js follows a two-step pattern. First, the LLM decides which tool to call and with what arguments. It does not execute the tool itself. Instead, it returns a message containing tool call requests.
Second, a ToolNode executes those requests and returns the results as tool messages. The results flow back to the LLM, which can then decide to call more tools or produce a final response.
The LLM handles reasoning and tool selection. The ToolNode handles execution. Conditional edges between them create the agent loop.
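The round trip can be sketched with plain objects that mirror the message shapes LangChain uses (field names follow @langchain/core conventions; the ids and values here are purely illustrative):

```typescript
// Step 1: the LLM returns an AI message that *requests* a tool call.
// Nothing has executed yet.
const aiMessage = {
  role: "ai",
  content: "",
  tool_calls: [
    { id: "call_1", name: "get_weather", args: { city: "Boston" } },
  ],
}

// Step 2: a ToolNode runs the tool and reports back with a tool message,
// keyed to the request by tool_call_id.
const toolMessage = {
  role: "tool",
  tool_call_id: "call_1",
  content: "72°F, sunny",
}

// The tool message is appended to state.messages, and the LLM is invoked
// again — now able to see the result and decide what to do next.
```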
The tool() function
Define tools with tool() from @langchain/core/tools. Pass an async function, a name, a description, and a Zod schema for the input parameters. The description tells the LLM what the tool does and when to use it.
```typescript
import { tool } from "@langchain/core/tools"
import { z } from "zod"

const search = tool(
  async ({ query }) => {
    const results = await searchApi.search(query)
    return formatResults(results)
  },
  {
    name: "search",
    description: "Search the web for information on a topic.",
    schema: z.object({
      query: z.string().describe("The search query"),
    }),
  }
)

const getWeather = tool(
  async ({ city }) => {
    const data = await weatherApi.get(city)
    return `${data.temp}°F, ${data.condition}`
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city.",
    schema: z.object({
      city: z.string().describe("City name"),
    }),
  }
)
```

Using ToolNode in a graph
ToolNode is a prebuilt node that executes tool calls from the LLM. Combined with toolsCondition for routing, it gives you a complete agent loop in just a few lines.
```typescript
import { StateGraph, Annotation, START, END, MessagesAnnotation } from "@langchain/langgraph"
import { ToolNode, toolsCondition } from "@langchain/langgraph/prebuilt"
import { ChatOpenAI } from "@langchain/openai"

const StateAnnotation = Annotation.Root({
  ...MessagesAnnotation.spec,
})

// Define tools
const tools = [search, getWeather]

// Create model with bound tools
const model = new ChatOpenAI({ model: "gpt-4o" }).bindTools(tools)

async function callModel(state: typeof StateAnnotation.State) {
  const response = await model.invoke(state.messages)
  return { messages: [response] }
}

// Build graph with ToolNode
const graph = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", new ToolNode(tools))
  .addEdge(START, "agent")
  .addConditionalEdges("agent", toolsCondition)
  .addEdge("tools", "agent")

const app = graph.compile()
```

Routing with toolsCondition
toolsCondition is a prebuilt routing function. It checks if the last message has tool calls and routes accordingly. No need to write the routing logic yourself.
```typescript
import { AIMessage } from "@langchain/core/messages"
import { toolsCondition } from "@langchain/langgraph/prebuilt"

// toolsCondition checks if the last message has tool calls.
// Routes to the "tools" node if yes, END if no.
graph.addConditionalEdges("agent", toolsCondition)

// Equivalent to writing this manually:
graph.addConditionalEdges("agent", (state) => {
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage
  if (lastMessage.tool_calls?.length) {
    return "tools"
  }
  return "__end__"
})
```

Tool error handling
Tools can fail. APIs time out, databases go down, inputs get malformed. ToolNode catches these errors by default and returns them as tool messages so the LLM can adjust its approach.
```typescript
// ToolNode handles errors by default
const toolNode = new ToolNode(tools, {
  handleToolErrors: true, // default: catch the error and return it as a tool message
})

// Or let errors propagate and fail the run instead
const toolNodeStrict = new ToolNode(tools, {
  handleToolErrors: false,
})
```

`handleToolErrors` is a boolean flag. If you want custom error messages, catch the error inside the tool itself and return a string the LLM can act on (see Best practices below).

Running tools on Crewship
Browser profile for web tools
If your tools need to scrape websites or interact with web pages, set profile = "browser" in your crewship.toml. This includes Playwright and Chromium in the build.
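A minimal crewship.toml fragment (only the `profile` key comes from this doc; any other keys in your file stay as they are):

```toml
profile = "browser"  # includes Playwright and Chromium in the build
```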
Environment variables for API keys
Tools that call external APIs need credentials. Store them with crewship env set. They are encrypted and injected at runtime.
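Inside a tool, credentials stored this way arrive as ordinary environment variables. A small sketch of reading one defensively (the variable name `WEATHER_API_KEY` is an assumption; use whatever name you set with `crewship env set`):

```typescript
// Read a required credential from the environment, failing fast with a
// clear message if it was never set.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`)
  }
  return value
}

// Hypothetical usage inside a tool implementation:
// const apiKey = requireEnv("WEATHER_API_KEY")
// const data = await weatherApi.get(city, apiKey)
```

Failing fast on a missing key produces a clearer error than letting an API call fail later with a generic 401.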
Artifact storage for file output
Tools that generate files can write to the artifacts/ directory. Files are automatically collected and downloadable via API.
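For example, a small helper a file-producing tool could call (the helper name is ours; only the artifacts/ convention comes from Crewship):

```typescript
import { mkdirSync, writeFileSync } from "node:fs"
import { join } from "node:path"

// Write generated output into artifacts/ so Crewship collects it.
function saveArtifact(filename: string, contents: string): string {
  mkdirSync("artifacts", { recursive: true })
  const path = join("artifacts", filename)
  writeFileSync(path, contents, "utf8")
  return path
}
```

A report-generating tool could then return something like `saveArtifact("report.csv", csv)` so the LLM sees the path where the file landed.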
Best practices
Write clear descriptions
The LLM reads your description to decide when to call the tool. Be specific about what the tool does and what it returns. The LLM cannot see your implementation code.
Use Zod schemas with .describe()
Add .describe() to each field in your Zod schema. This gives the LLM context about what each parameter means and what values are valid.
Return useful error messages
When a tool fails, return a descriptive error message instead of throwing. The LLM can use error information to adjust its approach and try again.
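A sketch of the pattern, with a hypothetical external call injected as a parameter (`fetchUserRecord` stands in for any flaky dependency):

```typescript
// Instead of letting the exception escape, convert it into a message the
// LLM can read and act on.
async function lookupUser(
  fetchUserRecord: (id: string) => Promise<string>,
  id: string
): Promise<string> {
  try {
    return await fetchUserRecord(id)
  } catch (error) {
    const reason = error instanceof Error ? error.message : String(error)
    return `Lookup failed for id "${id}": ${reason}. ` +
      `Check that the id is valid, or try the search tool instead.`
  }
}
```

Because the failure comes back as an ordinary tool result, the agent loop continues and the LLM can retry with a corrected id or switch tools.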
Keep tools focused
Each tool should do one thing. A "searchAndSummarize" tool should be two separate tools. The LLM can chain them itself.
Use handleToolErrors on ToolNode
ToolNode catches exceptions from tool execution and converts them to error messages. The graph keeps running instead of crashing, and the agent can try a different approach.
Deploy your tools to production
Built your tools? Deploy them with your agent in a single command.