How tool calling works
Tool calling in LangGraph follows a two-step pattern. First, the LLM decides which tool to call and with what arguments. It does not execute the tool itself. Instead, it returns a message containing tool call requests.
Second, a separate tools node executes those requests and returns the results as tool messages. The results flow back to the LLM, which can then decide to call more tools or produce a final response.
The LLM handles reasoning and tool selection. The tools node handles execution. Conditional edges between them create the agent loop.
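Concretely, the two steps exchange messages shaped roughly like this (plain dicts for illustration; LangChain represents them as an AIMessage carrying tool_calls and a ToolMessage):

```python
# Step 1: the model's reply carries tool call *requests*, not results.
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {"id": "call_1", "name": "search", "args": {"query": "LangGraph"}}
    ],
}

# Step 2: the tools node executes each request and appends a tool
# message whose tool_call_id ties the result back to the request.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_1",
    "content": "LangGraph is a library for building agent graphs.",
}
```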
The @tool decorator
The simplest way to create a tool is to decorate a Python function with @tool from langchain_core.tools. The function's docstring tells the LLM what the tool does and when to use it.
```python
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search the web for information on a topic."""
    results = search_api.search(query)
    return format_results(results)

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    data = weather_api.get(city)
    return f"{data['temp']}°F, {data['condition']}"
```

The BaseTool class
For tools that need structured input validation, use the BaseTool class with a Pydantic schema. This gives you type checking, field descriptions, and default values that the LLM can see.
```python
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field

class DatabaseQueryInput(BaseModel):
    query: str = Field(description="SQL query to execute")
    database: str = Field(
        default="main",
        description="Database name to query",
    )

class DatabaseQueryTool(BaseTool):
    name: str = "database_query"
    description: str = "Execute a SQL query against the database"
    args_schema: type[BaseModel] = DatabaseQueryInput

    def _run(self, query: str, database: str = "main") -> str:
        result = db.execute(query, database=database)
        return str(result)
```

Binding tools to models
Before the LLM can call tools, it needs to know about them. Use model.bind_tools() to attach tool definitions to a chat model. The model will then include tool call requests in its responses when appropriate.
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # eval is fine for a demo but unsafe on untrusted input;
    # use a real expression parser in production.
    return str(eval(expression))

# Bind tools to the model
model = ChatAnthropic(model="claude-sonnet-4-20250514")
model_with_tools = model.bind_tools([search, calculator])

# Now the model can decide to call these tools; the requests
# appear in response.tool_calls.
response = model_with_tools.invoke("What is 42 * 17?")
```

Using ToolNode in a graph
ToolNode is a prebuilt node that executes tool calls from the LLM. Combined with tools_condition for routing, it gives you a complete agent loop in just a few lines.
```python
from typing import Annotated

from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list, add_messages]

# Define tools
tools = [search, calculator]

# Create nodes
model = ChatOpenAI(model="gpt-4o").bind_tools(tools)

def call_model(state: State):
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Build graph with ToolNode
graph = StateGraph(State)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))

# Wire the agent loop
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")

app = graph.compile()
```

Tool error handling
Tools can fail. APIs time out, databases go down, inputs get malformed. ToolNode catches these errors by default and returns them as tool messages so the LLM can adjust its approach.
```python
# ToolNode handles errors by default
tool_node = ToolNode(
    tools=[search, calculator],
    handle_tool_errors=True,  # Default: returns error as message
)

# Or provide a custom error handler
def handle_error(error: Exception) -> str:
    return f"Tool failed: {str(error)}. Try a different approach."

tool_node = ToolNode(
    tools=[search, calculator],
    handle_tool_errors=handle_error,
)
```

Running tools on Crewship
Browser profile for web tools
If your tools need to scrape websites or interact with web pages, set profile = "browser" in your crewship.toml. This includes Playwright and Chromium in the build.
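A minimal crewship.toml sketch; only the `profile` key is described in this guide, and any other settings are project-specific:

```toml
# Include Playwright and Chromium in the build for web tools
profile = "browser"
```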
Environment variables for API keys
Tools that call external APIs need credentials. Store them with crewship env set. They are encrypted and injected at runtime.
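Inside a tool, read the injected credential from the environment at runtime. A sketch; the variable name WEATHER_API_KEY is an assumption, and in a real tool you would decorate the function with @tool:

```python
import os

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # WEATHER_API_KEY is a hypothetical name; use whatever name you
    # registered with `crewship env set`.
    api_key = os.environ.get("WEATHER_API_KEY")
    if api_key is None:
        return "Error: WEATHER_API_KEY is not configured."
    # A real implementation would call the weather API with api_key.
    return f"Fetching weather for {city} with the stored key."
```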
Artifact storage for file output
Tools that generate files can write to the artifacts/ directory. Files are automatically collected and downloadable via API.
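A minimal sketch of a tool body writing its output into artifacts/ (the file name and contents are illustrative):

```python
from pathlib import Path

# Ensure the collection directory exists, then write the file.
artifacts = Path("artifacts")
artifacts.mkdir(exist_ok=True)

report = artifacts / "report.txt"
report.write_text("Summary of results...\n")
```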
Best practices
Write clear docstrings
The docstring is what the LLM reads to decide when to call your tool. Be specific about what it does, what input it expects, and what it returns. The LLM cannot see your implementation.
Use type hints on all parameters
Type hints tell the LLM what type of value to pass. Without them, the LLM has to guess. Always annotate your function parameters with str, int, float, list, or a Pydantic model.
Return useful error messages
When a tool fails, return a descriptive error message instead of raising an exception. The LLM can use error information to adjust its approach and try again.
Keep tools focused
Each tool should do one thing. A "search_and_summarize" tool should be two separate tools. The LLM can chain them, and you get better reusability.
Use handle_tool_errors on ToolNode
ToolNode can catch exceptions from tool execution and convert them to error messages. This prevents the graph from crashing when a tool fails and lets the agent recover.