What is LangGraph?

LangGraph is an open-source Python library for building stateful, multi-step AI agent workflows. You define your logic as a graph of nodes and edges. LangGraph handles state management and persistence; you handle the logic.

Graph-based agent orchestration

LangGraph was built by the LangChain team and released in early 2024. It takes a different approach from most agent frameworks: instead of defining agents with roles and goals, you build a graph where nodes are functions and edges define how execution flows between them.

This means you control each step directly. You decide what happens at each node, how state gets updated, and which path the execution takes. Cycles and branches are easy to express, and so is parallel execution.

LangGraph works as a standalone library. You can use it without LangChain, though the two integrate well together. LangChain provides model abstractions and tool definitions. LangGraph provides the orchestration runtime that connects them.

Core concepts

LangGraph is built around a few primitives. They combine in different ways depending on what you are building.

Nodes

Python functions that do the actual work. Each node takes the current state, performs some action, and returns updates. Nodes can call LLMs, run tools, or execute any Python code.


Edges

Connections between nodes that define the flow. Normal edges always go to the same next node. Conditional edges route to different nodes based on the current state.


State

A shared data structure that flows through the graph. Defined as a TypedDict or Pydantic model. Each node reads from and writes to this state.


Tools

Functions that agents can call during execution to search the web, query databases, read files, or interact with any external system. LangGraph provides a prebuilt ToolNode for easy integration.


Checkpointers

Persistence backends that save graph state between runs. Required for multi-turn conversations, human-in-the-loop workflows, and fault tolerance.


Interrupts

Pause execution at any point to collect human input, approval, or corrections. Resume the workflow with the user's response. Built on top of the checkpointer system.

How LangGraph works

A typical LangGraph project follows three steps: define your state, write node functions, and wire them together into a graph. Here is a minimal chatbot.

1. Define state

State is a TypedDict that holds the data flowing through your graph. The Annotated type with add_messages tells LangGraph to append new messages rather than overwrite them.

2. Write nodes

Each node is a Python function that takes the current state and returns updates. Here, the chatbot node calls an LLM and returns the response as a new message.

3. Build and compile

Create a StateGraph, add your nodes, connect them with edges, and call compile(). The compiled graph is ready to invoke with app.invoke().

graph.py
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_anthropic import ChatAnthropic

# Define the state
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Create an LLM
model = ChatAnthropic(model="claude-sonnet-4-20250514")

# Define nodes
def chatbot(state: State):
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Build the graph
graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)

# Compile and run
app = graph.compile()
result = app.invoke({
    "messages": [("user", "What is LangGraph?")]
})

Common use cases

LangGraph is general-purpose. If your workflow needs explicit control flow or human oversight, it is probably a good fit.

Conversational Agents

Build chatbots that maintain state across turns, call tools, and handle multi-step reasoning with persistent memory.

Research Agents

Agents that search, gather, and synthesize information from multiple sources, with self-correction loops and quality checks.

Document Processing

Multi-step pipelines that extract, validate, transform, and summarize documents with routing based on content type.

Data Analysis

Agents that query databases, run calculations, generate visualizations, and produce reports with human approval steps.

Code Generation

Agents that write code, run tests, check results, and iterate until the code passes. Self-correcting loops are a natural fit for graphs.

E-commerce Workflows

Order processing, inventory checks, recommendation engines, and customer service bots with tool access and human escalation.

How LangGraph compares to other frameworks

LangGraph sits in a different part of the design space than frameworks like CrewAI or AutoGen. Where those frameworks provide high-level abstractions (agents with roles, crews, conversations), LangGraph gives you lower-level building blocks (nodes, edges, state).

This trade-off means more setup work, but also more control. You can implement any execution pattern: sequential pipelines, parallel fan-out, self-correcting loops, human approval gates, and multi-agent supervisor patterns. Nothing is hidden behind an abstraction you can't modify.

For teams that want to ship quickly with sensible defaults, CrewAI's role-based approach is often faster. For teams that need precise control over agent behavior and state transitions, LangGraph is the better fit. Crewship supports both, so you can choose the right tool for each project. For detailed breakdowns, see CrewAI vs LangGraph and LangGraph vs LangGraph.js.


Deploy LangGraph with Crewship

Built your graph? Crewship deploys it to production with a single command. You get a production API, real-time streaming, auto-scaling, and Slack integration.