Why persistence matters
A graph without a checkpointer is stateless. Each call to invoke() runs from the beginning with no memory of previous interactions. This is fine for single-shot tasks, but most real applications need conversation history.
Adding a checkpointer changes this. The graph saves its state after every node execution. When you invoke it again with the same thread ID, it picks up where it left off. The agent remembers what the user said, what tools it called, and what decisions it made.
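The mechanics can be sketched in a few lines of plain TypeScript. This is a conceptual model only, not LangGraph's actual checkpointer interface: a checkpointer is essentially a map from thread ID to the latest saved state, consulted before each run and updated after it.

```typescript
// Conceptual sketch of per-thread persistence; names are illustrative,
// not LangGraph APIs.
type State = { messages: string[] };

const checkpoints = new Map<string, State>();

function invoke(threadId: string, input: string): State {
  // Load the saved state for this thread, or start fresh.
  const prior = checkpoints.get(threadId) ?? { messages: [] };
  // "Run the graph": here we simply append the new message.
  const next = { messages: [...prior.messages, input] };
  // Save the state after execution so the next call resumes from it.
  checkpoints.set(threadId, next);
  return next;
}
```

Two calls with the same thread ID accumulate history; a different thread ID starts from an empty state.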
This is also what makes human-in-the-loop work. When a graph pauses for human input (via interrupt()), the checkpointer saves the state. The human can respond hours later, and the graph resumes from where it stopped.
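The pause/resume flow follows the same principle. A conceptual sketch (again, not the real interrupt() API): pausing persists a "waiting" checkpoint, and a later call with the human's reply picks up from it.

```typescript
// Conceptual sketch of human-in-the-loop pause and resume;
// not LangGraph's interrupt()/Command API.
type PendingState =
  | { status: "waiting"; question: string }
  | { status: "done"; answer: string };

const saved = new Map<string, PendingState>();

function askHuman(threadId: string, question: string): PendingState {
  // The graph pauses: persist a "waiting" checkpoint and return.
  const state: PendingState = { status: "waiting", question };
  saved.set(threadId, state);
  return state;
}

function resume(threadId: string, humanReply: string): PendingState {
  const prior = saved.get(threadId);
  if (!prior || prior.status !== "waiting") {
    throw new Error("nothing to resume for this thread");
  }
  // Hours later, the graph resumes from the checkpoint with the reply.
  const state: PendingState = { status: "done", answer: humanReply };
  saved.set(threadId, state);
  return state;
}
```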
Types of memory
LangGraph.js separates memory into layers. Each one handles a different scope.
Short-term (Checkpointer)
Per thread. Saves graph state after each node execution within a thread. The agent remembers earlier messages in the same thread. State is tied to a thread_id.
Long-term (Store)
Across threads. Key-value storage that persists across threads. Good for user preferences, learned facts, or anything that should survive beyond a single conversation.
Conversation history
Per thread. The messages array in your state, managed by MessagesAnnotation. Each turn appends new messages. With a checkpointer, the full conversation history persists between invocations.
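The append behavior of the messages channel can be pictured as a reducer that concatenates each turn onto the running history. This is a simplification of what MessagesAnnotation actually does (its reducer also handles updates to existing messages by ID), but it captures the core idea:

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Simplified reducer: each state update is appended to the history.
// (MessagesAnnotation's real reducer also merges by message ID.)
function addMessages(existing: Message[], update: Message[]): Message[] {
  return [...existing, ...update];
}
```

With a checkpointer, the `existing` array is loaded from the previous checkpoint, which is why each new invocation sees the whole conversation.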
Getting started with MemorySaver
The simplest checkpointer. Stores everything in process memory. Use it for development and testing. Pass a thread_id in the config to maintain separate conversation sessions.
import { MemorySaver, StateGraph, Annotation, START, END, MessagesAnnotation } from "@langchain/langgraph"
const StateAnnotation = Annotation.Root({
...MessagesAnnotation.spec,
})
// Create a checkpointer (development only)
const checkpointer = new MemorySaver()
// Build and compile with the checkpointer ("chatbot" here is your
// node function, e.g. one that calls a model and returns new messages)
const graph = new StateGraph(StateAnnotation)
.addNode("chatbot", chatbot)
.addEdge(START, "chatbot")
.addEdge("chatbot", END)
const app = graph.compile({ checkpointer })
// Each thread maintains its own conversation history
const config = { configurable: { thread_id: "user-123" } }
const result = await app.invoke(
{ messages: [{ role: "user", content: "Hi, my name is Alice" }] },
config,
)
// Same thread_id continues the conversation
const result2 = await app.invoke(
{ messages: [{ role: "user", content: "What's my name?" }] },
config,
)
// The agent remembers: "Your name is Alice"

Production checkpointers
For production, use a checkpointer backed by a real database. Install the corresponding npm package and swap out MemorySaver.
MemorySaver (@langchain/langgraph)
In-process memory. Fast, but gone when the process restarts. Development and testing only.
SqliteSaver (@langchain/langgraph-checkpoint-sqlite)
SQLite-backed. Lightweight, persistent, and good for single-server deployments.
PostgresSaver (@langchain/langgraph-checkpoint-postgres)
PostgreSQL-backed. Production-ready, with connection pooling support.
MongoDBSaver (@langchain/langgraph-checkpoint-mongodb)
MongoDB-backed. Document storage for production deployments.
import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite"
// Connect to SQLite (good for single-server deployments)
const checkpointer = SqliteSaver.fromConnString("./state.db")
// Or use Postgres:
// import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres"
// const checkpointer = PostgresSaver.fromConnString(process.env.DATABASE_URL)
// await checkpointer.setup() // one-time table creation for PostgresSaver
const app = graph.compile({ checkpointer })
// State survives process restarts
const config = { configurable: { thread_id: "user-123" } }
const result = await app.invoke(
{ messages: [{ role: "user", content: "Hello" }] },
config,
)

Long-term memory with stores
Checkpointers save per-thread state. But sometimes you need memory that works across threads. Stores give you key-value storage for user preferences, learned facts, or any data that should persist beyond a single conversation.
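The core idea is simple enough to sketch in plain TypeScript: a store is a flat key-value map addressed by a namespace path plus a key, so any thread can reach the same user's memories. This is illustrative only, not the InMemoryStore implementation:

```typescript
// Conceptual sketch of a cross-thread store; not the library's API.
type Namespace = string[]; // e.g. ["user", "user-123"]

const memoryStore = new Map<string, unknown>();

// Flatten the namespace path and key into a single lookup string.
const keyFor = (ns: Namespace, key: string) => [...ns, key].join("/");

function put(ns: Namespace, key: string, value: unknown): void {
  memoryStore.set(keyFor(ns, key), value);
}

function get(ns: Namespace, key: string): unknown {
  return memoryStore.get(keyFor(ns, key));
}
```

Because the namespace is keyed by user rather than by thread, a node running in any thread can read `get(["user", "user-123"], "favorite_color")`. LangGraph.js provides this pattern via InMemoryStore and database-backed stores: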
import { MemorySaver, InMemoryStore } from "@langchain/langgraph"
// Short-term memory (per thread)
const checkpointer = new MemorySaver()
// Long-term memory (across threads)
const store = new InMemoryStore()
const app = graph.compile({ checkpointer, store })
// Inside a node, access the store via the config. Use LangGraphRunnableConfig,
// which exposes `store`; a plain RunnableConfig does not.
import { LangGraphRunnableConfig } from "@langchain/langgraph"
async function myNode(
  state: typeof StateAnnotation.State,
  config: LangGraphRunnableConfig,
) {
  const store = config.store
  const userId = config.configurable?.user_id
  // Read long-term memories relevant to this user
  const memories = await store?.search(["user", userId], {
    query: "preferences",
  })
  // Save a new memory for future threads
  await store?.put(["user", userId], "favorite_color", {
    value: "blue",
  })
  return { messages: [response] } // response: your model's reply
}

Memory on Crewship
Short-term memory in runs
In-memory state works within each run. Agents maintain context during execution, but the memory is discarded when the run completes.
Persistent state with Threads
For persistent memory across interactions, use the Threads API. Threads maintain conversation state across multiple runs, so your agent remembers past interactions.
Structured data with Tables
Use Tables to store structured data that persists across executions. Agents can read from and write to tables during runs.