Why deploy LangGraph to production?
Running LangGraph locally works for development. Production is different. Your agent needs an API, it needs to handle concurrent requests, and secrets need to stay out of your code.
Self-hosting means Docker, infrastructure, an API layer, and scaling -- all on you. Crewship handles that with a single command. You keep writing Python.
Prerequisites
- Python 3.10+ installed on your machine
- A LangGraph project with a compiled graph
- A Crewship account — sign up free
Deploy in 6 steps
Each step builds on the previous one. The whole process takes a few minutes.
Install the CLI
One-line install on macOS and Linux.
curl -fsSL https://www.crewship.dev/install.sh | bash
Configure crewship.toml
Add a configuration file to your project root. Set the framework to "langgraph" and point the entrypoint to your compiled graph.
[deployment]
framework = "langgraph"
entrypoint = "my_agent.graph:app"
profile = "slim"
python = "3.10"
[build]
exclude = [ "tests" ]
Set environment variables
Add API keys your agents need. Secrets are encrypted and injected at runtime.
crewship env set OPENAI_API_KEY sk-...
crewship env set ANTHROPIC_API_KEY sk-ant-...
Deploy
A single command packages your code, builds the image, and deploys it.
$ crewship deploy
📦 Packaging agent...
⬆️ Uploading build context...
🔨 Building image...
✅ Deployed successfully!
Deployment: dep_abc123xyz
Version: 1
Console: console.crewship.dev/deployments/dep_abc123xyz
Run via API
Trigger an execution with a POST request.
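If you prefer Python, the same run can be triggered with just the standard library. A minimal sketch — the API key and deployment ID are placeholders, and the payload mirrors the curl call:

```python
import json
import urllib.request

API_URL = "https://api.crewship.dev/v1/runs"

def build_run_request(api_key: str, deployment_id: str, content: str) -> urllib.request.Request:
    # Same JSON body as the curl example: a deployment ID plus chat-style input.
    body = {
        "deployment_id": deployment_id,
        "input": {"messages": [{"role": "user", "content": content}]},
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_run_request("YOUR_API_KEY", "dep_abc123xyz", "Hello")
# urllib.request.urlopen(req) would send it; kept offline in this sketch.
```

urllib.request.urlopen(req) sends the request and returns the response, whose body is the run's JSON.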
curl -X POST https://api.crewship.dev/v1/runs \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"deployment_id": "dep_abc123xyz",
"input": {
"messages": [{"role": "user", "content": "Hello"}]
}
}'
Stream execution with SSE
Add stream=true to the query string to watch agent output in real time via Server-Sent Events (SSE).
curl -N -X POST "https://api.crewship.dev/v1/runs?stream=true" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"deployment_id": "dep_abc123xyz",
"input": {
"messages": [{"role": "user", "content": "Research AI agents"}]
}
}'
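Each streamed event arrives as a line of the form data: {json}. A minimal Python parser for those lines — the event field names follow the sample events shown below; how you obtain the raw lines (e.g. iterating an HTTP response) is up to your client:

```python
import json

def parse_sse_line(line: str):
    """Decode one SSE line; return the event dict, or None for non-data lines."""
    line = line.strip()
    if not line.startswith("data:"):
        return None  # blank keep-alive lines, comments, etc.
    return json.loads(line[len("data:"):].strip())

# Feeding it lines as they arrive from the stream:
events = [
    parse_sse_line('data: {"type":"node_start","node":"agent","content":"..."}'),
    parse_sse_line(""),
    parse_sse_line('data: {"type":"run_complete","output":"done"}'),
]
final = [e for e in events if e and e["type"] == "run_complete"]
print(final[0]["output"])  # -> done
```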
# SSE events:
# data: {"type":"node_start","node":"agent","content":"..."}
# data: {"type":"tool_call","tool":"search","input":"AI agents 2026"}
# data: {"type":"run_complete","output":"..."}
Production features included
Every deployment includes monitoring, debugging, and production tooling.
Execution Traces
See every node execution, tool call, and LLM interaction in a timeline view.
Versioning
Track every deployment version. Roll back instantly if something breaks.
Threads
Multi-turn conversations with persistent state across messages.
Webhooks
Get notified when runs complete, fail, or hit milestones.
Artifacts
Collect files generated by your agents: reports, data, images.
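Webhook deliveries are usually authenticated with an HMAC signature computed over the raw request body. Crewship's exact header name and signing scheme aren't documented in this guide, so treat the following as a generic sketch: it assumes a hypothetical X-Crewship-Signature header carrying a hex-encoded HMAC-SHA256.

```python
import hashlib
import hmac

def verify_webhook(secret: str, raw_body: bytes, signature: str) -> bool:
    # Recompute the HMAC over the raw body and compare in constant time.
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"type":"run_complete","deployment_id":"dep_abc123xyz"}'
good_sig = hmac.new(b"whsec_demo", body, hashlib.sha256).hexdigest()
print(verify_webhook("whsec_demo", body, good_sig))    # -> True
print(verify_webhook("wrong-secret", body, good_sig))  # -> False
```

Verify against the raw bytes before JSON-parsing; re-serializing the parsed body can change whitespace and break the signature.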
Advanced configuration
Customize your deployment with these crewship.toml options.
Build profiles
Choose "slim" for lightweight agents or "browser" for agents that need Playwright and Chromium to scrape websites.
profile = "browser"
Python versions
Specify Python 3.10, 3.11, or 3.12 depending on your dependencies.
python = "3.12"
Graph entrypoints
Point the entrypoint at your compiled StateGraph, in module:attribute form; the attribute must be the compiled graph object, not the builder.
entrypoint = "my_project.agent:app"
Build exclusions
Keep your build lean by excluding test files, data, and other non-essential directories.
exclude = [ "tests", "data", "notebooks" ]
Ready to deploy your agent?
Get started for free. No credit card required.