Why memory matters
Without memory, every crew execution starts from scratch. Agents can't reference what happened earlier in the run, can't learn from past executions, and can't build up knowledge about entities they encounter.
Memory changes this. With short-term memory, agents within a run can share context. With long-term memory, agents improve over time. With entity memory, they build a graph of knowledge about people, companies, and concepts.
Memory types
CrewAI supports four types of memory, each serving a different purpose.
Short-term memory
Scope: single execution. Active during a single crew execution. Lets agents reference what happened earlier in the run — what other agents said, what tools returned, and what decisions were made.
Long-term memory
Scope: across executions (local only). Persists across executions, storing lessons learned, successful strategies, and important findings so agents improve over time. Note: relies on local file storage, so it only works when running locally.
Entity memory
Scope: single execution. Tracks information about specific entities — people, companies, products, concepts. Agents build a knowledge graph of entities and their relationships during execution.
User memory
Scope: per user. Stores user-specific preferences, context, and interaction history. Useful for personalized agents that remember who they are talking to.
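Of the four types, user memory is the one that typically needs an external provider. The sketch below shows one common wiring in CrewAI, the Mem0 provider passed through the `memory_config` parameter; the `user_id` value and the agents and tasks are placeholders, and the exact config keys may vary between CrewAI versions, so check your version's docs before relying on this shape.

```python
from crewai import Crew, Process

# Sketch: user memory backed by Mem0 (requires the mem0 package and credentials).
# "john" is a placeholder user identifier; agents/tasks are assumed defined elsewhere.
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```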
Enabling memory
Enable memory with a single parameter on your Crew. This activates short-term and entity memory by default.
from crewai import Crew, Process
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    memory=True,  # Enable memory
    verbose=True,
)
Knowledge sources
Knowledge sources let you feed external documents to your agents. Unlike memory (which is built during execution), knowledge is pre-loaded information that agents can query.
CrewAI supports text files, PDFs, CSV files, and custom knowledge sources. The content is embedded and stored so agents can search it during task execution.
from crewai import Agent, Crew, Task
from crewai.knowledge.source.text_file_knowledge_source import (
    TextFileKnowledgeSource,
)

# Load knowledge from text files
knowledge = TextFileKnowledgeSource(
    file_paths=["company_policies.txt", "product_docs.txt"],
)

# Agent with access to knowledge
support_agent = Agent(
    role="Support Agent",
    goal="Answer customer questions accurately",
    backstory="You are a knowledgeable support agent.",
    knowledge_sources=[knowledge],
)
Memory configuration
By default, CrewAI uses its built-in embedder for memory storage. You can configure a custom embedder for better performance or to use your preferred provider.
Local-only limitation
CrewAI's built-in long-term and entity memory use local file storage. This works fine during development but won't persist in containerized or serverless production environments. For production, use Crewship's Threads for conversation state and Tables for structured data instead.
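If you do rely on local memory during development, it helps to control where the files land. Recent CrewAI versions honor the `CREWAI_STORAGE_DIR` environment variable (set before the crew is created); the path below is only an example, and this behavior should be confirmed against your installed version:

```python
import os

# Point CrewAI's local memory storage at a known, writable directory.
# "/tmp/crew_memory" is an example path; any writable directory works.
os.environ["CREWAI_STORAGE_DIR"] = "/tmp/crew_memory"
```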
from crewai import Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    memory=True,
    # Configure the embedder for memory storage
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small",
        },
    },
)
Memory on Crewship
Short-term memory in runs
Short-term memory works within each run execution. Agents share context during the run, but the memory is discarded when the run completes.
Persistent state with Threads
For persistent memory across interactions, use the Threads API. Threads maintain conversation state — including message history and flow state — across multiple runs.
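As a mental model, a thread is a keyed record whose message history outlives any single run. The sketch below illustrates that model in plain Python; the `ThreadStore` class is invented for this illustration and is not the Crewship Threads client:

```python
# Illustration only: a thread keyed by ID accumulates messages across "runs".
# ThreadStore is a made-up stand-in for persistent thread state, not a real API.
class ThreadStore:
    def __init__(self):
        self._threads = {}

    def append(self, thread_id, role, content):
        # Each message is stored under its thread's ID, preserving order.
        self._threads.setdefault(thread_id, []).append(
            {"role": role, "content": content}
        )

    def history(self, thread_id):
        # A later run can replay everything written so far.
        return self._threads.get(thread_id, [])


store = ThreadStore()
# "Run 1" writes to the thread; "run 2" reads the full history back.
store.append("thread-42", "user", "What did we decide last time?")
store.append("thread-42", "assistant", "We chose the sequential process.")
```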
Structured data with Tables
Use Tables to store structured data that persists across executions. Agents can read from and write to tables during runs.
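To reason about the pattern locally, a SQLite table has the same shape: structured rows written during one execution and readable in the next. This sketch uses plain SQLite as a stand-in for the Tables concept, not the Tables API; the table and column names are invented for the example:

```python
import sqlite3

# Illustration: structured rows that persist across executions.
# An in-memory database is used here; pass a file path for real persistence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE findings (entity TEXT, note TEXT)")

# One run writes a finding...
conn.execute(
    "INSERT INTO findings VALUES (?, ?)",
    ("Acme Corp", "Announced Q3 earnings beat"),
)

# ...and a later run can query it back.
rows = conn.execute("SELECT entity, note FROM findings").fetchall()
```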
Deploy memory-enabled crews
Get your memory-enabled crews running in production.