# How to Add Persistent Memory to CrewAI Agents
CrewAI agents forget everything between runs. Learn how to add persistent, semantic memory with Aegis Memory in under 10 minutes.
## The Problem: CrewAI Agents Have Amnesia
CrewAI is one of the best frameworks for building multi-agent systems. You define agents with roles, give them tools, and orchestrate them into crews that solve real problems. But there is a fundamental gap: every time your crew finishes a run, all the knowledge it accumulated disappears.
Run your research crew on Monday, and it discovers that your competitor just launched a new pricing tier. Run the same crew on Tuesday, and it has no idea that discovery ever happened. The context window resets. The memories are gone.
This is not a CrewAI bug — it is an architectural reality of LLM-based agents. The model’s context window is ephemeral by design. CrewAI’s built-in memory features provide some within-run persistence, but once the process exits, that state is lost.
The result is predictable: agents repeat work they have already done, fail to build on previous discoveries, and cannot learn from past mistakes. In production systems, this means wasted tokens, redundant API calls, and outputs that never improve.
Aegis Memory solves this by giving your CrewAI agents a persistent, semantic memory layer that survives across runs, supports scoped access control, and enables agents to share knowledge with each other.
## Prerequisites
Before we start, you will need:
- Python 3.9+ installed
- Docker installed and running (for the Aegis Memory server)
- An OpenAI or Anthropic API key (for your CrewAI agents)
- Basic familiarity with CrewAI concepts (agents, tasks, crews)
Install the required packages:
```bash
pip install "aegis-memory[crewai]" crewai
```

(The quotes keep shells like zsh from treating the `[crewai]` extra as a glob pattern.)
## Step 1: Start the Aegis Memory Server
Aegis Memory runs as a local service that your agents connect to. The fastest way to get it running is with Docker Compose.
Create a docker-compose.yml file (or use the one from the Aegis Memory repo):
```yaml
version: "3.8"
services:
  aegis:
    image: ghcr.io/quantifylabs/aegis-memory:latest
    ports:
      - "8741:8741"
    environment:
      - AEGIS_API_KEY=your-api-key
    volumes:
      - aegis_data:/data
volumes:
  aegis_data:
```
Start the server:
```bash
docker-compose up -d
```
Verify it is running:
```bash
curl http://localhost:8741/health
```
You should get a healthy response. The server is now ready to accept memory operations from your agents.
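If you script this setup, it helps to wait for the server before the crew starts. Here is a minimal standard-library sketch; the stub server only simulates the `/health` endpoint so the example is self-contained, and Aegis's actual response body may differ:

```python
import http.server
import threading
import time
import urllib.request

def wait_for_health(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:  # connection refused, timeout, HTTP error, ...
            pass
        time.sleep(interval)
    return False

# Stand-in for the Aegis server, so this snippet runs on its own.
class _StubHealth(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')
    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), _StubHealth)
threading.Thread(target=server.serve_forever, daemon=True).start()

ok = wait_for_health(f"http://127.0.0.1:{server.server_address[1]}/health")
print(ok)  # True: the stub answered
server.shutdown()
```

In a real deployment you would point `wait_for_health` at `http://localhost:8741/health` and abort the run if it returns `False`.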
## Step 2: Create Crew-Level Memory
The first step is creating a shared memory instance for your entire crew. This is the AegisCrewMemory object — it manages the connection to the Aegis server and provides a namespace that groups all memories for this particular crew.
```python
from aegis_memory.integrations.crewai import AegisCrewMemory

crew_memory = AegisCrewMemory(
    api_key="your-api-key",
    namespace="research-crew",
    default_scope="global"
)
```
Let us break down the parameters:
- `api_key`: The API key you set in your Docker Compose environment.
- `namespace`: A logical grouping for memories. Use a descriptive name like `"research-crew"` or `"customer-support-crew"`. Memories in different namespaces are isolated from each other.
- `default_scope`: Controls who can see memories by default. `"global"` means all agents in this namespace can read them. Other options are `"agent-shared"` (agents in the same role) and `"agent-private"` (only the agent that created the memory).
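As a mental model, the three scopes can be sketched as a visibility check. This is only an illustration of the scope descriptions above, not Aegis's actual access-control code, and the agent/role identifiers are hypothetical:

```python
def can_read(scope: str, creator: str, creator_role: str,
             reader: str, reader_role: str) -> bool:
    """Illustrative visibility rule for the three scopes within one namespace."""
    if scope == "global":
        return True                         # any agent in the namespace
    if scope == "agent-shared":
        return creator_role == reader_role  # agents sharing a role
    if scope == "agent-private":
        return creator == reader            # only the creating agent
    raise ValueError(f"unknown scope: {scope!r}")

print(can_read("global", "r1", "researcher", "w1", "writer"))             # True
print(can_read("agent-private", "r1", "researcher", "r2", "researcher"))  # False
```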
## Step 3: Wire Agent Memory
Now create individual memory instances for each agent in your crew. Each AegisAgentMemory wraps the crew memory and adds agent-specific context.
```python
from aegis_memory.integrations.crewai import AegisAgentMemory
from crewai import Agent, Task, Crew

# Create agent-specific memory instances
researcher_memory = AegisAgentMemory(
    crew_memory=crew_memory,
    agent_id="Researcher",
    scope="agent-shared"
)

writer_memory = AegisAgentMemory(
    crew_memory=crew_memory,
    agent_id="Writer",
    scope="agent-shared"
)

# Define your CrewAI agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI memory systems",
    backstory="You are an expert research analyst with a keen eye for emerging trends.",
    verbose=True
)

writer = Agent(
    role="Tech Content Writer",
    goal="Write compelling technical content based on research findings",
    backstory="You are a skilled writer who translates complex research into clear prose.",
    verbose=True
)
```
### Saving and Searching Memories
Inside your task logic or tool implementations, use the agent memory to store and retrieve knowledge:
```python
# Store a discovery the researcher made
researcher_memory.save(
    value="OpenAI released GPT-5 with native 1M token context on Jan 15, 2026. "
          "Pricing is $5/1M input tokens. Key improvement: persistent memory built-in.",
    metadata={"source": "openai-blog", "date": "2026-01-15", "topic": "competitor-intel"}
)

# Later, the writer searches for relevant context before drafting
results = writer_memory.search(
    query="recent AI model releases and pricing",
    limit=5
)

for memory in results:
    print(f"Found: {memory}")
```
Because both agents share the "research-crew" namespace with "agent-shared" scope, the writer can find memories that the researcher stored. This is the key benefit: knowledge flows between agents automatically through semantic search.
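The "semantic" part is what lets the writer's query succeed without repeating the researcher's exact wording. As a toy illustration of similarity-ranked retrieval, here is a bag-of-words cosine score standing in for the vector search Aegis performs (real embeddings capture meaning far better than raw word overlap):

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

memories = [
    "OpenAI released GPT-5 with native 1M token context. Pricing is $5/1M input tokens.",
    "The office coffee machine was repaired on Tuesday.",
]
query = "recent AI model releases and pricing"

# Rank stored memories against the query; the GPT-5 memory scores highest.
best = max(memories, key=lambda m: cosine(query, m))
print(best)
```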
## Step 4: Run Your Crew (Twice)
Here is a complete example that demonstrates persistence across runs. Save this as crew_with_memory.py:
```python
from aegis_memory.integrations.crewai import AegisCrewMemory, AegisAgentMemory
from crewai import Agent, Task, Crew

# Set up persistent memory
crew_memory = AegisCrewMemory(
    api_key="your-api-key",
    namespace="research-crew",
    default_scope="global"
)

researcher_memory = AegisAgentMemory(
    crew_memory=crew_memory,
    agent_id="Researcher",
    scope="agent-shared"
)

writer_memory = AegisAgentMemory(
    crew_memory=crew_memory,
    agent_id="Writer",
    scope="agent-shared"
)

# Check for existing memories before starting
existing = researcher_memory.search(query="previous research findings", limit=3)
if existing:
    print(f"Found {len(existing)} memories from previous runs!")
    for mem in existing:
        print(f"  - {mem}")
else:
    print("No previous memories found. This is a fresh start.")

# Define agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI memory systems",
    backstory="You are an expert research analyst.",
    verbose=True
)

writer = Agent(
    role="Tech Content Writer",
    goal="Write compelling content based on research",
    backstory="You are a skilled technical writer.",
    verbose=True
)

# Define tasks
research_task = Task(
    description="Research the latest developments in AI agent memory systems. "
                "Focus on new releases, pricing changes, and architectural patterns.",
    expected_output="A detailed research brief with key findings.",
    agent=researcher
)

writing_task = Task(
    description="Write a short summary of the research findings.",
    expected_output="A polished 200-word summary.",
    agent=writer
)

# Run the crew
crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
result = crew.kickoff()

# Save key findings to persistent memory
researcher_memory.save(
    value=f"Research run completed. Key finding: {result}",
    metadata={"run_date": "2026-02-01", "type": "research-summary"}
)

print("\nMemories saved. Run this script again to see persistence in action.")
```
**First run:**

```bash
python crew_with_memory.py
# Output: "No previous memories found. This is a fresh start."
# ... crew runs and stores results ...
# Output: "Memories saved. Run this script again to see persistence in action."
```
**Second run:**

```bash
python crew_with_memory.py
# Output: "Found 1 memories from previous runs!"
# Output: "  - Research run completed. Key finding: ..."
```
The second run starts with context from the first. Your agents now have persistent memory.
## Why Aegis for CrewAI?
Here is how Aegis Memory compares to CrewAI’s built-in memory features:
| Feature | CrewAI Built-in | Aegis Memory |
|---|---|---|
| Within-run memory | Yes | Yes |
| Cross-run persistence | No | Yes |
| Semantic search | Basic | Full vector search, 30-80ms |
| Scope control | No | Private, shared, global |
| Memory voting | No | Yes (helpful/harmful tracking) |
| Agent handoffs | No | Yes (structured context transfer) |
| Reflections/Playbook | No | Yes (agents learn from mistakes) |
| Self-hosted | N/A | Yes, Docker/K8s |
| Observability | No | Prometheus metrics |
CrewAI’s built-in memory is useful for short, single-run workflows. Aegis Memory is for production systems where agents need to accumulate knowledge over time, share it across runs, and improve based on what worked and what did not.
## Production Considerations
When moving from local development to production, keep these points in mind:
**Infrastructure:** The Aegis Memory server needs persistent storage. In Docker, use a named volume (as shown in the docker-compose example). In Kubernetes, use a PersistentVolumeClaim.
**API Keys:** Use environment variables for API keys, never hardcode them. Set AEGIS_API_KEY as an environment variable and read it in your code:
```python
import os

crew_memory = AegisCrewMemory(
    api_key=os.environ["AEGIS_API_KEY"],
    namespace="research-crew",
    default_scope="global"
)
```
**Namespaces:** Use separate namespaces for different environments (e.g., "research-crew-dev", "research-crew-prod"). This prevents development experiments from polluting production memories.
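A small helper can make the environment split automatic. The `AEGIS_ENV` variable below is a convention invented for this sketch, not something Aegis reads itself:

```python
import os

def crew_namespace(base: str = "research-crew") -> str:
    """Derive a per-environment namespace like 'research-crew-dev'."""
    env = os.environ.get("AEGIS_ENV", "dev")  # hypothetical convention
    return f"{base}-{env}"

print(crew_namespace())  # e.g. 'research-crew-dev'
```

Pass the result as `namespace=` when constructing `AegisCrewMemory`, and set `AEGIS_ENV=prod` only in your production deployment.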
**Memory Hygiene:** Not every piece of information is worth remembering. Store conclusions and decisions, not raw intermediate outputs. Use metadata to tag memories with source and date so you can filter or expire them later.
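One way to enforce this is a small gate in front of `save()`. The length threshold and required metadata keys below are arbitrary choices for illustration, not Aegis requirements:

```python
def worth_remembering(value: str, metadata: dict) -> bool:
    """Heuristic filter: keep concise conclusions with provenance,
    skip empty, oversized, or untagged snippets."""
    if not value.strip():
        return False
    if len(value) > 2000:          # likely raw intermediate output; summarize first
        return False
    required = {"source", "date"}  # provenance needed to filter/expire later
    return required.issubset(metadata)

print(worth_remembering("Competitor launched a new pricing tier.",
                        {"source": "web", "date": "2026-02-01"}))  # True
print(worth_remembering("x" * 5000,
                        {"source": "scrape", "date": "2026-02-01"}))  # False
```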
**Monitoring:** Aegis Memory exposes Prometheus metrics out of the box. Monitor query latency, memory count, and error rates to catch issues before they affect your agents.
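For a quick look outside a full Prometheus setup, the text exposition format is easy to parse by hand. The metric name in this sketch is a placeholder, not a documented Aegis metric:

```python
def parse_metrics(payload: str) -> dict:
    """Parse simple (unlabeled) Prometheus text-format samples into a dict."""
    samples = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

# Example payload in Prometheus text format (placeholder metric name).
payload = """\
# HELP aegis_memory_count Total stored memories
# TYPE aegis_memory_count gauge
aegis_memory_count 1342
"""
print(parse_metrics(payload)["aegis_memory_count"])  # 1342.0
```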
## What’s Next
Now that your CrewAI agents have persistent memory, explore these advanced patterns:
- CrewAI Agents That Learn From Mistakes — Add reflections, voting, and playbook queries so your agents improve over time.
- Aegis Memory vs Mem0 — Understand the tradeoffs between Aegis and other memory solutions.
- Adding Memory to LangGraph Agents — If you are using LangGraph instead of CrewAI, see how to integrate Aegis directly.
The goal is not just persistence — it is agents that get smarter every time they run. Persistent memory is the foundation. Reflections, voting, and playbooks are what turn that foundation into genuine improvement.