Best Practices
Agent best practices
These patterns help agents use archival memory effectively during conversations.
1. Avoid over-insertion
The most common pitfall is inserting too many memories, creating clutter. Trust the agent to decide what’s worth storing long-term.
2. Use tags consistently
Establish a tag taxonomy and stick to it. Capable language models typically apply tags consistently once the taxonomy is documented (for example, in an archival_policies block).
3. Add context to insertions
❌ Don’t: “Likes replicants”
✅ Do: “Deckard shows unusual empathy toward replicants, particularly Rachael, suggesting possible replicant identity”
4. Let agents experiment
Agents can test different query styles to understand what works:
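For example, you can prompt the agent to try several phrasings of the same lookup and compare what each retrieves. A minimal sketch with the Python SDK, assuming a locally running Letta server; the agent ID and prompts are placeholders:

```python
# Prompt the agent to experiment with different archival query styles.
# Sketch only: assumes a running Letta server and an existing agent.
from letta_client import Letta, MessageCreate

client = Letta(base_url="http://localhost:8283")
agent_id = "agent-xxxxxxxx"  # placeholder agent ID

experiments = [
    "Search archival memory for 'replicant empathy' and summarize what you find.",
    "Now try a broader query like 'Deckard personality traits' and compare the results.",
    "Which phrasing retrieved more relevant passages? Record the lesson in your archival_policies block.",
]

for prompt in experiments:
    response = client.agents.messages.create(
        agent_id=agent_id,
        messages=[MessageCreate(role="user", content=prompt)],
    )
    for message in response.messages:
        print(message)
```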
Important: Have the agent persist learnings from experimentation in a memory block (like archival_tracking or archival_policies), not in archival itself (avoid meta-clutter).
Developer best practices (SDK)
These patterns help developers configure and manage archival memory via the SDK.
Backfilling archives
Developers can pre-load archival memory with existing knowledge via the SDK:
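A minimal sketch with the Python SDK, assuming a locally running Letta server; the agent ID and passage text are placeholders, and the tags parameter assumes your Letta version supports tagged passages:

```python
# Backfill archival memory with existing knowledge before conversations begin.
# Sketch only: agent ID, texts, and tags are placeholders.
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")
agent_id = "agent-xxxxxxxx"  # placeholder agent ID

knowledge_base = [
    {
        "text": "Tyrell Corporation manufactures Nexus-6 replicants with a four-year lifespan.",
        "tags": ["lore", "replicants"],
    },
    {
        "text": "The Voight-Kampff test measures empathic response to identify replicants.",
        "tags": ["lore", "procedures"],
    },
]

for entry in knowledge_base:
    client.agents.passages.create(
        agent_id=agent_id,
        text=entry["text"],
        tags=entry["tags"],  # assumes tag support on passage creation
    )
```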
Use cases for backfilling:
- Migrating knowledge bases to Letta
- Seeding specialized agents with domain knowledge
- Loading historical conversation logs
- Importing research libraries
Create an archival policies block
Help your agent learn how to use archival memory effectively by creating a dedicated memory block for archival usage policies:
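One way to do this with the Python SDK is to create a standalone block and attach it to an existing agent. A sketch, assuming the blocks.create and agents.blocks.attach endpoints in your letta_client version; the policy text is only an example starting point:

```python
# Create an "archival_policies" block and attach it to an existing agent.
# Sketch only: adjust the policy text to your use case.
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")

policies = """Archival memory policies:
- When to insert: stable user preferences, project decisions, important domain facts.
- When NOT to insert: small talk, transient details, anything already in core memory.
- Tag taxonomy: preference, decision, fact, procedure.
- Search before inserting to avoid duplicates."""

block = client.blocks.create(label="archival_policies", value=policies)
client.agents.blocks.attach(agent_id="agent-xxxxxxxx", block_id=block.id)
```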
You can improve this block through conversation with your agent:
You: “I noticed you didn’t store the fact that I prefer TypeScript for backend development. Update your archival policies block to ensure you capture language preferences in the future.”
Agent: Updates the archival_policies block to include “Programming language preferences” under “When to insert into archival”
This collaborative approach helps agents learn from mistakes and improve their archival memory usage over time.
Track query effectiveness
Build self-improving agents by having them track archival search effectiveness in a memory block:
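For example, seed an archival_tracking block that the agent keeps up to date after each search. A sketch; the label and seed content are illustrative, and the SDK calls mirror the archival_policies example above:

```python
# Seed an "archival_tracking" block the agent can update with search learnings.
# Sketch only: label and seed content are placeholders.
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")

tracking = """Archival search log:
- Queries that worked well: (none yet)
- Queries that returned nothing: (none yet)
- Tags that are over- or under-used: (none yet)"""

block = client.blocks.create(label="archival_tracking", value=tracking)
client.agents.blocks.attach(agent_id="agent-xxxxxxxx", block_id=block.id)
```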
The agent can update this block based on search results and continuously refine its archival strategy.
Enforcing archival usage with tool rules
If your agent forgets to use archival memory, you should first try prompting the agent to use it more consistently. If prompting alone doesn’t work, you can enforce archival usage with tool rules.
Force archival search at turn start:
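A sketch at agent creation time, assuming the built-in search tool is named archival_memory_search and that your letta_client version exports an InitToolRule type (check your SDK's tool rule types); the model handles and block values are examples:

```python
# Force archival_memory_search to run as the first tool call of every turn.
# Sketch only: tool rule class names and model handles may differ in your SDK version.
from letta_client import Letta, InitToolRule

client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    memory_blocks=[{"label": "persona", "value": "You are a meticulous archivist."}],
    tool_rules=[InitToolRule(tool_name="archival_memory_search")],
)
```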
Require archival insertion before exit:
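Similarly, you can require that archival_memory_insert is called before the agent ends its turn. The RequiredBeforeExitToolRule name below is an assumption; verify the rule types available in your SDK version:

```python
# Require archival_memory_insert to be called before the agent can finish a turn.
# Sketch only: RequiredBeforeExitToolRule is assumed; check your SDK's tool rule types.
from letta_client import Letta, RequiredBeforeExitToolRule

client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    memory_blocks=[{"label": "persona", "value": "You archive every important fact you learn."}],
    tool_rules=[RequiredBeforeExitToolRule(tool_name="archival_memory_insert")],
)
```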
Using the ADE: Tool rules can also be configured in the Agent Development Environment’s Tool Manager interface.
Note: Anthropic models don’t support strict structured output, so tool rules may not be enforced. Use OpenAI or Gemini models for guaranteed tool rule compliance.
When to use tool rules:
- Knowledge management agents that should always search context
- Agents that need to learn from every interaction
- Librarian/archivist agents focused on information storage
Latency considerations: Forcing archival search adds a tool call at the start of every turn. For latency-sensitive applications (like customer support), consider making archival search optional.
Modifying archival memories
While agents cannot modify archival memories, developers can update or delete them via the SDK:
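For example (a sketch; the IDs are placeholders, and the memory_id parameter name follows the archival-memory endpoints but may differ in your SDK version):

```python
# Fix an incorrect passage and remove an obsolete one via the SDK.
# Sketch only: IDs are placeholders; parameter names may vary by SDK version.
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")
agent_id = "agent-xxxxxxxx"  # placeholder agent ID

# Correct the text of an existing archival passage.
client.agents.passages.update(
    agent_id=agent_id,
    memory_id="passage-xxxxxxxx",  # placeholder passage ID
    text="Deckard retired four replicants in Los Angeles, November 2019.",
)

# Delete a passage that is no longer relevant.
client.agents.passages.delete(
    agent_id=agent_id,
    memory_id="passage-xxxxxxxx",  # placeholder passage ID
)
```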
This allows you to:
- Fix incorrect information
- Update outdated facts
- Remove sensitive or irrelevant data
- Reorganize tag structures
Troubleshooting
Why can’t my agent delete or modify archival memories?
Archival memory is designed to be agent-immutable by default. Agents can only insert and search, not modify or delete. This is intentional to prevent agents from “forgetting” important information.
Solution: If you need to modify or delete archival memories, use the SDK via client.agents.passages.update() or client.agents.passages.delete().
When should I use the SDK vs letting the agent handle archival?
Let the agent handle it when:
- The agent needs to decide what’s worth remembering during conversations
- You want the agent to curate its own knowledge base
- Information emerges naturally from user interactions
Use the SDK when:
- Pre-loading knowledge before the agent starts (backfilling)
- Cleaning up incorrect or outdated information
- Bulk operations (importing documentation, migrating data)
- Managing memories outside of agent conversations
My agent isn’t using archival memory
Common causes:
- Agent doesn’t know to use it - Add guidance to the agent’s system prompt or create an archival_policies memory block
- Agent doesn’t need it yet - With small amounts of information, agents may rely on conversation history instead
- Model limitations - Some models are better at tool use than others
Solutions:
- Add explicit instructions in the agent’s prompt about when to use archival
- Use tool rules to enforce archival usage (see “Enforcing archival usage with tool rules” above)
- Try a different model (OpenAI and Gemini models handle tool use well)
Search returns no results or wrong results
Common causes:
- Empty archive - Agent or developer hasn’t inserted any memories yet
- Query mismatch - Query doesn’t semantically match stored content
- Tag filters too restrictive - Filtering by tags that don’t exist or are too narrow
Solutions:
- Verify memories exist using client.agents.passages.list() (uses cursor-based pagination with after, before, and limit parameters; see the sketch after this list)
- Try broader or rephrased queries
- Check tags by listing passages to see what’s actually stored
- Remove tag filters temporarily to see if that’s the issue
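A minimal sketch of paging through stored passages to verify what is actually there, assuming the cursor-based pagination described above (the agent ID is a placeholder):

```python
# Page through all archival passages to inspect stored text and tags.
# Sketch only: assumes list() returns a page of passages and supports after/limit.
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")
agent_id = "agent-xxxxxxxx"  # placeholder agent ID

cursor = None
while True:
    page = client.agents.passages.list(agent_id=agent_id, after=cursor, limit=50)
    if not page:
        break
    for passage in page:
        print(passage.text[:80], getattr(passage, "tags", None))
    cursor = page[-1].id  # continue from the last passage seen
```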
Agent inserting too many memories
Common causes:
- No guidance - Agent doesn’t know when to insert vs when not to
- Tool rules forcing insertion - Tool rules may require archival use
- Agent being overly cautious - Some models default to storing everything
Solutions:
- Create an archival_policies block with clear guidelines (see “Create an archival policies block” above)
- Review and adjust tool rules if you’re using them
- Add explicit examples of what NOT to store in the agent’s prompt