Changelog
Release notes and version history for Letta Code
All notable changes to Letta Code are documented here.
0.13.5
- Added automatic image resizing for clipboard paste (images larger than 2048x2048 are resized to fit API limits)
- Improved error feedback when image paste fails
- Fixed conversation ID handling so it is passed correctly to resume data retrieval
0.13.4
Default startup behavior reverted to the single-threaded experience. Based on user feedback, `letta` (with no flags) now resumes the agent’s “default” conversation instead of creating a new conversation each time.
| Command | 0.13.0 - 0.13.3 | 0.13.4+ |
|---|---|---|
| `letta` | Creates new conversation each time | Uses “default” conversation |
| `letta --new` | Error (was deprecated) | Creates a new conversation |
| `letta --continue` (no session) | Silently creates new | Errors with helpful message |
- Changed `letta` (no flags) to resume the “default” conversation with message history
- Repurposed the `--new` flag to create a new conversation (for users who want concurrent sessions)
- Changed `--continue` fallback to error with helpful suggestions instead of silently creating new
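For reference, the 0.13.4+ behavior maps onto invocations like these (a minimal sketch based on the table above; it assumes an agent is already set up):

```sh
# Resume the agent's "default" conversation (single-conversation behavior restored)
letta

# Start a separate conversation, e.g. for a concurrent session
letta --new

# Resume the most recent session; errors with a helpful suggestion if none exists
letta --continue
```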
0.13.3
- Added `messaging-agents` bundled skill for sending messages to other agents
- Added ability to deploy existing agents as subagents via the Task tool
- Fixed interrupt handling race condition when tool approvals are in flight
0.13.2
- Added skills frontmatter pre-loading for subagents (skills defined in subagent configs are auto-loaded)
- Added output truncation for Task tool to prevent context overflow
- Added auto-cleanup for overflow files
- Fixed auto-allowed tool execution tracking for proper interrupt handling
- Fixed hardcoded embedding model (now uses server default)
0.13.1
- Added `--default` flag to access agent’s default conversation (alias for `--conv default`)
- Added `--conv <agent-id>` shorthand (e.g., `letta --conv agent-xyz` → uses that agent’s default conversation; see the example after this list)
- Added default conversation to `/resume` selector (appears at top of list)
- Added `working-in-parallel` bundled skill for coordinating parallel subagent tasks
- Added conversation resume hint in exit stats
- Improved startup performance with reduced time-to-boot
- Fixed stale approvals being auto-cancelled on session resume
- Fixed auth type display on startup
- Fixed memory block retrieval for `/memory` command
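A quick sketch of the new flags (the agent ID below is a placeholder):

```sh
# Open an agent's default conversation explicitly
letta --agent <your-agent-id> --default

# Shorthand: pass an agent ID directly to --conv
letta --conv agent-xyz
```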
0.13.0
This release introduces Conversations - a major change to how Letta Code manages chat sessions. Your agent can now have many parallel conversations, each contributing to its learning, memory, and shared history.
What Changed
Before 0.13.0:
- Each agent had a single conversation
- Starting Letta Code resumed the same conversation
- `/clear` reset the agent’s context window
After 0.13.0 (updated in 0.13.4):
- Each startup resumes the default conversation (reverted from 0.13.0-0.13.3 behavior)
- Your agent’s memory is shared across all conversations
- `/new` creates a new conversation (for parallel sessions)
- `/clear` clears in-context messages
- Use `/resume` to browse and switch between past conversations
- Use `letta --new` to create a new conversation for concurrent sessions
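In an interactive session, the corresponding slash commands look like this (the comments simply restate the list above):

```
/new      # start a new conversation; agent memory is shared across all conversations
/resume   # browse and switch between past conversations
/clear    # clear in-context messages
```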
Migration Guide
If you’re upgrading from an earlier version, you may notice that starting Letta Code puts you in a new conversation instead of continuing where you left off. Here’s what happened:
Your old messages still exist! They’re in your agent’s default conversation - the original message history before conversations were introduced. You can find it at the top of the /resume selector, or access it directly with the commands below.
Easiest way to access them (0.13.1+):
```sh
# Use the --default flag with your agent name
letta -n "Your Agent Name" --default

# Or with agent ID
letta --agent <your-agent-id> --default

# Or use the shorthand (agent ID only)
letta --conv <your-agent-id>
```

The default conversation also appears at the top of the `/resume` selector.
Alternative methods:
- View them on the web at https://app.letta.com/agents/<your-agent-id>
- Have your agent recall them using one of these prompts:
  - Using the `recall` subagent: Can you use the recall subagent to find our most recent messages? I’d like to continue where we left off.
  - Using the `conversation_search` tool: Can you use conversation_search to find our most recent messages so we can continue where we left off?
  - Using the `searching-messages` skill: Can you load the searching-messages skill and use it to find our most recent messages?
- Export and reference your agent file by running `/download` (saves to `<agent-id>.af`), then ask your agent: I downloaded my agent file to ./agent-xxxx.af - can you read it and look through the “messages” array to find our most recent conversation?
Going forward, all new conversations will be accessible via /resume, and the default conversation is always available via --default.
Full Changelog
- Added Conversations support - each session creates an isolated conversation while sharing agent memory
- Added `/resume` command to browse and switch between past conversations
- Added `--resume` (`-r`) and `--continue` (`-c`) flags to resume last session
- Added `--conversation` (`-C`, `--conv`) flag to resume a specific conversation by ID (see the example after this list)
- Added default agents (Memo and Incognito) auto-created for new users
- Changed `/clear` to start a new conversation (non-destructive) instead of deleting messages (reverted in 0.13.4)
- Fixed Task tool rendering issues with parallel subagents
- Fixed ADE links to include conversation context
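The resume-related flags from this release can be sketched as follows (the conversation ID is a placeholder):

```sh
# Resume the last session
letta --resume                             # short form: letta -r

# Resume a specific conversation by ID
letta --conversation <conversation-id>     # short form: letta -C <conversation-id>
```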
0.12.6
- Added `memory` subagent for cleaning up and reorganizing memory blocks
- Added `defragmenting-memory` built-in skill with backup/restore workflow
- Added streaming output display for long-running bash commands
- Added line count summary for Read tool results
- Added network retry for transient LLM streaming errors
- Added Skill tool support in plan mode (load/unload/refresh are read-only)
- Fixed tool approval flow that was broken by ESC handling changes
- Improved Task tool and subagent display rendering
- Fixed UI flickering in Ghostty terminal
0.12.5
- Added terminal title and progress indicator for approval screens
- Added `LETTA_DEBUG_TIMINGS` environment variable for request timing diagnostics
- Fixed “Create new agent” from selector being stuck in a loop
0.12.4
- Fixed subagent display spacing and extra newlines
- Fixed subagent live streaming not updating during execution
0.12.3
- Added LSP diagnostics to Read tool for TypeScript and Python files
- Added `refresh` command to Task tool for rescanning custom subagents
- Added file-based overflow for long tool outputs
- Fixed left/right arrow key cursor navigation in approval text inputs
- Fixed pre-stream approval desync errors with keep-alive recovery
- Fixed subagents not inheriting parent’s tool permission rules
0.12.2
- Added `/ralph` and `/yolo-ralph` commands for autonomous agentic loop mode
- Fixed read-only subagents (explore, plan, recall) to work in plan mode
- Fixed Windows PowerShell ENOENT errors with shell fallback
0.12.1
- Added `recall` subagent for searching parent agent’s conversation history
- Fixed agent selector not showing when LRU agent retrieval fails
- Fixed approval desync issues for slash commands and queued messages
- Fixed SDK retry race conditions on streaming requests
- Fixed pending approval denials not being cached on ESC interrupt
- Fixed stale processConversation calls affecting UI state after interrupts
0.12.0
- Refactored to use new client-side tool calling via the messages endpoint
- Added `acquiring-skills` skill for discovering and installing skills from external repositories
- Added `migrating-memory` skill for copying memory blocks between agents
- Updated skills system (migrating-memory, finding-agents, searching-messages)
- Improved interrupt handling with better messaging
- Fixed ESC interrupt to properly stop streams
- Fixed skill scripts to work when installed via npm
- Fixed Task tool (subagent) rendering issues
- Fixed bash mode exit behavior after submitting commands
- Fixed binary file detection being overly aggressive
- Fixed approval results handling when auto-handling remaining approvals
- Fixed stream retry behavior after interrupts
0.11.1
- Added system prompt and memory block configuration for headless mode
- Added `--input-format stream-json` flag for programmatic input handling
- Improved parallel tool call approval UI
0.11.0
- Added inline dialogs for improved user experience
- Improved token counter display
- Fixed server-side tools incorrectly showing as interrupted
0.10.5
- Fixed Windows installation issues
- Fixed keyboard shortcuts for Ctrl+C, Ctrl+V, and Shift+Enter
0.10.4
- Fixed iTerm2 keybindings
- Fixed ESC and CTRL-C handling across all dialogs
0.10.3
- Added desktop notifications when UI needs user attention
- Added read-only shell commands support in plan mode
0.10.2
- Added Ctrl+V support for clipboard image paste in all terminals
- Fixed keybindings
- Fixed model name display in welcome screen
0.10.1
- Added Shift+Enter multi-line input support
0.10.0
- Added visual diffs for Edit/Write tool returns
- Added automatic retry for transient LLM API errors
- Added custom slash commands support (`/commands`)
- Added scrolling and manual ordering to command autocomplete
- Added toggle to show all agents in `/agents` view
- Added per-resource queues for parallel tool execution
- Fixed plan mode on non-default toolsets
- Fixed CLI crash when browser auto-open fails in WSL
- Added GLM-4.7 model support
- Added
/newcommand for creating new agents - Added
/feedbackcommand improvements - Added memory reminders to improve memory usage
- Renamed `/resume` to `/agents` (with backwards-compatible alias)
- Fixed plan mode path resolution on Windows
- Added support for bundled skills and multi-source skill discovery
- Increased loaded_skills block limit to 100k characters
- Added support for Claude Pro and Max plans
- Added optional telemetry
- Added `--system` flag for existing agents
- Fixed Windows-specific issues
- Added `/help` command with interactive dialog
- Added `/mcp` command for MCP server management
- Added `/compact` command for message compaction
- Added text search for all models
- Improved memory tool visibility with colored name and diff output
- Added BYOK (Bring Your Own Key) support - use your own API keys
- Added `/usage` command to check usage and credits
- Added `--info` flag to show project and agent info
- Added naming dialog when pinning agents
- Added `/memory` command to view agent memory blocks
- Added ‘add-model’ skill for adding new LLM models
- Added Gemini 3 Flash model support
- Added feedback UI
- Added support for relative paths in all tools
- Added tab completion for slash commands
- Added Kimi K2 Thinking model
- Added personalized thinking prompts with agent name
- Added goodbye message on exit
- Renamed `/bashes` to `/bg`
- Added stateless subagents via Task tool
- Added Kimi K2 Thinking model support
- Improved subagents UI
- Added autocomplete for slash commands
- Faster startup with cached tool initialization
- Added `exit` and `quit` as aliases for `/exit`
- Added profile-based persistence with startup selector
- Added `/profile` command for managing profiles
- Added simplified welcome screen design
- Added double Ctrl+C to exit from approval screen
- Added paginated agent list in `/resume`
- Added `/description` command to update agent description
- Added message search
- Added `/resume` command with improved agent selector UI
- Added `LETTA_DEBUG` environment variable for debug logging
- Added agent description support
- Added GPT-5.2 support
- Added Gemini 3 (Vertex) support
- Added startup status messages showing agent info
- Added `/init` command for initializing memory blocks
- Added system prompt swapping
- Changed default naming to PascalCase
- Added `/download` command to export agent file locally
- Added Skills omni-tool
- Added Claude Opus 4.5 support
- Added toolset switching UI
- Added `--toolset` flag
- Added Gemini tools support
- Added model-based toolset switching
- Added eager cancel functionality
- Added sleeptime memory management
- Added `--sleeptime` CLI flag
- Added GPT-5.1 models support
- Added Gemini-3 models support
- Added `--fresh-blocks` flag for isolation
- Added `/swap` command for model switching
- Added `/link` and `/unlink` commands for managing agent tools
- Added Skills support
- Added parallel tool calling
- Added multi-device sign in support
- Added agent renaming capability
0.1.16
- Added Sonnet 4.5 with 180k context window
0.1.15
- Added multiline input support
- Added `--new` flag for creating new memory blocks
- Added agent URL display in commands
0.1.11
- Added Claude Haiku 4.5 to model selector
- Added project-level agent persistence with auto-resume
- Added API key caching
- Added `--model` flag
- Added GLM-4.6 support
- Added autocomplete for commands
- Added up/down for history navigation
- Added `fetch_web` to default tool list
0.1.10
- Added `stream-json` output format
- Added pretty preview for file listings in approval dialog
- Added `LETTA_BASE_URL` environment variable support
- Added usage tracking
- Added ESC to cancel operations
- Added Ctrl-C exit with agent state dump
- Initial release of Letta Code, the memory-first coding agent