
Changelog

Release notes and version history for Letta Code

All notable changes to Letta Code are documented here.

  • Added automatic image resizing for clipboard paste (images larger than 2048x2048 are resized to fit API limits)
  • Improved error feedback when image paste fails
  • Fixed conversation ID not being passed correctly to resume data retrieval

Default startup behavior reverted to the single-conversation experience. Based on user feedback, letta (with no flags) now resumes the agent’s “default” conversation instead of creating a new conversation each time.

| Command | 0.13.0 - 0.13.3 | 0.13.4+ |
| --- | --- | --- |
| `letta` | Creates new conversation each time | Uses “default” conversation |
| `letta --new` | Error (was deprecated) | Creates a new conversation |
| `letta --continue` (no session) | Silently creates new | Errors with helpful message |
  • Changed letta (no flags) to resume the “default” conversation with message history
  • Repurposed --new flag to create a new conversation (for users who want concurrent sessions)
  • Changed --continue fallback to error with helpful suggestions instead of silently creating new
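
The 0.13.4 startup behavior described above can be sketched as a few terminal commands (flag names are taken from this changelog; the comments paraphrase the documented behavior rather than quoting literal CLI output):

```shell
# Resumes the agent's "default" conversation with message history (0.13.4+)
letta

# Creates a new conversation, for users who want concurrent sessions
letta --new

# With no prior session, now errors with helpful suggestions
# instead of silently creating a new conversation
letta --continue
```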
  • Added messaging-agents bundled skill for sending messages to other agents
  • Added ability to deploy existing agents as subagents via the Task tool
  • Fixed interrupt handling race condition when tool approvals are in flight
  • Added skills frontmatter pre-loading for subagents (skills defined in subagent configs are auto-loaded)
  • Added output truncation for Task tool to prevent context overflow
  • Added auto-cleanup for overflow files
  • Fixed auto-allowed tool execution tracking for proper interrupt handling
  • Fixed hardcoded embedding model (now uses server default)
  • Added --default flag to access agent’s default conversation (alias for --conv default)
  • Added --conv <agent-id> shorthand (e.g., letta --conv agent-xyz → uses that agent’s default conversation)
  • Added default conversation to /resume selector (appears at top of list)
  • Added working-in-parallel bundled skill for coordinating parallel subagent tasks
  • Added conversation resume hint in exit stats
  • Improved startup performance with reduced time-to-boot
  • Fixed stale approvals being auto-cancelled on session resume
  • Fixed auth type display on startup
  • Fixed memory block retrieval for /memory command

This release introduces Conversations - a major change to how Letta Code manages chat sessions. Your agent can now have many parallel conversations, each contributing to its learning, memory, and shared history.

Before 0.13.0:

  • Each agent had a single conversation
  • Starting Letta Code resumed the same conversation
  • /clear reset the agent’s context window

After 0.13.0 (updated in 0.13.4):

  • Each startup resumes the default conversation (reverted from 0.13.0-0.13.3 behavior)
  • Your agent’s memory is shared across all conversations
  • /new creates a new conversation (for parallel sessions)
  • /clear clears in-context messages
  • Use /resume to browse and switch between past conversations
  • Use letta --new to create a new conversation for concurrent sessions

If you’re upgrading from an earlier version, you may notice that starting Letta Code puts you in a new conversation instead of continuing where you left off. Here’s what happened:

Your old messages still exist! They’re in your agent’s default conversation - the original message history before conversations were introduced. You can find it at the top of the /resume selector, or access it directly with the commands below.

Easiest way to access them (0.13.1+):

```shell
# Use the --default flag with your agent name
letta -n "Your Agent Name" --default

# Or with agent ID
letta --agent <your-agent-id> --default

# Or use the shorthand (agent ID only)
letta --conv <your-agent-id>
```

The default conversation also appears at the top of the /resume selector.

Alternative methods:

  1. View them on the web at https://app.letta.com/agents/<your-agent-id>

  2. Have your agent recall them using one of these prompts:

    Using the recall subagent:

    Can you use the recall subagent to find our most recent messages? I'd like to continue where we left off.

    Using the conversation_search tool:

    Can you use conversation_search to find our most recent messages so we can continue where we left off?

    Using the searching-messages skill:

    Can you load the searching-messages skill and use it to find our most recent messages?
  3. Export and reference your agent file by running /download (saves to <agent-id>.af), then ask your agent:

    I downloaded my agent file to ./agent-xxxx.af - can you read it and look through the "messages" array to find our most recent conversation?

Going forward, all new conversations will be accessible via /resume, and the default conversation is always available via --default.

  • Added Conversations support - each session creates an isolated conversation while sharing agent memory
  • Added /resume command to browse and switch between past conversations
  • Added --resume (-r) and --continue (-c) flags to resume the last session
  • Added --conversation (-C, --conv) flag to resume a specific conversation by ID
  • Added default agents (Memo and Incognito) auto-created for new users
  • Changed /clear to start a new conversation (non-destructive) instead of deleting messages (reverted in 0.13.4)
  • Fixed Task tool rendering issues with parallel subagents
  • Fixed ADE links to include conversation context
  • Added memory subagent for cleaning up and reorganizing memory blocks
  • Added defragmenting-memory built-in skill with backup/restore workflow
  • Added streaming output display for long-running bash commands
  • Added line count summary for Read tool results
  • Added network retry for transient LLM streaming errors
  • Added Skill tool support in plan mode (load/unload/refresh are read-only)
  • Fixed tool approval flow that was broken by ESC handling changes
  • Improved Task tool and subagent display rendering
  • Fixed UI flickering in Ghostty terminal
  • Added terminal title and progress indicator for approval screens
  • Added LETTA_DEBUG_TIMINGS environment variable for request timing diagnostics
  • Fixed “Create new agent” from selector being stuck in a loop
  • Fixed subagent display spacing and extra newlines
  • Fixed subagent live streaming not updating during execution
  • Added LSP diagnostics to Read tool for TypeScript and Python files
  • Added refresh command to Task tool for rescanning custom subagents
  • Added file-based overflow for long tool outputs
  • Fixed left/right arrow key cursor navigation in approval text inputs
  • Fixed pre-stream approval desync errors with keep-alive recovery
  • Fixed subagents not inheriting parent’s tool permission rules
  • Added /ralph and /yolo-ralph commands for autonomous agentic loop mode
  • Fixed read-only subagents (explore, plan, recall) to work in plan mode
  • Fixed Windows PowerShell ENOENT errors with shell fallback
  • Added recall subagent for searching parent agent’s conversation history
  • Fixed agent selector not showing when LRU agent retrieval fails
  • Fixed approval desync issues for slash commands and queued messages
  • Fixed SDK retry race conditions on streaming requests
  • Fixed pending approval denials not being cached on ESC interrupt
  • Fixed stale processConversation calls affecting UI state after interrupts
  • Refactored to use new client-side tool calling via the messages endpoint
  • Added acquiring-skills skill for discovering and installing skills from external repositories
  • Added migrating-memory skill for copying memory blocks between agents
  • Updated skills system (migrating-memory, finding-agents, searching-messages)
  • Improved interrupt handling with better messaging
  • Fixed ESC interrupt to properly stop streams
  • Fixed skill scripts to work when installed via npm
  • Fixed Task tool (subagent) rendering issues
  • Fixed bash mode exit behavior after submitting commands
  • Fixed binary file detection being overly aggressive
  • Fixed approval results handling when auto-handling remaining approvals
  • Fixed stream retry behavior after interrupts
  • Added system prompt and memory block configuration for headless mode
  • Added --input-format stream-json flag for programmatic input handling
  • Improved parallel tool call approval UI
  • Added inline dialogs for improved user experience
  • Improved token counter display
  • Fixed server-side tools incorrectly showing as interrupted
  • Fixed Windows installation issues
  • Fixed keyboard shortcuts for Ctrl+C, Ctrl+V, and Shift+Enter
  • Fixed iTerm2 keybindings
  • Fixed ESC and CTRL-C handling across all dialogs
  • Added desktop notifications when UI needs user attention
  • Added read-only shell commands support in plan mode
  • Added Ctrl+V support for clipboard image paste in all terminals
  • Fixed keybindings
  • Fixed model name display in welcome screen
  • Added Shift+Enter multi-line input support
  • Added visual diffs for Edit/Write tool returns
  • Added automatic retry for transient LLM API errors
  • Added custom slash commands support (/commands)
  • Added scrolling and manual ordering to command autocomplete
  • Added toggle to show all agents in /agents view
  • Added per-resource queues for parallel tool execution
  • Fixed plan mode on non-default toolsets
  • Fixed CLI crash when browser auto-open fails in WSL
  • Added GLM-4.7 model support
  • Added /new command for creating new agents
  • Added /feedback command improvements
  • Added memory reminders to improve memory usage
  • Renamed /resume to /agents (with backwards-compatible alias)
  • Fixed plan mode path resolution on Windows
  • Added support for bundled skills and multi-source skill discovery
  • Increased loaded_skills block limit to 100k characters
  • Added support for Claude Pro and Max plans
  • Added optional telemetry
  • Added --system flag for existing agents
  • Fixed Windows-specific issues
  • Added /help command with interactive dialog
  • Added /mcp command for MCP server management
  • Added /compact command for message compaction
  • Added text search for all models
  • Improved memory tool visibility with colored name and diff output
  • Added BYOK (Bring Your Own Key) support - use your own API keys
  • Added /usage command to check usage and credits
  • Added --info flag to show project and agent info
  • Added naming dialog when pinning agents
  • Added /memory command to view agent memory blocks
  • Added ‘add-model’ skill for adding new LLM models
  • Added Gemini 3 Flash model support
  • Added feedback UI
  • Added support for relative paths in all tools
  • Added tab completion for slash commands
  • Added Kimi K2 Thinking model
  • Added personalized thinking prompts with agent name
  • Added goodbye message on exit
  • Renamed /bashes to /bg
  • Added stateless subagents via Task tool
  • Added Kimi K2 Thinking model support
  • Improved subagents UI
  • Added autocomplete for slash commands
  • Faster startup with cached tool initialization
  • Added exit and quit as aliases for /exit
  • Added profile-based persistence with startup selector
  • Added /profile command for managing profiles
  • Added simplified welcome screen design
  • Added double Ctrl+C to exit from approval screen
  • Added paginated agent list in /resume
  • Added /description command to update agent description
  • Added message search
  • Added /resume command with improved agent selector UI
  • Added LETTA_DEBUG environment variable for debug logging
  • Added agent description support
  • Added GPT-5.2 support
  • Added Gemini 3 (Vertex) support
  • Added startup status messages showing agent info
  • Added /init command for initializing memory blocks
  • Added system prompt swapping
  • Changed default naming to PascalCase
  • Added /download command to export agent file locally
  • Added Skills omni-tool
  • Added Claude Opus 4.5 support
  • Added toolset switching UI
  • Added --toolset flag
  • Added Gemini tools support
  • Added model-based toolset switching
  • Added eager cancel functionality
  • Added sleeptime memory management
  • Added --sleeptime CLI flag
  • Added GPT-5.1 models support
  • Added Gemini-3 models support
  • Added --fresh-blocks flag for isolation
  • Added /swap command for model switching
  • Added /link and /unlink commands for managing agent tools
  • Added Skills support
  • Added parallel tool calling
  • Added multi-device sign in support
  • Added agent renaming capability
  • Added Sonnet 4.5 with 180k context window
  • Added multiline input support
  • Added --new flag for creating new memory blocks
  • Added agent URL display in commands
  • Added Claude Haiku 4.5 to model selector
  • Added project-level agent persistence with auto-resume
  • Added API key caching
  • Added --model flag
  • Added GLM-4.6 support
  • Added autocomplete for commands
  • Added up/down for history navigation
  • Added fetch_web to default tool list
  • Added stream-json output format
  • Added pretty preview for file listings in approval dialog
  • Added LETTA_BASE_URL environment variable support
  • Added usage tracking
  • Added ESC to cancel operations
  • Added Ctrl-C exit with agent state dump
  • Initial release of Letta Code, the memory-first coding agent