
Changelog

Release notes and version history for Letta Code

All notable changes to Letta Code are documented here.

  • Renamed letta remote to letta server (with remote kept as an alias)
  • Added letta/auto and letta/auto-fast model support in model selection
  • Added max (xhigh) reasoning tiers for Sonnet 4.6 and Opus 4.6
  • Added none/low/medium/high reasoning tiers for Opus 4.5
  • Added auto-init memory bootstrap on first message for new MemFS agents
  • Changed /init to use shallow and deep initialization tiers
  • Added subagent type tags when creating new subagents
  • Added agent_id, conversation_id, and last_run_id to /statusline payloads
  • Changed app URL generation to use centralized /chat links
  • Fixed headless 409 CONFLICT busy retries to use exponential backoff
  • Fixed startup guard for missing default conversation IDs
  • Improved readability of command-IO and toolset-change reminders
  • Added unified provider normalization for /connect and letta connect
  • Added chatgpt as canonical connect provider token (codex remains an alias)
  • Added background agent indicator in the footer
  • Added background_agents in /statusline payloads
  • Added listener version metadata in letta remote registration
  • Changed default model from Sonnet 4.5 to Sonnet 4.6
  • Fixed Shift+Tab to enter plan mode before auto-approve
  • Fixed letta remote re-registration to recover instead of crash
  • Fixed /model selection persistence for default conversations
  • Improved idle-time flushing of background subagent notifications
  • Improved approve-always pattern generation for read-only gh commands
  • Added /compaction command for interactively selecting compaction mode
  • Added --debug flag to letta remote for plain-text console logging
  • Added always-on session log written to ~/.letta/logs/remote/ for every letta remote session
  • Added debug log file written to ~/.letta/logs/debug/ for diagnostics
  • Changed /init to run as a background subagent (non-blocking initialization)
  • Changed reflection subagent output to be silenced from the primary agent’s context
  • Fixed model selection not persisting when starting new conversations
  • Fixed Bash tool git commit and PR instructions for Codex and Gemini toolsets
  • Added update availability notifications in TUI and footer when a newer version is released
  • Added /compact self_compact_all and /compact self_compact_sliding_window modes for agent-driven compaction
  • Added /reasoning-tab command to toggle Tab key cycling through reasoning effort levels (opt-in, disabled by default)
  • Added session usage details persistence shown in exit summary
  • Removed plan subagent (plan mode no longer delegates to a dedicated subagent)
  • Removed defragmenting-memory skill (memory subagent is now self-contained)
  • Fixed /clear command to skip server-side message reset for named conversations
  • Fixed /model and /reasoning commands to stay scoped to the current conversation
  • Fixed plan mode permission level to be reliably restored after exiting
  • Fixed plan mode path handling with quote-aware shell parsing for apply_patch and scoped paths
  • Fixed Cloudflare HTML 5xx errors to be handled gracefully with user-friendly messages
  • Fixed ChatGPT usage_limit_reached errors to display the reset time
  • Fixed API key caching to survive keychain failures mid-session
  • Fixed clickable ADE and usage links in agent info bar
  • Fixed /model selector to deduplicate models by handle
  • Fixed model handle display removing s:/t: suffixes from status bar
  • Fixed auto-open file viewers being skipped in SSH sessions
  • Fixed memfs git credential helper value being redacted from debug logs
  • Improved streaming resilience in letta remote mode
  • Renamed letta listen command to letta remote (old name still works as an alias)
  • Added automatic retry on empty LLM responses
  • Fixed --conv default to work alongside the --new-agent flag
  • Fixed approval data being preserved across stream resume boundaries
  • Fixed TUI reflow glitches in footer and on terminal resize
  • Fixed ADE link and streaming elapsed timer stability across terminal resizes
  • Added GPT-5.3 Codex model tiers support
  • Fixed WebSocket connectivity issue
  • Fixed cosmetic display issue for newly created agents
  • Simplified letta listen command: removed explicit binding flags and added auto-generated session names
  • Fixed API key detection to read from environment variables rather than checking repo secrets
  • Fixed memory defrag flow for git-backed memfs
  • Fixed settings collision when running Letta Code from the home directory (project vs. global settings conflict)
  • Updated featured models list to show only the latest frontier model per provider
  • Fixed hooks not logging a full error stack trace when project settings are absent
  • Added /install-github-app setup wizard for GitHub App integration
  • Added bootstrap_session_state headless API for pre-configuring session state and memfs startup policy
  • Fixed startup performance by skipping no-op preset refresh when resuming existing agents
  • Fixed duplicate frontmatter blocks appearing in memory subagent prompt
  • Fixed interactive tools being auto-approved in listen mode
  • Fixed headless list_messages to scope correctly to the default conversation
  • Fixed last-character clipping on deferred-wrap terminals
  • Fixed user permission settings path to use ~/.letta instead of ~/.config/letta
  • Fixed default conversation sentinel handling in headless startup
  • Added headless list-messages protocol command
  • Fixed reasoning tag display when reasoning is set to none
  • Disabled xhigh reasoning tier for Anthropic models
  • Added /palace command alias for Memory Palace
  • Added plan viewer with browser preview (opens plan markdown in browser)
  • Added symlink support for skills installed in ~/.letta/skills/
  • Fixed Node 18 compatibility with node:crypto import
  • Fixed bypassPermissions mode not persisting across transient retries
  • Fixed transcript backfill on conversation resume
  • Fixed assistant anchor recency on resume
  • Fixed default conversation for freshly created subagents
  • Fixed compaction reflection trigger for legacy summary format
  • Fixed model preset settings not refreshing on resume
  • Improved startup performance by eliminating redundant API calls
  • Fixed slash commands blocked after interrupt during tool execution
  • Fixed Memory Palace auto-open in tmux on macOS
  • Fixed shared reminders disabled for subagents
  • Fixed env var resolution in reflection/history-analyzer commit trailers
  • Fixed Memory Palace nesting level display
  • Fixed Memory Palace handling of $ in memory content
  • Fixed retry on quota limit errors
  • Added Memory Palace static HTML viewer (opens memory visualization in browser from /memfs)
  • Added auto-enable memfs from server-side tag on new machines
  • Fixed error formatting for /init
  • Fixed version reporting in feedback and telemetry
  • Fixed headless mode defaulting to new conversation to prevent 409 race
  • Fixed /memory command label to say “memory” instead of “memory blocks”
  • Fixed max_output_tokens for GPT-5 reasoning variants
  • Fixed LLM streaming error provider mapping
  • Added reasoning settings step to /model selector for choosing reasoning tier after model selection
  • Added Tab key cycling of reasoning tiers from the input area
  • Added Sonnet 4.6 1M context window model variant
  • Added Gemini 3.1 Pro Preview model support
  • Added auto option for toolset mode with persistence across sessions
  • Added LETTA_MEMFS_LOCAL env var to enable memfs on self-hosted servers
  • Added Edit tool start line number in code diff display
  • Added elapsed time display for running shell tools
  • Added memory application guidance to system prompts
  • Aligned TaskOutput display format with Bash output
  • Fixed slash commands incorrectly opening state when no arguments are accepted
  • Fixed ADE links for default conversation
  • Fixed user settings being clobbered on save
  • Fixed Gemini/GLM tools not working in plan mode
  • Fixed permission mode desyncs in YOLO approval handling
  • Fixed reasoning effort display in footer and model selector
  • Fixed conversation routing after creating new agent in /agents
  • Fixed plan file apply_patch paths not being allowed in plan mode
  • Fixed auto toolset detection for ChatGPT OAuth as Codex
  • Fixed agents limit exceeded retry behavior
  • Fixed default agent creation when base tools are missing
  • Disabled hidden SDK retries for streaming POSTs
  • Removed Sonnet 4.5 from featured models
  • Added sonnet shortcut for Sonnet 4.6 model in -m flag
  • Added Sonnet 4.6 model support (set as new default model)
  • Changed featured model from Sonnet 4.5 to Sonnet 4.6
  • Added --skill-sources flag to control which skill sources are enabled (comma-separated: bundled, global, agent, project, or all)
  • Added --no-bundled-skills flag to disable only bundled skills while keeping other sources
  • Added headless reflection settings: --reflection-trigger, --reflection-behavior, --reflection-step-count
  • Added --no-system-info-reminder flag to suppress first-turn environment context reminder
  • Added MEMORY_DIR and AGENT_ID environment variables exposed in shell tools
  • Fixed ChatGPT connection and OAuth error retry classification
  • Fixed slash command queueing while agent is running
  • Fixed memfs/standard prompt section reconciliation
  • Fixed auto-update execution and release gating
  • Fixed interrupt timer reset on eager cancel
  • Fixed skills reminders appearing in headless sessions
  • Removed max_turns option from Task tool
  • Added /skills command to browse available skills by source (bundled, global, agent, project)
  • Added memfs_enabled status in headless init messages
  • Fixed agent not being reused on directory switch (no longer creates unnecessary new agents)
  • Fixed over-escaped strings in Edit tool
  • Fixed memfs git pull authentication and credential URL normalization
  • Fixed shell auto-approval path checks
  • Fixed Windows absolute file-rule matching in permissions
  • Fixed bundled JS subagent launcher on Windows
  • Fixed model tier selection when model ID is selected directly
  • Added --no-skills flag to disable bundled skills
  • Added --tags flag for headless mode agent tagging
  • Added MiniMax M2.5 model support
  • Added specific retry messages for known LLM provider errors
  • Improved reflection subagent to autonomously complete merge and push operations
  • Removed conversation_search from default toolset
  • Fixed ghost assistant messages when Anthropic returns [text, thinking, text] block order
  • Fixed CONFLICT errors after interrupt during tool execution
  • Fixed interrupt recovery to only trigger on real user interrupts
  • Fixed memfs init steps in headless mode
  • Fixed ADE default conversation link
  • Added git-backed memory filesystem sync with automatic commit and push
  • Added reflection subagent for background memory analysis and updates
  • Added history-analyzer subagent and migrating-from-codex-and-claude-code skill for importing history from Claude Code and Codex CLIs
  • Added /statusline command for configurable CLI footer status lines
  • Added /sleeptime command for client-side reflection trigger settings
  • Added /compact [all|sliding_window] mode options
  • Added GLM-5 model support
  • Renamed /download to /export and --from-af to --import (old names still work as aliases)
  • Changed compaction to set summary as first message in conversation
  • Fixed @ file browser deep recursive search when browsing parent directories
  • Fixed /context double rendering on short terminals
  • Fixed system-reminder tag rendering in message backfill
  • Fixed stream ID accumulator collisions
  • Added custom tools support for SDK-side tool registration and execution in bidirectional mode
  • Added agent registry import support (@author/name format for --from-af)
  • Changed PreCompact hooks to also fire during server-side automatic compaction
  • Fixed OpenAI encrypted content organization mismatch error handling
  • Fixed headless pre-stream approval conflict recovery
  • Fixed slash-prefixed messages without trailing space not being sendable
  • Fixed model updates not sending max_tokens configuration to cloud
  • Fixed memfs not detaching all memory tool variants when enabled
  • Improved headless mode interactive tool behavior for bidirectional parity
  • Added brand accent color for markdown link text
  • Improved rendering stability and flicker prevention across footer and streaming status
  • Improved always-allow behavior for skill script permissions
  • Improved error guidance for model availability and credit issues
  • Fixed memory defrag subagent to run in background
  • Fixed MiniMax errors at higher token counts
  • Fixed duplicate feedback command output on submit
  • Fixed oversized shell tool output clipping in collapsed tool results
  • Fixed task transcript propagation for max-step failures
  • Removed ChatGPT OAuth Pro plan restriction for Codex connect
  • Fixed --max-turns flag not accepted at top-level CLI
  • Fixed subagent keychain migration churn
  • Added GPT-5.3 Codex model support (ChatGPT Plus/Pro)
  • Added permission mode restoration when exiting plan mode
  • Fixed headless permission wait deadlock
  • Fixed /model selection for shared-handle model tiers with different reasoning efforts
  • Fixed subagent model display consistency
  • Rewrote the Skill tool from load/unload commands to direct invocation (skill: "name" with optional args)
  • Removed skills and loaded_skills memory blocks (skills are now listed in system-reminder messages)
  • Added --embedding flag to specify embedding model when creating agents in headless mode
  • Added /context command token usage breakdown by category (system, core memory, functions, messages)
  • Added showCompactions setting to hide compaction messages (defaults to false)
  • Added LETTA_PACKAGE_MANAGER env var to override detected package manager for auto-updates
  • Changed featured model from Opus 4.5 to Opus 4.6
  • Improved auto-updater to detect package manager (npm/bun/pnpm) instead of hardcoding npm
  • Improved subagent auth to inherit credentials from parent, avoiding keychain contention
  • Improved keychain availability check to use a non-mutating probe instead of set/delete
  • Fixed UserPromptSubmit hook firing repeatedly on message dequeue
  • Fixed Stop hook using first user message instead of most recent
  • Removed Setup hook event type
  • Updated memfs skill and system prompt
  • Added prompt-based hooks that use an LLM to evaluate whether actions should be allowed or blocked
  • Added /context command braille area chart showing token usage history across turns
  • Added skills extraction and packaging for --from-af agent file imports
  • Added max_turns parameter to Task tool for limiting subagent turn count
  • Improved message queue to wait for tool completion instead of using a 15-second timeout
  • Fixed Shift+Enter by normalizing newlines before keypress parsing
  • Fixed keyboard protocol report filtering and scoped Linux Enter key handling
  • Fixed loaded_skills block not resetting on new conversation
  • Fixed memFS system prompt not updating based on --memfs/--no-memfs CLI flags
  • Fixed empty assistant message bullets rendering
  • Fixed skills directory path shown in extraction message
  • Fixed subagent static promotion race during tool result reentry
  • Added /context command to show context window usage with visual token bar
  • Added background task support for Task and Bash tools via run_in_background parameter
  • Added TaskOutput tool to retrieve output from background tasks
  • Added TaskStop tool to stop running background tasks
  • Added background task completion notifications injected into conversation
  • Added additionalContext support for PostToolUse hooks (JSON output parsed for context injection)
  • Added Claude Opus 4.6 model support
  • Improved subagent status display with aligned dots, headers, and dimming for running agents
  • Improved tool call dot phases and colors for clearer execution feedback
  • Fixed /download command to pass conversation_id for non-default conversations
  • Added number key support (1-9) to approval dialogs for quick selection
  • Enabled memfs in headless mode when using --agent flag
  • Fixed Enter key handling on Linux terminals that emit \n instead of \r
  • Fixed error handling in headless bidirectional mode
  • Fixed MCP skill templates with corrected paths
  • Fixed malformed AskUserQuestion falling through to generic approval
  • Added converting-mcps-to-skills bundled skill for connecting to MCP servers
  • Added PostToolUseFailure hook that runs after tool failures (feeds stderr back to agent)
  • Changed SessionEnd hooks to also run on Ctrl+C (SIGINT)
  • Added conversation renaming when renaming agents via /rename
  • Improved SessionStart hooks with feedback injection
  • Improved post tool use feedback injection
  • Fixed compaction display to show simple “Conversation compacted” message
  • Fixed context window sizes to be fetched from the server instead of hardcoded
  • Fixed Windows PATH handling and PowerShell quoting
  • Fixed thinking/assistant block spacing preservation during streaming
  • Fixed logo resetting to flat frame when loading completes
  • Fixed duplicate rendering of auto-approved file tools
  • Fixed 409 “conversation busy” errors with exponential backoff
  • Fixed flicker on tall approval dialogs
  • Added trajectory stats tracking and completion summary on exit
  • Improved /memory viewer to prioritize system/ directory at top
  • Fixed input area collapsing during approvals and selector overlays
  • Fixed slash command menu render flicker
  • Fixed rendering instability that caused line flicker
  • Added alien art to command preview and exit message
  • Added BYOK-aware model resolution with fallback for subagents
  • Added network phase arrows to streaming status indicator
  • Fixed handling of malformed AskUserQuestion data from LLM
  • Fixed /usage command formatting
  • Fixed /memfs position in command autocomplete order
  • Fixed mojibake detection to preserve valid Unicode characters
  • Fixed loading state layout consistency during startup
  • Fixed autocomplete to show “No matching commands” instead of hiding
  • Fixed <Text> encoding non-ASCII characters in Bun
  • Added --from-agent flag for agent-to-agent communication in headless mode
  • Refactored skill scripts into CLI subcommands (letta memfs, letta blocks, etc.)
  • Added compaction messages display and new summary message type handling
  • Fixed skill diffing code
  • Fixed memfs skill scripts
  • Fixed memfs frontmatter round-trip to preserve block metadata
  • Fixed extra vertical spacing between memory block tabs
  • Fixed Task tool approval dialogs to show full prompt
  • Improved memfs sync performance
  • Enabled Memory Filesystem (memfs) by default for newly created agents
  • Added --memfs / --no-memfs CLI flags to control memfs on agent creation
  • Fixed Bun string encoding issues
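
Several of the entries above extend the hooks system (prompt-based hooks, PostToolUse additionalContext injection, UserPromptSubmit). A minimal sketch of the guard-script pattern: the payload shape (a "command" field on stdin) and the exit-code convention below are assumptions for illustration, not documented behavior.

```shell
# Hypothetical PreToolUse-style guard hook. The stdin payload shape and
# the "nonzero exit blocks the action" convention are assumptions.
cat > block-rm-rf.sh <<'EOF'
#!/bin/sh
# Read the tool-call payload from stdin and refuse recursive force-deletes.
payload=$(cat)
if printf '%s' "$payload" | grep -q 'rm -rf'; then
  echo "blocked: refusing rm -rf" >&2
  exit 2   # nonzero exit asks the hook runner to block the action
fi
exit 0
EOF
chmod +x block-rm-rf.sh

# A harmless command passes through:
printf '{"command":"git status"}' | ./block-rm-rf.sh && echo allowed
```

Registered via /hooks, a script along these lines could refuse destructive commands before they run.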

This release introduces Memory Filesystem (experimental) - your agent’s memory blocks now sync with local files in .letta/memory/, enabling direct editing and version control of agent memory.
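
Because the blocks are plain files, everyday tools work on them. A minimal sketch, assuming the .letta/memory/ layout described above (the file names here are illustrative, not canonical):

```shell
# Stand-in memory tree mirroring the documented .letta/memory/ layout
# (in a real project these files appear once memfs is enabled).
mkdir -p memfs-demo/.letta/memory/system
echo "Prefers concise answers; works mostly in TypeScript." > memfs-demo/.letta/memory/system/human.md

# Inspect memory with ordinary tools:
grep -rl "TypeScript" memfs-demo/.letta/memory

# And version it like any other source tree:
git init -q memfs-demo
git -C memfs-demo add .letta/memory
git -C memfs-demo -c user.name=demo -c user.email=demo@example.com commit -qm "snapshot agent memory"
```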

  • Added Memory Filesystem (memfs) that syncs memory blocks with .letta/memory/ directory
  • Added agent-driven conflict resolution for memfs sync conflicts
  • Added /memfs command to view sync status and resolve conflicts
  • Added owner tags for tracking block ownership (system vs agent-created)
  • Added hierarchical memory organization with system/ prefix for core blocks
  • Added syncing-memory-filesystem built-in skill for conflict resolution guidance
  • Updated /init to create hierarchically organized memory blocks
  • Updated defragmenting-memory skill to use memfs instead of backup/restore scripts
  • Added MiniMax M2.1 model support
  • Added Kimi K2.5 model support
  • Added OpenRouter BYOK support via /connect
  • Added AWS Bedrock profile authentication method
  • Added Bedrock Opus 4.5 fallback suggestion for Anthropic API errors
  • Added UserPromptSubmit hook that fires when user submits a prompt
  • Added reasoning and assistant_message capture in PostToolUse and Stop hooks
  • Added LETTA_AGENT_ID environment variable injection into hooks
  • Added “Disable all hooks” toggle in /hooks command
  • Added memory log hook example script
  • Added agent-scoped skills directory (~/.letta/agents/{id}/skills/)
  • Added user prompt message highlighting
  • Added permissions status script
  • Changed conversation_search to no longer be a default tool (use recall subagent instead)
  • Fixed up/down arrow navigation with newlines in multi-line input
  • Fixed cursor visibility on newline characters in multi-line input
  • Fixed @file search to exclude venv and dependency directories
  • Fixed /feedback command formatting and context
  • Fixed approval dialog horizontal lines to extend full terminal width
  • Fixed default agent creation on first bootup
  • Fixed paste support in hooks TUI inputs
  • Fixed hooks TUI with Enter to delete and better spacing
  • Fixed help text for letta --new flag
  • Fixed UserPromptSubmit hooks to not fire for slash commands
  • Fixed subagents to be marked as hidden on creation
  • Fixed LLM error retry to not retry 4xx client errors
  • Fixed model selector display on self-hosted when default model unavailable
  • Fixed agents limit exceeded error and added deletion support in /agents
  • Fixed -m flag to correctly apply model variants with same handle
  • Added AWS Bedrock support to /connect command
  • Added multi-server support with settings indexed by server URL
  • Added regex tool name matching for hooks (e.g., "Edit|Write")
  • Added message retry on premature interrupt
  • Added desktop notification hook script
  • Added rm -rf block hook script example
  • Improved /connect command and model selector UX
  • Disabled Incognito agent creation by default
  • Improved localhost connection handling
  • Fixed login screen styling to match other menus
  • Fixed error message formatting
  • Added Stop hook continuation on blocking (hook can keep agent working)
  • Added example hook scripts for common patterns
  • Improved message queueing for smoother UX
  • Fixed backfill failures to be handled gracefully instead of crashing
  • Added search field to model selector (both Supported and All Available tabs)
  • Fixed /compact to use correct conversations endpoint
  • Fixed agent info bar layout to prevent overflow
  • Added Claude Code-compatible hooks system with /hooks command for automating workflows
  • Added cross-platform support for hooks executor (Windows, macOS, Linux)
  • Added ViewImage tool for attaching local images to conversation context
  • Added search field to model selector on both tabs
  • Added Bedrock Opus 4.5 model
  • Added conversation ID display in agent info bar
  • Added immediate mode for interactive commands
  • Improved cancellation with graceful 30s timeout before force-abort
  • Fixed bash mode input locking, ESC cancellation, and removed timeout
  • Fixed bash mode process group spawn/kill for proper cleanup
  • Fixed bash mode Ctrl+C interrupt handling
  • Fixed toolset switching to be atomic (prevents tool desync race)
  • Fixed hooks config state to use settings as source of truth
  • Fixed @ file selection during search debounce
  • Fixed 5MB image size limit with progressive compression
  • Fixed invalid tool call ID recovery
  • Fixed stale queued approvals after successful approval flow
  • Fixed Skill tool isolated blocks in conversation context
  • Fixed messages starting with / to be sent to the agent when the command is unknown
  • Fixed auto-update ENOTEMPTY errors with cleanup and retry
  • Added image reading support to Read tool (PNG, JPG, GIF, WEBP, BMP files are visually displayed)
  • Added shell alias expansion in bash mode (sources from .zshrc, .bashrc, etc.)
  • Added query prefill support for /search command (/search [query])
  • Added arrow key navigation for tab switching in /models
  • Improved Skill tool output with more explicit success messages
  • Added automatic retry for 409 “conversation busy” errors
  • Added message restoration to input field after queue errors
  • Fixed agent name consistency using single source of truth
  • Fixed /clear command output message to clarify messages are moved to history
  • Fixed streaming flicker with aggressive static content promotion
  • Fixed cursor position placed at end when navigating command history
  • Fixed ADE links to work in tmux
  • Reduced image resize limit to 2000px for multi-image requests
  • Fixed queue-cancel hang and stuck queue issues
  • Fixed premature cancellation of server-side tools in mixed execution
  • Added automatic image resizing for clipboard paste (images larger than 2048x2048 are resized to fit API limits)
  • Improved error feedback when image paste fails
  • Fixed conversation ID to be passed correctly to resume data retrieval

Default startup behavior reverted to single-threaded experience. Based on user feedback, letta (with no flags) now resumes the agent’s “default” conversation instead of creating a new conversation each time.

| Command                       | 0.13.0 - 0.13.3                    | 0.13.4+                      |
| ----------------------------- | ---------------------------------- | ---------------------------- |
| letta                         | Creates new conversation each time | Uses “default” conversation  |
| letta --new                   | Error (was deprecated)             | Creates a new conversation   |
| letta --continue (no session) | Silently creates new               | Errors with helpful message  |
  • Changed letta (no flags) to resume the “default” conversation with message history
  • Repurposed --new flag to create a new conversation (for users who want concurrent sessions)
  • Changed --continue fallback to error with helpful suggestions instead of silently creating new
  • Added messaging-agents bundled skill for sending messages to other agents
  • Added ability to deploy existing agents as subagents via the Task tool
  • Fixed interrupt handling race condition when tool approvals are in flight
  • Added skills frontmatter pre-loading for subagents (skills defined in subagent configs are auto-loaded)
  • Added output truncation for Task tool to prevent context overflow
  • Added auto-cleanup for overflow files
  • Fixed auto-allowed tool execution tracking for proper interrupt handling
  • Fixed hardcoded embedding model (now uses server default)
  • Added --default flag to access agent’s default conversation (alias for --conv default)
  • Added --conv <agent-id> shorthand (e.g., letta --conv agent-xyz → uses that agent’s default conversation)
  • Added default conversation to /resume selector (appears at top of list)
  • Added working-in-parallel bundled skill for coordinating parallel subagent tasks
  • Added conversation resume hint in exit stats
  • Improved startup performance with reduced time-to-boot
  • Fixed stale approvals being auto-cancelled on session resume
  • Fixed auth type display on startup
  • Fixed memory block retrieval for /memory command

This release introduces Conversations - a major change to how Letta Code manages chat sessions. Your agent can now have many parallel conversations, each contributing to its learning, memory, and shared history.

Before 0.13.0:

  • Each agent had a single conversation
  • Starting Letta Code resumed the same conversation
  • /clear reset the agent’s context window

After 0.13.0 (updated in 0.13.4):

  • Each startup resumes the default conversation (reverted from 0.13.0-0.13.3 behavior)
  • Your agent’s memory is shared across all conversations
  • /new creates a new conversation (for parallel sessions)
  • /clear clears in-context messages
  • Use /resume to browse and switch between past conversations
  • Use letta --new to create a new conversation for concurrent sessions

If you’re upgrading from an earlier version, you may notice that starting Letta Code puts you in a new conversation instead of continuing where you left off. Here’s what happened:

Your old messages still exist! They’re in your agent’s default conversation - the original message history before conversations were introduced. You can find it at the top of the /resume selector, or access it directly with the commands below.

Easiest way to access them (0.13.1+):

# Use the --default flag with your agent name
letta -n "Your Agent Name" --default
# Or with agent ID
letta --agent <your-agent-id> --default
# Or use the shorthand (agent ID only)
letta --conv <your-agent-id>

The default conversation also appears at the top of the /resume selector.

Alternative methods:

  1. View them on the web at https://app.letta.com/agents/<your-agent-id>

  2. Have your agent recall them using one of these prompts:

    Using the recall subagent:

    Can you use the recall subagent to find our most recent messages? I'd like to continue where we left off.

    Using the conversation_search tool:

    Can you use conversation_search to find our most recent messages so we can continue where we left off?

    Using the searching-messages skill:

    Can you load the searching-messages skill and use it to find our most recent messages?
  3. Export and reference your agent file by running /export (saves to <agent-id>.af), then ask your agent:

    I downloaded my agent file to ./agent-xxxx.af - can you read it and look through the "messages" array to find our most recent conversation?

Going forward, all new conversations will be accessible via /resume, and the default conversation is always available via --default.
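
For step 3 above, you can also inspect the exported agent file yourself. A sketch, assuming the .af export is JSON with a top-level "messages" array (implied by the prompt in step 3; the exact field names are assumptions):

```shell
# Stand-in .af file so the snippet runs anywhere; point the script at your
# real /export output instead. Assumes JSON with a "messages" array.
printf '%s' '{"messages":[{"role":"user","content":"hi"},{"role":"assistant","content":"hello!"}]}' > agent-demo.af

python3 - <<'PY'
import json

af = json.load(open("agent-demo.af"))
for m in af["messages"][-5:]:                 # print the last few messages
    print(m.get("role"), "->", str(m.get("content"))[:80])
PY
```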

  • Added Conversations support - each session creates an isolated conversation while sharing agent memory
  • Added /resume command to browse and switch between past conversations
  • Added --resume (-r) and --continue (-c) flags to resume last session
  • Added --conversation (-C, --conv) flag to resume a specific conversation by ID
  • Added default agents (Memo and Incognito) auto-created for new users
  • Changed /clear to start a new conversation (non-destructive) instead of deleting messages (reverted in 0.13.4)
  • Fixed Task tool rendering issues with parallel subagents
  • Fixed ADE links to include conversation context
  • Fixed text wrapping in collapsed bash output display
  • Renamed memory-defrag skill to defragmenting-memory to follow naming conventions
  • Added automatic retry for transient network errors during LLM streaming
  • Improved plan mode flexibility for writing plan files
  • Added memory subagent for cleaning up and reorganizing memory blocks
  • Added defragmenting-memory built-in skill with backup/restore workflow
  • Added streaming output display for long-running bash commands
  • Added line count summary for Read tool results
  • Added network retry for transient LLM streaming errors
  • Added Skill tool support in plan mode (load/unload/refresh are read-only)
  • Fixed tool approval flow that was broken by ESC handling changes
  • Improved Task tool and subagent display rendering
  • Fixed UI flickering in Ghostty terminal
  • Added terminal title and progress indicator for approval screens
  • Added LETTA_DEBUG_TIMINGS environment variable for request timing diagnostics
  • Fixed “Create new agent” from selector being stuck in a loop
  • Fixed subagent display spacing and extra newlines
  • Fixed subagent live streaming not updating during execution
  • Added LSP diagnostics to Read tool for TypeScript and Python files
  • Added refresh command to Task tool for rescanning custom subagents
  • Added file-based overflow for long tool outputs
  • Fixed left/right arrow key cursor navigation in approval text inputs
  • Fixed pre-stream approval desync errors with keep-alive recovery
  • Fixed subagents not inheriting parent’s tool permission rules
  • Added /ralph and /yolo-ralph commands for autonomous agentic loop mode
  • Fixed read-only subagents (explore, plan, recall) to work in plan mode
  • Fixed Windows PowerShell ENOENT errors with shell fallback
  • Added recall subagent for searching parent agent’s conversation history
  • Fixed agent selector not showing when LRU agent retrieval fails
  • Fixed approval desync issues for slash commands and queued messages
  • Fixed SDK retry race conditions on streaming requests
  • Fixed pending approval denials not being cached on ESC interrupt
  • Fixed stale processConversation calls affecting UI state after interrupts
  • Refactored to use new client-side tool calling via the messages endpoint
  • Added acquiring-skills skill for discovering and installing skills from external repositories
  • Added migrating-memory skill for copying memory blocks between agents
  • Updated skills system (migrating-memory, finding-agents, searching-messages)
  • Improved interrupt handling with better messaging
  • Fixed ESC interrupt to properly stop streams
  • Fixed skill scripts to work when installed via npm
  • Fixed Task tool (subagent) rendering issues
  • Fixed bash mode exit behavior after submitting commands
  • Fixed binary file detection being overly aggressive
  • Fixed approval results handling when auto-handling remaining approvals
  • Fixed stream retry behavior after interrupts
  • Added system prompt and memory block configuration for headless mode
  • Added --input-format stream-json flag for programmatic input handling
  • Improved parallel tool call approval UI
  • Added inline dialogs for improved user experience
  • Improved token counter display
  • Fixed server-side tools incorrectly showing as interrupted
  • Fixed Windows installation issues
  • Fixed keyboard shortcuts for Ctrl+C, Ctrl+V, and Shift+Enter
  • Fixed iTerm2 keybindings
  • Fixed ESC and Ctrl+C handling across all dialogs
  • Added desktop notifications when UI needs user attention
  • Added read-only shell commands support in plan mode
  • Added Ctrl+V support for clipboard image paste in all terminals
  • Fixed keybindings
  • Fixed model name display in welcome screen
  • Added Shift+Enter multi-line input support
  • Added visual diffs for Edit/Write tool returns
  • Added automatic retry for transient LLM API errors
  • Added custom slash commands support (/commands)
  • Added scrolling and manual ordering to command autocomplete
  • Added toggle to show all agents in /agents view
  • Added per-resource queues for parallel tool execution
  • Fixed plan mode on non-default toolsets
  • Fixed CLI crash when browser auto-open fails in WSL
  • Added GLM-4.7 model support
  • Added /new command for creating new agents
  • Added /feedback command improvements
  • Added memory reminders to improve memory usage
  • Renamed /resume to /agents (with backwards-compatible alias)
  • Fixed plan mode path resolution on Windows
  • Added support for bundled skills and multi-source skill discovery
  • Increased loaded_skills block limit to 100k characters
  • Added support for Claude Pro and Max plans
  • Added optional telemetry
  • Added --system flag for existing agents
  • Fixed Windows-specific issues
  • Added /help command with interactive dialog
  • Added /mcp command for MCP server management
  • Added /compact command for message compaction
  • Added text search for all models
  • Improved memory tool visibility with colored name and diff output
  • Added BYOK (Bring Your Own Key) support for using your own API keys
  • Added /usage command to check usage and credits
  • Added --info flag to show project and agent info
  • Added naming dialog when pinning agents
  • Added /memory command to view agent memory blocks
  • Added add-model skill for adding new LLM models
  • Added Gemini 3 Flash model support
  • Added feedback UI
  • Added support for relative paths in all tools
  • Added tab completion for slash commands
  • Added Kimi K2 Thinking model
  • Added personalized thinking prompts with agent name
  • Added goodbye message on exit
  • Renamed /bashes to /bg
  • Added stateless subagents via Task tool
  • Added Kimi K2 Thinking model support
  • Improved subagents UI
  • Added autocomplete for slash commands
  • Improved startup speed with cached tool initialization
  • Added exit and quit as aliases for /exit
  • Added profile-based persistence with startup selector
  • Added /profile command for managing profiles
  • Added simplified welcome screen design
  • Added double Ctrl+C to exit from approval screen
  • Added paginated agent list in /resume
  • Added /description command to update agent description
  • Added message search
  • Added /resume command with improved agent selector UI
  • Added LETTA_DEBUG environment variable for debug logging
  • Added agent description support
  • Added GPT-5.2 support
  • Added Gemini 3 (Vertex) support
  • Added startup status messages showing agent info
  • Added /init command for initializing memory blocks
  • Added system prompt swapping
  • Changed default naming to PascalCase
  • Added /download command to export agent file locally
  • Added Skills omni-tool
  • Added Claude Opus 4.5 support
  • Added toolset switching UI
  • Added --toolset flag
  • Added Gemini tools support
  • Added model-based toolset switching
  • Added eager cancel functionality
  • Added sleeptime memory management
  • Added --sleeptime CLI flag
  • Added GPT-5.1 models support
  • Added Gemini 3 models support
  • Added --fresh-blocks flag for isolation
  • Added /swap command for model switching
  • Added /link and /unlink commands for managing agent tools
  • Added Skills support
  • Added parallel tool calling
  • Added multi-device sign-in support
  • Added agent renaming capability
  • Added Sonnet 4.5 with 180k context window
  • Added multiline input support
  • Added --new flag for creating new memory blocks
  • Added agent URL display in commands
  • Added Claude Haiku 4.5 to model selector
  • Added project-level agent persistence with auto-resume
  • Added API key caching
  • Added --model flag
  • Added GLM-4.6 support
  • Added autocomplete for commands
  • Added up/down for history navigation
  • Added fetch_web to default tool list
  • Added stream-json output format
  • Added pretty preview for file listings in approval dialog
  • Added LETTA_BASE_URL environment variable support
  • Added usage tracking
  • Added ESC to cancel operations
  • Added Ctrl+C exit with agent state dump
  • Initial release of Letta Code, the memory-first coding agent