# Changelog

Release notes and version history for Letta Code. All notable changes are documented here.
## 0.17.0

- Renamed `letta remote` to `letta server` (with `remote` kept as an alias)
- Added `letta/auto` and `letta/auto-fast` model support in model selection
- Added max (`xhigh`) reasoning tiers for Sonnet 4.6 and Opus 4.6
- Added none/low/medium/high reasoning tiers for Opus 4.5
- Added auto-init memory bootstrap on first message for new MemFS agents
- Changed `/init` to use shallow and deep initialization tiers
- Added subagent type tags when creating new subagents
- Added `agent_id`, `conversation_id`, and `last_run_id` to `/statusline` payloads
- Changed app URL generation to use centralized `/chat` links
- Fixed headless `409 CONFLICT` busy retries to use exponential backoff
- Fixed startup guard for missing default conversation IDs
- Improved readability of command-IO and toolset-change reminders
## 0.16.15

- Added unified provider normalization for `/connect` and `letta connect`
- Added `chatgpt` as canonical connect provider token (`codex` remains an alias)
- Added background agent indicator in the footer
- Added `background_agents` in `/statusline` payloads
- Added listener version metadata in `letta remote` registration
- Changed default model from Sonnet 4.5 to Sonnet 4.6
- Fixed `Shift+Tab` to enter plan mode before auto-approve
- Fixed `letta remote` re-registration to recover instead of crash
- Fixed `/model` selection persistence for default conversations
- Improved idle-time flushing of background subagent notifications
- Improved `approve-always` pattern generation for read-only `gh` commands
## 0.16.14

- Added `/compaction` command for interactively selecting compaction mode
- Added `--debug` flag to `letta remote` for plain-text console logging
- Added always-on session log written to `~/.letta/logs/remote/` for every `letta remote` session
- Added debug log file written to `~/.letta/logs/debug/` for diagnostics
- Changed `/init` to run as a background subagent (non-blocking initialization)
- Changed reflection subagent output to be silenced from the primary agent’s context
- Fixed model selection not persisting when starting new conversations
- Fixed Bash tool git commit and PR instructions for Codex and Gemini toolsets
## 0.16.13

- Added update availability notifications in TUI and footer when a newer version is released
- Added `/compact self_compact_all` and `/compact self_compact_sliding_window` modes for agent-driven compaction
- Added `/reasoning-tab` command to toggle Tab key cycling through reasoning effort levels (opt-in, disabled by default)
- Added session usage details persistence shown in exit summary
- Removed `plan` subagent (plan mode no longer delegates to a dedicated subagent)
- Removed `defragmenting-memory` skill (memory subagent is now self-contained)
- Fixed `/clear` command to skip server-side message reset for named conversations
- Fixed `/model` and `/reasoning` commands to stay scoped to the current conversation
- Fixed plan mode permission level not being reliably restored after exiting
- Fixed plan mode path handling with quote-aware shell parsing for `apply_patch` and scoped paths
- Fixed Cloudflare HTML 5xx errors to be handled gracefully with user-friendly messages
- Fixed ChatGPT `usage_limit_reached` errors to display the reset time
- Fixed API key caching to survive keychain failures mid-session
- Fixed clickable ADE and usage links in agent info bar
- Fixed `/model` selector to deduplicate models by handle
- Fixed model handle display to remove `s:`/`t:` suffixes from status bar
- Fixed auto-open file viewers being skipped in SSH sessions
- Fixed memfs git credential helper value to be redacted from debug logs
- Improved streaming resilience in `letta remote` mode
## 0.16.12

- Renamed `letta listen` command to `letta remote` (old name still works as an alias)
- Added automatic retry on empty LLM responses
- Fixed `--conv default` to work alongside the `--new-agent` flag
- Fixed approval data to be preserved across stream resume boundaries
- Fixed TUI reflow glitches in footer and on terminal resize
- Fixed ADE link and streaming elapsed timer stability across terminal resizes
## 0.16.11

- Added GPT-5.3 Codex model tiers support
- Fixed WebSocket connectivity issue
- Fixed cosmetic display issue for newly created agents
## 0.16.10

- Simplified `letta listen` command: removed explicit binding flags and added auto-generated session names
- Fixed API key detection to read from environment variables rather than checking repo secrets
- Fixed memory defrag flow for git-backed memfs
## 0.16.9

- Fixed settings collision when running Letta Code from the home directory (project vs. global settings conflict)
- Updated featured models list to show only the latest frontier model per provider
## 0.16.8

- Fixed hooks not logging a full error stack trace when project settings are absent
## 0.16.7

- Added `/install-github-app` setup wizard for GitHub App integration
- Added `bootstrap_session_state` headless API for pre-configuring session state and memfs startup policy
- Fixed startup performance by skipping no-op preset refresh when resuming existing agents
- Fixed duplicate frontmatter blocks appearing in memory subagent prompt
- Fixed interactive tools being auto-approved in listen mode
- Fixed headless `list_messages` to scope correctly to the default conversation
- Fixed last-character clipping on deferred-wrap terminals
## 0.16.6

- Fixed user permission settings path to use `~/.letta` instead of `~/.config/letta`
## 0.16.5

- Fixed default conversation sentinel handling in headless startup
## 0.16.4

- Added headless `list-messages` protocol command
- Fixed reasoning tag display when reasoning is set to none
- Disabled `xhigh` reasoning tier for Anthropic models
## 0.16.3

- Added `/palace` command alias for Memory Palace
- Added plan viewer with browser preview (opens plan markdown in browser)
- Added symlink support for skills installed in `~/.letta/skills/`
- Fixed Node 18 compatibility with `node:crypto` import
- Fixed `bypassPermissions` mode not persisting across transient retries
- Fixed transcript backfill on conversation resume
- Fixed assistant anchor recency on resume
- Fixed default conversation for freshly created subagents
- Fixed compaction reflection trigger for legacy summary format
- Fixed model preset settings not refreshing on resume
## 0.16.2

- Improved startup performance by eliminating redundant API calls
- Fixed slash commands blocked after interrupt during tool execution
- Fixed Memory Palace auto-open in tmux on macOS
- Fixed shared reminders being disabled for subagents
- Fixed env var resolution in reflection/history-analyzer commit trailers
- Fixed Memory Palace nesting level display
- Fixed Memory Palace handling of `$` in memory content
- Fixed retry on quota limit errors
## 0.16.1

- Added Memory Palace static HTML viewer (opens memory visualization in browser from `/memfs`)
- Added auto-enable memfs from server-side tag on new machines
- Fixed error formatting for `/init`
- Fixed version reporting in feedback and telemetry
- Fixed headless mode defaulting to new conversation to prevent 409 race
- Fixed `/memory` command label to say “memory” instead of “memory blocks”
- Fixed `max_output_tokens` for GPT-5 reasoning variants
- Fixed LLM streaming error provider mapping
## 0.16.0

- Added reasoning settings step to `/model` selector for choosing reasoning tier after model selection
- Added Tab key cycling of reasoning tiers from the input area
- Added Sonnet 4.6 1M context window model variant
- Added Gemini 3.1 Pro Preview model support
- Added `auto` option for toolset mode with persistence across sessions
- Added `LETTA_MEMFS_LOCAL` env var to enable memfs on self-hosted servers
- Added Edit tool start line number in code diff display
- Added elapsed time display for running shell tools
- Added memory application guidance to system prompts
- Aligned TaskOutput display format with Bash output
- Fixed slash commands incorrectly opening state when no arguments are accepted
- Fixed ADE links for default conversation
- Fixed user settings being clobbered on save
- Fixed Gemini/GLM tools not working in plan mode
- Fixed permission mode desyncs in YOLO approval handling
- Fixed reasoning effort display in footer and model selector
- Fixed conversation routing after creating new agent in `/agents`
- Fixed plan file `apply_patch` paths not allowed in plan mode
- Fixed auto toolset detection for ChatGPT OAuth as Codex
- Fixed agents limit exceeded retry behavior
- Fixed default agent creation when base tools are missing
- Disabled hidden SDK retries for streaming POSTs
- Removed Sonnet 4.5 from featured models
## 0.15.6

- Added `sonnet` shortcut for Sonnet 4.6 model in `-m` flag
## 0.15.5

- Added Sonnet 4.6 model support (set as new default model)
- Changed featured model from Sonnet 4.5 to Sonnet 4.6
## 0.15.4

- Added `--skill-sources` flag to control which skill sources are enabled (comma-separated: `bundled`, `global`, `agent`, `project`, or `all`)
- Added `--no-bundled-skills` flag to disable only bundled skills while keeping other sources
- Added headless reflection settings: `--reflection-trigger`, `--reflection-behavior`, `--reflection-step-count`
- Added `--no-system-info-reminder` flag to suppress first-turn environment context reminder
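The two skill-source flags above compose like this (a hypothetical invocation sketch; the flag names come from these notes, but exact semantics may differ):

```shell
# Enable only bundled and project skills, skipping global and agent-scoped ones
letta --skill-sources bundled,project

# Keep every skill source except the bundled skills
letta --no-bundled-skills
```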
## 0.15.3

- Added `MEMORY_DIR` and `AGENT_ID` environment variables exposed in shell tools
- Fixed ChatGPT connection and OAuth error retry classification
- Fixed slash command queueing while agent is running
- Fixed memfs/standard prompt section reconciliation
- Fixed auto-update execution and release gating
- Fixed interrupt timer reset on eager cancel
- Fixed skills reminders appearing in headless sessions
- Removed `max_turns` option from Task tool
## 0.15.2

- Added `/skills` command to browse available skills by source (bundled, global, agent, project)
- Added `memfs_enabled` status in headless init messages
- Fixed agent not being reused on directory switch (no longer creates unnecessary new agents)
- Fixed over-escaped strings in Edit tool
- Fixed memfs git pull authentication and credential URL normalization
- Fixed shell auto-approval path checks
- Fixed Windows absolute file-rule matching in permissions
- Fixed bundled JS subagent launcher on Windows
- Fixed model tier selection when model ID is selected directly
## 0.15.1

- Added `--no-skills` flag to disable bundled skills
- Added `--tags` flag for headless mode agent tagging
- Added MiniMax M2.5 model support
- Added specific retry messages for known LLM provider errors
- Improved reflection subagent to autonomously complete merge and push operations
- Removed `conversation_search` from default toolset
- Fixed ghost assistant messages when Anthropic returns `[text, thinking, text]` block order
- Fixed CONFLICT errors after interrupt during tool execution
- Fixed interrupt recovery to only trigger on real user interrupts
- Fixed memfs init steps in headless mode
- Fixed ADE default conversation link
## 0.15.0

- Added git-backed memory filesystem sync with automatic commit and push
- Added `reflection` subagent for background memory analysis and updates
- Added `history-analyzer` subagent and `migrating-from-codex-and-claude-code` skill for importing history from Claude Code and Codex CLIs
- Added `/statusline` command for configurable CLI footer status lines
- Added `/sleeptime` command for client-side reflection trigger settings
- Added `/compact [all|sliding_window]` mode options
- Added GLM-5 model support
- Renamed `/download` to `/export` and `--from-af` to `--import` (old names still work as aliases)
- Changed compaction to set summary as first message in conversation
- Fixed `@` file browser deep recursive search when browsing parent directories
- Fixed `/context` double rendering on short terminals
- Fixed system-reminder tag rendering in message backfill
- Fixed stream ID accumulator collisions
## 0.14.16

- Added custom tools support for SDK-side tool registration and execution in bidirectional mode
- Added agent registry import support (`@author/name` format for `--from-af`)
- Changed `PreCompact` hooks to also fire during server-side automatic compaction
- Fixed OpenAI encrypted content organization mismatch error handling
- Fixed headless pre-stream approval conflict recovery
- Fixed slash-prefixed messages without trailing space not being sendable
- Fixed model updates not sending `max_tokens` configuration to cloud
- Fixed memfs not detaching all memory tool variants when enabled
- Improved headless mode interactive tool behavior for bidirectional parity
## 0.14.15

- Added brand accent color for markdown link text
- Improved rendering stability and flicker prevention across footer and streaming status
- Improved always-allow behavior for skill script permissions
- Improved error guidance for model availability and credit issues
- Fixed `memory` defrag subagent to run in background
- Fixed MiniMax errors at higher token counts
- Fixed duplicate feedback command output on submit
- Fixed oversized shell tool output clipping in collapsed tool results
- Fixed task transcript propagation for max-step failures
- Removed ChatGPT OAuth Pro plan restriction for Codex connect
## 0.14.14

- Fixed `--max-turns` flag not accepted at top-level CLI
- Fixed subagent keychain migration churn
## 0.14.13

- Added GPT-5.3 Codex model support (ChatGPT Plus/Pro)
- Added permission mode restoration when exiting plan mode
- Fixed headless permission wait deadlock
- Fixed `/model` selection for shared-handle model tiers with different reasoning efforts
- Fixed subagent model display consistency
## 0.14.12

- Rewrote the Skill tool from load/unload commands to direct invocation (`skill: "name"` with optional `args`)
- Removed `skills` and `loaded_skills` memory blocks (skills are now listed in system-reminder messages)
- Added `--embedding` flag to specify embedding model when creating agents in headless mode
- Added `/context` command token usage breakdown by category (system, core memory, functions, messages)
- Added `showCompactions` setting to hide compaction messages (defaults to `false`)
- Added `LETTA_PACKAGE_MANAGER` env var to override detected package manager for auto-updates
- Changed featured model from Opus 4.5 to Opus 4.6
- Improved auto-updater to detect package manager (npm/bun/pnpm) instead of hardcoding npm
- Improved subagent auth to inherit credentials from parent, avoiding keychain contention
- Improved keychain availability check to use a non-mutating probe instead of set/delete
- Fixed `UserPromptSubmit` hook firing repeatedly on message dequeue
- Fixed `Stop` hook using first user message instead of most recent
- Removed `Setup` hook event type
- Updated memfs skill and system prompt
## 0.14.11

- Added prompt-based hooks that use an LLM to evaluate whether actions should be allowed or blocked
- Added `/context` command braille area chart showing token usage history across turns
- Added skills extraction and packaging for `--from-af` agent file imports
- Added `max_turns` parameter to Task tool for limiting subagent turn count
- Improved message queue to wait for tool completion instead of using a 15-second timeout
- Fixed Shift+Enter by normalizing newlines before keypress parsing
- Fixed keyboard protocol report filtering and scoped Linux Enter key handling
- Fixed `loaded_skills` block not resetting on new conversation
- Fixed memFS system prompt not updating based on `--memfs`/`--no-memfs` CLI flags
- Fixed empty assistant message bullets rendering
- Fixed skills directory path shown in extraction message
- Fixed subagent static promotion race during tool result reentry
## 0.14.10

- Added `/context` command to show context window usage with visual token bar
- Added background task support for `Task` and `Bash` tools via `run_in_background` parameter
- Added `TaskOutput` tool to retrieve output from background tasks
- Added `TaskStop` tool to stop running background tasks
- Added background task completion notifications injected into conversation
- Added `additionalContext` support for `PostToolUse` hooks (JSON output parsed for context injection)
- Added Claude Opus 4.6 model support
- Improved subagent status display with aligned dots, headers, and dimming for running agents
- Improved tool call dot phases and colors for clearer execution feedback
- Fixed `/download` command to pass `conversation_id` for non-default conversations
## 0.14.9

- Added number key support (1-9) to approval dialogs for quick selection
- Enabled memfs in headless mode when using `--agent` flag
- Fixed Enter key handling on Linux terminals that emit `\n` instead of `\r`
- Fixed error handling in headless bidirectional mode
- Fixed MCP skill templates with corrected paths
- Fixed malformed `AskUserQuestion` falling through to generic approval
## 0.14.8

- Added `converting-mcps-to-skills` bundled skill for connecting to MCP servers
- Added `PostToolUseFailure` hook that runs after tool failures (feeds stderr back to agent)
- Changed `SessionEnd` hooks to also run on Ctrl+C (SIGINT)
- Added conversation renaming when renaming agents via `/rename`
- Improved `SessionStart` hooks with feedback injection
- Improved post tool use feedback injection
- Fixed compaction display to show simple “Conversation compacted” message
- Fixed context windows to be fetched from the server instead of hardcoded
- Fixed Windows PATH handling and PowerShell quoting
- Fixed thinking/assistant block spacing preservation during streaming
- Fixed logo resetting to flat frame when loading completes
- Fixed duplicate rendering of auto-approved file tools
- Fixed 409 “conversation busy” errors with exponential backoff
- Fixed flicker on tall approval dialogs
## 0.14.7

- Added trajectory stats tracking and completion summary on exit
- Improved `/memory` viewer to prioritize `system/` directory at top
- Fixed input area collapsing during approvals and selector overlays
- Fixed slash command menu render flicker
- Fixed rendering instability that caused line flicker
## 0.14.6

- Added alien art to command preview and exit message
- Added BYOK-aware model resolution with fallback for subagents
- Added network phase arrows to streaming status indicator
- Fixed handling of malformed `AskUserQuestion` data from LLM
- Fixed `/usage` command formatting
- Fixed `/memfs` position in command autocomplete order
- Fixed mojibake detection to preserve valid Unicode characters
- Fixed loading state layout consistency during startup
- Fixed autocomplete to show “No matching commands” instead of hiding
- Fixed `<Text>` encoding non-ASCII characters in Bun
## 0.14.5

- Added `--from-agent` flag for agent-to-agent communication in headless mode
- Refactored skill scripts into CLI subcommands (`letta memfs`, `letta blocks`, etc.)
## 0.14.4

- Added compaction messages display and new summary message type handling
- Fixed skill diffing code
- Fixed memfs skill scripts
## 0.14.3

- Fixed memfs frontmatter round-trip to preserve block metadata
## 0.14.2

- Fixed extra vertical spacing between memory block tabs
- Fixed Task tool approval dialogs to show full prompt
- Improved memfs sync performance
## 0.14.1

- Enabled Memory Filesystem (memfs) by default for newly created agents
- Added `--memfs`/`--no-memfs` CLI flags to control memfs on agent creation
- Fixed Bun string encoding issues
## 0.14.0

This release introduces Memory Filesystem (experimental): your agent’s memory blocks now sync with local files in `.letta/memory/`, enabling direct editing and version control of agent memory.

### Memory Filesystem (experimental)

- Added Memory Filesystem (memfs) that syncs memory blocks with the `.letta/memory/` directory
- Added agent-driven conflict resolution for memfs sync conflicts
- Added `/memfs` command to view sync status and resolve conflicts
- Added owner tags for tracking block ownership (system vs agent-created)
- Added hierarchical memory organization with `system/` prefix for core blocks
- Added `syncing-memory-filesystem` built-in skill for conflict resolution guidance
- Updated `/init` to create hierarchically organized memory blocks
- Updated `defragmenting-memory` skill to use memfs instead of backup/restore scripts
### New Models and Providers

- Added MiniMax M2.1 model support
- Added Kimi K2.5 model support
- Added OpenRouter BYOK support via `/connect`
- Added AWS Bedrock profile authentication method
- Added Bedrock Opus 4.5 fallback suggestion for Anthropic API errors
### Hooks Enhancements

- Added `UserPromptSubmit` hook that fires when user submits a prompt
- Added `reasoning` and `assistant_message` capture in `PostToolUse` and `Stop` hooks
- Added `LETTA_AGENT_ID` environment variable injection into hooks
- Added “Disable all hooks” toggle in `/hooks` command
- Added memory log hook example script
### Other Features

- Added agent-scoped skills directory (`~/.letta/agents/{id}/skills/`)
- Added user prompt message highlighting
- Added permissions status script
- Changed `conversation_search` to no longer be a default tool (use `recall` subagent instead)
- Fixed up/down arrow navigation with newlines in multi-line input
- Fixed cursor visibility on newline characters in multi-line input
- Fixed `@` file search to exclude `venv` and dependency directories
- Fixed `/feedback` command formatting and context
- Fixed approval dialog horizontal lines to extend full terminal width
- Fixed default agent creation on first bootup
- Fixed paste support in hooks TUI inputs
- Fixed hooks TUI with Enter to delete and better spacing
- Fixed help text for `letta --new` flag
- Fixed `UserPromptSubmit` hooks to not fire for slash commands
- Fixed subagents to be marked as hidden on creation
- Fixed LLM error retry to not retry 4xx client errors
- Fixed model selector display on self-hosted when default model unavailable
## 0.13.11

- Fixed agents limit exceeded error and added deletion support in `/agents`
- Fixed `-m` flag to correctly apply model variants with same handle
## 0.13.10

- Added AWS Bedrock support to `/connect` command
- Added multi-server support with settings indexed by server URL
- Added regex tool name matching for hooks (e.g., `"Edit|Write"`)
- Added message retry on premature interrupt
- Added desktop notification hook script
- Added `rm -rf` block hook script example
- Improved `/connect` command and model selector UX
- Disabled Incognito agent creation by default
- Improved localhost connection handling
- Fixed login screen styling to match other menus
- Fixed error message formatting
## 0.13.9

- Added Stop hook continuation on blocking (hook can keep agent working)
- Added example hook scripts for common patterns
- Improved message queueing for smoother UX
- Fixed backfill failures to be handled gracefully instead of crashing
## 0.13.8

- Added search field to model selector (both Supported and All Available tabs)
- Fixed `/compact` to use correct conversations endpoint
- Fixed agent info bar layout to prevent overflow
## 0.13.7

- Added Claude Code-compatible hooks system with `/hooks` command for automating workflows
- Added cross-platform support for hooks executor (Windows, macOS, Linux)
- Added ViewImage tool for attaching local images to conversation context
- Added search field to model selector on both tabs
- Added Bedrock Opus 4.5 model
- Added conversation ID display in agent info bar
- Added immediate mode for interactive commands
- Improved cancellation with graceful 30s timeout before force-abort
- Fixed bash mode input locking, ESC cancellation, and removed timeout
- Fixed bash mode process group spawn/kill for proper cleanup
- Fixed bash mode Ctrl+C interrupt handling
- Fixed toolset switching to be atomic (prevents tool desync race)
- Fixed hooks config state to use settings as source of truth
- Fixed `@` file selection during search debounce
- Fixed 5MB image size limit with progressive compression
- Fixed invalid tool call ID recovery
- Fixed stale queued approvals after successful approval flow
- Fixed Skill tool isolated blocks in conversation context
- Fixed messages starting with `/` to be sent to agent when unknown command
- Fixed auto-update ENOTEMPTY errors with cleanup and retry
## 0.13.6

- Added image reading support to Read tool (PNG, JPG, GIF, WEBP, BMP files are visually displayed)
- Added shell alias expansion in bash mode (sources from `.zshrc`, `.bashrc`, etc.)
- Added query prefill support for `/search` command (`/search [query]`)
- Added arrow key navigation for tab switching in `/models`
- Improved Skill tool output with more explicit success messages
- Added automatic retry for 409 “conversation busy” errors
- Added message restoration to input field after queue errors
- Fixed agent name consistency using single source of truth
- Fixed `/clear` command output message to clarify messages are moved to history
- Fixed streaming flicker with aggressive static content promotion
- Fixed cursor position placed at end when navigating command history
- Fixed ADE links to work in tmux
- Reduced image resize limit to 2000px for multi-image requests
- Fixed queue-cancel hang and stuck queue issues
- Fixed premature cancellation of server-side tools in mixed execution
## 0.13.5

- Added automatic image resizing for clipboard paste (images larger than 2048x2048 are resized to fit API limits)
- Improved error feedback when image paste fails
- Fixed conversation ID to be passed correctly to resume data retrieval
## 0.13.4

Default startup behavior reverted to the single-threaded experience. Based on user feedback, `letta` (with no flags) now resumes the agent’s “default” conversation instead of creating a new conversation each time.

| Command | 0.13.0 - 0.13.3 | 0.13.4+ |
|---|---|---|
| `letta` | Creates new conversation each time | Uses “default” conversation |
| `letta --new` | Error (was deprecated) | Creates a new conversation |
| `letta --continue` (no session) | Silently creates new | Errors with helpful message |

- Changed `letta` (no flags) to resume the “default” conversation with message history
- Repurposed `--new` flag to create a new conversation (for users who want concurrent sessions)
- Changed `--continue` fallback to error with helpful suggestions instead of silently creating new
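The three startup paths described above can be sketched as (illustrative invocations; assumes an existing agent):

```shell
letta             # resumes the agent's "default" conversation
letta --new       # starts a fresh conversation for a concurrent session
letta --continue  # resumes the last session; errors with a hint if none exists
```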
## 0.13.3

- Added `messaging-agents` bundled skill for sending messages to other agents
- Added ability to deploy existing agents as subagents via the Task tool
- Fixed interrupt handling race condition when tool approvals are in flight
## 0.13.2

- Added skills frontmatter pre-loading for subagents (skills defined in subagent configs are auto-loaded)
- Added output truncation for Task tool to prevent context overflow
- Added auto-cleanup for overflow files
- Fixed auto-allowed tool execution tracking for proper interrupt handling
- Fixed hardcoded embedding model (now uses server default)
## 0.13.1

- Added `--default` flag to access agent’s default conversation (alias for `--conv default`)
- Added `--conv <agent-id>` shorthand (e.g., `letta --conv agent-xyz` uses that agent’s default conversation)
- Added default conversation to `/resume` selector (appears at top of list)
- Added `working-in-parallel` bundled skill for coordinating parallel subagent tasks
- Added conversation resume hint in exit stats
- Improved startup performance with reduced time-to-boot
- Fixed stale approvals being auto-cancelled on session resume
- Fixed auth type display on startup
- Fixed memory block retrieval for `/memory` command
## 0.13.0

This release introduces Conversations, a major change to how Letta Code manages chat sessions. Your agent can now have many parallel conversations, each contributing to its learning, memory, and shared history.

### What Changed

Before 0.13.0:

- Each agent had a single conversation
- Starting Letta Code resumed the same conversation
- `/clear` reset the agent’s context window

After 0.13.0 (updated in 0.13.4):

- Each startup resumes the default conversation (reverted from 0.13.0-0.13.3 behavior)
- Your agent’s memory is shared across all conversations
- `/new` creates a new conversation (for parallel sessions)
- `/clear` clears in-context messages
- Use `/resume` to browse and switch between past conversations
- Use `letta --new` to create a new conversation for concurrent sessions
### Migration Guide

If you’re upgrading from an earlier version, you may notice that starting Letta Code puts you in a new conversation instead of continuing where you left off. Here’s what happened:

Your old messages still exist! They’re in your agent’s default conversation: the original message history before conversations were introduced. You can find it at the top of the `/resume` selector, or access it directly with the commands below.

Easiest way to access them (0.13.1+):

```shell
# Use the --default flag with your agent name
letta -n "Your Agent Name" --default

# Or with agent ID
letta --agent <your-agent-id> --default

# Or use the shorthand (agent ID only)
letta --conv <your-agent-id>
```

The default conversation also appears at the top of the `/resume` selector.

Alternative methods:

- View them on the web at `https://app.letta.com/agents/<your-agent-id>`
- Have your agent recall them using one of these prompts:
  - Using the `recall` subagent: “Can you use the recall subagent to find our most recent messages? I’d like to continue where we left off.”
  - Using the `conversation_search` tool: “Can you use conversation_search to find our most recent messages so we can continue where we left off?”
  - Using the `searching-messages` skill: “Can you load the searching-messages skill and use it to find our most recent messages?”
- Export and reference your agent file by running `/export` (saves to `<agent-id>.af`), then ask your agent: “I downloaded my agent file to ./agent-xxxx.af - can you read it and look through the ‘messages’ array to find our most recent conversation?”

Going forward, all new conversations will be accessible via `/resume`, and the default conversation is always available via `--default`.
### Full Changelog

- Added Conversations support: each session creates an isolated conversation while sharing agent memory
- Added `/resume` command to browse and switch between past conversations
- Added `--resume` (`-r`) and `--continue` (`-c`) flags to resume last session
- Added `--conversation` (`-C`, `--conv`) flag to resume a specific conversation by ID
- Added default agents (Memo and Incognito) auto-created for new users
- Changed `/clear` to start a new conversation (non-destructive) instead of deleting messages (reverted in 0.13.4)
- Fixed Task tool rendering issues with parallel subagents
- Fixed ADE links to include conversation context
## 0.12.7

- Fixed text wrapping in collapsed bash output display
- Renamed `memory-defrag` skill to `defragmenting-memory` to follow naming conventions
- Added automatic retry for transient network errors during LLM streaming
- Improved plan mode flexibility for writing plan files
## 0.12.6

- Added `memory` subagent for cleaning up and reorganizing memory blocks
- Added `defragmenting-memory` built-in skill with backup/restore workflow
- Added streaming output display for long-running bash commands
- Added line count summary for Read tool results
- Added network retry for transient LLM streaming errors
- Added Skill tool support in plan mode (load/unload/refresh are read-only)
- Fixed tool approval flow that was broken by ESC handling changes
- Improved Task tool and subagent display rendering
- Fixed UI flickering in Ghostty terminal
0.12.5
- Added terminal title and progress indicator for approval screens
- Added `LETTA_DEBUG_TIMINGS` environment variable for request timing diagnostics
- Fixed “Create new agent” from selector being stuck in a loop
0.12.4
- Fixed subagent display spacing and extra newlines
- Fixed subagent live streaming not updating during execution
0.12.3
- Added LSP diagnostics to Read tool for TypeScript and Python files
- Added `refresh` command to Task tool for rescanning custom subagents
- Added file-based overflow for long tool outputs
- Fixed left/right arrow key cursor navigation in approval text inputs
- Fixed pre-stream approval desync errors with keep-alive recovery
- Fixed subagents not inheriting parent’s tool permission rules
0.12.2
- Added `/ralph` and `/yolo-ralph` commands for autonomous agentic loop mode
- Fixed read-only subagents (explore, plan, recall) to work in plan mode
- Fixed Windows PowerShell ENOENT errors with shell fallback
0.12.1
- Added `recall` subagent for searching parent agent’s conversation history
- Fixed agent selector not showing when LRU agent retrieval fails
- Fixed approval desync issues for slash commands and queued messages
- Fixed SDK retry race conditions on streaming requests
- Fixed pending approval denials not being cached on ESC interrupt
- Fixed stale processConversation calls affecting UI state after interrupts
0.12.0
- Refactored to use new client-side tool calling via the messages endpoint
- Added `acquiring-skills` skill for discovering and installing skills from external repositories
- Added `migrating-memory` skill for copying memory blocks between agents
- Updated skills system (migrating-memory, finding-agents, searching-messages)
- Improved interrupt handling with better messaging
- Fixed ESC interrupt to properly stop streams
- Fixed skill scripts to work when installed via npm
- Fixed Task tool (subagent) rendering issues
- Fixed bash mode exit behavior after submitting commands
- Fixed binary file detection being overly aggressive
- Fixed approval results handling when auto-handling remaining approvals
- Fixed stream retry behavior after interrupts
0.11.1
- Added system prompt and memory block configuration for headless mode
- Added `--input-format stream-json` flag for programmatic input handling
- Improved parallel tool call approval UI
0.11.0
- Added inline dialogs for improved user experience
- Improved token counter display
- Fixed server-side tools incorrectly showing as interrupted
0.10.5
- Fixed Windows installation issues
- Fixed keyboard shortcuts for Ctrl+C, Ctrl+V, and Shift+Enter
0.10.4
- Fixed iTerm2 keybindings
- Fixed ESC and Ctrl+C handling across all dialogs
0.10.3
- Added desktop notifications when UI needs user attention
- Added read-only shell command support in plan mode
0.10.2
- Added Ctrl+V support for clipboard image paste in all terminals
- Fixed keybindings
- Fixed model name display in welcome screen
0.10.1
- Added Shift+Enter multi-line input support
0.10.0
- Added visual diffs for Edit/Write tool returns
- Added automatic retry for transient LLM API errors
- Added custom slash commands support (`/commands`)
- Added scrolling and manual ordering to command autocomplete
- Added toggle to show all agents in `/agents` view
- Added per-resource queues for parallel tool execution
- Fixed plan mode on non-default toolsets
- Fixed CLI crash when browser auto-open fails in WSL
- Added GLM-4.7 model support
- Added `/new` command for creating new agents
- Added `/feedback` command improvements
- Added memory reminders to improve memory usage
- Renamed `/resume` to `/agents` (with backwards-compatible alias)
- Fixed plan mode path resolution on Windows
- Added support for bundled skills and multi-source skill discovery
- Increased loaded_skills block limit to 100k characters
- Added support for Claude Pro and Max plans
- Added optional telemetry
- Added `--system` flag for existing agents
- Fixed Windows-specific issues
- Added `/help` command with interactive dialog
- Added `/mcp` command for MCP server management
- Added `/compact` command for message compaction
- Added text search for all models
- Improved memory tool visibility with colored name and diff output
- Added BYOK (Bring Your Own Key) support - use your own API keys
- Added `/usage` command to check usage and credits
- Added `--info` flag to show project and agent info
- Added naming dialog when pinning agents
- Added `/memory` command to view agent memory blocks
- Added `add-model` skill for adding new LLM models
- Added Gemini 3 Flash model support
- Added feedback UI
- Added support for relative paths in all tools
- Added tab completion for slash commands
- Added Kimi K2 Thinking model
- Added personalized thinking prompts with agent name
- Added goodbye message on exit
- Renamed `/bashes` to `/bg`
- Added stateless subagents via Task tool
- Added Kimi K2 Thinking model support
- Improved subagents UI
- Added autocomplete for slash commands
- Faster startup with cached tool initialization
- Added `exit` and `quit` as aliases for `/exit`
- Added profile-based persistence with startup selector
- Added `/profile` command for managing profiles
- Added simplified welcome screen design
- Added double Ctrl+C to exit from approval screen
- Added paginated agent list in `/resume`
- Added `/description` command to update agent description
- Added message search
- Added `/resume` command with improved agent selector UI
- Added `LETTA_DEBUG` environment variable for debug logging
- Added agent description support
- Added GPT-5.2 support
- Added Gemini 3 (Vertex) support
- Added startup status messages showing agent info
- Added `/init` command for initializing memory blocks
- Added system prompt swapping
- Changed default naming to PascalCase
- Added `/download` command to export agent file locally
- Added Skills omni-tool
- Added Claude Opus 4.5 support
- Added toolset switching UI
- Added `--toolset` flag
- Added Gemini tools support
- Added model-based toolset switching
- Added eager cancel functionality
- Added sleeptime memory management
- Added `--sleeptime` CLI flag
- Added GPT-5.1 models support
- Added Gemini-3 models support
- Added `--fresh-blocks` flag for isolation
- Added `/swap` command for model switching
- Added `/link` and `/unlink` commands for managing agent tools
- Added Skills support
- Added parallel tool calling
- Added multi-device sign-in support
- Added agent renaming capability
0.1.16
- Added Sonnet 4.5 with 180k context window
0.1.15
- Added multiline input support
- Added `--new` flag for creating new memory blocks
- Added agent URL display in commands
0.1.11
- Added Claude Haiku 4.5 to model selector
- Added project-level agent persistence with auto-resume
- Added API key caching
- Added `--model` flag
- Added GLM-4.6 support
- Added autocomplete for commands
- Added up/down for history navigation
- Added `fetch_web` to default tool list
0.1.10
- Added `stream-json` output format
- Added pretty preview for file listings in approval dialog
- Added `LETTA_BASE_URL` environment variable support
- Added usage tracking
- Added ESC to cancel operations
- Added Ctrl-C exit with agent state dump
- Initial release of Letta Code, the memory-first coding agent