Configuring the open source Docker image
Use Letta Code with a Letta API proxy server running in Docker
Letta Code can connect to a Letta API proxy server. You can run a Letta API proxy server using the open source Docker image.
Prerequisites
Before connecting Letta Code, you need a running Letta API proxy service. See the Docker setup guide for detailed instructions.
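As a quick sanity check, you can hit the server's health endpoint before pointing Letta Code at it. This is a minimal sketch: the `/v1/health/` path is an assumption and may differ across server versions, so adjust it if the request 404s.

```shell
# Confirm the proxy server is reachable on the default port (8283).
# The /v1/health/ path is an assumption; check your server version's docs.
curl -sf http://localhost:8283/v1/health/ && echo "proxy server is up"
```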
This example command configures a Letta API proxy server with an OpenAI API key and an Anthropic API key (for Claude models):
```shell
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  -e LETTA_MEMFS_SERVICE_URL=local \
  -e ANTHROPIC_API_KEY="your_anthropic_api_key" \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434/v1" \
  letta/letta:latest
```
Connecting Letta Code to your proxy server
Set the LETTA_BASE_URL environment variable to point to your server:
```shell
export LETTA_BASE_URL="http://localhost:8283"
export LETTA_MEMFS_BASE_URL="http://localhost:8283"
export LETTA_MEMFS_LOCAL=1
```
Then run Letta Code normally:
```shell
letta
```
You can also set it inline:
```shell
LETTA_BASE_URL="http://localhost:8283" LETTA_MEMFS_BASE_URL="http://localhost:8283" LETTA_MEMFS_LOCAL=1 letta
```
Swapping between models
Models available on your proxy server will be shown in /model.
The model selector only lists models from the providers whose API keys you supplied when starting the server.
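For example, to check which provider keys a running container was started with, you can inspect its environment. The container name `letta` below is a placeholder, not something from this guide; find the real name with `docker ps`.

```shell
# List provider-related environment variables of the running container.
# "letta" is a placeholder container name; substitute your own.
docker inspect letta \
  --format '{{range .Config.Env}}{{println .}}{{end}}' \
  | grep -E 'OPENAI_API_KEY|ANTHROPIC_API_KEY|OLLAMA_BASE_URL'
```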
Note that Letta Code is tested against the open source Docker image with only a limited set of popular models from OpenAI and Anthropic. Using models not listed in the official models.json file may result in unexpected behavior or bugs.
For unofficial community support, please visit our Discord server.
Using Letta Code with local LLM inference
To configure Letta Code to point at a local Ollama server, use the OLLAMA_BASE_URL environment variable when starting the server:
```shell
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e LETTA_MEMFS_SERVICE_URL=local \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434/v1" \
  letta/letta:latest
```
See our full Docker model provider page for instructions on LM Studio.
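Before starting the container, it can help to confirm that Ollama is actually serving on the host. `/api/tags` is Ollama's endpoint for listing pulled models; from inside the container, the same server is reached as `host.docker.internal:11434`.

```shell
# From the host: list models the local Ollama server has pulled.
# An empty "models" array means Ollama is running but nothing is pulled yet.
curl -s http://localhost:11434/api/tags
```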