Deploy a Letta server with Docker
Manage your own Letta server using Docker
To run the Letta server with Docker, use the following command:
```shell
# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```

This will run the Letta server with the OpenAI provider enabled, and store all data in the folder ~/.letta/.persist/pgdata.
If you have many different LLM API keys, you can also set up a .env file instead and pass that to docker run:
```shell
# using a .env file instead of passing environment variables
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  letta/letta:latest
```

Once the Letta server is running, you can access it via port 8283 (e.g. sending REST API requests to http://localhost:8283/v1).
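For example, a .env file might look like this (placeholder values shown; include only the providers you actually use):

```
# .env
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
```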
Enabling additional model providers
The Letta server can be connected to various LLM API backends (OpenAI, Anthropic, Ollama, etc.). To enable access to these LLM API providers, set the appropriate environment variables when you use docker run:
```shell
# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  -e ANTHROPIC_API_KEY="your_anthropic_api_key" \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434/v1" \
  letta/letta:latest
```

The example above will make all compatible models running on OpenAI, Anthropic, and Ollama available to your Letta server.
Configuring embedding models
Section titled “Configuring embedding models”When using Docker, you must specify an embedding model when creating agents. Letta uses embeddings for archival memory search and retrieval.
Supported embedding providers
When creating agents on your Docker server, specify the embedding parameter:
```python
from letta_client import Letta

# Connect to your Docker server
client = Letta(base_url="http://localhost:8283")

# Create agent with explicit embedding configuration
agent = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",  # Required when using Docker
)
```

```typescript
import Letta from "@letta-ai/letta-client";

// Connect to your Docker server
const client = new Letta({
  baseURL: "http://localhost:8283",
});

// Create agent with explicit embedding configuration
const agent = await client.agents.create({
  model: "openai/gpt-4o-mini",
  embedding: "openai/text-embedding-3-small", // Required when using Docker
});
```

Password protection
To password protect your server, include SECURE=true and LETTA_SERVER_PASSWORD=yourpassword in your docker run command:
```shell
# If LETTA_SERVER_PASSWORD isn't set, the server will autogenerate a password
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  -e SECURE=true \
  -e LETTA_SERVER_PASSWORD=yourpassword \
  letta/letta:latest
```

With password protection enabled, you will have to provide your password as the bearer token in your API requests:
```typescript
// install letta-client with `npm install @letta-ai/letta-client`
import Letta from "@letta-ai/letta-client";

// create the client with the token set to your password
const client = new Letta({
  baseURL: "http://localhost:8283",
  apiKey: "yourpassword",
});
```

```python
# install letta_client with `pip install letta-client`
from letta_client import Letta

# create the client with the token set to your password
client = Letta(
    base_url="http://localhost:8283",
    api_key="yourpassword",
)
```

```shell
curl --request POST \
  --url http://localhost:8283/v1/agents/$AGENT_ID/messages \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer yourpassword' \
  --data '{
    "messages": [
      {
        "role": "user",
        "text": "hows it going????"
      }
    ]
  }'
```

Tool sandboxing
To enable tool sandboxing, set the E2B_API_KEY and E2B_SANDBOX_TEMPLATE_ID environment variables (via E2B) when you use docker run.
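For example, a docker run invocation with sandboxing enabled might look like this (a sketch, using placeholder values for the E2B credentials):

```shell
# E2B_API_KEY and E2B_SANDBOX_TEMPLATE_ID are placeholders; get real values from E2B
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  -e E2B_API_KEY="your_e2b_api_key" \
  -e E2B_SANDBOX_TEMPLATE_ID="your_template_id" \
  letta/letta:latest
```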
When sandboxing is enabled, all custom tools (created by users from source code) will be executed in a sandboxed environment.
This does not include MCP tools, which are executed outside of the Letta server (on the MCP server itself), or built-in tools (like memory_insert), whose code cannot be modified after server startup.
Connecting your own Postgres instance
You can set LETTA_PG_URI to connect your own Postgres instance to Letta. Your database must have the pgvector extension installed.
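Before setting LETTA_PG_URI, you can sanity-check the connection string with Python's standard library (the credentials and host below are hypothetical; substitute your own):

```python
from urllib.parse import urlparse

# Hypothetical connection string; substitute your own credentials, host, and database
pg_uri = "postgresql://letta_user:letta_pass@localhost:5432/letta"

parsed = urlparse(pg_uri)
assert parsed.scheme == "postgresql", "LETTA_PG_URI should be a postgresql:// URI"
print(parsed.hostname, parsed.port)  # localhost 5432
```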
You can enable this extension by running the following SQL command:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

Connecting the ADE to your server
The ADE can only connect to remote servers running on https; the only exception is localhost, for which http is allowed (except on Safari and newer versions of Chrome, where it is also blocked).
Most cloud services provide ingress tools that handle certificate management for you and automatically provision an https address (for example, Railway automatically generates a static https address for your deployment).
Using a reverse proxy to generate an https address
If you are running your Letta server on self-hosted infrastructure, you may need to manually create an https address for your server.
This can be done in numerous ways using reverse proxies:
- Use a service like ngrok to get an https address for your server
- Use Caddy or Traefik as a reverse proxy (which will manage the certificates for you)
- Use nginx with Let’s Encrypt as a reverse proxy (manage the certificates yourself)
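As a sketch of the Caddy option, a minimal Caddyfile (with a hypothetical domain) that terminates TLS and proxies traffic to a local Letta server might look like:

```
letta.example.com {
    reverse_proxy localhost:8283
}
```

Caddy obtains and renews the certificate for the domain automatically, so no manual certificate management is needed.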
Port forwarding to localhost
Alternatively, you can forward your server’s http address to localhost, since the https restriction does not apply to localhost (on browsers other than Safari):
```shell
ssh -L 8283:localhost:8283 your_server_username@your_server_ip
```

If you use the port forwarding approach, you will not need to “Add remote server” in the ADE; instead, the server will be accessible under “Local server”.