The recommended way to use Letta locally is with Docker. To install Docker, see Docker’s installation guide. For issues with installing Docker, see Docker’s troubleshooting guide. You can also install Letta using pip (see instructions here).
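For reference, a pip-based setup looks roughly like the sketch below (this assumes the letta package name and the letta server command; follow the linked instructions for the exact steps):

# install Letta into your Python environment
pip install letta

# start the Letta server (serves on port 8283 by default)
letta server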

Running the Letta Server

The Letta server can be connected to various LLM API backends (OpenAI, Anthropic, vLLM, Ollama, etc.). To enable access to these LLM API providers, set the appropriate environment variables when you use docker run:

# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest

Environment variables determine which LLM and embedding providers are enabled on your Letta server. For example, if you set OPENAI_API_KEY, your Letta server will attempt to connect to OpenAI as a model provider. Similarly, if you set OLLAMA_BASE_URL, your Letta server will attempt to connect to an Ollama server and offer its local models as LLM options.
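For example, to enable Ollama as a provider you can pass OLLAMA_BASE_URL at startup. The sketch below assumes Ollama is running on the Docker host at its default port (11434); host.docker.internal resolves to the host on Docker Desktop, so adjust the URL for your setup:

# enable Ollama by pointing Letta at a running Ollama server
# (URL is an assumption: Ollama on the Docker host, default port 11434)
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  letta/letta:latest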

If you have many different LLM API keys, you can instead put them in a .env file and pass the file to docker run:

# using a .env file instead of passing environment variables
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  letta/letta:latest

Once the Letta server is running, you can access it via port 8283 (e.g. sending REST API requests to http://localhost:8283/v1). You can also connect your server to the Letta ADE to access and manage your agents in a web interface.
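As a quick smoke test, you can send a request to the REST API once the container is up. The route below is an assumption for illustration; check the API reference for the exact endpoints:

# list the models available on the server
# (the /v1/models/ route is an assumption; see the API reference)
curl http://localhost:8283/v1/models/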

Setting environment variables

If you are using a .env file, it should contain environment variables for each of the LLM providers you wish to use (replace ... with your actual API keys and endpoint URLs):

.env file

# To use OpenAI
OPENAI_API_KEY=...

# To use Anthropic
ANTHROPIC_API_KEY=...

# To use with Ollama (replace with your Ollama server URL)
OLLAMA_BASE_URL=...

# To use with Google AI
GEMINI_API_KEY=...

# To use with Azure
AZURE_API_KEY=...
AZURE_BASE_URL=...

# To use with vLLM (replace with your vLLM server URL)
VLLM_API_BASE=...

Using the development (nightly) image

When you use the latest tag, you will get the latest stable release of Letta.

The nightly image is a development image that is built frequently from main (it is not recommended for production use). If you would like to use the development image, use the nightly tag instead of latest:

docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:nightly