Run Letta with Docker
The recommended way to use Letta locally is with Docker.
To install Docker, see Docker’s installation guide.
For issues with installing Docker, see Docker’s troubleshooting guide.
You can also install Letta using pip (see instructions here).
Running the Letta Server
The Letta server can be connected to various LLM API backends (OpenAI, Anthropic, vLLM, Ollama, etc.). To enable access to these LLM API providers, set the appropriate environment variables when you use docker run.
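For example, a run that enables OpenAI as a provider might look like the following sketch (the letta/letta image name and the Postgres data volume path are common defaults, and the key value is a placeholder; adjust them for your setup):

```sh
# Persist agent data in a local Postgres volume, expose the server on
# port 8283, and pass the OpenAI key so OpenAI models become available
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```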
Environment variables will determine which LLM and embedding providers are enabled on your Letta server. For example, if you set OPENAI_API_KEY, then your Letta server will attempt to connect to OpenAI as a model provider. Similarly, if you set OLLAMA_BASE_URL, then your Letta server will attempt to connect to an Ollama server to provide local models as LLM options on the server.
If you have many different LLM API keys, you can also set up a .env file instead and pass that to docker run.
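For example, assuming a .env file in your working directory (same illustrative image name and volume path as above):

```sh
# Pass every variable defined in .env into the container at once
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  letta/letta:latest
```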
Once the Letta server is running, you can access it via port 8283 (e.g. sending REST API requests to http://localhost:8283/v1). You can also connect your server to the Letta ADE to access and manage your agents in a web interface.
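As a quick sanity check that the server is reachable, you can query the REST API with curl (the agent listing route shown here is just one example endpoint):

```sh
# List agents on the local Letta server (an empty list on a fresh install)
curl http://localhost:8283/v1/agents/
```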
Setting environment variables
If you are using a .env file, it should contain environment variables for each of the LLM providers you wish to use (replace ... with your actual API keys and endpoint URLs).
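A sketch of such a file, assuming you want to enable OpenAI, Anthropic, and Ollama (include only the providers you actually use):

```sh
# Example .env for a Letta server
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
# For an Ollama server running on the Docker host, this is commonly
# http://host.docker.internal:11434
OLLAMA_BASE_URL=...
```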
Using the development (nightly) image
When you use the latest tag, you will get the latest stable release of Letta. The nightly image is a development image that is updated frequently off of main (it is not recommended for production use). If you would like to use the development image, you can use the nightly tag instead of latest.
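For example (same illustrative invocation as before, with only the image tag changed):

```sh
# Run the frequently-updated development build instead of the stable release
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  letta/letta:nightly
```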