Run Letta with pip

1. Install using pip

To install Letta, run:

pip install letta
2. Configure model providers

Set environment variables to enable model providers, e.g. OpenAI:


# To use OpenAI
export OPENAI_API_KEY=...

# To use Anthropic 
export ANTHROPIC_API_KEY=...

# To use with Ollama
export OLLAMA_BASE_URL=...

# To use with Google AI
export GEMINI_API_KEY=...

# To use with Azure
export AZURE_API_KEY=...
export AZURE_BASE_URL=...

# To use with vLLM 
export VLLM_API_BASE=...
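The server only enables providers whose variables are actually exported, so a quick sanity check before launch can save a debugging round trip. A minimal sketch (the variable list mirrors the providers above; extend it as needed):

```shell
# Report which model-provider variables are exported in this shell.
check_providers() {
  for var in OPENAI_API_KEY ANTHROPIC_API_KEY OLLAMA_BASE_URL \
             GEMINI_API_KEY AZURE_API_KEY AZURE_BASE_URL VLLM_API_BASE; do
    if printenv "$var" > /dev/null; then
      echo "$var is set"
    else
      echo "$var is not set"
    fi
  done
}

check_providers
```

Note that `printenv` only sees *exported* variables, which matches how the server reads them.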
3. Run the Letta server

To start the Letta server, run:

letta server [--debug]

You can now access the ADE (in your browser) and REST API server at http://localhost:8283.
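Once the server is up, you can confirm the REST API is reachable from the command line. A hedged sketch — the `/v1/health/` path is an assumption based on recent Letta releases, so check the API reference for your version if it returns 404:

```shell
# Probe the Letta server's REST API: prints the health response,
# or a fallback message if nothing is listening at the given URL.
# The /v1/health/ path is assumed from recent Letta releases.
letta_health() {
  url="${1:-http://localhost:8283}"
  if curl -sf "$url/v1/health/"; then
    echo    # newline after the response body
  else
    echo "server not reachable at $url"
  fi
}

letta_health
```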

Run Letta with Docker

1. Download the Docker container

To run the Docker container, first clone the repository (or pull the container image), then change into it so the compose file is in your working directory:

git clone https://github.com/letta-ai/letta
cd letta
2. Set environment variables

Set the environment variables either in your shell or in a .env file.
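For example, a minimal .env file placed next to compose.yaml (the variable names match the pip section above; fill in your own values):

```
# .env — read by Docker Compose from the directory containing compose.yaml
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
```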

3. (Optional) View and modify the compose YAML file

You can view and modify the compose.yaml file, for example, if you would like to change the default ports or pgvector version.
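For instance, changing the default port amounts to editing the `ports` mapping. The service name below is illustrative, not taken from the repository — match it against the services actually defined in your checked-out compose.yaml:

```yaml
services:
  letta_server:        # illustrative name — use the service name in your compose.yaml
    ports:
      - "8083:8283"    # host:container — serve on host port 8083 instead
```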

4. Run Docker Compose

To start the Letta server, we use Docker Compose, which runs both the Letta container and the database container:

docker compose up 

You can now access the ADE (in your browser) and REST API server at http://localhost:8283.