vLLM
To use Letta with vLLM, set the VLLM_API_BASE environment variable to point to your vLLM ChatCompletions server.
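For example, once the vLLM server described below is running on the same machine, you might set it like this (a minimal sketch; adjust the host and port to your setup):

```sh
# Point Letta at the local vLLM OpenAI-compatible server (8000 is vLLM's default port)
export VLLM_API_BASE="http://localhost:8000"
```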
Setting up vLLM
- Download + install vLLM (see the install sketch after this list)
- Launch a vLLM OpenAI-compatible API server using the official vLLM documentation
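For the install step, vLLM can be installed with pip (a minimal sketch, assuming a Python environment with a supported GPU setup; see the vLLM documentation for other install options):

```sh
pip install vllm
```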
For example, if we want to use the model dolphin-2.2.1-mistral-7b from HuggingFace, we would run:
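A sketch of the launch command, assuming the HuggingFace repo id is ehartford/dolphin-2.2.1-mistral-7b (check the exact repo id on HuggingFace):

```sh
# Start vLLM's OpenAI-compatible API server on the default port 8000
python -m vllm.entrypoints.openai.api_server \
  --model ehartford/dolphin-2.2.1-mistral-7b
```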
vLLM will automatically download the model (if it’s not already downloaded) and store it in your HuggingFace cache directory.
Enabling vLLM with Docker
To enable vLLM models when running the Letta server with Docker, set the VLLM_API_BASE environment variable.
macOS/Windows:
Since vLLM is running on the host (outside the Letta container), you will need to use host.docker.internal instead of localhost to connect to the vLLM server.
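A minimal sketch of the docker run command, assuming vLLM is listening on its default port 8000 (the image name letta/letta:latest and port 8283 follow the self-hosting guide; adjust volumes and other flags to your setup):

```sh
docker run \
  -p 8283:8283 \
  -e VLLM_API_BASE="http://host.docker.internal:8000" \
  letta/letta:latest
```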
Linux:
Use --network host and localhost:
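A minimal sketch under the same assumptions. With --network host the container shares the host's network, so no port mapping is needed and localhost resolves to the host:

```sh
docker run \
  --network host \
  -e VLLM_API_BASE="http://localhost:8000" \
  letta/letta:latest
```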
See the self-hosting guide for more information on running Letta with Docker.