To use Letta with Ollama, set the environment variable OLLAMA_BASE_URL (e.g. OLLAMA_BASE_URL=http://localhost:11434 for a local Ollama server).

⚠️ Make sure to use tags when downloading Ollama models!

Don’t run ollama pull dolphin2.2-mistral; instead, run ollama pull dolphin2.2-mistral:7b-q6_K.

If you don’t specify a tag, Ollama may default to a highly compressed model variant (e.g. Q4). We strongly recommend NOT using a compression level below Q5 with GGUF (stick to Q6 or Q8 if possible). In our testing, certain models become extremely unstable with MemGPT below Q6.

Setup Ollama

  1. Download and install Ollama
  2. Download a model to test with by running ollama pull <MODEL_NAME> in the terminal (check the Ollama model library for available models)

For example, if we want to use Dolphin 2.2.1 Mistral, we can download it by running:

# Let's use the q6_K variant
ollama pull dolphin2.2-mistral:7b-q6_K
pulling manifest
pulling d8a5ee4aba09... 100% |█████████████████████████████████████████████████████████████████████████| (4.1/4.1 GB, 20 MB/s)
pulling a47b02e00552... 100% |██████████████████████████████████████████████████████████████████████████████| (106/106 B, 77 B/s)
pulling 9640c2212a51... 100% |████████████████████████████████████████████████████████████████████████████████| (41/41 B, 22 B/s)
pulling de6bcd73f9b4... 100% |████████████████████████████████████████████████████████████████████████████████| (58/58 B, 28 B/s)
pulling 95c3d8d4429f... 100% |█████████████████████████████████████████████████████████████████████████████| (455/455 B, 330 B/s)
verifying sha256 digest
writing manifest
removing any unused layers
success
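Once the pull completes, you can quickly confirm that the model is available locally. A minimal sketch, assuming Ollama is running on its default port (11434):

```shell
# List locally available models; the tagged model should appear here
ollama list

# Alternatively, query the Ollama REST API directly for the same list
curl http://localhost:11434/api/tags
```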

Enabling Ollama as a provider

To enable the Ollama provider, you must set the OLLAMA_BASE_URL environment variable. When this is set, Letta will use available LLM and embedding models running on Ollama.
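For example, a minimal sketch of enabling the provider from a terminal session (the exact Letta CLI invocation may differ by version):

```shell
# Point Letta at the local Ollama server (default port 11434)
export OLLAMA_BASE_URL=http://localhost:11434

# Start Letta; models served by Ollama should now be selectable
letta run
```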