LM Studio support is currently experimental. If things aren’t working as expected, please reach out to us on Discord!

Models marked as “native tool use” on LM Studio are more likely to work well with Letta.

Set up LM Studio

  1. Download + install LM Studio and the model you want to test with
  2. Make sure to start the LM Studio server (a command-line option is sketched below)
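
If you prefer not to start the server from the LM Studio GUI, LM Studio also ships an lms command-line tool. This is a minimal sketch, assuming lms is installed on your PATH and you keep LM Studio's default port of 1234:

$ # start the local LM Studio server (serves an OpenAI-compatible API, port 1234 by default)
$ lms server start
$ # list the models you have downloaded
$ lms ls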

Enabling LM Studio as a provider

To enable the LM Studio provider, you must set the LMSTUDIO_BASE_URL environment variable. When this is set, Letta will use available LLM and embedding models running on LM Studio.
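
Before pointing Letta at LM Studio, it is worth confirming that the base URL is reachable. LM Studio serves an OpenAI-compatible API, so listing its models is a quick sanity check (assuming the default port of 1234):

$ curl http://localhost:1234/v1/models

If this returns a JSON list of model IDs, use the same base URL for LMSTUDIO_BASE_URL (swapping in host.docker.internal when Letta runs inside Docker, as described below).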

Using the docker run server with LM Studio

Since LM Studio is running on the host network, the Letta container needs to use host.docker.internal instead of localhost to reach the LM Studio server.

$ # replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
$ docker run \
>   -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
>   -p 8283:8283 \
>   -e LMSTUDIO_BASE_URL="http://host.docker.internal:1234" \
>   letta/letta:latest

Using letta run and letta server with LM Studio

To chat with an agent, run:

$ export LMSTUDIO_BASE_URL="http://localhost:1234"
$ letta run

To start the Letta server, run:

$ export LMSTUDIO_BASE_URL="http://localhost:1234"
$ letta server

To select the model used by the server, use the model dropdown in the ADE or specify an LLMConfig object in the Python SDK.
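
To see which LM Studio models the Letta server has picked up before selecting one, you can query the server's REST API. This is a sketch rather than the only way to check: the endpoint path below is taken from the current Letta API reference, so consult your server's API docs if it does not respond.

$ # list the LLM models the Letta server knows about (endpoint path assumed from the API reference)
$ curl http://localhost:8283/v1/models/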
