OpenAI
You can use Letta with OpenAI if you have an OpenAI account and API key. Once you have set your OPENAI_API_KEY in your environment variables, you can select which model to use and configure the context window size.
Currently, Letta supports the following OpenAI models:
- gpt-4 (recommended for advanced reasoning)
- gpt-4o-mini (recommended for low latency and cost)
- gpt-4o
- gpt-4-turbo (not recommended; use gpt-4o-mini instead)
- gpt-3.5-turbo (not recommended; use gpt-4o-mini instead)
Enabling OpenAI models
To enable the OpenAI provider, set your key as an environment variable:
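```sh
# replace the placeholder with your actual OpenAI API key
export OPENAI_API_KEY="sk-..."
```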
Now, OpenAI models will be enabled when you run letta run or the letta service.
Using the docker run server with OpenAI
To enable OpenAI models, simply set your OPENAI_API_KEY as an environment variable when starting the container:
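For example (a sketch: the image tag, port, and volume mount shown here are the typical defaults for the Letta Docker setup and may differ in your deployment):

```sh
# ~/.letta/.persist/pgdata is a typical host path for persisting agent data; adjust as needed
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```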
Using letta run and letta server with OpenAI (CLI, pypi only)
To chat with an agent, run:
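```sh
letta run
```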
This will prompt you to select an OpenAI model.
To run the Letta server, run:
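```sh
letta server
```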
To select the model used by the server, use the dropdown in the ADE or specify an LLMConfig object in the Python SDK.
Configuring OpenAI models in the Python SDK
When creating agents, you can specify the exact model configuration to use, such as the model name and context window size (which can be less than the model's maximum).
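For example (a minimal sketch; the LLMConfig field names shown here, model, model_endpoint_type, model_endpoint, and context_window, match the client-based Letta SDK, but check them against your installed version):

```python
from letta import create_client, LLMConfig

client = create_client()

# create an agent with an explicit OpenAI model configuration
agent_state = client.create_agent(
    name="my_openai_agent",
    llm_config=LLMConfig(
        model="gpt-4o-mini",
        model_endpoint_type="openai",
        model_endpoint="https://api.openai.com/v1",
        context_window=16000,  # can be smaller than the model's maximum
    ),
)
```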
You can also configure a default LLMConfig to use for all agents created by the client.
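Continuing the sketch above (the set_default_llm_config method name is inferred from the set_default_embedding_config method mentioned below; verify it exists in your SDK version):

```python
# agents created by this client will now default to this model configuration
client.set_default_llm_config(
    LLMConfig(
        model="gpt-4o-mini",
        model_endpoint_type="openai",
        model_endpoint="https://api.openai.com/v1",
        context_window=16000,
    )
)
```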
Similarly, you can override the default embedding config by providing a new EmbeddingConfig object to the set_default_embedding_config method.
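For example (again a sketch; the EmbeddingConfig field names below are assumptions based on the same SDK conventions and should be checked against your version):

```python
from letta import EmbeddingConfig

# set a default OpenAI embedding configuration for the client
client.set_default_embedding_config(
    EmbeddingConfig(
        embedding_model="text-embedding-ada-002",
        embedding_endpoint_type="openai",
        embedding_endpoint="https://api.openai.com/v1",
        embedding_dim=1536,
        embedding_chunk_size=300,
    )
)
```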