OpenAI
You can use Letta with OpenAI if you have an OpenAI account and API key. Once you have set your `OPENAI_API_KEY` in your environment variables, you can select which model to use and configure the context window size.
Currently, Letta supports the following OpenAI models:
- `gpt-4` (recommended for advanced reasoning)
- `gpt-4o-mini` (recommended for low latency and cost)
- `gpt-4o`
- `gpt-4-turbo` (not recommended, use `gpt-4o-mini` instead)
- `gpt-3.5-turbo` (not recommended, use `gpt-4o-mini` instead)
Enabling OpenAI models
To enable the OpenAI provider, set your key as an environment variable:
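For example (the key value shown here is a placeholder; use your own key from the OpenAI dashboard):

```shell
export OPENAI_API_KEY="sk-..."
```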
Now, OpenAI models will be enabled when you run `letta run` or the Letta server.
Using the `docker run` server with OpenAI
To enable OpenAI models, simply set your `OPENAI_API_KEY` as an environment variable:
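A minimal sketch of the invocation, passing the key with `-e`; the image name, port, and volume path here are assumptions and may differ in your setup:

```shell
# Pass OPENAI_API_KEY into the container with -e;
# image/port/volume are illustrative assumptions
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```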
CLI (pypi only)
Using `letta run` and `letta server` with OpenAI
To chat with an agent, run:
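With your key exported as above:

```shell
letta run
```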
This will prompt you to select an OpenAI model.
To run the Letta server, run:
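Again with `OPENAI_API_KEY` set in the environment:

```shell
letta server
```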
To select the model used by the server, use the dropdown in the ADE or specify an `LLMConfig` object in the Python SDK.
Configuring OpenAI models in the Python SDK
When creating agents, you must specify the LLM and embedding models to use. You can additionally specify a context window limit (which must be less than or equal to the maximum size).
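The constraint above can be sketched as follows. The `MAX_CONTEXT` values and the exact `LLMConfig` field names and `create_agent` signature are assumptions for illustration; check the SDK reference for your installed `letta` version:

```python
# Sketch: building LLM settings for a Letta agent, enforcing that the
# requested context window does not exceed the model's maximum.
# Field names and maximum sizes below are illustrative assumptions.

MAX_CONTEXT = {"gpt-4o-mini": 128_000, "gpt-4": 8_192}


def make_llm_settings(model: str, context_window: int) -> dict:
    """Return settings for an LLMConfig-style object, validating the window."""
    limit = MAX_CONTEXT[model]
    if context_window > limit:
        raise ValueError(
            f"context_window {context_window} exceeds the maximum {limit} for {model}"
        )
    return {
        "model": model,
        "model_endpoint_type": "openai",
        "context_window": context_window,
    }


settings = make_llm_settings("gpt-4o-mini", 16_000)

# With the letta package installed, these settings would be passed along
# the lines of (hypothetical call shape):
#   from letta import create_client, LLMConfig
#   client = create_client()
#   agent = client.create_agent(llm_config=LLMConfig(**settings))
```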