OpenAI
Requires `OPENAI_API_KEY` to be set in your environment variables. You can use Letta with OpenAI if you have an OpenAI account and API key. Once `OPENAI_API_KEY` is set, you can select a model and configure the context window size.
Currently, Letta supports the following OpenAI models:
- `gpt-4` (recommended for advanced reasoning)
- `gpt-4o-mini` (recommended for low latency and cost)
- `gpt-4o`
- `gpt-4-turbo` (not recommended; use `gpt-4o-mini` instead)
- `gpt-3.5-turbo` (not recommended; use `gpt-4o-mini` instead)
Enabling OpenAI models with Docker
To enable OpenAI models when running the Letta server with Docker, set your OPENAI_API_KEY as an environment variable:
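For example, you can pass the key with Docker's `-e` flag. This is a sketch; the port, volume path, and image tag are assumptions based on a typical Letta Docker setup and may differ from your deployment:

```shell
# Pass OPENAI_API_KEY into the container as an environment variable.
# Port, volume mount, and image tag below are illustrative assumptions.
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```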
See the self-hosting guide for more information on running Letta with Docker.
Specifying agent models
When creating agents on your self-hosted server, you must specify both the LLM and embedding models to use via a handle. You can additionally specify a context window limit, which must be less than or equal to the model's maximum context window size.
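As a rough sketch, agent creation with explicit handles might look like the following. This assumes the `letta-client` Python SDK and a server running on `localhost:8283`; the exact handle strings and parameter names (`model`, `embedding`, `context_window_limit`) are assumptions and should be checked against your SDK version:

```python
# Hypothetical sketch: create an agent with explicit model handles.
# Assumes the letta-client SDK and a Letta server on localhost:8283.
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    model="openai/gpt-4o-mini",                 # LLM handle: provider/model-name (assumed)
    embedding="openai/text-embedding-3-small",  # embedding handle (assumed)
    context_window_limit=16000,                 # optional; must not exceed the model's maximum
)
print(agent.id)
```

Handles follow a `provider/model-name` pattern, so switching providers is a matter of changing the handle string rather than reconfiguring the agent.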
For Letta Cloud usage, see the quickstart guide. Cloud deployments manage embeddings automatically and don’t require provider configuration.