Azure OpenAI

You can use Letta with Azure OpenAI if you have an Azure OpenAI account and API key. To use Letta with Azure OpenAI, set the environment variables `AZURE_API_KEY` and `AZURE_BASE_URL`. You can also optionally specify `AZURE_API_VERSION` (default is `2024-09-01-preview`). Once you have `AZURE_API_KEY` and `AZURE_BASE_URL` specified in your environment variables, you can select which model to use and configure the context window size.
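For example, on Linux or macOS you can export these variables in your shell (the values below are placeholders; substitute your own deployment's credentials and endpoint):

```shell
# Placeholder values -- replace with your Azure OpenAI credentials
export AZURE_API_KEY="your-azure-api-key"
export AZURE_BASE_URL="https://<your-resource-name>.openai.azure.com"
# Optional; defaults to 2024-09-01-preview
export AZURE_API_VERSION="2024-09-01-preview"
```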
Currently, Letta supports the following OpenAI models:

- `gpt-4` (recommended for advanced reasoning)
- `gpt-4o-mini` (recommended for low latency and cost)
- `gpt-4o`
- `gpt-4-turbo` (not recommended; use `gpt-4o-mini` instead)
- `gpt-3.5-turbo` (not recommended; use `gpt-4o-mini` instead)
Enabling Azure OpenAI models
To enable the Azure provider, set your key as an environment variable:
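For example, in a POSIX shell (the values are placeholders for your own credentials):

```shell
# Placeholder values -- replace with your Azure OpenAI credentials
export AZURE_API_KEY="your-azure-api-key"
export AZURE_BASE_URL="https://<your-resource-name>.openai.azure.com"
```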
Now, Azure OpenAI models will be enabled when you run `letta run` or the `letta` service.
Using the `docker run` server with Azure OpenAI

To enable Azure OpenAI models, set your `AZURE_API_KEY` and `AZURE_BASE_URL` as environment variables:
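A minimal sketch of such an invocation, assuming the `letta/letta` image and the server's default port `8283` (the key and URL values are placeholders):

```shell
# Pass the Azure credentials into the container with -e;
# placeholder values, replace with your own
docker run \
  -p 8283:8283 \
  -e AZURE_API_KEY="your-azure-api-key" \
  -e AZURE_BASE_URL="https://<your-resource-name>.openai.azure.com" \
  letta/letta:latest
```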
CLI (pypi only)
Using `letta run` and `letta server` with Azure OpenAI
To chat with an agent, run:
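With the environment variables set, the pypi-installed CLI can be started directly:

```shell
letta run
```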
To run the Letta server, run:
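Similarly, the server is started with:

```shell
letta server
```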
To select the model used by the server, use the dropdown in the ADE or specify an `LLMConfig` object in the Python SDK.
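A minimal sketch of the SDK path, assuming the pypi `letta` package exposes `create_client` and an `LLMConfig` with `model`, `model_endpoint_type`, and `context_window` fields (field names may differ across versions, so check your installed SDK's signature):

```python
# Hypothetical sketch -- verify names against your installed letta version
from letta import create_client, LLMConfig

client = create_client()

# Route requests to the Azure provider and cap the context window
azure_config = LLMConfig(
    model="gpt-4o-mini",
    model_endpoint_type="azure",
    context_window=128000,
)

agent = client.create_agent(llm_config=azure_config)
```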