xAI (Grok)
To use xAI (Grok) models with Letta, set XAI_API_KEY in your environment variables.

Enabling xAI (Grok) models
To enable the xAI provider, set your key as an environment variable:
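For example (the key value below is a placeholder; use your own key from the xAI console):

```shell
# Replace the placeholder with your actual xAI API key
export XAI_API_KEY="your_xai_api_key"
```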
Now, xAI models will be enabled when you run letta run
or start the Letta Server.
Using the docker run server with xAI
To enable xAI models, simply set your XAI_API_KEY
as an environment variable:
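A sketch of the invocation, following Letta's standard Docker setup (the image tag, volume path, and port are assumptions based on that setup, and the key value is a placeholder):

```shell
# Start the Letta server in Docker, passing the xAI key through as an
# environment variable (key value is a placeholder)
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e XAI_API_KEY="your_xai_api_key" \
  letta/letta:latest
```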
Using letta run and letta server with xAI (CLI, pypi only)
To chat with an agent, run:
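A minimal sketch, assuming the letta CLI is installed from PyPI (the key value is a placeholder):

```shell
# Export the key so the CLI can pick it up, then start a chat session
export XAI_API_KEY="your_xai_api_key"
letta run
```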
This will prompt you to select an xAI (Grok) model.
To run the Letta Server, run:
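Similarly, a minimal sketch (the key value is a placeholder):

```shell
# Export the key, then start the Letta server
export XAI_API_KEY="your_xai_api_key"
letta server
```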
To select the model used by the server, use the dropdown in the ADE or specify an LLMConfig
object in the Python SDK.
Configuring xAI (Grok) models
When creating agents, you must specify the LLM and embedding models to use. You can additionally specify a context window limit (which must be less than or equal to the maximum size). Note that xAI does not have embedding models, so you will need to use another provider.
xAI (Grok) models have very large context windows; using them at full size can be expensive and add latency. We recommend setting a lower context_window_limit
when using xAI (Grok) models.
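As a sketch, creating an agent with a reduced context window might look like the following, assuming the letta-client Python SDK and a locally running server; the model handle, embedding handle, and limit value here are illustrative assumptions, not fixed requirements:

```python
from letta_client import Letta

# Connect to a locally running Letta server (assumed at the default port)
client = Letta(base_url="http://localhost:8283")

# Create an agent on a Grok model with a reduced context window.
# xAI has no embedding models, so an embedding model from another
# provider (here OpenAI, as an example) is specified separately.
agent = client.agents.create(
    model="xai/grok-2-1212",                    # illustrative model handle
    context_window_limit=16000,                 # well below Grok's maximum
    embedding="openai/text-embedding-3-small",  # non-xAI embedding provider
)
```

This requires a running Letta server to execute; treat it as a shape for the call rather than a copy-paste recipe.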