xAI / Grok Now Supported
We’ve added xAI support in the latest SDK version. To enable xAI models, set your XAI_API_KEY as an environment variable: export XAI_API_KEY="...".
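As a sketch, you can fail fast when the key is missing before constructing a client. The helper below is our own illustration, not part of the SDK:

```python
import os

def require_xai_key() -> str:
    """Return XAI_API_KEY from the environment, failing fast if unset."""
    key = os.environ.get("XAI_API_KEY")
    if not key:
        raise RuntimeError("Set XAI_API_KEY before creating xAI-backed agents")
    return key

# With the key in place, agent creation proceeds as usual, passing an
# xAI model handle to client.agents.create(model=...).
```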
Given the confusion around our advanced functionality for managing memory, we’ve renamed the Core Memory SDK API to blocks and the Archival Memory SDK API to passages, so that our API naming reflects the unit of memory stored. This change only affects our SDK and does not affect Letta’s REST API.
Before:

from letta_client import CreateBlock, Letta

client = Letta(
    token="YOUR_API_KEY",
)

agent = client.agents.create(
    model="gpt-4o-mini",
    embedding="openai/text-embedding-ada-002",
    memory_blocks=[
        CreateBlock(
            label="human",
            value="name: Caren",
        ),
    ],
)

blocks = client.agents.core_memory.list_blocks(agent_id=agent.id)
client.agents.core_memory.detach_block(agent_id=agent.id, block_id=blocks[0].id)
After:

from letta_client import CreateBlock, Letta

client = Letta(
    token="YOUR_API_KEY",
)

agent = client.agents.create(
    model="gpt-4o-mini",
    embedding="openai/text-embedding-ada-002",
    memory_blocks=[
        CreateBlock(
            label="human",
            value="name: Caren",
        ),
    ],
)

blocks = client.agents.blocks.list(agent_id=agent.id)
client.agents.blocks.detach(agent_id=agent.id, block_id=blocks[0].id)
We’ve added a new Identities feature that helps you manage users in your multi-user Letta application. Each Identity can represent a user or organization in your system and store their metadata.
You can associate an Identity with one or more agents, making it easy to track which agents belong to which users. Agents can also be associated with multiple identities, enabling shared access across different users. This release includes full CRUD (Create, Read, Update, Delete) operations for managing Identities through our API.
For more information on usage, visit our Identities documentation and usage guide.
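For illustration, an identity record might be shaped like the sketch below. The field names (identifier_key, identity_type, agent_ids) are assumptions based on the description above, so check the Identities documentation for the exact schema:

```python
import json

# Hypothetical identity payload for the Identities create endpoint.
identity = {
    "identifier_key": "user-123",  # your system's stable ID for this user
    "name": "Caren",
    "identity_type": "user",       # e.g. a user vs. an organization
    "agent_ids": [],               # agents associated with this identity
}

payload = json.dumps(identity)
```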
Project slug can now be specified via the X-Project request header for agent creation. The existing project parameter will soon be deprecated.
Before:

curl -X POST https://app.letta.com/v1/agents \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
    "project": "YOUR_PROJECT_SLUG",
    "model": "gpt-4o-mini",
    "embedding": "openai/text-embedding-ada-002",
    "memory_blocks": [
      {
        "label": "human",
        "value": "name: Caren"
      }
    ]
  }'
After:

curl -X POST https://app.letta.com/v1/agents \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'X-Project: YOUR_PROJECT_SLUG' \
  -d '{
    "model": "gpt-4o-mini",
    "embedding": "openai/text-embedding-ada-002",
    "memory_blocks": [
      {
        "label": "human",
        "value": "name: Caren"
      }
    ]
  }'
Google Vertex is now a supported endpoint type for Letta agents.
Letta agents now have an optional message_buffer_autoclear flag. If set to True (default False), the message history will not be persisted in-context between requests (though the agent will still have access to core, archival, and recall memory).
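The flag’s semantics can be illustrated with a toy model (an illustration of the behavior only, not the SDK’s implementation): with autoclear on, the in-context buffer resets on each request while longer-term memory persists.

```python
class ToyAgent:
    """Toy illustration of message_buffer_autoclear semantics."""

    def __init__(self, message_buffer_autoclear: bool = False):
        self.autoclear = message_buffer_autoclear
        self.message_buffer = []  # in-context history
        self.archival = []        # persists regardless of the flag

    def handle(self, user_message: str) -> None:
        if self.autoclear:
            # History is not carried over between requests.
            self.message_buffer = []
        self.message_buffer.append(user_message)
        self.archival.append(user_message)

agent = ToyAgent(message_buffer_autoclear=True)
agent.handle("first")
agent.handle("second")
# message_buffer now holds only the current request; archival holds both.
```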
These values are now configurable via the llm_config parameter when creating and modifying agents, and apply to subsequent LLM requests.
The /v1/agents/search API has been updated to support pagination via the after query parameter.
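Cursor pagination with an after parameter typically works as sketched below, passing the last item’s ID as the cursor for the next page. The fetch_page callable here is a stub standing in for the search endpoint, not the SDK client:

```python
def fetch_all(fetch_page, limit=2):
    """Drain a cursor-paginated endpoint by passing the last item's id as `after`."""
    results, after = [], None
    while True:
        page = fetch_page(after=after, limit=limit)
        if not page:
            return results
        results.extend(page)
        after = page[-1]["id"]

# Stub standing in for the search endpoint.
data = [{"id": f"agent-{i}"} for i in range(5)]

def fake_search(after=None, limit=2):
    start = 0 if after is None else next(
        i for i, d in enumerate(data) if d["id"] == after
    ) + 1
    return data[start:start + limit]

all_agents = fetch_all(fake_search)
```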
The /v1/templates/ creation API has been updated to support adding tags at creation time.
The List Tools API now supports querying by tool name.
send_message_tool_id = client.agents.tools.list(tool_name="secret_message")[0].id
For self-deployed instances of Letta that are password-protected, the Authorization header now supports parsing passwords in addition to API keys. X-BARE-PASSWORD will still be supported as legacy, but will be deprecated in a future release.
Before:

curl --request POST \
  --url https://MYSERVER.up.railway.app/v1/agents/ \
  --header 'X-BARE-PASSWORD: password banana' \
  --header 'Content-Type: application/json' \
  --data '{
    ...
  }'
After:

curl --request POST \
  --url https://MYSERVER.up.railway.app/v1/agents/ \
  --header 'Authorization: Bearer banana' \
  --header 'Content-Type: application/json' \
  --data '{
    ...
  }'
The password can now be passed via the token field when initializing the Letta client:
client = LettaClient(
    base_url="https://MYSERVER.up.railway.app",
    token="banana",
)
ToolRule objects should no longer specify a type at instantiation, as this field is now immutable.
Before:

rule = InitToolRule(
    tool_name="secret_message",
    type="run_first",
)
After:

rule = InitToolRule(tool_name="secret_message")
Letta also now supports smarter retry behavior for tool rules in the case of unrecoverable failures.
The List Steps and Retrieve Step routes have been added to enable querying for additional metadata around agent execution.
UserMessage content
The content field on UserMessage objects returned by our Messages endpoints has been simplified to a flat string containing the raw message text, rather than a JSON string with the message text nested inside.
Before:

{
  "id": "message-dea2ceab-0863-44ea-86dc-70cf02c05946",
  "date": "2025-01-28T01:18:18+00:00",
  "message_type": "user_message",
  "content": "{\n  \"type\": \"user_message\",\n  \"message\": \"Hello, how are you?\",\n  \"time\": \"2025-01-28 01:18:18 AM UTC+0000\"\n}"
}
After:

{
  "id": "message-dea2ceab-0863-44ea-86dc-70cf02c05946",
  "date": "2025-01-28T01:18:18+00:00",
  "message_type": "user_message",
  "content": "Hello, how are you?"
}
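If your code previously unpacked the nested JSON, the difference looks like this (a sketch; the two content strings mirror the payloads above):

```python
import json

# Old format: the raw text was nested inside a JSON string.
old_content = (
    '{\n  "type": "user_message",'
    '\n  "message": "Hello, how are you?",'
    '\n  "time": "2025-01-28 01:18:18 AM UTC+0000"\n}'
)
old_text = json.loads(old_content)["message"]

# New format: content is already the raw text, so no parsing is needed.
new_content = "Hello, how are you?"
new_text = new_content
```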
use_assistant_message parameter defaults to True
All message-related APIs now include a top-level use_assistant_message parameter, which defaults to True if not specified. This parameter controls whether the endpoint should parse specific tool call arguments (by default, send_message) as AssistantMessage objects rather than ToolCallMessage objects.
Before:

response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        MessageCreate(
            role="user",
            content="call the big_return function",
        ),
    ],
    config=LettaRequestConfig(use_assistant_message=False),
)
After:

response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        MessageCreate(
            role="user",
            content="call the big_return function",
        ),
    ],
    use_assistant_message=False,
)
Previously, the List Messages endpoint defaulted to False internally, so this change may cause unexpected behavior in your code. To fix this, you can set the use_assistant_message parameter to False in your request.
messages = client.agents.messages.list(
    limit=10,
    use_assistant_message=False,
)
All message-related APIs now return LettaMessage objects, which are simplified versions of the Message objects stored in the database backend. Previously, our List Messages endpoint returned Message objects by default; this is no longer an option.