Create Message Async

post/v1/agents/{agent_id}/messages/async

Asynchronously process a user message and return a run object. The actual processing happens in the background, and the status can be checked using the run ID.

This is "asynchronous" in the sense that it's a background run and explicitly must be fetched by the run ID.

Note: Sending multiple concurrent requests to the same agent can lead to undefined behavior. Each agent processes messages sequentially, and concurrent requests may interleave in unexpected ways. Wait for each request to complete before sending the next one. Use separate agents or conversations for parallel processing.
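
A typical flow is to submit the message, capture the run ID from the returned Run object, and poll until the run reaches a terminal status. A minimal shell sketch, assuming the run can be retrieved at GET /v1/runs/{run_id} and its messages listed at GET /v1/runs/{run_id}/messages (jq is used for JSON parsing):

# Submit the message and capture the run ID from the returned Run object
RUN_ID=$(curl -s https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{"messages": [{"role": "user", "content": "Hello"}]}' | jq -r '.id')

# Poll the run until it reaches a terminal status (completed, failed, or cancelled)
while true; do
  STATUS=$(curl -s https://api.letta.com/v1/runs/$RUN_ID \
      -H "Authorization: Bearer $LETTA_API_KEY" | jq -r '.status')
  case "$STATUS" in
    completed|failed|cancelled) break ;;
  esac
  sleep 2
done

# Fetch the messages produced by the run
curl -s https://api.letta.com/v1/runs/$RUN_ID/messages \
    -H "Authorization: Bearer $LETTA_API_KEY"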

Path Parameters
agent_id: string

The ID of the agent in the format 'agent-'

minLength: 42
maxLength: 42
Body Parameters
Deprecated assistant_message_tool_kwarg: optional string

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecated assistant_message_tool_name: optional string

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

callback_url: optional string

Optional callback URL to POST to when the job completes
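
Instead of polling, you can supply a callback URL; when the run completes, the server POSTs to it (see the callback_* fields on the returned Run object). For example (the callback URL below is a placeholder for your own endpoint):

curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{
      "messages": [{"role": "user", "content": "Summarize my inbox"}],
      "callback_url": "https://example.com/letta/run-complete"
    }'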

client_tools: optional array of object { name, description, parameters }

Client-side tools that the agent can call. When the agent calls a client-side tool, execution pauses and returns control to the client to execute the tool and provide the result via a ToolReturn. See the example request after the field list below.

name: string

The name of the tool function

description: optional string

Description of what the tool does

parameters: optional map[unknown]

JSON Schema for the function parameters
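
For example, a request that registers a hypothetical get_weather client-side tool (the tool name and JSON Schema below are illustrative):

curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{
      "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
      "client_tools": [
        {
          "name": "get_weather",
          "description": "Look up the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
          }
        }
      ]
    }'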

Deprecated enable_thinking: optional string

If set to True, enables reasoning before responses or tool calls from the agent.

include_compaction_messages: optional boolean

If True, compaction events emit structured SummaryMessage and EventMessage types. If False (default), compaction messages are not included in the response.

include_return_message_types: optional array of MessageType

Only return specified message types in the response. If None (default), all message types are returned. See the example after the list of accepted values below.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
"summary_message"
"event_message"
input: optional string or array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}]. See the example after the content type listing below.

Accepts one of the following:
UnionMember0 = string
UnionMember1 = array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

type: optional "image"

The type of the message.

ToolCallContent = object { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: map[unknown]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature: optional string

Stores a unique identifier for any reasoning associated with this tool call.

type: optional "tool_call"

Indicates this content represents a tool call event.

ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type: optional "tool_return"

Indicates this content represents a tool return event.

ReasoningContent = object { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature: optional string

A unique identifier for this reasoning step.

type: optional "reasoning"

Indicates this is a reasoning/intermediate step.

RedactedReasoningContent = object { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type: optional "redacted_reasoning"

Indicates this is a redacted thinking step.

OmittedReasoningContent = object { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: optional string

A unique identifier for this reasoning step.

type: optional "omitted_reasoning"

Indicates this is an omitted reasoning step.

SummarizedReasoning = object { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: array of object { index, text }

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content: optional string

The encrypted reasoning content.

type: optional "summarized_reasoning"

Indicates this is a summarized reasoning step.
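
For example, the following two requests are equivalent; the first uses input as shorthand for a single user message:

curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{"input": "Hello"}'

# Equivalent request using the messages parameter
curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{"messages": [{"role": "user", "content": "Hello"}]}'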

max_steps: optional number

Maximum number of steps the agent should take to process the request.

messages: optional array of MessageCreate { content, role, batch_item_id, 5 more } or ApprovalCreate { approval_request_id, approvals, approve, 3 more }

The messages to be sent to the agent. See the example request after the type listing below.

Accepts one of the following:
MessageCreate = object { content, role, batch_item_id, 5 more }

Request to create a message

content: array of LettaMessageContentUnion or string

The content of the message.

Accepts one of the following:
UnionMember0 = array of LettaMessageContentUnion
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

type: optional "image"

The type of the message.

ToolCallContent = object { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: map[unknown]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature: optional string

Stores a unique identifier for any reasoning associated with this tool call.

type: optional "tool_call"

Indicates this content represents a tool call event.

ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type: optional "tool_return"

Indicates this content represents a tool return event.

ReasoningContent = object { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature: optional string

A unique identifier for this reasoning step.

type: optional "reasoning"

Indicates this is a reasoning/intermediate step.

RedactedReasoningContent = object { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type: optional "redacted_reasoning"

Indicates this is a redacted thinking step.

OmittedReasoningContent = object { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: optional string

A unique identifier for this reasoning step.

type: optional "omitted_reasoning"

Indicates this is an omitted reasoning step.

UnionMember1 = string
role: "user" or "system" or "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id: optional string

The id of the LLMBatchItem that this message is associated with

group_id: optional string

The multi-agent group that the message was sent in

name: optional string

The name of the participant.

otid: optional string

The offline threading id associated with this message

sender_id: optional string

The id of the sender of the message; can be an identity id or an agent id

type: optional "message"

The message type to be created.

ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

Deprecated approval_request_id: optional string

The message ID of the approval request

approvals: optional array of ApprovalReturn { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }

The list of approval responses

Accepts one of the following:
ApprovalReturn = object { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason: optional string

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.

ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: array of TextContent { text, signature, type } or ImageContent { source, type } or string

The tool return value - either a string or list of content parts (text/image)

Accepts one of the following:
UnionMember0 = array of TextContent { text, signature, type } or ImageContent { source, type }
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

type: optional "image"

The type of the message.

UnionMember1 = string
stderr: optional array of string
stdout: optional array of string
type: optional "tool"

The message type to be created.

Deprecated approve: optional boolean

Whether the tool has been approved

group_id: optional string

The multi-agent group that the message was sent in

Deprecated reason: optional string

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.
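
For example, after the agent issues an approval request for a tool call, a follow-up request can send an ApprovalCreate message carrying the decision (the tool_call_id below is a placeholder; a ToolReturn entry with "type": "tool" can be sent instead to return a client-side tool result):

curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{
      "messages": [
        {
          "type": "approval",
          "approvals": [
            {
              "type": "approval",
              "approve": true,
              "tool_call_id": "tool-call-id",
              "reason": "Safe to run"
            }
          ]
        }
      ]
    }'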

override_model: optional string

Model handle to use for this request instead of the agent's default model. This allows sending a message to a different model without changing the agent's configuration.
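
For example, to route a single request to a different model (the handle below is a placeholder; use a model handle valid for your deployment):

curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{
      "input": "Hello",
      "override_model": "openai/gpt-4o-mini"
    }'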

Deprecated use_assistant_message: optional boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Returns
Run = object { id, agent_id, background, 14 more }

Representation of a run - a conversation or processing session for an agent. Runs track when agents process messages and maintain the relationship between agents, steps, and messages.

id: string

The human-friendly ID of the Run

agent_id: string

The unique identifier of the agent associated with the run.

background: optional boolean

Whether the run was created in background mode.

base_template_id: optional string

The base template ID that the run belongs to.

callback_error: optional string

Optional error message from attempting to POST the callback endpoint.

callback_sent_at: optional string

Timestamp when the callback was last attempted.

format: date-time
callback_status_code: optional number

HTTP status code returned by the callback endpoint.

callback_url: optional string

If set, POST to this URL when the run completes.

completed_at: optional string

The timestamp when the run was completed.

format: date-time
conversation_id: optional string

The unique identifier of the conversation associated with the run.

created_at: optional string

The timestamp when the run was created.

format: date-time
metadata: optional map[unknown]

Additional metadata for the run.

request_config: optional object { assistant_message_tool_kwarg, assistant_message_tool_name, include_return_message_types, use_assistant_message }

The request configuration for the run.

assistant_message_tool_kwarg: optional string

The name of the message argument in the designated message tool.

assistant_message_tool_name: optional string

The name of the designated message tool.

include_return_message_types: optional array of MessageType

Only return specified message types in the response. If None (default), all message types are returned.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
"summary_message"
"event_message"
use_assistant_message: optional boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects.

status: optional "created" or "running" or "completed" or 2 more

The current status of the run.

Accepts one of the following:
"created"
"running"
"completed"
"failed"
"cancelled"
stop_reason: optional StopReasonType

The reason why the run was stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"max_tokens_exceeded"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
"context_window_overflow_in_system_prompt"
total_duration_ns: optional number

Total run duration in nanoseconds

ttft_ns: optional number

Time to first token for a run in nanoseconds

Create Message Async
curl https://api.letta.com/v1/agents/$AGENT_ID/messages/async \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{
      "messages": [
        {"role": "user", "content": "Hello"}
      ]
    }'
{
  "id": "run-123e4567-e89b-12d3-a456-426614174000",
  "agent_id": "agent_id",
  "background": true,
  "base_template_id": "base_template_id",
  "callback_error": "callback_error",
  "callback_sent_at": "2019-12-27T18:11:19.117Z",
  "callback_status_code": 0,
  "callback_url": "callback_url",
  "completed_at": "2019-12-27T18:11:19.117Z",
  "conversation_id": "conversation_id",
  "created_at": "2019-12-27T18:11:19.117Z",
  "metadata": {
    "foo": "bar"
  },
  "request_config": {
    "assistant_message_tool_kwarg": "assistant_message_tool_kwarg",
    "assistant_message_tool_name": "assistant_message_tool_name",
    "include_return_message_types": [
      "system_message"
    ],
    "use_assistant_message": true
  },
  "status": "created",
  "stop_reason": "end_turn",
  "total_duration_ns": 0,
  "ttft_ns": 0
}