Create Message Async
Asynchronously process a user message and return a run object. The actual processing happens in the background, and the status can be checked using the run ID.
This call is "asynchronous" in the sense that the message is handled as a background run whose result must be explicitly fetched by run ID.
Note: Sending multiple concurrent requests to the same agent can lead to undefined behavior. Each agent processes messages sequentially, and concurrent requests may interleave in unexpected ways. Wait for each request to complete before sending the next one. Use separate agents or conversations for parallel processing.
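Since each agent processes messages sequentially, one way to honor the note above is to chain sends per agent so a new request only fires after the previous one settles. A minimal sketch (not part of the SDK):

```typescript
// Serialize sends per agent: chain each send onto the previous one so
// requests to the same agent never overlap.
const agentQueues = new Map<string, Promise<unknown>>();

export function sendSequentially<T>(
  agentId: string,
  send: () => Promise<T>,
): Promise<T> {
  const prev = agentQueues.get(agentId) ?? Promise.resolve();
  // Run the next send whether the previous one fulfilled or rejected,
  // so a single failure does not stall the queue.
  const next = prev.then(send, send);
  agentQueues.set(agentId, next);
  return next;
}
```

In practice `send` would wrap a call like `client.agents.messages.createAsync(agentId, …)` (and, for strict sequencing, waiting for the resulting run to finish).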
Parameters
agentID: string
The ID of the agent in the format 'agent-<UUID>'
body: MessageCreateAsyncParams { assistant_message_tool_kwarg, assistant_message_tool_name, callback_url, 14 more }
Deprecated assistant_message_tool_kwarg?: string
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Deprecated assistant_message_tool_name?: string
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
callback_url?: string | null
Optional callback URL to POST to when the job completes
client_skills?: Array<ClientSkill> | null
Client-side skills available in the environment. These are rendered in the system prompt's available skills section alongside agent-scoped skills from MemFS.
description: string
Description of what the skill does
location: string
Path or location hint for the skill (e.g. skills/my-skill/SKILL.md)
name: string
The name of the skill
client_tools?: Array<ClientTool> | null
Client-side tools that the agent can call. When the agent calls a client-side tool, execution pauses and returns control to the client to execute the tool and provide the result via a ToolReturn.
name: string
The name of the tool function
description?: string | null
Description of what the tool does
parameters?: Record<string, unknown> | null
JSON Schema for the function parameters
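A client-side tool definition is a name, a description, and a JSON Schema for its parameters. A sketch of one such definition as it would appear in the request body; `get_weather` and its schema are illustrative, not part of the API:

```typescript
// Hypothetical client-side tool definition matching the ClientTool shape
// above: name, description, and a JSON Schema for the parameters.
export const clientTools = [
  {
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
];

// Hedged usage (requires a live server and API key):
// await client.agents.messages.createAsync(agentId, {
//   input: "What's the weather in Tokyo?",
//   client_tools: clientTools,
// });
```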
Deprecated enable_thinking?: string
If set to True, enables reasoning before responses or tool calls from the agent.
include_compaction_messages?: boolean
If True, compaction events emit structured SummaryMessage and EventMessage types. If False (default), compaction messages are not included in the response.
include_return_message_types?: Array<MessageType> | null
Only return specified message types in the response. If None (default) returns all messages.
input?: string | Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more>
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
ToolCallContent { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: Record<string, unknown>
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature?: string | null
Stores a unique identifier for any reasoning associated with this tool call.
type?: "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type?: "tool_return"
Indicates this content represents a tool return event.
ReasoningContent { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature?: string | null
A unique identifier for this reasoning step.
type?: "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type?: "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature?: string | null
A unique identifier for this reasoning step.
type?: "omitted_reasoning"
Indicates this is an omitted reasoning step.
SummarizedReasoningContent { id, summary, encrypted_content, type }
The style of reasoning content returned by the OpenAI Responses API
id: string
The unique identifier for this reasoning step.
summary: Array<Summary>
Summaries of the reasoning content.
index: number
The index of the summary part.
text: string
The text of the summary part.
encrypted_content?: string
The encrypted reasoning content.
type?: "summarized_reasoning"
Indicates this is a summarized reasoning step.
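The `input` shorthand described above expands to a single user message. A minimal sketch of the equivalence:

```typescript
// input="..." is syntactic sugar for messages=[{ role: "user", content: input }].
export function expandInput(input: string) {
  return { messages: [{ role: "user" as const, content: input }] };
}
```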
max_steps?: number
Maximum number of steps the agent should take to process the request.
messages?: Array<MessageCreate { content, role, batch_item_id, 5 more } | ApprovalCreate { approval_request_id, approvals, approve, 4 more } | ToolReturnCreate { tool_returns, group_id, otid, type } > | null
The messages to be sent to the agent.
MessageCreate { content, role, batch_item_id, 5 more }
Request to create a message
content: string | Array<LettaMessageContentUnion>
The content of the message.
Array<LettaMessageContentUnion>
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
ToolCallContent { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: Record<string, unknown>
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature?: string | null
Stores a unique identifier for any reasoning associated with this tool call.
type?: "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type?: "tool_return"
Indicates this content represents a tool return event.
ReasoningContent { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature?: string | null
A unique identifier for this reasoning step.
type?: "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type?: "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature?: string | null
A unique identifier for this reasoning step.
type?: "omitted_reasoning"
Indicates this is an omitted reasoning step.
role: "user" | "system" | "assistant"
The role of the participant.
batch_item_id?: string | null
The id of the LLMBatchItem that this message is associated with
group_id?: string | null
The multi-agent group that the message was sent in
name?: string | null
The name of the participant.
otid?: string | null
The offline threading id (OTID). Set by the client to deduplicate requests. Used for idempotency in background streaming mode — each message in a request must have a unique OTID. Retries of the same request should reuse the same OTIDs.
sender_id?: string | null
The id of the sender of the message, can be an identity id or agent id
type?: "message" | null
The message type to be created.
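Since each message in a request needs a unique OTID, and retries must reuse the same OTIDs, one sketch is to generate the OTIDs once and keep the resulting message array for any retry (`withOtids` is a hypothetical helper, not an SDK function):

```typescript
// Assign a unique OTID to each user message so a retried request is
// deduplicated server-side. Build this array once and reuse it verbatim
// for retries of the same logical request.
import { randomUUID } from "node:crypto";

export function withOtids(texts: string[]) {
  return texts.map((content) => ({
    role: "user" as const,
    content,
    otid: randomUUID(),
  }));
}
```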
ApprovalCreate { approval_request_id, approvals, approve, 4 more }
Input to approve or deny a tool call request
Deprecated approval_request_id?: string | null
The message ID of the approval request
approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
The list of approval responses
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason?: string | null
An optional explanation for the provided approval status
type?: "approval"
The message type to be created.
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
The status of the tool execution.
tool_call_id: string
The ID of the tool call that this return corresponds to.
tool_return: string | Array<TextContent { text, signature, type } | ImageContent { source, type } >
The tool return value - either a string or list of content parts (text/image)
Array<TextContent { text, signature, type } | ImageContent { source, type } >
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
type?: "tool"
The message type to be created.
Deprecated approve?: boolean | null
Whether the tool has been approved
group_id?: string | null
The multi-agent group that the message was sent in
otid?: string | null
The offline threading id (OTID). Set by the client to deduplicate requests. Used for idempotency in background streaming mode — each message in a request must have a unique OTID. Retries of the same request should reuse the same OTIDs.
Deprecated reason?: string | null
An optional explanation for the provided approval status
type?: "approval"
The message type to be created.
ToolReturnCreate { tool_returns, group_id, otid, type }
Submit tool return(s) from client-side tool execution.
This is the preferred way to send tool results back to the agent after client-side tool execution. It is equivalent to sending an ApprovalCreate with tool return approvals, but provides a cleaner API for the common case.
tool_returns: Array<ToolReturn { status, tool_call_id, tool_return, 3 more } >
List of tool returns from client-side execution
status: "success" | "error"
The status of the tool execution.
tool_call_id: string
The ID of the tool call that this return corresponds to.
tool_return: string | Array<TextContent { text, signature, type } | ImageContent { source, type } >
The tool return value - either a string or list of content parts (text/image)
Array<TextContent { text, signature, type } | ImageContent { source, type } >
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
type?: "tool"
The message type to be created.
group_id?: string | null
The multi-agent group that the message was sent in
otid?: string | null
The offline threading id (OTID). Set by the client to deduplicate requests. Used for idempotency in background streaming mode — each message in a request must have a unique OTID. Retries of the same request should reuse the same OTIDs.
type?: "tool_return"
The message type to be created.
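After executing a client-side tool locally, the result is sent back as a ToolReturnCreate message. A sketch of building that payload from a tool call ID and a local result, using the field names from the schema above (the helper itself is hypothetical):

```typescript
// Package a client-side tool result as a ToolReturnCreate message body.
// Field names (tool_returns, tool_call_id, status, tool_return) follow the
// schema documented above.
export function buildToolReturnMessage(
  toolCallId: string,
  result: string,
  isError = false,
) {
  return {
    type: "tool_return" as const,
    tool_returns: [
      {
        tool_call_id: toolCallId,
        status: (isError ? "error" : "success") as "success" | "error",
        tool_return: result,
      },
    ],
  };
}

// Hedged usage: append this to the `messages` array of a follow-up
// createAsync call so the paused agent can resume with the tool result.
```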
override_model?: string | null
Model handle to use for this request instead of the agent's default model. This allows sending a message to a different model without changing the agent's configuration.
override_system?: string | null
Optional per-request system prompt override. When set, this is passed directly to the underlying LLM request and bypasses the persisted/compiled system message for that request.
return_logprobs?: boolean
If True, returns log probabilities of the output tokens in the response. Useful for RL training. Only supported for OpenAI-compatible providers (including SGLang).
return_token_ids?: boolean
If True, returns token IDs and logprobs for ALL LLM generations in the agent step, not just the last one. Uses SGLang native /generate endpoint. Returns 'turns' field with TurnTokenData for each assistant/tool turn. Required for proper multi-turn RL training with loss masking.
top_logprobs?: number | null
Number of most likely tokens to return at each position (0-20). Requires return_logprobs=True.
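Since `top_logprobs` requires `return_logprobs` and is bounded to 0-20, a small client-side validation sketch for these request parameters (the helper is illustrative, not part of the SDK):

```typescript
// Validate the logprob-related request parameters per the constraints above:
// top_logprobs requires return_logprobs and must be in [0, 20].
export function validateLogprobParams(params: {
  return_logprobs?: boolean;
  top_logprobs?: number | null;
}): void {
  const { return_logprobs, top_logprobs } = params;
  if (top_logprobs != null) {
    if (!return_logprobs) {
      throw new Error("top_logprobs requires return_logprobs=true");
    }
    if (top_logprobs < 0 || top_logprobs > 20) {
      throw new Error("top_logprobs must be between 0 and 20");
    }
  }
}
```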
Deprecated use_assistant_message?: boolean
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Returns
Run { id, agent_id, background, 14 more }
Representation of a run - a conversation or processing session for an agent. Runs track when agents process messages and maintain the relationship between agents, steps, and messages.
id: string
The human-friendly ID of the Run
agent_id: string
The unique identifier of the agent associated with the run.
background?: boolean | null
Whether the run was created in background mode.
base_template_id?: string | null
The base template ID that the run belongs to.
callback_error?: string | null
Optional error message from attempting to POST the callback endpoint.
callback_sent_at?: string | null
Timestamp when the callback was last attempted.
callback_status_code?: number | null
HTTP status code returned by the callback endpoint.
callback_url?: string | null
If set, POST to this URL when the run completes.
completed_at?: string | null
The timestamp when the run was completed.
conversation_id?: string | null
The unique identifier of the conversation associated with the run.
created_at?: string
The timestamp when the run was created.
metadata?: Record<string, unknown> | null
Additional metadata for the run.
request_config?: RequestConfig | null
The request configuration for the run.
assistant_message_tool_kwarg?: string
The name of the message argument in the designated message tool.
assistant_message_tool_name?: string
The name of the designated message tool.
include_return_message_types?: Array<MessageType> | null
Only return specified message types in the response. If None (default) returns all messages.
use_assistant_message?: boolean
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects.
status?: "created" | "running" | "completed" | 2 more
The current status of the run.
stop_reason?: StopReasonType | null
The reason why the run was stopped.
total_duration_ns?: number | null
Total run duration in nanoseconds
ttft_ns?: number | null
Time to first token for a run in nanoseconds
Create Message Async
import Letta from '@letta-ai/letta-client';
const client = new Letta({
  apiKey: process.env['LETTA_API_KEY'], // This is the default and can be omitted
});
const run = await client.agents.messages.createAsync('agent-123e4567-e89b-42d3-8456-426614174000');
console.log(run.id);
{
"id": "run-123e4567-e89b-12d3-a456-426614174000",
"agent_id": "agent_id",
"background": true,
"base_template_id": "base_template_id",
"callback_error": "callback_error",
"callback_sent_at": "2019-12-27T18:11:19.117Z",
"callback_status_code": 0,
"callback_url": "callback_url",
"completed_at": "2019-12-27T18:11:19.117Z",
"conversation_id": "conversation_id",
"created_at": "2019-12-27T18:11:19.117Z",
"metadata": {
"foo": "bar"
},
"request_config": {
"assistant_message_tool_kwarg": "assistant_message_tool_kwarg",
"assistant_message_tool_name": "assistant_message_tool_name",
"include_return_message_types": [
"system_message"
],
"use_assistant_message": true
},
"status": "created",
"stop_reason": "end_turn",
"total_duration_ns": 0,
"ttft_ns": 0
}
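Because the run starts in a non-terminal status ("created" above), the client typically polls until it finishes. A testable polling sketch with the run fetcher injected; with the real SDK the fetcher would be something like `(id) => client.runs.retrieve(id)` (method name assumed, check your SDK version):

```typescript
// Poll a run until it reaches a terminal status. The fetcher is injected so
// the loop can be exercised without a live server. Terminal statuses are
// assumed to include "completed", "failed", and "cancelled".
type RunLike = { id: string; status?: string | null };

export async function waitForRun(
  fetchRun: (id: string) => Promise<RunLike>,
  runId: string,
  intervalMs = 1000,
  maxAttempts = 60,
): Promise<RunLike> {
  const terminal = new Set(["completed", "failed", "cancelled"]);
  for (let i = 0; i < maxAttempts; i++) {
    const run = await fetchRun(runId);
    if (run.status && terminal.has(run.status)) return run;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Run ${runId} did not finish within ${maxAttempts} attempts`);
}
```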