Create Group Message Streaming

post/v1/groups/{group_id}/messages/stream

Process a user message and return the group's responses. This endpoint accepts a message from a user and processes it through the agents in the group based on the specified pattern. The steps of the response are always streamed; individual tokens are also streamed when 'stream_tokens' is set to True.
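
As a minimal sketch (the body fields follow the 'messages' and 'stream_tokens' parameters documented below; $GROUP_ID and $LETTA_API_KEY are assumed to be set in your environment, and the message text is a placeholder):

curl https://api.letta.com/v1/groups/$GROUP_ID/messages/stream \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    --no-buffer \
    -d '{
        "messages": [{"role": "user", "content": "Hello, group!"}],
        "stream_tokens": true
    }'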

Path Parameters
group_id: string

The ID of the group, in the format 'group-<uuid>'.

minLength: 42
maxLength: 42
Body Parameters
assistant_message_tool_kwarg: optional string (Deprecated)

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

assistant_message_tool_name: optional string (Deprecated)

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

background: optional boolean

Whether to process the request in the background (only used when streaming=true).

enable_thinking: optional string (Deprecated)

If set to True, enables reasoning before responses or tool calls from the agent.

include_pings: optional boolean

Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).

include_return_message_types: optional array of MessageType

Only return the specified message types in the response. If None (default), all message types are returned.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
input: optional string or array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].

Accepts one of the following:
UnionMember0 = string
UnionMember1 = array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Accepts one of the following:
"url"
Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Accepts one of the following:
"base64"
Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

Accepts one of the following:
"letta"
type: optional "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent = object { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: map[unknown]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature: optional string

Stores a unique identifier for any reasoning associated with this tool call.

type: optional "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type: optional "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent = object { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature: optional string

A unique identifier for this reasoning step.

type: optional "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent = object { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type: optional "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent = object { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: optional string

A unique identifier for this reasoning step.

type: optional "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoning = object { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: array of object { index, text }

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content: optional string

The encrypted reasoning content.

type: optional "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
max_steps: optional number

Maximum number of steps the agent should take to process the request.

messages: optional array of MessageCreate { content, role, batch_item_id, 5 more } or ApprovalCreate { approval_request_id, approvals, approve, 3 more }

The messages to be sent to the agent.

Accepts one of the following:
MessageCreate = object { content, role, batch_item_id, 5 more }

Request to create a message

content: array of LettaMessageContentUnion or string

The content of the message.

Accepts one of the following:
UnionMember0 = array of LettaMessageContentUnion
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Accepts one of the following:
"url"
Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Accepts one of the following:
"base64"
Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

Accepts one of the following:
"letta"
type: optional "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent = object { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: map[unknown]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature: optional string

Stores a unique identifier for any reasoning associated with this tool call.

type: optional "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type: optional "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent = object { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature: optional string

A unique identifier for this reasoning step.

type: optional "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent = object { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type: optional "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent = object { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: optional string

A unique identifier for this reasoning step.

type: optional "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
UnionMember1 = string
role: "user" or "system" or "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id: optional string

The id of the LLMBatchItem that this message is associated with

group_id: optional string

The multi-agent group that the message was sent in

name: optional string

The name of the participant.

otid: optional string

The offline threading id associated with this message

sender_id: optional string

The id of the sender of the message; this can be an identity id or an agent id

type: optional "message"

The message type to be created.

Accepts one of the following:
"message"
ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

approval_request_id: optional string (Deprecated)

The message ID of the approval request

approvals: optional array of ApprovalReturn { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }

The list of approval responses

Accepts one of the following:
ApprovalReturn = object { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason: optional string

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr: optional array of string
stdout: optional array of string
type: optional "tool"

The message type to be created.

Accepts one of the following:
"tool"
approve: optional boolean (Deprecated)

Whether the tool has been approved

group_id: optional string

The multi-agent group that the message was sent in

reason: optional string (Deprecated)

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.

Accepts one of the following:
"approval"
stream_tokens: optional boolean

Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).

streaming: optional boolean

If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.
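
Taken together, the streaming flags might be combined as in this sketch, which asks for token-level streaming with keepalive pings (the message text is a placeholder):

    {
      "messages": [{"role": "user", "content": "Start the discussion."}],
      "streaming": true,
      "stream_tokens": true,
      "include_pings": true
    }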

use_assistant_message: optional boolean (Deprecated)

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Create Group Message Streaming
curl https://api.letta.com/v1/groups/$GROUP_ID/messages/stream \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{}'
{}
Returns Examples
{}