Tools
List Tools For Agent
Attach Tool To Agent
Detach Tool From Agent
Update Approval For Tool
Run Tool For Agent
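As a quick orientation, here is a minimal sketch of how these endpoints might be driven from Python. It assumes the letta-client SDK; the method names mirror the endpoint names above but should be verified against your SDK version, and all IDs are illustrative.

```python
# Minimal sketch using the letta-client Python SDK (pip install letta-client).
# Method names mirror the endpoints above and are assumptions to verify
# against your SDK version; the IDs are placeholders.
from letta_client import Letta

client = Letta(token="YOUR_API_KEY")

agent_id = "agent-123"  # illustrative
tool_id = "tool-456"    # illustrative

# List Tools For Agent
for tool in client.agents.tools.list(agent_id=agent_id):
    print(tool.name)

# Attach Tool To Agent / Detach Tool From Agent
client.agents.tools.attach(agent_id=agent_id, tool_id=tool_id)
client.agents.tools.detach(agent_id=agent_id, tool_id=tool_id)
```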
Models
ToolExecuteRequest = object { args }
Request to execute a tool.
args: optional map[unknown]
Arguments to pass to the tool
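Concretely, the request body is just an optional args map whose keys must match the tool's parameter names; the weather tool and its parameters below are hypothetical.

```python
# Hypothetical ToolExecuteRequest body for a weather tool: keys in "args"
# must match the tool's parameter names and satisfy its JSON schema.
tool_execute_request = {
    "args": {
        "city": "Berlin",
        "units": "metric",
    }
}
```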
ToolExecutionResult = object { status, agent_state, func_return, 3 more }
status: "success" or "error"
The status of the tool execution and its return object.
agent_state: optional AgentState
Representation of an agent's state. This is the state of the agent at a given time, and is persisted in the DB backend. The state has all the information needed to recreate a persisted agent.
id: string
The id of the agent. Assigned by the database.
agent_type: "memgpt_agent" or "memgpt_v2_agent" or "letta_v1_agent" or 6 more
The type of agent.
blocks: array of object { value, id, base_template_id, 16 more }
The memory blocks used by the agent.
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
tags: optional array of string
The tags associated with the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
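For illustration, a minimal memory block built from the fields documented above might look like the following (all values are hypothetical):

```python
# A hypothetical 'persona' memory block, using only fields documented above.
persona_block = {
    "label": "persona",  # where the block appears in the context window
    "value": "I am a concise, helpful assistant.",
    "limit": 5000,       # character limit for "value"
    "description": "The agent's persona.",
    "read_only": False,  # the agent may edit this block
}
```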
Deprecated llm_config: LLMConfig { context_window, model, model_endpoint_type, 20 more }
Deprecated: Use model field instead. The LLM configuration used by the agent.
context_window: number
The context window size for the model.
model: string
LLM model name.
model_endpoint_type: "openai" or "anthropic" or "google_ai" or 22 more
The endpoint type for the model.
compatibility_type: optional "gguf" or "mlx"
The framework compatibility type for the model.
display_name: optional string
A human-friendly display name for the model.
effort: optional "low" or "medium" or "high" or "max"
The effort level for Anthropic models that support it (Opus 4.5, Opus 4.6). Controls token spending and thinking behavior. Not setting this gives similar performance to 'high'.
enable_reasoner: optional boolean
Whether the model should use extended thinking if it is a 'reasoning'-style model.
frequency_penalty: optional number
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. From OpenAI: Number between -2.0 and 2.0.
handle: optional string
The handle for this config, in the format provider/model-name.
max_reasoning_tokens: optional number
Configurable thinking budget for extended thinking. Used for enable_reasoner and also for Google Vertex models like Gemini 2.5 Flash. Minimum value is 1024 when used with enable_reasoner.
max_tokens: optional number
The maximum number of tokens to generate. If not set, the model will use its default value.
model_endpoint: optional string
The endpoint for the model.
model_wrapper: optional string
The wrapper for the model.
Deprecated parallel_tool_calls: optional boolean
Deprecated: Use model_settings to configure parallel tool calls instead. If set to True, enables parallel tool calling. Defaults to False.
provider_category: optional "base" or "byok"
The provider category for the model.
provider_name: optional string
The provider name for the model.
put_inner_thoughts_in_kwargs: optional boolean
Puts 'inner_thoughts' as a kwarg in the function call if this is set to True. This helps with function calling performance and also the generation of inner thoughts.
reasoning_effort: optional "none" or "minimal" or "low" or 3 more
The reasoning effort to use when generating text with reasoning models.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model's output. Supports text, json_object, and json_schema (structured outputs). Can be set via model_settings.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
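For reference, the three response-format variants above serialize as follows; the inner json_schema payload follows the common structured-output convention (a named schema object) and is an assumption here, since the field is typed as an opaque map.

```python
# The three documented response-format variants as request payloads.
text_format = {"type": "text"}
json_object_format = {"type": "json_object"}
json_schema_format = {
    "type": "json_schema",
    "json_schema": {  # hypothetical schema; the exact wrapper shape may vary
        "name": "weather_report",
        "schema": {
            "type": "object",
            "properties": {"temperature_c": {"type": "number"}},
            "required": ["temperature_c"],
        },
    },
}
```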
strict: optional boolean
Enable strict mode for tool calling. When true, tool schemas include strict: true and additionalProperties: false, guaranteeing tool outputs match JSON schemas.
temperature: optional number
The temperature to use when generating text with the model. A higher temperature will result in more random text.
tier: optional string
The cost tier for the model (cloud only).
verbosity: optional "low" or "medium" or "high"
Soft control for how verbose model output should be, used for GPT-5 models.
Deprecated memory: object { blocks, agent_type, file_blocks, 2 more }
Deprecated: Use blocks field instead. The in-context memory of the agent.
blocks: array of object { value, id, base_template_id, 16 more }
Memory blocks contained in the agent's in-context memory.
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
tags: optional array of string
The tags associated with the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
agent_type: optional AgentType
Agent type controlling prompt rendering.
AgentType = "memgpt_agent" or "memgpt_v2_agent" or "letta_v1_agent" or 6 more
Enum to represent the type of agent.
file_blocks: optional array of object { file_id, is_open, source_id, 20 more }
Special blocks representing the agent's in-context memory of an attached file
file_id: string
Unique identifier of the file.
is_open: boolean
True if the agent currently has the file open.
Deprecated source_id: string
Deprecated: Use folder_id field instead. Unique identifier of the source.
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_accessed_at: optional string
UTC timestamp of the agent’s most recent access to this file. Any operations from the open, close, or search tools will update this field.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
tags: optional array of string
The tags associated with the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
git_enabled: optional boolean
Whether this agent uses git-backed memory with structured labels.
prompt_template: optional string
Deprecated. Ignored for performance.
name: string
The name of the agent.
Deprecated sources: array of object { id, embedding_config, name, 8 more }
Deprecated: Use folders field instead. The sources used by the agent.
id: string
The human-friendly ID of the Source
embedding_config: EmbeddingConfig { embedding_dim, embedding_endpoint_type, embedding_model, 7 more }
The embedding configuration used by the source.
embedding_dim: number
The dimension of the embedding.
embedding_endpoint_type: "openai" or "anthropic" or "bedrock" or 16 more
The endpoint type for the model.
embedding_model: string
The model for the embedding.
azure_deployment: optional string
The Azure deployment for the model.
azure_endpoint: optional string
The Azure endpoint for the model.
azure_version: optional string
The Azure version for the model.
batch_size: optional number
The maximum batch size for processing embeddings.
embedding_chunk_size: optional number
The chunk size of the embedding.
embedding_endpoint: optional string
The endpoint for the model (None if local).
handle: optional string
The handle for this config, in the format provider/model-name.
name: string
The name of the source.
created_at: optional string
The timestamp when the source was created.
created_by_id: optional string
The id of the user that made this Source.
description: optional string
The description of the source.
instructions: optional string
Instructions for how to use the source.
last_updated_by_id: optional string
The id of the user that last updated this Source.
metadata: optional map[unknown]
Metadata associated with the source.
updated_at: optional string
The timestamp when the source was last updated.
vector_db_provider: optional string
The vector database provider used for this source's passages.
system: string
The system prompt used by the agent.
tags: array of string
The tags associated with the agent.
tools: array of object { id, args_json_schema, created_by_id, 15 more }
The tools used by the agent.
id: string
The human-friendly ID of the Tool
args_json_schema: optional map[unknown]
The args JSON schema of the function.
created_by_id: optional string
The id of the user that made this Tool.
default_requires_approval: optional boolean
Default value for whether or not executing this tool requires approval.
description: optional string
The description of the tool.
enable_parallel_execution: optional boolean
If set to True, then this tool will potentially be executed concurrently with other tools. Default False.
json_schema: optional map[unknown]
The JSON schema of the function.
last_updated_by_id: optional string
The id of the user that last updated this Tool.
metadata_: optional map[unknown]
A dictionary of additional metadata for the tool.
name: optional string
The name of the function.
npm_requirements: optional array of object { name, version }
Optional list of npm packages required by this tool.
name: string
Name of the npm package.
version: optional string
Optional version of the package, following semantic versioning.
pip_requirements: optional array of object { name, version }
Optional list of pip packages required by this tool.
name: string
Name of the pip package.
version: optional string
Optional version of the package, following semantic versioning.
project_id: optional string
The project id of the tool.
return_char_limit: optional number
The maximum number of characters in the response.
source_code: optional string
The source code of the function.
source_type: optional string
The type of the source code.
tags: optional array of string
Metadata tags.
tool_type: optional string
The type of the tool.
base_template_id: optional string
The base template id of the agent.
compaction_settings: optional object { model, clip_chars, mode, 4 more }
Configuration for conversation compaction / summarization. model is the only required user-facing field; it specifies the summarizer model handle (e.g. "openai/gpt-4o-mini"). Per-model settings (temperature, max tokens, etc.) are derived from the default configuration for that handle.
model: string
Model handle to use for summarization (format: provider/model-name).
clip_chars: optional number
The maximum length of the summary in characters. If none, no clipping is performed.
mode: optional "all" or "sliding_window"
The type of summarization technique to use.
model_settings: optional OpenAIModelSettings { max_output_tokens, parallel_tool_calls, provider_type, 4 more } or AnthropicModelSettings { effort, max_output_tokens, parallel_tool_calls, 6 more } or GoogleAIModelSettings { max_output_tokens, parallel_tool_calls, provider_type, 3 more } or 10 more
Optional model settings used to override defaults for the summarizer model.
OpenAIModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 4 more }
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "openai"
The type of the provider.
reasoning: optional object { reasoning_effort }
The reasoning configuration for the model.
reasoning_effort: optional "none" or "minimal" or "low" or 3 more
The reasoning effort to use when generating text with reasoning models.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
strict: optional boolean
Enable strict mode for tool calling. When true, tool outputs are guaranteed to match JSON schemas.
temperature: optional number
The temperature of the model.
AnthropicModelSettings = object { effort, max_output_tokens, parallel_tool_calls, 6 more }
effort: optional "low" or "medium" or "high"
Effort level for Opus 4.5 model (controls token conservation). Not setting this gives similar performance to 'high'.
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "anthropic"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
strict: optional boolean
Enable strict mode for tool calling. When true, tool outputs are guaranteed to match JSON schemas.
temperature: optional number
The temperature of the model.
thinking: optional object { budget_tokens, type }
The thinking configuration for the model.
budget_tokens: optional number
The maximum number of tokens the model can use for extended thinking.
type: optional "enabled" or "disabled"
The type of thinking to use.
verbosity: optional "low" or "medium" or "high"
Soft control for how verbose model output should be, used for GPT-5 models.
GoogleAIModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 3 more }
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "google_ai"
The type of the provider.
response_schema: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response schema for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
thinking_config: optional object { include_thoughts, thinking_budget }
The thinking configuration for the model.
include_thoughts: optional boolean
Whether to include thoughts in the model's response.
thinking_budget: optional number
The thinking budget for the model.
GoogleVertexModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 3 more }
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "google_vertex"
The type of the provider.
response_schema: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response schema for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
thinking_config: optional object { include_thoughts, thinking_budget }
The thinking configuration for the model.
include_thoughts: optional boolean
Whether to include thoughts in the model's response.
thinking_budget: optional number
The thinking budget for the model.
AzureModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Azure OpenAI model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "azure"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
XaiModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
xAI model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "xai"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
Zai = object { max_output_tokens, parallel_tool_calls, provider_type, 3 more }
Z.ai (ZhipuAI) model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "zai"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
thinking: optional object { clear_thinking, type }
The thinking configuration for GLM-4.5+ models.
clear_thinking: optional boolean
If False, preserved thinking is used (recommended for agents).
type: optional "enabled" or "disabled"
Whether thinking is enabled or disabled.
GroqModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Groq model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "groq"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
DeepseekModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Deepseek model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "deepseek"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
TogetherModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Together AI model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "together"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
BedrockModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
AWS Bedrock model configuration.
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "bedrock"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
Openrouter = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
OpenRouter model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "openrouter"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
ChatgptOAuth = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
ChatGPT OAuth model configuration (uses ChatGPT backend API).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "chatgpt_oauth"
The type of the provider.
reasoning: optional object { reasoning_effort }
The reasoning configuration for the model.
reasoning_effort: optional "none" or "low" or "medium" or 2 more
The reasoning effort level for GPT-5.x and o-series models.
temperature: optional number
The temperature of the model.
prompt: optional string
The prompt to use for summarization.
prompt_acknowledgement: optional boolean
Whether to include an acknowledgement post-prompt (helps prevent non-summary outputs).
sliding_window_percentage: optional number
The percentage of the context window to keep post-summarization (only used in sliding window mode).
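Putting the compaction fields together, a sliding-window configuration might look like this; whether sliding_window_percentage is a 0-1 fraction or a 0-100 percentage is not stated above, so the 0.5 below is an assumption.

```python
# Hypothetical compaction settings: summarize with a small model and keep
# roughly half the context window after summarization.
compaction_settings = {
    "model": "openai/gpt-4o-mini",     # summarizer handle (provider/model-name)
    "mode": "sliding_window",
    "sliding_window_percentage": 0.5,  # assumed to be a 0-1 fraction
    "clip_chars": 4000,                # hard cap on the summary length
}
```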
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
deployment_id: optional string
The id of the deployment.
description: optional string
The description of the agent.
embedding: optional string
The embedding model handle used by the agent (format: provider/model-name).
Deprecated embedding_config: optional EmbeddingConfig { embedding_dim, embedding_endpoint_type, embedding_model, 7 more }
Configuration for embedding model connection and processing parameters.
embedding_dim: number
The dimension of the embedding.
embedding_endpoint_type: "openai" or "anthropic" or "bedrock" or 16 more
The endpoint type for the model.
embedding_model: string
The model for the embedding.
azure_deployment: optional string
The Azure deployment for the model.
azure_endpoint: optional string
The Azure endpoint for the model.
azure_version: optional string
The Azure version for the model.
batch_size: optional number
The maximum batch size for processing embeddings.
embedding_chunk_size: optional number
The chunk size of the embedding.
embedding_endpoint: optional string
The endpoint for the model (None if local).
handle: optional string
The handle for this config, in the format provider/model-name.
enable_sleeptime: optional boolean
If set to True, memory management will move to a background agent thread.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the agent will be hidden.
identities: optional array of object { id, agent_ids, block_ids, 5 more }
The identities associated with this agent.
id: string
The human-friendly ID of the Identity
Deprecated agent_ids: array of string
The IDs of the agents associated with the identity.
Deprecated block_ids: array of string
The IDs of the blocks associated with the identity.
identifier_key: string
External, user-generated identifier key of the identity.
identity_type: "org" or "user" or "other"
The type of the identity.
name: string
The name of the identity.
project_id: optional string
The project id of the identity, if applicable.
properties: optional array of object { key, type, value }
List of properties associated with the identity
key: string
The key of the property
type: "string" or "number" or "boolean" or "json"
The type of the property
value: string or number or boolean or map[unknown]
The value of the property
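As a concrete illustration, an identity with one typed property could be expressed as below; all values are hypothetical.

```python
# A hypothetical user identity with a single string property.
identity = {
    "identifier_key": "user-12345",  # external, user-generated key
    "identity_type": "user",
    "name": "Ada Lovelace",
    "properties": [
        {"key": "plan", "type": "string", "value": "pro"},
    ],
}
```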
Deprecated identity_ids: optional array of string
Deprecated: Use identities field instead. The ids of the identities associated with this agent.
last_run_completion: optional string
The timestamp when the agent last completed a run.
last_run_duration_ms: optional number
The duration in milliseconds of the agent's last run.
The stop reason from the agent's last run.
last_updated_by_id: optional string
The id of the user that last updated this object.
managed_group: optional object { id, agent_ids, description, 15 more }
The multi-agent group that this agent manages
id: string
The id of the group. Assigned by the database.
manager_type: "round_robin" or "supervisor" or "dynamic" or 3 more
base_template_id: optional string
The base template id.
deployment_id: optional string
The id of the deployment.
hidden: optional boolean
If set to True, the group will be hidden.
max_message_buffer_length: optional number
The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.
min_message_buffer_length: optional number
The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.
project_id: optional string
The associated project id.
template_id: optional string
The id of the template.
max_files_open: optional number
Maximum number of files that can be open at once for this agent. Setting this too high may exceed the context window, which will break the agent.
message_buffer_autoclear: optional boolean
If set to True, the agent will not remember previous messages (though the agent will still retain state via core memory blocks and archival/recall memory). Not recommended unless you have an advanced use case.
message_ids: optional array of string
The ids of the messages in the agent's in-context memory.
metadata: optional map[unknown]
The metadata of the agent.
model: optional string
The model handle used by the agent (format: provider/model-name).
model_settings: optional OpenAIModelSettings { max_output_tokens, parallel_tool_calls, provider_type, 4 more } or AnthropicModelSettings { effort, max_output_tokens, parallel_tool_calls, 6 more } or GoogleAIModelSettings { max_output_tokens, parallel_tool_calls, provider_type, 3 more } or 10 more
The model settings used by the agent.
OpenAIModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 4 more }
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "openai"
The type of the provider.
reasoning: optional object { reasoning_effort }
The reasoning configuration for the model.
reasoning_effort: optional "none" or "minimal" or "low" or 3 more
The reasoning effort to use when generating text with reasoning models.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
strict: optional boolean
Enable strict mode for tool calling. When true, tool outputs are guaranteed to match JSON schemas.
temperature: optional number
The temperature of the model.
AnthropicModelSettings = object { effort, max_output_tokens, parallel_tool_calls, 6 more }
effort: optional "low" or "medium" or "high"
Effort level for Opus 4.5 model (controls token conservation). Not setting this gives similar performance to 'high'.
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "anthropic"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
strict: optional boolean
Enable strict mode for tool calling. When true, tool outputs are guaranteed to match JSON schemas.
temperature: optional number
The temperature of the model.
thinking: optional object { budget_tokens, type }
The thinking configuration for the model.
budget_tokens: optional number
The maximum number of tokens the model can use for extended thinking.
type: optional "enabled" or "disabled"
The type of thinking to use.
verbosity: optional "low" or "medium" or "high"
Soft control for how verbose model output should be, used for GPT-5 models.
GoogleAIModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 3 more }
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "google_ai"
The type of the provider.
response_schema: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response schema for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
thinking_config: optional object { include_thoughts, thinking_budget }
The thinking configuration for the model.
include_thoughts: optional boolean
Whether to include thoughts in the model's response.
thinking_budget: optional number
The thinking budget for the model.
GoogleVertexModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 3 more }
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "google_vertex"
The type of the provider.
response_schema: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response schema for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
thinking_config: optional object { include_thoughts, thinking_budget }
The thinking configuration for the model.
include_thoughts: optional boolean
Whether to include thoughts in the model's response.
thinking_budget: optional number
The thinking budget for the model.
AzureModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Azure OpenAI model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "azure"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
XaiModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
xAI model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "xai"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
Zai = object { max_output_tokens, parallel_tool_calls, provider_type, 3 more }
Z.ai (ZhipuAI) model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "zai"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
thinking: optional object { clear_thinking, type }
The thinking configuration for GLM-4.5+ models.
clear_thinking: optional boolean
If False, preserved thinking is used (recommended for agents).
type: optional "enabled" or "disabled"
Whether thinking is enabled or disabled.
GroqModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Groq model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "groq"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
DeepseekModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Deepseek model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "deepseek"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
TogetherModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
Together AI model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "together"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
BedrockModelSettings = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
AWS Bedrock model configuration.
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "bedrock"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
Openrouter = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
OpenRouter model configuration (OpenAI-compatible).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "openrouter"
The type of the provider.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format for the model.
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
temperature: optional number
The temperature of the model.
ChatgptOAuth = object { max_output_tokens, parallel_tool_calls, provider_type, 2 more }
ChatGPT OAuth model configuration (uses ChatGPT backend API).
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
provider_type: optional "chatgpt_oauth"
The type of the provider.
reasoning: optional object { reasoning_effort }
The reasoning configuration for the model.
reasoning_effort: optional "none" or "low" or "medium" or 2 more
The reasoning effort level for GPT-5.x and o-series models.
temperature: optional number
The temperature of the model.
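As one worked example of the settings union above, the following enables Anthropic extended thinking using only fields documented here; the values are illustrative.

```python
# Hypothetical model_settings override for an Anthropic model with
# extended thinking enabled (fields as documented above).
model_settings = {
    "provider_type": "anthropic",
    "max_output_tokens": 4096,
    "thinking": {
        "type": "enabled",
        "budget_tokens": 2048,  # token budget for extended thinking
    },
}
```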
Deprecated multi_agent_group: optional object { id, agent_ids, description, 15 more }
Deprecated: Use managed_group field instead. The multi-agent group that this agent manages.
id: string
The id of the group. Assigned by the database.
manager_type: "round_robin" or "supervisor" or "dynamic" or 3 more
base_template_id: optional string
The base template id.
deployment_id: optional string
The id of the deployment.
hidden: optional boolean
If set to True, the group will be hidden.
max_message_buffer_length: optional number
The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.
min_message_buffer_length: optional number
The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.
project_id: optional string
The associated project id.
template_id: optional string
The id of the template.
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (ToolCall): The tool call
Deprecated tool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool call that has been requested by the LLM to run.
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "approval_request_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool calls that have been requested by the LLM to run, which are pending approval.
ToolCallDelta = object { arguments, name, tool_call_id }
per_file_view_window_char_limit: optional number
The per-file view window character limit for this agent. Setting this too high may exceed the context window, which will break the agent.
project_id: optional string
The id of the project the agent belongs to.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format used by the agent
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
secrets: optional array of AgentEnvironmentVariable { agent_id, key, value, 7 more }
The environment variables for tool execution specific to this agent.
agent_id: string
The ID of the agent this environment variable belongs to.
key: string
The name of the environment variable.
value: string
The value of the environment variable.
id: optional string
The human-friendly ID of the Agent-env
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
description: optional string
An optional description of the environment variable.
last_updated_by_id: optional string
The id of the user that last updated this object.
updated_at: optional string
The timestamp when the object was last updated.
value_enc: optional string
Encrypted secret value (stored as encrypted string)
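For example, a per-agent secret exposed to tool execution could be written as follows; the key and value are hypothetical, and the server may return only value_enc for secrets it has encrypted.

```python
# A hypothetical per-agent secret made available during tool execution.
secret = {
    "key": "WEATHER_API_KEY",  # environment variable name seen by the tool
    "value": "sk-example",     # plaintext value; may be stored as value_enc
}
```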
template_id: optional string
The id of the template the agent belongs to.
timezone: optional string
The timezone of the agent (IANA format).
Deprecated tool_exec_environment_variables: optional array of AgentEnvironmentVariable { agent_id, key, value, 7 more }
Deprecated: use secrets field instead.
agent_id: string
The ID of the agent this environment variable belongs to.
key: string
The name of the environment variable.
value: string
The value of the environment variable.
id: optional string
The human-friendly ID of the Agent-env
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
description: optional string
An optional description of the environment variable.
last_updated_by_id: optional string
The id of the user that last updated this object.
updated_at: optional string
The timestamp when the object was last updated.
value_enc: optional string
Encrypted secret value (stored as encrypted string)
tool_rules: optional array of ChildToolRule { children, tool_name, child_arg_nodes, 2 more } or InitToolRule { tool_name, args, prompt_template, type } or TerminalToolRule { tool_name, prompt_template, type } or 6 more
The list of tool rules.
ChildToolRule = object { children, tool_name, child_arg_nodes, 2 more }
A ToolRule represents a tool that can be invoked by the agent.
children: array of string
The children tools that can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
child_arg_nodes: optional array of object { name, args }
Optional list of typed child argument overrides. Each node must reference a child in 'children'.
name: string
The name of the child tool to invoke next.
args: optional map[unknown]
Optional prefilled arguments for this child tool. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.
prompt_template: optional string
Optional template string (ignored).
InitToolRule = object { tool_name, args, prompt_template, type }
Represents the initial tool rule configuration.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
args: optional map[unknown]
Optional prefilled arguments for this tool. When present, these values will override any LLM-provided arguments with the same keys during invocation. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.
prompt_template: optional string
Optional template string (ignored). Rendering uses fast built-in formatting for performance.
TerminalToolRule = object { tool_name, prompt_template, type }
Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
ConditionalToolRule = object { child_output_mapping, tool_name, default_child, 3 more }
A ToolRule that conditionally maps to different child tools based on the output.
child_output_mapping: map[string]
Mapping from tool output values to the name of the child tool to call next.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
default_child: optional string
The default child tool to be called. If None, any tool can be called.
prompt_template: optional string
Optional template string (ignored).
require_output_mapping: optional boolean
Whether to throw an error when output doesn't match any case
ContinueToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
RequiredBeforeExitToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration where this tool must be called before the agent loop can exit.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
MaxCountPerStepToolRule = object { max_count_limit, tool_name, prompt_template, type }
Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.
max_count_limit: number
The max limit for the total number of times this tool can be invoked in a single step.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
ParentToolRule = object { children, tool_name, prompt_template, type }
A ToolRule that only allows a child tool to be called if the parent has been called.
children: array of string
The children tools that can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
RequiresApprovalToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration which requires approval before the tool can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored). Rendering uses fast built-in formatting for performance.
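Combining the rule types above, a small rule set might force a planning tool to run first with a prefilled argument, constrain its children, and end the loop on reply. The type discriminator strings below are assumptions; take the exact values from the schema definitions above.

```python
# Hypothetical tool_rules: run `plan` first (with a prefilled arg), allow it
# to fan out only to `search` or `reply`, and end the loop after `reply`.
# The "type" strings are assumed discriminators, not confirmed values.
tool_rules = [
    {"type": "run_first", "tool_name": "plan", "args": {"depth": 1}},
    {
        "type": "constrain_child_tools",
        "tool_name": "plan",
        "children": ["search", "reply"],
    },
    {"type": "exit_loop", "tool_name": "reply"},
]
```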
updated_at: optional string
The timestamp when the object was last updated.
func_return: optional unknown
The function return object
sandbox_config_fingerprint: optional string
The fingerprint of the config for the sandbox
stderr: optional array of string
Captured stderr from the function invocation
stdout: optional array of string
Captured stdout (prints, logs) from function invocation
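Finally, a minimal sketch of consuming a ToolExecutionResult, assuming result is the parsed response body from Run Tool For Agent:

```python
# Inspect a parsed ToolExecutionResult dict (shape as documented above).
def summarize_result(result: dict) -> str:
    if result["status"] == "error":
        stderr = "\n".join(result.get("stderr") or [])
        return f"tool failed:\n{stderr}"
    return f"tool returned: {result.get('func_return')!r}"
```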