Parallel Tool Calling
When an agent calls multiple tools, Letta can execute them concurrently instead of sequentially.
Parallel tool calling has two configuration levels:
- Agent LLM config: Controls whether the LLM can request multiple tool calls at once
- Individual tool settings: Controls whether requested tools actually execute in parallel or sequentially
Model Support
Parallel tool calling is supported for OpenAI and Anthropic models.
Enabling Parallel Tool Calling
Agent Configuration
Set `parallel_tool_calls: true` in the agent’s LLM config:
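A minimal sketch using the Python SDK (`letta_client`). The create/modify calls, parameters, and model handles below are assumptions about the SDK surface, not a verbatim recipe; check your SDK version for the exact signatures.

```python
from letta_client import Letta

# Connect to a local Letta server (URL is illustrative; Letta Cloud uses a token instead)
client = Letta(base_url="http://localhost:8283")

# Create an agent with a supported model (OpenAI or Anthropic)
agent = client.agents.create(
    model="openai/gpt-4o",
    embedding="openai/text-embedding-3-small",
)

# Enable parallel tool calls on the agent's LLM config.
# Assumption: the update endpoint accepts an llm_config override with this field.
llm_config = agent.llm_config
llm_config.parallel_tool_calls = True
client.agents.modify(agent_id=agent.id, llm_config=llm_config)
```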
Tool Configuration
Individual tools must opt in to parallel execution. By default, tools execute sequentially (`enable_parallel_execution=False`); set this flag to allow a tool to run concurrently:
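A sketch of registering a tool and opting it in, again using the Python SDK. The method names are based on `letta_client` and the `enable_parallel_execution` flag comes from the setting described above; treat the exact call signatures as assumptions to verify against your SDK version.

```python
from letta_client import Letta

client = Letta(base_url="http://localhost:8283")

def fetch_weather(city: str) -> str:
    """Look up the current weather for a city (read-only, safe to run in parallel)."""
    import urllib.request
    with urllib.request.urlopen(f"https://wttr.in/{city}?format=3") as response:
        return response.read().decode()

# Register the tool, then opt it in to parallel execution.
# Assumption: the tool update endpoint accepts enable_parallel_execution.
tool = client.tools.upsert_from_function(func=fetch_weather)
client.tools.modify(tool_id=tool.id, enable_parallel_execution=True)
```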
Only enable parallel execution for tools that are safe to run concurrently. Tools that modify shared state or have ordering dependencies should remain sequential.
ADE Configuration
Agent Toggle
1. Open Settings → LLM Config
2. Enable “Parallel tool calls”
Tool Toggle
1. Open the Tools panel
2. Click a tool to open it
3. Go to the Settings tab
4. Enable “Enable parallel execution”
Execution Behavior
When the agent calls multiple tools:
- Sequential tools execute one by one
- Parallel-enabled tools execute concurrently
- Mixed: sequential tools complete first, then parallel-enabled tools execute together
Example:
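Suppose (with hypothetical tool names) the agent requests three tool calls in one turn: `update_memory` (sequential), `fetch_weather` (parallel-enabled), and `search_web` (parallel-enabled). `update_memory` runs first on its own; once it finishes, `fetch_weather` and `search_web` execute concurrently and their results are returned together.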
Limitations
- Parallel execution is automatically disabled when tool rules are configured
- Only enable parallel execution for tools that are safe to run concurrently (e.g., read-only operations)
- Tools that modify shared state should remain sequential