Groups

Coordinate multiple agents with different communication patterns

Groups are a new feature in Letta and the specification is actively evolving. If you need support, please chat with us on Discord.

Groups enable sophisticated multi-agent coordination patterns in Letta. Each group type provides a different communication and execution pattern, allowing you to choose the right architecture for your multi-agent system.

Choosing the Right Group Type

| Group Type | Best For | Key Features |
| --- | --- | --- |
| Sleep-time | Background monitoring, periodic tasks | Main + background agents, configurable frequency |
| Round Robin | Equal participation, structured discussions | Sequential, predictable, no orchestrator needed |
| Supervisor | Parallel task execution, work distribution | Centralized control, parallel processing, result aggregation |
| Dynamic | Context-aware routing, complex workflows | Flexible, adaptive, orchestrator-driven |
| Handoff | Specialized routing, expertise-based delegation | Task-based transfers (coming soon) |

Working with Groups

All group types follow a similar creation pattern using the SDK:

  1. Create individual agents with their specific roles and personas
  2. Create a group with the appropriate manager configuration
  3. Send messages to the group for coordinated multi-agent interaction

Groups can be managed through the Letta API or SDKs:

  • List all groups: client.groups.list()
  • Retrieve a specific group: client.groups.retrieve(group_id)
  • Update group configuration: client.groups.update(group_id, update_config)
  • Delete a group: client.groups.delete(group_id)
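For illustration, a minimal sketch of these management calls with the Python SDK. The group ID used for retrieval is a placeholder, and passing update fields as keyword arguments is an assumption about the SDK signature; the fields accepted by an update depend on the group's manager type.

from letta_client import Letta

client = Letta()

# List all groups
groups = client.groups.list()

# Retrieve a specific group (placeholder ID for illustration)
group = client.groups.retrieve("group-00000000-0000-0000-0000-000000000000")

# Update the group's configuration (keyword-style fields are an assumption;
# accepted fields depend on the group's manager type)
group = client.groups.update(group.id, description="Updated description")

# Delete the group
client.groups.delete(group.id)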

Sleep-time

The Sleep-time pattern enables background agents to execute periodically while a main conversation agent handles user interactions. This is based on our sleep-time compute research.

For an in-depth guide on sleep-time agents, including conversation processing and data source integration, see our Sleep-time Agents documentation.

How it works

  • A main conversation agent handles direct user interactions
  • Sleep-time agents execute in the background every Nth turn
  • Background agents have access to the full message history
  • Useful for periodic tasks like monitoring, data collection, or summary generation
  • Frequency of background execution is configurable

Code Example

from letta_client import Letta, SleeptimeManager

client = Letta()

# Create main conversation agent
main_agent = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am the main conversation agent"}
    ]
)

# Create sleeptime agents for background tasks
monitor_agent = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I monitor conversation sentiment and key topics"}
    ]
)

summary_agent = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I create periodic summaries of the conversation"}
    ]
)

# Create a Sleeptime group
group = client.groups.create(
    agent_ids=[monitor_agent.id, summary_agent.id],
    description="Background agents that process conversation periodically",
    manager_config=SleeptimeManager(
        manager_agent_id=main_agent.id,
        sleeptime_agent_frequency=3  # Execute every 3 turns
    )
)

# Send messages to the group
response = client.groups.messages.send(
    group_id=group.id,
    message="Let's discuss our project roadmap"
)

Round Robin

The Round Robin pattern cycles through the agents in the group in the order they were added. It is useful when each agent should contribute equally and in sequence.

How it works

  • Cycles through agents in the order they were added to the group
  • Every agent has access to the full conversation history
  • Each agent can choose whether or not to respond when it’s their turn
  • By default each agent gets one turn, but the maximum number of turns can be configured
  • Does not require an orchestrator agent

Code Example

from letta_client import Letta, RoundRobinManager

client = Letta()

# Create agents for the group
agent1 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am the first agent in the group"}
    ]
)

agent2 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am the second agent in the group"}
    ]
)

agent3 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am the third agent in the group"}
    ]
)

# Create a RoundRobin group
group = client.groups.create(
    agent_ids=[agent1.id, agent2.id, agent3.id],
    description="A group that cycles through agents in order",
    manager_config=RoundRobinManager(
        max_turns=3  # Optional: defaults to number of agents
    )
)

# Send a message to the group
response = client.groups.messages.send(
    group_id=group.id,
    message="Hello group, what are your thoughts on this topic?"
)

Supervisor

The Supervisor pattern uses a manager agent to coordinate worker agents. The supervisor forwards prompts to all workers and aggregates their responses.

How it works

  • A designated supervisor agent manages the group
  • The supervisor forwards messages to all worker agents simultaneously
  • Worker agents process the request in parallel and return their responses
  • The supervisor aggregates the responses and returns them to the user
  • Ideal for parallel task execution and result aggregation

Code Example

from letta_client import Letta, SupervisorManager

client = Letta()

# Create supervisor agent
supervisor = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a supervisor managing multiple workers"}
    ]
)

# Create worker agents
worker1 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a data analysis specialist"}
    ]
)

worker2 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a research specialist"}
    ]
)

worker3 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a writing specialist"}
    ]
)

# Create a Supervisor group
group = client.groups.create(
    agent_ids=[worker1.id, worker2.id, worker3.id],
    description="A supervisor-worker group for parallel task execution",
    manager_config=SupervisorManager(
        manager_agent_id=supervisor.id
    )
)

# Send a message to the group
response = client.groups.messages.send(
    group_id=group.id,
    message="Analyze this data and prepare a report"
)

Dynamic

The Dynamic pattern uses an orchestrator agent to dynamically determine which agent should speak next based on the conversation context.

How it works

  • An orchestrator agent is invoked on every turn to select the next speaker
  • Every agent has access to the full message history
  • Agents can choose not to respond when selected
  • Supports a termination token to end the conversation
  • Maximum turns can be configured to prevent infinite loops

Code Example

from letta_client import Letta, DynamicManager

client = Letta()

# Create orchestrator agent
orchestrator = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am an orchestrator that decides who speaks next based on context"}
    ]
)

# Create participant agents
expert1 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a technical expert"}
    ]
)

expert2 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a business strategist"}
    ]
)

expert3 = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[
        {"label": "persona", "value": "I am a creative designer"}
    ]
)

# Create a Dynamic group
group = client.groups.create(
    agent_ids=[expert1.id, expert2.id, expert3.id],
    description="A dynamic group where the orchestrator chooses speakers",
    manager_config=DynamicManager(
        manager_agent_id=orchestrator.id,
        termination_token="DONE!",  # Optional: default is "DONE!"
        max_turns=10  # Optional: prevent infinite loops
    )
)

# Send a message to the group
response = client.groups.messages.send(
    group_id=group.id,
    message="Let's design a new product. Who should start?"
)

Handoff (Coming Soon)

The Handoff pattern will enable agents to explicitly transfer control to other agents based on task requirements or expertise areas.

Planned Features

  • Agents can hand off conversations to specialists
  • Context and state preservation during handoffs
  • Support for both orchestrated and peer-to-peer handoffs
  • Automatic routing based on agent capabilities

Best Practices

  • Choose the group type that matches your coordination needs
  • Configure appropriate max turns to prevent infinite loops
  • Use shared memory blocks for state that needs to be accessed by multiple agents (see the sketch after this list)
  • Monitor group performance and adjust configurations as needed
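
For the shared memory block recommendation above, here is a minimal sketch, assuming the standard blocks API (client.blocks.create) and that agents.create accepts block_ids for attaching an existing block:

from letta_client import Letta

client = Letta()

# Create a block that multiple agents will share
shared_state = client.blocks.create(
    label="project_state",
    value="Current milestone: initial design review"
)

# Attach the same block to several agents; updates made by one agent
# are visible to the others through the shared block
agent_a = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[{"label": "persona", "value": "I track engineering tasks"}],
    block_ids=[shared_state.id]
)

agent_b = client.agents.create(
    model="openai/gpt-4.1",
    memory_blocks=[{"label": "persona", "value": "I track design tasks"}],
    block_ids=[shared_state.id]
)

Either agent can then be added to a group, and both will read and write the same project_state block.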