Groups

Coordinate multiple agents with different communication patterns

Group support is experimental and may be unstable. For more information, visit our Discord.

Groups enable sophisticated multi-agent coordination patterns in Letta. Each group type provides a different communication and execution pattern, allowing you to choose the right architecture for your multi-agent system.

Choosing the Right Group Type

Group Type     Best For                                          Key Features
Sleep-time     Background monitoring, periodic tasks             Main + background agents, configurable frequency
Round Robin    Equal participation, structured discussions       Sequential, predictable, no orchestrator needed
Supervisor     Parallel task execution, work distribution        Centralized control, parallel processing, result aggregation
Dynamic        Context-aware routing, complex workflows          Flexible, adaptive, orchestrator-driven
Handoff        Specialized routing, expertise-based delegation   Task-based transfers (coming soon)

Working with Groups

All group types follow a similar creation pattern using the SDK:

  1. Create individual agents with their specific roles and personas
  2. Create a group with the appropriate manager configuration
  3. Send messages to the group for coordinated multi-agent interaction

Groups can be managed through the Letta API or SDKs:

  • List all groups: client.groups.list()
  • Retrieve a specific group: client.groups.retrieve(group_id)
  • Update group configuration: client.groups.update(group_id, update_config)
  • Delete a group: client.groups.delete(group_id)

Sleep-time

The Sleep-time pattern enables background agents to execute periodically while a main conversation agent handles user interactions. This is based on our sleep-time compute research.

For an in-depth guide on sleep-time agents, including conversation processing and data source integration, see our Sleep-time Agents documentation.

How it works

  • A main conversation agent handles direct user interactions
  • Sleep-time agents execute in the background every Nth turn
  • Background agents have access to the full message history
  • Useful for periodic tasks like monitoring, data collection, or summary generation
  • Frequency of background execution is configurable
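
The Nth-turn scheduling above can be sketched as a small helper. `sleeptimeAgentFrequency` matches the SDK option name, but `shouldRunBackground` is a hypothetical illustration of the timing, not part of the Letta API:

```typescript
// Hypothetical helper illustrating when sleep-time agents fire.
// With sleeptimeAgentFrequency = 3, background agents run after
// turns 3, 6, 9, ... of the main conversation.
function shouldRunBackground(turn: number, sleeptimeAgentFrequency: number): boolean {
  return turn > 0 && turn % sleeptimeAgentFrequency === 0;
}

// Which of the first ten turns trigger a background pass?
const triggered = Array.from({ length: 10 }, (_, i) => i + 1)
  .filter(turn => shouldRunBackground(turn, 3));
console.log(triggered); // [3, 6, 9]
```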

Code Example

import { LettaClient } from '@letta-ai/letta-client';

const client = new LettaClient();

// Create main conversation agent
const mainAgent = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am the main conversation agent"}
  ]
});

// Create sleeptime agents for background tasks
const monitorAgent = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I monitor conversation sentiment and key topics"}
  ]
});

const summaryAgent = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I create periodic summaries of the conversation"}
  ]
});

// Create a Sleeptime group
const group = await client.groups.create({
  agentIds: [monitorAgent.id, summaryAgent.id],
  description: "Background agents that process conversation periodically",
  managerConfig: {
    managerType: "sleeptime",
    managerAgentId: mainAgent.id,
    sleeptimeAgentFrequency: 3 // Execute every 3 turns
  }
});

// Send messages to the group
const response = await client.groups.messages.create(
  group.id,
  {
    messages: [{role: "user", content: "Let's discuss our project roadmap"}]
  }
);

RoundRobin

The RoundRobin group cycles through each agent in the group in the specified order. This pattern is useful for scenarios where each agent needs to contribute equally and in sequence.

How it works

  • Cycles through agents in the order they were added to the group
  • Every agent has access to the full conversation history
  • Each agent can choose whether or not to respond when it’s their turn
  • By default each agent gets one turn, but the maximum number of turns can be configured
  • Does not require an orchestrator agent
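
The cycling behavior can be sketched with a hypothetical `nextSpeaker` helper (not part of the SDK) that maps a turn index to an agent ID:

```typescript
// Hypothetical sketch of round-robin speaker selection: agents
// speak in the order they were added, and after the last agent
// the selection wraps back to the first.
function nextSpeaker(agentIds: string[], turn: number): string {
  return agentIds[turn % agentIds.length];
}

const agents = ["agent-1", "agent-2", "agent-3"];
const order = [0, 1, 2, 3, 4].map(turn => nextSpeaker(agents, turn));
console.log(order); // ["agent-1", "agent-2", "agent-3", "agent-1", "agent-2"]
```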

Code Example

import { LettaClient } from '@letta-ai/letta-client';

const client = new LettaClient();

// Create agents for the group
const agent1 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am the first agent in the group"}
  ]
});

const agent2 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am the second agent in the group"}
  ]
});

const agent3 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am the third agent in the group"}
  ]
});

// Create a RoundRobin group
const group = await client.groups.create({
  agentIds: [agent1.id, agent2.id, agent3.id],
  description: "A group that cycles through agents in order",
  managerConfig: {
    managerType: "round_robin",
    maxTurns: 3 // Optional: defaults to number of agents
  }
});

// Send a message to the group
const response = await client.groups.messages.create(
  group.id,
  {
    messages: [{role: "user", content: "Hello group, what are your thoughts on this topic?"}]
  }
);

Supervisor

The Supervisor pattern uses a manager agent to coordinate worker agents. The supervisor forwards prompts to all workers and aggregates their responses.

How it works

  • A designated supervisor agent manages the group
  • Supervisor forwards messages to all worker agents simultaneously
  • Worker agents process in parallel and return responses
  • Supervisor aggregates all responses and returns to the user
  • Ideal for parallel task execution and result aggregation
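
The fan-out/aggregate flow can be sketched with plain promises. Here `superviseTask` and the stub workers are hypothetical stand-ins for the supervisor forwarding a prompt to real agents, not SDK calls:

```typescript
// Hypothetical sketch of the supervisor pattern: the prompt is
// fanned out to all workers in parallel, then the responses are
// aggregated into a single result.
async function superviseTask(
  prompt: string,
  workers: Array<(prompt: string) => Promise<string>>
): Promise<string> {
  const responses = await Promise.all(workers.map(worker => worker(prompt)));
  return responses.join("\n");
}

// Stub workers standing in for real agents.
const workers = [
  async (p: string) => `analysis of: ${p}`,
  async (p: string) => `research on: ${p}`,
];

superviseTask("quarterly data", workers).then(report => console.log(report));
```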

Code Example

import { LettaClient } from '@letta-ai/letta-client';

const client = new LettaClient();

// Create supervisor agent
const supervisor = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a supervisor managing multiple workers"}
  ]
});

// Create worker agents
const worker1 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a data analysis specialist"}
  ]
});

const worker2 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a research specialist"}
  ]
});

const worker3 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a writing specialist"}
  ]
});

// Create a Supervisor group
const group = await client.groups.create({
  agentIds: [worker1.id, worker2.id, worker3.id],
  description: "A supervisor-worker group for parallel task execution",
  managerConfig: {
    managerType: "supervisor",
    managerAgentId: supervisor.id
  }
});

// Send a message to the group
const response = await client.groups.messages.create(
  group.id,
  {
    messages: [{role: "user", content: "Analyze this data and prepare a report"}]
  }
);

Dynamic

The Dynamic pattern uses an orchestrator agent to dynamically determine which agent should speak next based on the conversation context.

How it works

  • An orchestrator agent is invoked on every turn to select the next speaker
  • Every agent has access to the full message history
  • Agents can choose not to respond when selected
  • Supports a termination token to end the conversation
  • Maximum turns can be configured to prevent infinite loops
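
The orchestration loop can be sketched as follows. `runDynamic` and the selector are hypothetical illustrations of the control flow, with the orchestrator's decision stubbed out; only `maxTurns` and the termination token correspond to SDK options:

```typescript
// Hypothetical sketch of the dynamic pattern's control loop: an
// orchestrator picks the next speaker each turn until it emits
// the termination token or maxTurns is reached.
type Selector = (history: string[]) => string; // returns an agent ID or the token

function runDynamic(
  select: Selector,
  speak: (agentId: string, history: string[]) => string,
  maxTurns: number,
  terminationToken = "DONE!"
): string[] {
  const history: string[] = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const choice = select(history);
    if (choice === terminationToken) break; // orchestrator ended the conversation
    history.push(speak(choice, history));
  }
  return history;
}

// Stub orchestrator: alternate two experts, then terminate.
const script = ["expert-1", "expert-2", "DONE!"];
const transcript = runDynamic(
  history => script[history.length],
  (agentId, _history) => `${agentId} responds`,
  10
);
console.log(transcript); // ["expert-1 responds", "expert-2 responds"]
```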

Code Example

import { LettaClient } from '@letta-ai/letta-client';

const client = new LettaClient();

// Create orchestrator agent
const orchestrator = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am an orchestrator that decides who speaks next based on context"}
  ]
});

// Create participant agents
const expert1 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a technical expert"}
  ]
});

const expert2 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a business strategist"}
  ]
});

const expert3 = await client.agents.create({
  model: "openai/gpt-4.1",
  memoryBlocks: [
    {label: "persona", value: "I am a creative designer"}
  ]
});

// Create a Dynamic group
const group = await client.groups.create({
  agentIds: [expert1.id, expert2.id, expert3.id],
  description: "A dynamic group where the orchestrator chooses speakers",
  managerConfig: {
    managerType: "dynamic",
    managerAgentId: orchestrator.id,
    terminationToken: "DONE!", // Optional: default is "DONE!"
    maxTurns: 10 // Optional: prevent infinite loops
  }
});

// Send a message to the group
const response = await client.groups.messages.create(
  group.id,
  {
    messages: [{role: "user", content: "Let's design a new product. Who should start?"}]
  }
);

Handoff (Coming Soon)

The Handoff pattern will enable agents to explicitly transfer control to other agents based on task requirements or expertise areas.

Planned Features

  • Agents can hand off conversations to specialists
  • Context and state preservation during handoffs
  • Support for both orchestrated and peer-to-peer handoffs
  • Automatic routing based on agent capabilities

Best Practices

  • Choose the group type that matches your coordination needs
  • Configure appropriate max turns to prevent infinite loops
  • Use shared memory blocks for state that needs to be accessed by multiple agents
  • Monitor group performance and adjust configurations as needed