# The producer-reviewer pattern
Build iterative content creation systems that use a producer to create content and a reviewer to evaluate it, looping with feedback until the content is approved or the loop reaches its maximum iterations.
A producer agent creates content while a reviewer agent evaluates it. The reviewer accepts or rejects the content. If rejected, the reviewer provides feedback, and the producer revises based on that feedback. This loop continues until the reviewer approves or a maximum iteration limit is reached.
## When to use this pattern

The producer-reviewer pattern works well when you need to:
- Create content that meets specific quality standards
- Iteratively refine outputs based on expert feedback
- Separate creation from evaluation responsibilities
- Enforce approval workflows before content is finalized
## Example use case

A marketing team needs social media posts that match brand guidelines. A producer agent drafts posts, while a reviewer agent evaluates them against tone, length, and messaging requirements. The reviewer rejects drafts that don’t meet its standards and provides specific feedback. The producer revises based on this feedback until the reviewer approves or the maximum revision count is reached.
## How it works

The pattern uses four key components:
- The producer agent creates content based on requirements and feedback.
- The reviewer agent evaluates the content against criteria, then accepts or rejects it.
- A shared memory block stores the current draft for review.
- Client orchestration manages the feedback loop and iteration count.
The client orchestrates the workflow:
- The client sends the initial request to the producer.
- The producer creates content and stores it in shared memory.
- The client reads the draft and sends it to the reviewer.
- The reviewer evaluates the draft, then provides a verdict (accept or reject) and feedback.
- The client acts based on the reviewer’s response:
  - If rejected: The client sends feedback to the producer, and the loop continues.
  - If accepted or max iterations reached: The loop terminates.
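Before wiring in the SDK, the control flow above can be sketched as a plain Python loop. This is a minimal sketch: `run_review_loop`, `produce`, `review`, and `revise` are hypothetical names standing in for calls to the producer and reviewer agents, and `review` is assumed to return an `(accepted, feedback)` pair.

```python
def run_review_loop(request, produce, review, revise, max_iterations=5):
    """Hypothetical client-side orchestration of the producer-reviewer loop."""
    draft = produce(request)                # producer writes the first draft
    for _ in range(max_iterations):
        accepted, feedback = review(draft)  # reviewer returns a verdict + feedback
        if accepted:
            return draft, True              # approved: loop terminates
        draft = revise(draft, feedback)     # producer revises from the feedback
    return draft, False                     # gave up at max iterations

# Toy example: the reviewer demands an exclamation mark.
draft, ok = run_review_loop(
    "tagline",
    produce=lambda req: "Launch day",
    review=lambda d: (d.endswith("!"), "Add an exclamation mark."),
    revise=lambda d, fb: d + "!",
)
print(ok, draft)  # True Launch day!
```

In the full implementation below, `produce` and `revise` become messages to the producer agent, and `review` becomes a message to the reviewer agent plus a read of the shared draft block.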
## Implementation

Implement the producer-reviewer pattern using the following steps.
### Step 1: Create shared memory for drafts

Create a memory block where the producer stores drafts and the reviewer reads them:
```python
from letta_client import Letta
import os

client = Letta(api_key=os.getenv("LETTA_API_KEY"))

# Shared memory block for current draft
draft_block = client.blocks.create(
    label="current_draft",
    description="The current content draft under review",
    value=""  # Producer will update this
)
```

The producer updates this block with each draft version. The reviewer reads it to evaluate the content.
### Step 2: Create the producer agent

Create the producer agent with access to the shared draft block:
```python
producer = client.agents.create(
    name="content_producer",
    model="anthropic/claude-sonnet-4-5-20250929",
    memory_blocks=[{
        "label": "persona",
        "value": "I create marketing content. I store my drafts in the current_draft memory block and revise based on feedback."
    }],
    block_ids=[draft_block.id],
    tools=["core_memory_replace"]
)
```

The producer’s key features include:

- The `core_memory_replace` tool, which it uses to update the draft block
- A persona that emphasizes storing drafts and incorporating feedback
- Access to the shared draft block for writing content
### Step 3: Create the reviewer agent

Create the reviewer agent with evaluation criteria:
```python
reviewer = client.agents.create(
    name="content_reviewer",
    model="anthropic/claude-sonnet-4-5-20250929",
    memory_blocks=[{
        "label": "persona",
        "value": "I review marketing content against brand guidelines. I evaluate tone, length (max 280 characters), and messaging. I respond with ACCEPT or REJECT followed by specific feedback."
    }],
    block_ids=[draft_block.id]
)
```

The reviewer’s key features include:

- A persona with clear evaluation criteria
- Access to the shared draft block for reading content
- The ability to provide an explicit verdict
### Step 4: Run the feedback loop

Orchestrate the iterative refinement process:
```python
# Configuration
max_iterations = 5
request = "Write a social media post announcing our new AI agent framework launch."

# Initial production
response = client.agents.messages.create(
    agent_id=producer.id,
    messages=[{
        "role": "user",
        "content": f"{request} Store your draft in the current_draft memory block."
    }]
)

# Feedback loop
for iteration in range(max_iterations):
    print(f"\n--- Iteration {iteration + 1}/{max_iterations} ---")

    # Get current draft from shared memory
    draft_block_updated = client.blocks.retrieve(draft_block.id)
    current_draft = draft_block_updated.value

    # Reviewer evaluates the draft
    review_response = client.agents.messages.create(
        agent_id=reviewer.id,
        messages=[{
            "role": "user",
            "content": f"Review this draft from current_draft memory:\n\n{current_draft}\n\nRespond with ACCEPT or REJECT followed by your feedback."
        }]
    )

    # Extract reviewer's verdict
    reviewer_message = review_response.messages[-1].content

    # Check if approved. The persona puts the verdict first, so check the
    # start of the reply rather than a substring (which a word like
    # "unacceptable" in the feedback would falsely match).
    if reviewer_message.strip().upper().startswith("ACCEPT"):
        print("✓ Draft approved!")
        break

    # Send the full review (verdict plus feedback) back to the producer
    feedback = reviewer_message
    response = client.agents.messages.create(
        agent_id=producer.id,
        messages=[{
            "role": "user",
            "content": f"The reviewer provided this feedback:\n\n{feedback}\n\nPlease revise your draft and update the current_draft memory block."
        }]
    )

# Get final approved draft
final_draft_block = client.blocks.retrieve(draft_block.id)
final_draft = final_draft_block.value
print(f"\nFinal draft:\n{final_draft}")
```

The loop works as follows:
- The producer creates an initial draft and stores it in the `current_draft` block.
- The client reads the draft from the memory block.
- The client sends the draft to the reviewer for evaluation.
- The reviewer responds with a verdict and feedback.
- The client acts based on the reviewer’s response:
  - If accepted: The loop terminates with approved content.
  - If rejected: The client forwards the feedback to the producer.
- The producer revises based on the feedback and updates the draft block.
- The loop continues until approval or until it reaches the maximum iterations.
## Minimal example

Here’s a minimal end-to-end example:
```python
from letta_client import Letta
import os

client = Letta(api_key=os.getenv("LETTA_API_KEY"))

# Create shared draft block
draft_block = client.blocks.create(
    label="current_draft",
    value=""
)

# Create producer
producer = client.agents.create(
    model="anthropic/claude-sonnet-4-5-20250929",
    memory_blocks=[{
        "label": "persona",
        "value": "I write content and store drafts in current_draft block."
    }],
    block_ids=[draft_block.id],
    tools=["core_memory_replace"]
)

# Create reviewer
reviewer = client.agents.create(
    model="anthropic/claude-sonnet-4-5-20250929",
    memory_blocks=[{
        "label": "persona",
        "value": "I review content. Max 50 characters. Respond with ACCEPT or REJECT plus feedback."
    }],
    block_ids=[draft_block.id]
)

# Initial production
client.agents.messages.create(
    agent_id=producer.id,
    messages=[{
        "role": "user",
        "content": "Write a product tagline. Store in current_draft block."
    }]
)

# Feedback loop
max_iterations = 3
for iteration in range(max_iterations):
    # Get draft
    draft_block_updated = client.blocks.retrieve(draft_block.id)
    current_draft = draft_block_updated.value

    # Review
    review_response = client.agents.messages.create(
        agent_id=reviewer.id,
        messages=[{
            "role": "user",
            "content": f"Review: {current_draft}"
        }]
    )
    verdict = review_response.messages[-1].content

    # Verdict comes first in the reviewer's reply, so check the start
    if verdict.strip().upper().startswith("ACCEPT"):
        print(f"Approved: {current_draft}")
        break

    # Revise
    client.agents.messages.create(
        agent_id=producer.id,
        messages=[{
            "role": "user",
            "content": f"Feedback: {verdict}\n\nRevise and update current_draft."
        }]
    )

# Cleanup
client.agents.delete(producer.id)
client.agents.delete(reviewer.id)
client.blocks.delete(draft_block.id)
```

## Best practices
- **Set appropriate max iterations**: Balance quality refinement with cost. Three to five iterations work well for most content creation tasks.
- **Use explicit verdict format**: Train the reviewer to respond with clear keywords for acceptance or rejection. This makes parsing decisions reliable.
- **Store criteria in the reviewer persona**: Put specific evaluation rules (character limits, tone requirements, brand guidelines) in the reviewer’s persona for consistent evaluation.
- **Provide actionable feedback**: The reviewer should give specific, actionable feedback. Vague feedback (“this could be better”) leads to unproductive iterations.
- **Consider archival memory for history**: For complex projects, attach a shared archive where both agents can store draft history and feedback records for future reference.
- **Monitor iteration patterns**: Track how many iterations typically occur. If most content requires the maximum iterations, the evaluation criteria may be too strict or unclear.
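As a concrete illustration of the explicit-verdict practice, here is one way verdict parsing could be made robust. `parse_verdict` is a hypothetical helper, not part of the Letta SDK: it anchors the keyword at the start of the reply, so feedback that merely contains a word like “unacceptable” is not misread as approval.

```python
import re

def parse_verdict(message: str) -> tuple[bool, str]:
    """Return (accepted, feedback) from a reply such as 'REJECT: too long'."""
    match = re.match(r"\s*(ACCEPT|REJECT)\b[:\-\s]*", message, re.IGNORECASE)
    if not match:
        # No explicit verdict: treat as a rejection so the loop keeps iterating.
        return False, message.strip()
    accepted = match.group(1).upper() == "ACCEPT"
    return accepted, message[match.end():].strip()

print(parse_verdict("REJECT: exceeds 280 characters"))
# (False, 'exceeds 280 characters')
```

Defaulting to rejection when no verdict is found keeps the loop conservative: an ambiguous review triggers another revision instead of silently approving a draft.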