Letta Docs

Retrieve Conversation

conversations.retrieve(conversation_id: str) -> Conversation
get/v1/conversations/{conversation_id}

Retrieve a specific conversation.

Parameters
conversation_id: str

The conversation identifier. Either the special value 'default' or an ID with the 'conv-' prefix.

minLength: 1
maxLength: 41
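The identifier constraints above can be validated client-side before issuing the request. A minimal sketch; the helper below is illustrative and not part of the SDK:

```python
def is_valid_conversation_id(conversation_id: str) -> bool:
    """Check the documented constraints: 1-41 characters, and either
    the special value 'default' or an ID with the 'conv-' prefix."""
    if not 1 <= len(conversation_id) <= 41:
        return False
    return conversation_id == "default" or conversation_id.startswith("conv-")

print(is_valid_conversation_id("default"))         # True
print(is_valid_conversation_id("conversation-1"))  # False: wrong prefix
```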
Returns
class Conversation:

Represents a conversation on an agent for concurrent messaging.

id: str

The unique identifier of the conversation.

agent_id: str

The ID of the agent this conversation belongs to.

created_at: Optional[datetime]

The timestamp when the object was created.

format: date-time
created_by_id: Optional[str]

The id of the user that made this object.

in_context_message_ids: Optional[List[str]]

The IDs of in-context messages for the conversation.

isolated_block_ids: Optional[List[str]]

IDs of blocks that are isolated (specific to this conversation, overriding agent defaults).

last_updated_by_id: Optional[str]

The id of the user that last updated this object.

model: Optional[str]

The model handle for this conversation (overrides agent's model). Format: provider/model-name.

model_settings: Optional[ModelSettings]

The model settings for this conversation (overrides agent's model settings).

Accepts one of the following:
class OpenAIModelSettings:
max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["openai"]]

The type of the provider.

reasoning: Optional[Reasoning]

The reasoning configuration for the model.

reasoning_effort: Optional[Literal["none", "minimal", "low", "medium", "high", "xhigh"]]

The reasoning effort to use when generating text with reasoning models.

Accepts one of the following:
"none"
"minimal"
"low"
"medium"
"high"
"xhigh"
response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

strict: Optional[bool]

Enable strict mode for tool calling. When true, tool outputs are guaranteed to match JSON schemas.

temperature: Optional[float]

The temperature of the model.
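Every field on these settings classes is optional, so a sparse payload is enough. A sketch of an OpenAI-style model settings payload as a plain dict, mirroring the field listing above; the JSON schema content is an illustrative assumption, not taken from the API:

```python
# Sparse OpenAI-style model settings; all fields are optional.
openai_settings = {
    "provider_type": "openai",
    "max_output_tokens": 1024,
    "parallel_tool_calls": False,
    "reasoning": {"reasoning_effort": "medium"},
    "response_format": {
        "type": "json_schema",
        "json_schema": {  # illustrative schema, not from the docs
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
        },
    },
    "strict": True,       # tool outputs guaranteed to match JSON schemas
    "temperature": 0.2,
}
```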

class AnthropicModelSettings:
effort: Optional[Literal["low", "medium", "high"]]

Effort level for Opus 4.5 model (controls token conservation). Not setting this gives similar performance to 'high'.

Accepts one of the following:
"low"
"medium"
"high"
max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["anthropic"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

strict: Optional[bool]

Enable strict mode for tool calling. When true, tool outputs are guaranteed to match JSON schemas.

temperature: Optional[float]

The temperature of the model.

thinking: Optional[Thinking]

The thinking configuration for the model.

budget_tokens: Optional[int]

The maximum number of tokens the model can use for extended thinking.

type: Optional[Literal["enabled", "disabled"]]

The type of thinking to use.

Accepts one of the following:
"enabled"
"disabled"
verbosity: Optional[Literal["low", "medium", "high"]]

Soft control for how verbose model output should be, used for GPT-5 models.

Accepts one of the following:
"low"
"medium"
"high"
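For Anthropic, extended thinking is toggled through the nested `thinking` object. A minimal sketch of such a payload; the token budget and effort level are arbitrary example values:

```python
# Sparse Anthropic-style model settings with extended thinking enabled.
anthropic_settings = {
    "provider_type": "anthropic",
    "effort": "medium",          # Opus 4.5 token-conservation control
    "max_output_tokens": 2048,
    "thinking": {
        "type": "enabled",
        "budget_tokens": 1024,   # cap on extended-thinking tokens
    },
}
```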
class GoogleAIModelSettings:
max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["google_ai"]]

The type of the provider.

response_schema: Optional[ResponseSchema]

The response schema for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

thinking_config: Optional[ThinkingConfig]

The thinking configuration for the model.

include_thoughts: Optional[bool]

Whether to include thoughts in the model's response.

thinking_budget: Optional[int]

The thinking budget for the model.
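The Google providers use `thinking_config` (with `include_thoughts` and `thinking_budget`) rather than the Anthropic-style `thinking` object. A sketch with illustrative values:

```python
# Sparse Google AI model settings; thinking_budget value is illustrative.
google_settings = {
    "provider_type": "google_ai",
    "max_output_tokens": 1024,
    "thinking_config": {
        "include_thoughts": False,  # omit thoughts from the response
        "thinking_budget": 512,
    },
}
```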

class GoogleVertexModelSettings:
max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["google_vertex"]]

The type of the provider.

response_schema: Optional[ResponseSchema]

The response schema for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

thinking_config: Optional[ThinkingConfig]

The thinking configuration for the model.

include_thoughts: Optional[bool]

Whether to include thoughts in the model's response.

thinking_budget: Optional[int]

The thinking budget for the model.

class AzureModelSettings:

Azure OpenAI model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["azure"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class XaiModelSettings:

xAI model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["xai"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class ZaiModelSettings:

Z.ai (ZhipuAI) model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["zai"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

thinking: Optional[Thinking]

The thinking configuration for GLM-4.5+ models.

clear_thinking: Optional[bool]

If False, preserved thinking is used (recommended for agents).

type: Optional[Literal["enabled", "disabled"]]

Whether thinking is enabled or disabled.

Accepts one of the following:
"enabled"
"disabled"
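Z.ai's thinking object differs from Anthropic's in its `clear_thinking` flag; setting it to False preserves thinking, which the field description recommends for agents. A sketch:

```python
# Sparse Z.ai model settings for GLM-4.5+ models.
zai_settings = {
    "provider_type": "zai",
    "thinking": {
        "type": "enabled",
        "clear_thinking": False,  # preserve thinking (recommended for agents)
    },
}
```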
class GroqModelSettings:

Groq model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["groq"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class DeepseekModelSettings:

Deepseek model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["deepseek"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class TogetherModelSettings:

Together AI model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["together"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class BedrockModelSettings:

AWS Bedrock model configuration.

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["bedrock"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class OpenRouterModelSettings:

OpenRouter model configuration (OpenAI-compatible).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["openrouter"]]

The type of the provider.

response_format: Optional[ResponseFormat]

The response format for the model.

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

temperature: Optional[float]

The temperature of the model.

class ChatGptOAuthModelSettings:

ChatGPT OAuth model configuration (uses ChatGPT backend API).

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

provider_type: Optional[Literal["chatgpt_oauth"]]

The type of the provider.

reasoning: Optional[Reasoning]

The reasoning configuration for the model.

reasoning_effort: Optional[Literal["none", "low", "medium", "high", "xhigh"]]

The reasoning effort level for GPT-5.x and o-series models.

Accepts one of the following:
"none"
"low"
"medium"
"high"
"xhigh"
temperature: Optional[float]

The temperature of the model.

summary: Optional[str]

A summary of the conversation.

updated_at: Optional[datetime]

The timestamp when the object was last updated.

format: date-time
Retrieve Conversation
import os
from letta_client import Letta

client = Letta(
    api_key=os.environ.get("LETTA_API_KEY"),  # This is the default and can be omitted
)
conversation = client.conversations.retrieve(
    "default",
)
print(conversation.id)
{
  "id": "id",
  "agent_id": "agent_id",
  "created_at": "2019-12-27T18:11:19.117Z",
  "created_by_id": "created_by_id",
  "in_context_message_ids": [
    "string"
  ],
  "isolated_block_ids": [
    "string"
  ],
  "last_updated_by_id": "last_updated_by_id",
  "model": "model",
  "model_settings": {
    "max_output_tokens": 0,
    "parallel_tool_calls": true,
    "provider_type": "openai",
    "reasoning": {
      "reasoning_effort": "none"
    },
    "response_format": {
      "type": "text"
    },
    "strict": true,
    "temperature": 0
  },
  "summary": "summary",
  "updated_at": "2019-12-27T18:11:19.117Z"
}
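Because `model` and `model_settings` on the conversation override the agent's defaults only when set, callers typically fall back to the agent's configuration when they are null. A sketch over a response payload like the one above; the agent-level default model is a hypothetical placeholder:

```python
# Conversation payload with no per-conversation model override.
conversation = {
    "id": "conv-123",
    "model": None,
    "model_settings": None,
}
agent_default_model = "openai/gpt-4.1"  # hypothetical agent-level default

# Per-conversation override wins; otherwise use the agent's model.
effective_model = conversation["model"] or agent_default_model
print(effective_model)  # openai/gpt-4.1
```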