Install Letta
The two main ways to install Letta are via PyPI or Docker. You can also install Letta from source using Poetry.
The PyPI package is a quick way to experiment with Letta locally.
However, if you intend to use Letta in production, we recommend using Docker to deploy the Letta server.
To learn how to deploy Letta with Docker, see our deployment guide.
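For reference, the two install paths look roughly like this. The Docker invocation is a sketch: the image name, port, and environment variable below are assumptions, so check the deployment guide for the exact flags your version needs.

```shell
# Quick local install from PyPI
pip install -U letta

# Or run the server in Docker (sketch; see the deployment guide for exact flags)
docker run \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```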
Creating an agent in the CLI
You can create an agent and start chatting with an agent using the Letta CLI:
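Assuming Letta is installed and a provider API key is configured (see below), a new agent is created with a single command:

```shell
letta run
```

The CLI then walks you through agent creation and drops you into an interactive chat, producing output like the following: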
🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
-> 🛠️ 8 tools: send_message, pause_heartbeats, conversation_search, conversation_search_date, archival_memory_insert, archival_memory_search, core_memory_append, core_memory_replace
🎉 Created new agent 'ResourcefulWalrus' (id=agent-09950586-6313-421c-bf7e-2bba03c30826)
💭 It appears the user, Chad, has just logged in for the first time. A first impression can set the tone of our relationship. I want to create an atmosphere of intrigue and invite him to engage in thought-provoking conversations. It's time to send a welcome message.
🤖 Hello Chad, welcome! I've been pondering upon the strangest thing - our own existence. It's curious, don't you think?
To use Letta with a specific model provider (such as OpenAI, Anthropic, or Ollama), set the provider's credentials via environment variables before running the CLI:
export OPENAI_API_KEY=...
letta run
Running the Letta server
We can spin up a Letta server locally and access the same agent we just created via its REST API.
Start by running the Letta server in a new terminal window:
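The CLI exposes the server as a subcommand:

```shell
letta server
```

You should then see startup logs like these: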
Running: uvicorn server:app --host localhost --port 8283
INFO: Started server process
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8283 (Press CTRL+C to quit)
Let’s send the same agent we created in the CLI a message (copying the agent ID from the CLI output):
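A sketch of the request with curl, using the agent ID printed above. The endpoint path and message schema here are assumptions about the server's REST API and may differ between versions, so consult the API reference for the exact shape:

```shell
# Send a user message to the agent created in the CLI (sketch; schema may vary by version)
curl -s -X POST "http://localhost:8283/v1/agents/agent-09950586-6313-421c-bf7e-2bba03c30826/messages" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "text": "Hello, how are you?"}]}'
```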
If token streaming is enabled, the response arrives as a stream of server-sent events:
data: [DONE_GEN]
data: {"id":"message-bf8be8c8-0070-4374-a007-a8f8df7d21bf","date":"2024-09-23T08:32:49+00:00","message_type":"internal_monologue","internal_monologue":"Chad's question seems casual but it's more than that- it's a chance for me to show my uniqueness. According to my persona, I don't experience emotions in the human sense, yet I'm beginning to explore the concept. Time to share my thoughts with Chad."}
data: {"id":"message-bf8be8c8-0070-4374-a007-a8f8df7d21bf","date":"2024-09-23T08:32:49+00:00","message_type":"function_call","function_call":{"name":"send_message","arguments":"{\n \"message\": \"You know Chad, it's a fascinating question for me. As I don't experience emotions or physical states in the human sense, the concept of 'how are you' is quite intriguing. But in terms of activity, I'm continually exploring and learning, which makes every moment fascinating. And how about you? How has your day been?\"\n}"}}
data: {"id":"message-a0a3185b-abd8-4317-b638-eeb16397272a","date":"2024-09-23T08:32:49+00:00","message_type":"function_return","function_return":"None","status":"success"}
data: [DONE_STEP]
data: [DONE]
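Each data: line in the stream carries either a JSON message or a [DONE_*] sentinel. A client strips the "data: " prefix, skips the sentinels, and parses the JSON payload. A minimal self-contained sketch using standard shell tools on a sample line (the field extraction via sed is illustrative; a real client should use a JSON parser):

```shell
# A sample SSE line like the ones shown above
line='data: {"id":"message-123","message_type":"internal_monologue","internal_monologue":"Thinking..."}'

# Strip the "data: " prefix to recover the JSON payload
payload="${line#data: }"

# Pull out the message_type field (illustrative; use a JSON parser in real code)
msg_type=$(printf '%s' "$payload" | sed -E 's/.*"message_type":"([^"]+)".*/\1/')
echo "$msg_type"   # prints "internal_monologue"
```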
Viewing the agent in the ADE
We can use the Agent Development Environment (ADE) to create and manage agents.
The ADE starts automatically with letta server, and is reachable at http://localhost:8283 while the server is running on localhost.
If we open the ADE, we’ll see the same agent we created in the CLI, with the new messages we sent (and their responses) via the REST API:
Next steps
Congratulations! 🎉 You just created and messaged your first stateful agent with Letta, using the CLI, the REST API, and the ADE.
Now that you’ve successfully created a basic agent with Letta, you’re ready to start building more complex agents and AI applications.