Developer quickstart
Create your first Letta agent and view it in the ADE
This quickstart will guide you through creating your first Letta agent. If you’re interested in learning about Letta and how it works, read more here.
Run the Letta Server
If you’re using Letta Cloud, you can skip this step (you do not need to run your own Letta Server). Instead, you’ll simply need your Letta Cloud API key.
Letta agents live inside a Letta Server, which persists them to a database. You can interact with the Letta agents inside your Letta Server with the ADE (a visual interface), and connect your agents to external applications via the REST API and Python & TypeScript SDKs.
The recommended way to run the Letta Server is with Docker (view official installation guide).
You can also install and run the Letta Server using pip (guide here).
The Letta server can be connected to various LLM API backends (see our Docker guide for more details). In this example, we’ll use an OpenAI API key:
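As a sketch, the Docker command for this setup looks like the following (the image name, volume mount, and port follow the official Docker installation guide; substitute your own key, and check that guide for the current flags):

```shell
# Run the Letta Server, persisting its Postgres data to ~/.letta,
# exposing the API on port 8283, and passing in your OpenAI API key
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_openai_api_key" \
  letta/letta:latest
```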
If you have many different LLM API keys, you can also set up a .env file instead and pass that to docker run:
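For example, assuming a .env file in the current directory with one KEY=value pair per line:

```shell
# .env contents (one key per line), e.g.:
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=sk-ant-...
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  --env-file .env \
  letta/letta:latest
```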
Once the Letta server is running, you can access it via port 8283 (e.g. sending REST API requests to http://localhost:8283/v1).
You can also connect your server to the Letta ADE to access and manage your agents in a visual interface.
Access the Letta ADE (Agent Development Environment)
The Letta ADE is a graphical user interface for creating, deploying, interacting with, and observing your Letta agents. You can access the ADE at https://app.letta.com.
The ADE can connect to self-hosted Letta servers (e.g. a Letta server running on your laptop), as well as the Letta Cloud service. When connected to a self-hosted / private server, the ADE uses the Letta REST API to communicate with your server.
If you’re running a Letta server to power an end-user application (such as a customer support chatbot), you can use the ADE to test, debug, and observe the agents in your server. You can also use the ADE as a general chat interface for interacting with your Letta agents.
To connect the ADE with your local Letta server, simply navigate to https://app.letta.com and you will see “Local server” as an option in the left panel (if your server is running):
For information on how to configure the ADE with a Letta server running on a remote server, refer to our guide on remote servers.
Creating an agent with the Letta API
Let’s create an agent via the Letta API, which we can then view in the ADE (you can also use the ADE to create agents).
To create an agent we’ll send a POST request to the Letta Server (API docs).
In this example, we’ll use the Claude API for the base LLM model, and the OpenAI API for the embedding model (this requires having configured both OPENAI_API_KEY and ANTHROPIC_API_KEY on our Letta Server):
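As a sketch of what that request can look like using only the Python standard library (the body fields `memory_blocks`, `model`, and `embedding` and the model handles follow the pattern in Letta's API docs, but double-check the route documentation for the current schema; the memory block values are just example text):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8283/v1"  # local Letta Server

def build_agent_request() -> dict:
    """Request body for POST /v1/agents (field names per Letta's API docs)."""
    return {
        "memory_blocks": [
            {"label": "human", "value": "The human's name is Bob the Builder."},
            {"label": "persona", "value": "My name is Sam, the all-knowing sentient AI."},
        ],
        # Claude as the base LLM, OpenAI for embeddings (see note above)
        "model": "anthropic/claude-3-5-sonnet-20241022",
        "embedding": "openai/text-embedding-ada-002",
    }

def create_agent() -> dict:
    req = urllib.request.Request(
        f"{BASE_URL}/agents",
        data=json.dumps(build_agent_request()).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with the server running):
#   agent = create_agent()
#   print(agent["id"])  # the new agent's id, which starts with "agent-"
```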
The response will include information about the agent:
Send a message to the agent with the Letta API
The Letta API supports streaming both agent steps and streaming tokens. For more information on streaming, see our guide on streaming.
Let’s try sending a message to the new agent! We’ll use the agent id (starting with agent-...) that we received in the response above in place of agent_id (route documentation):
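A minimal sketch of that request in Python (the `messages` body shape follows the route documentation; the agent id below is a placeholder for the id returned when the agent was created):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8283/v1"  # local Letta Server

def build_message_request(text: str) -> dict:
    """Body for POST /v1/agents/{agent_id}/messages (shape per the route docs)."""
    return {"messages": [{"role": "user", "content": text}]}

def send_message(agent_id: str, text: str) -> dict:
    req = urllib.request.Request(
        f"{BASE_URL}/agents/{agent_id}/messages",
        data=json.dumps(build_message_request(text)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with the server running; substitute your real agent id):
#   reply = send_message("agent-xxxxxxxx", "hows it going????")
#   print(json.dumps(reply, indent=2))
```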
The response contains the agent’s full response to the message, which includes inner thoughts / chain-of-thought, tool calls, tool responses, and agent messages (directed at the user):
Viewing the agent in the ADE
We’ve created and messaged our first stateful agent. This agent exists in our Letta server, which means we can view it in the ADE (and continue the conversation there!).
If we open the ADE and click on “Agents”, we should see our agent, as well as the message that we sent to it:
Next steps
Congratulations! 🎉 You just created and messaged your first stateful agent with Letta, using the Letta ADE, the REST API, and the Python/TypeScript SDKs.
Now that you’ve successfully created a basic agent with Letta, you’re ready to start building more complex agents and AI applications.