Developer quickstart (Desktop)
Create your first Letta agent and view it in the ADE
This quickstart will guide you through creating your first Letta agent. If you’re interested in learning about Letta and how it works, read more here.
Letta Desktop is in beta. View known issues here.
For bug reports and feature requests, please join our Discord.
Install Letta Desktop
You can install Letta Desktop for macOS (Apple Silicon) or Windows from our install page.
If you don’t have an Apple Silicon Mac or a Windows machine (for example, if you are using Linux), you can still use Letta via Docker or pip.
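If you go the pip route, the setup is a minimal sketch like the following (it assumes a recent Python and that the `letta` package provides a `letta server` command, per the Letta docs; adjust for your environment):

```shell
# Install the Letta package from PyPI
pip install letta

# Start the Letta Server locally (serves the REST API and ADE connection)
letta server
```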
Run Letta Desktop
Letta agents live inside a Letta Server, which persists them to a database. You can interact with the Letta agents inside your Letta Server through the ADE (a visual interface), and connect your agents to external applications via the REST API and the Python & TypeScript SDKs.
Letta Desktop bundles together the Letta Server and the Agent Development Environment (ADE) into a single application.
When you launch Letta Desktop, you’ll be prompted to wait while the Letta Server starts up. You can monitor the server startup process by opening the server logs (clicking the icon).
Adding LLM backends
The Letta server can be connected to various LLM API backends. You can add additional LLM API backends by opening the integrations panel (clicking the icon). When you configure a new integration (by setting the environment variable in the dialog), the Letta Server will be restarted to load the new LLM API backend.
You can also edit the environment variable file directly, located at ~/.letta/env.
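For illustration, an env file that configures an OpenAI backend would contain a line like the following (the key shown is a placeholder, not a real key):

```shell
OPENAI_API_KEY=sk-...
```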
For this quickstart demo, we’ll add an OpenAI API key (once we enter our key and click confirm, the Letta Server will automatically restart).
Creating an agent with the Letta API
Let’s create an agent via the Letta API, which we can then view in the ADE (you can also use the ADE to create agents).
To create an agent we’ll send a POST request to the Letta Server (API docs).
In this example, we’ll use gpt-4o-mini as the base LLM model and text-embedding-3-small as the embedding model (this requires having OPENAI_API_KEY configured on our Letta Server).
We’ll also artificially set the context window limit to 16k, instead of the 128k default for gpt-4o-mini (this can improve stability and performance):
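A minimal sketch of the create-agent request in Python, using only the standard library. It assumes the server is running locally on its default port (8283) and uses the request-body field names as we understand them from the Letta API; check the API docs for the authoritative schema. The memory block contents are illustrative placeholders:

```python
import json
import urllib.request

# Default local Letta Server address (assumption; adjust if yours differs)
BASE_URL = "http://localhost:8283"

def build_agent_request():
    # Field names follow the Letta create-agent schema as we understand it;
    # treat them as assumptions and verify against the API docs.
    return {
        "model": "openai/gpt-4o-mini",
        "embedding": "openai/text-embedding-3-small",
        "context_window_limit": 16000,  # cap at 16k instead of the 128k default
        "memory_blocks": [
            {"label": "human", "value": "The user's name is Sarah."},
            {"label": "persona", "value": "I am Sam, a helpful assistant."},
        ],
    }

def create_agent():
    # POST the agent configuration to the Letta Server
    req = urllib.request.Request(
        f"{BASE_URL}/v1/agents/",
        data=json.dumps(build_agent_request()).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        agent = json.loads(resp.read())
    return agent["id"]  # keep this ID for the message requests later
```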
The response will include information about the agent, including its id.
Send a message to the agent with the Letta API
The Letta API supports streaming both agent steps and individual tokens. For more information on streaming, see our guide on streaming.
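As a rough sketch of what token streaming looks like from a client, the snippet below consumes server-sent events from a streaming messages route. The endpoint path (`/messages/stream`) and the `stream_tokens` field are assumptions based on our reading of the Letta streaming API; consult the streaming guide for the exact contract:

```python
import json
import urllib.request

# Default local Letta Server address (assumption)
BASE_URL = "http://localhost:8283"

def build_stream_request(text, stream_tokens=True):
    # `stream_tokens` toggles token-level streaming vs. step streaming;
    # the field name is an assumption — verify against the streaming guide.
    return {
        "messages": [{"role": "user", "content": text}],
        "stream_tokens": stream_tokens,
    }

def stream_message(agent_id, text):
    # The streaming route returns server-sent events (SSE): lines of the
    # form "data: {...}". Yield each event's JSON payload as it arrives.
    req = urllib.request.Request(
        f"{BASE_URL}/v1/agents/{agent_id}/messages/stream",
        data=json.dumps(build_stream_request(text)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.decode().strip()
            if line.startswith("data:"):
                yield line[len("data:"):].strip()
```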
Let’s try sending a message to the new agent! Replace AGENT_ID with the actual agent ID we received in the agent state (route documentation):
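In Python, the (non-streaming) request is a simple POST to the agent’s messages route. As before, the base URL and body shape are assumptions about a locally running Letta Server; see the route documentation for the exact schema:

```python
import json
import urllib.request

# Default local Letta Server address (assumption)
BASE_URL = "http://localhost:8283"

def build_message_payload(text):
    # Request body shape is an assumption based on the Letta messages route
    return {"messages": [{"role": "user", "content": text}]}

def send_message(agent_id, text):
    # POST the user message; the response contains the agent's full reply
    req = urllib.request.Request(
        f"{BASE_URL}/v1/agents/{agent_id}/messages",
        data=json.dumps(build_message_payload(text)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example payload for a first greeting (AGENT_ID comes from the create step)
payload = build_message_payload("Hello! My name is Sarah.")
```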
The response contains the agent’s full response to the message, which includes reasoning steps (inner thoughts / chain-of-thought), tool calls, tool responses, and agent messages (directed at the user).
You can read more about the response format from the message route here.
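To make the response structure concrete, the helper below groups returned messages by their type. The `message_type` values shown (`reasoning_message`, `assistant_message`, etc.) and the sample payload are assumptions based on our reading of the message schema; the route documentation is authoritative:

```python
def group_by_type(messages):
    """Group a list of response messages by their message_type field.

    The type names used in the sample below are assumptions drawn from
    the Letta message schema, not a guaranteed contract.
    """
    by_type = {}
    for m in messages:
        by_type.setdefault(m.get("message_type"), []).append(m)
    return by_type

# Hand-written sample response, shaped like what the messages route returns
sample = [
    {"message_type": "reasoning_message", "reasoning": "The user greeted me."},
    {"message_type": "assistant_message", "content": "Hi Sarah! Nice to meet you."},
]

grouped = group_by_type(sample)
# grouped["assistant_message"] holds the user-facing reply;
# grouped["reasoning_message"] holds the inner thoughts
```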
Viewing the agent in the ADE
We’ve created and messaged our first stateful agent. This agent exists in our Letta server, which means we can view it in the ADE (and continue the conversation there!).
In Letta Desktop, we can view our agents by clicking on the alien icon on the left. Once we go to the agents tab, we should be able to open our agent in the ADE and see the message we sent to it.
Next steps
Congratulations! 🎉 You just created and messaged your first stateful agent with Letta, using the Letta ADE, API, and Python/TypeScript SDKs.
Now that you’ve successfully created a basic agent with Letta, you’re ready to start building more complex agents and AI applications.