Deploy Letta Server on Railway

Railway is a service that makes it easy to deploy services (such as Docker containers) to the cloud. The following example uses Railway, but the same general principles (deploying the Letta Docker image on a cloud service and connecting it to the ADE) apply to other cloud providers as well.

Deploying the Letta Railway template

We’ve prepared a Letta Railway template that has the necessary environment variables set and mounts a persistent volume for database storage. You can access the template by clicking the “Deploy on Railway” button below:

Deploy on Railway

The deployment screen will give you the opportunity to specify some basic environment variables such as your OpenAI API key. You can also specify these after deployment in the variables section in the Railway viewer.
If the deployment is successful, it will be shown as 'Active', and you can click 'View logs'.
Clicking 'View logs' will reveal the public URL of the deployment (ending in 'railway.app').

Accessing the deployment via the ADE

Now that the Railway deployment is active, all we need to do to access it via the ADE is add it as a new remote Letta server. The default password set in the template is 'password', which can be changed at the deployment stage or afterwards on the 'Variables' page of the Railway deployment.
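For reference, password protection on the Letta Docker image is controlled through environment variables. The values below are a sketch (the variable names are how the Letta image is typically configured, and `sk-...` is a placeholder); confirm the exact names on your template's 'Variables' page:

```
OPENAI_API_KEY=sk-...           # your OpenAI API key, used by agents on the server
SECURE=true                     # enable password protection on the server
LETTA_SERVER_PASSWORD=password  # the template default; change this to your own password
```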

Click “Add remote server”, then enter the details from Railway (use the public URL shown in the logs, and the password set via the environment variables):

Accessing the deployment via the Letta API

Accessing the deployment via the Letta API is simple: we just swap the base URL of the endpoint for the public URL of the Railway deployment.

For example, if the Railway deployment URL is https://MYSERVER.up.railway.app and the password is banana, we can create an agent on the deployment with the following shell command:

```shell
curl --request POST \
  --url https://MYSERVER.up.railway.app/v1/agents/ \
  --header 'X-BARE-PASSWORD: password banana' \
  --header 'Content-Type: application/json' \
  --data '{
  "memory_blocks": [
    {
      "label": "human",
      "value": "The human'\''s name is Bob the Builder"
    },
    {
      "label": "persona",
      "value": "My name is Sam, the all-knowing sentient AI."
    }
  ],
  "llm_config": {
    "model": "gpt-4o-mini",
    "model_endpoint_type": "openai",
    "model_endpoint": "https://api.openai.com/v1",
    "context_window": 16000
  },
  "embedding_config": {
    "embedding_endpoint_type": "openai",
    "embedding_endpoint": "https://api.openai.com/v1",
    "embedding_model": "text-embedding-3-small",
    "embedding_dim": 1536
  },
  "tools": [
    "send_message",
    "core_memory_append",
    "core_memory_replace",
    "archival_memory_search",
    "archival_memory_insert",
    "conversation_search"
  ]
}'
```

This will create an agent with two memory blocks, configured to use gpt-4o-mini as the LLM and text-embedding-3-small as the embedding model. We also include the base Letta tools in the request.

If the Letta server is not password protected, we can omit the X-BARE-PASSWORD header.
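The same request can be issued from Python with just the standard library. The sketch below is not the official Letta SDK; it simply builds the POST to the `/v1/agents/` endpoint shown above. `BASE_URL` and `PASSWORD` are placeholders from the example, and the commented-out `urlopen` call is what would actually send the request:

```python
import json
import urllib.request

# Placeholders from the example above -- substitute your own deployment's
# URL (from the Railway logs) and the password you configured.
BASE_URL = "https://MYSERVER.up.railway.app"
PASSWORD = "banana"


def build_create_agent_request(base_url, password, payload):
    """Build the POST request for the /v1/agents/ endpoint.

    A password-protected server expects the header
    'X-BARE-PASSWORD: password <your password>'; pass password=None
    if the server is not password protected.
    """
    headers = {"Content-Type": "application/json"}
    if password is not None:
        headers["X-BARE-PASSWORD"] = f"password {password}"
    return urllib.request.Request(
        url=f"{base_url}/v1/agents/",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )


if __name__ == "__main__":
    payload = {
        "memory_blocks": [
            {"label": "human", "value": "The human's name is Bob the Builder"},
            {"label": "persona", "value": "My name is Sam, the all-knowing sentient AI."},
        ],
        # llm_config, embedding_config, and tools as in the curl example above
    }
    req = build_create_agent_request(BASE_URL, PASSWORD, payload)
    # Uncomment to actually send the request to your deployment:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.loads(resp.read()))
```

Passing `password=None` simply drops the X-BARE-PASSWORD header, matching the note above about servers that are not password protected.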

That’s it! Now you should be able to create and interact with agents on your remote Letta server (deployed on Railway) via the Letta ADE and API. 👾 ☄️