Deploy Self-hosted Retool on AWS Fargate and ECS with CloudFormation
Learn how to deploy Retool on AWS Fargate and ECS with CloudFormation.
You can deploy Self-hosted Retool on AWS Fargate using CloudFormation templates.
Requirements
To deploy Self-hosted Retool on AWS Fargate and ECS, you need:
- A Retool license key, which you can obtain from the Retool Self-hosted Portal or your Retool account manager.
- An AWS account.
- An ECS cluster with the AWS Fargate launch type.
Temporal
Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to facilitate the execution of each workflow among a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.
You can use a Retool-managed cluster on Temporal Cloud, which is recommended for most use cases. You can also use an existing self-managed cluster that is hosted on Temporal Cloud or in your own infrastructure. Alternatively, you can spin up a new self-hosted cluster alongside your Self-hosted Retool instance.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Recommended
You should use a Retool-managed cluster if:
- You are on a version greater than 3.6.14.
- Your organization is on the Enterprise plan.
- You don't have an existing cluster which you prefer to use.
- Your cluster only needs to be used for a Self-hosted Retool deployment.
- You don't want to manage the cluster directly.
- You have a single or multi-instance Retool deployment, where each instance requires its own namespace.
Retool admins can enable Retool-managed Temporal. To get started, navigate to the Retool Workflows page and click Enroll now. Once you update your configuration, return to the page and click Complete setup.
It can take a few minutes to initialize a namespace in Retool-managed Temporal.
Retool-managed Temporal clusters are hosted on Temporal Cloud. Your Self-hosted Retool deployment communicates with the cluster when building, deploying, and executing Retool Workflows. All orchestration data to Temporal is fully encrypted and uses the private encryption key set for your deployment.
If you want to create a new, self-hosted cluster on Temporal Cloud, sign up first. Once your account is provisioned, you can then deploy Self-hosted Retool.
Temporal Cloud is available in more than 10 AWS regions, with 99.99% availability and a 99.99% guarantee against service errors.
You should use an existing self-managed cluster, hosted on Temporal Cloud or in your own infrastructure, if:
- You cannot use a Retool-managed cluster.
- You are on a version greater than 3.6.14.
- Your organization is on the Free, Team, or Business plan.
- You have an existing cluster and would prefer to use another namespace within it.
- You need a cluster for uses other than a Self-hosted Retool deployment.
- You want to manage the cluster directly.
- You have a multi-instance Retool deployment, where each instance would have its own namespace in a shared Self-hosted Temporal cluster.
Self-managed cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless V2 configured with an Aurora capacity unit (ACU) range of 0.5 to 8. One ACU provides approximately 10 QPS. The Workflows Queue is write-heavy (around 100:1 write to read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly, 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to perform a high rate of concurrent blocks per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
You should spin up a new cluster alongside your Self-hosted Retool instance if:
- You cannot use a Retool-managed cluster.
- You are on a version greater than 3.6.14.
- Your organization is on the Free, Team, or Business plan.
- You don't have an existing cluster to use.
- You don't need a cluster for uses other than a Self-hosted Retool deployment.
- You want to test a Self-hosted Retool deployment with a local cluster first.
- You have a multi-instance Retool deployment, but each instance is in its own VPC and requires its own Self-hosted Temporal cluster.
Local cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless V2 configured with an Aurora capacity unit (ACU) range of 0.5 to 8. One ACU provides approximately 10 QPS. The Workflows Queue is write-heavy (around 100:1 write to read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly, 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to perform a high rate of concurrent blocks per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
1. Verify network configuration
The CloudFormation template you choose depends on whether you need to deploy with or without public subnets.
- Use the public template to deploy Self-hosted Retool in VPCs with public subnets.
- Use the private template to deploy Self-hosted Retool in VPCs without public subnets.
- Public template
- Private template
Use the public template to deploy Retool in VPCs with public subnets. This requires the following networking components:
- One VPC with internet access.
- Two public subnets.
Use the private template to deploy Retool in VPCs without public subnets. For this installation, set up a NAT gateway in front of your private subnets. This requires the following networking components:
- One VPC with internet access.
- Two private subnets, each attached to a NAT gateway.
VPC
- Open the AWS Management Console and navigate to the VPC service. Copy the ID of the VPC you plan to use.
- In the left navigation pane, select Internet Gateways.
- In the list of Internet Gateways, find the Internet Gateway associated with your VPC. If there is a record in this table associated with your VPC, the VPC is public.
Subnets
- Public template
- Private template
Verify that your subnets are public. Subnet IDs are required in a later step.
- Open the AWS Management Console and navigate to the VPC service.
- In the left navigation pane, select Subnets.
- Identify at least two subnets which are part of the VPC.
- Confirm that the subnets are public subnets. For each subnet, select the Route table tab. Look for an entry where the Destination is `0.0.0.0/0` and the Target is an Internet Gateway (`igw-xxxx`). If such an entry exists, the subnet is a public subnet.
- Confirm that the subnets have auto-assign public IP addresses enabled. Select each subnet and go to the Details tab in the lower pane. Confirm that the Auto-assign public IPv4 address attribute is set to Yes.
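As an alternative to clicking through the console, the same checks can be scripted with the AWS CLI. This is a sketch; the subnet ID is a placeholder, and the commands require AWS credentials with EC2 read permissions:

```shell
# Placeholder subnet ID; replace with your own.
SUBNET_ID=subnet-0123456789abcdef0

# Look for a 0.0.0.0/0 route whose target is an Internet Gateway (igw-*).
aws ec2 describe-route-tables \
  --filters "Name=association.subnet-id,Values=$SUBNET_ID" \
  --query "RouteTables[].Routes[?DestinationCidrBlock=='0.0.0.0/0'].GatewayId"

# Check that auto-assign public IPv4 addresses is enabled (should print true).
aws ec2 describe-subnets --subnet-ids "$SUBNET_ID" \
  --query "Subnets[].MapPublicIpOnLaunch"
```

If the first query returns an `igw-` ID and the second returns `true`, the subnet satisfies the public-template requirements.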
Verify that your subnets are connected to a NAT gateway. Subnet IDs are required in a later step.
- Open the AWS Management Console and navigate to the VPC service.
- In the left navigation pane, click on Subnets.
- Identify at least two subnets which are part of the VPC.
- Confirm that the subnets are connected to the NAT gateway. For each subnet, select the Route table tab. Look for an entry where the Destination is `0.0.0.0/0` and the Target is a NAT Gateway. If such an entry exists, the subnet has access to the NAT gateway.
2. Configure instance
If you previously deployed Self-hosted Retool with an older version of a CloudFormation template, compare your current template with the new one. If the `RetoolRDSInstance` object has changed, updating could delete your current database instance.
Download the private CloudFormation template for either ECS on Fargate or ECS on EC2 from the Self-hosted Retool GitHub repository. Both of these templates assume a deployment in private subnets of your VPC (with NAT gateway) along with an Application Load Balancer (ALB) to direct external traffic to the Retool ECS service.
- ECS on Fargate
- ECS on EC2
curl -L -O https://raw.githubusercontent.com/tryretool/retool-onpremise/master/cloudformation/retool-workflows.fargate.yaml \
&& mv retool-workflows.fargate.yaml fargate.yaml
curl -L -O https://raw.githubusercontent.com/tryretool/retool-onpremise/master/cloudformation/retool-workflows.ec2.yaml \
&& mv retool-workflows.ec2.yaml ec2.yaml
Configure Temporal
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Comment out the Temporal configuration section in either fargate.yaml or ec2.yaml, depending on the type of deployment used.
Comment out the Temporal configuration section in either fargate.yaml or ec2.yaml, depending on the type of deployment used.
Temporal Cloud
Allow your deployment to connect to Temporal
Open egress to the public internet on ports `443` and `7233` to allow outbound-only connections to Temporal Cloud from your deployment, so that services can enqueue work to, and poll work from, Temporal.
Temporal Cloud does not have a static IP range to allowlist. If more specificity is required, allow egress on the following ports and domains:
Port | Domains |
---|---|
443 | *.retool.com, *.tryretool.com, *.temporal.io |
7233 | *.tmprl.cloud |
Configure environment variables for Temporal cluster
Set the following environment variables for the `MAIN_BACKEND`, `WORKFLOW_BACKEND`, and `WORKFLOW_TEMPORAL_WORKER` services in the configuration file.
Temporal Cloud requires security certificates for secure access.
Variable | Description |
---|---|
WORKFLOW_TEMPORAL_CLUSTER_NAMESPACE | The namespace in your Temporal cluster for each Retool deployment you have (e.g., retool-prod ). Default is workflows . |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST | The frontend host of the cluster. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT | The port with which to connect. Default is 7233 . |
WORKFLOW_TEMPORAL_TLS_ENABLED | Whether to enable mTLS. Set to true . |
WORKFLOW_TEMPORAL_TLS_CRT | The base64-encoded mTLS certificate. |
WORKFLOW_TEMPORAL_TLS_KEY | The base64-encoded mTLS key. |
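The `WORKFLOW_TEMPORAL_TLS_CRT` and `WORKFLOW_TEMPORAL_TLS_KEY` values must be base64-encoded. A minimal sketch of producing those values, assuming your certificate and key live in `client.pem` and `client.key` (hypothetical file names; the dummy contents below stand in for real mTLS material):

```shell
# Hypothetical certificate files; replace with your real mTLS certificate and key.
printf '%s' "dummy-cert" > client.pem
printf '%s' "dummy-key"  > client.key

# Base64-encode with line wrapping removed so each value fits in one env var.
WORKFLOW_TEMPORAL_TLS_CRT=$(base64 < client.pem | tr -d '\n')
WORKFLOW_TEMPORAL_TLS_KEY=$(base64 < client.key | tr -d '\n')
echo "$WORKFLOW_TEMPORAL_TLS_CRT"
```

Stripping newlines matters because some `base64` implementations wrap output at 76 columns, which would corrupt the value when read as a single environment variable.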
Self-hosted
If you use a PostgreSQL database as a persistence store, the PostgreSQL user must have permission to `CREATE DATABASE`. If this is not possible, you can manually create the required databases in your PostgreSQL cluster: `temporal` and `temporal_visibility`.
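If the user lacks `CREATE DATABASE` rights, the databases can be created ahead of time with a sufficiently privileged role. A sketch via `psql`, where `temporal_user` is a hypothetical role name for your Temporal database user:

```sql
-- Run as a privileged PostgreSQL user; adjust the owner to your Temporal role.
CREATE DATABASE temporal OWNER temporal_user;
CREATE DATABASE temporal_visibility OWNER temporal_user;
```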
Configure environment variables for Temporal cluster
Set the following environment variables for the `MAIN_BACKEND` and `WORKFLOW_TEMPORAL_WORKER` services, if not already configured.
Variable | Description |
---|---|
WORKFLOW_TEMPORAL_CLUSTER_NAMESPACE | The namespace in your Temporal cluster for each Retool deployment you have (e.g., retool-prod ). Default is workflows . |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST | The frontend host of the Temporal cluster. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT | The port with which to connect to the Temporal cluster. Defaults to 7233 . |
WORKFLOW_TEMPORAL_TLS_ENABLED | (Optional) Whether to enable mTLS. |
WORKFLOW_TEMPORAL_TLS_CRT | (Optional) The base64-encoded mTLS certificate. |
WORKFLOW_TEMPORAL_TLS_KEY | (Optional) The base64-encoded mTLS key. |
If you use a PostgreSQL database as a persistence store, the PostgreSQL user must have permission to `CREATE DATABASE`. If this is not possible, you can manually create the required databases in your PostgreSQL cluster: `temporal` and `temporal_visibility`.
Configure environment variables for Temporal cluster
Set the following environment variables for the `MAIN_BACKEND` and `WORKFLOW_TEMPORAL_WORKER` services, if not already configured.
Variable | Description |
---|---|
WORKFLOW_TEMPORAL_CLUSTER_NAMESPACE | The namespace in your Temporal cluster for each Retool deployment you have (e.g., retool-prod ). Default is workflows . |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST | The frontend host of the Temporal cluster. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT | The port with which to connect to the Temporal cluster. Defaults to 7233 . |
WORKFLOW_TEMPORAL_TLS_ENABLED | (Optional) Whether to enable mTLS. |
WORKFLOW_TEMPORAL_TLS_CRT | (Optional) The base64-encoded mTLS certificate. |
WORKFLOW_TEMPORAL_TLS_KEY | (Optional) The base64-encoded mTLS key. |
Add your license key
Edit the configuration to include your license key. Replace all values of `LICENSE_KEY` with your license key.
Environment:
- Name: LICENSE_KEY
Value: "EXPIRED-LICENSE-KEY-TRIAL"
Set encryption key
Self-hosted Retool deployments use an encryption key to encrypt:
- Private keys in the PostgreSQL database of the Self-hosted Retool instance.
- All data stored in Temporal when deploying Self-hosted Retool.
Set the ENCRYPTION_KEY environment variable for your deployment.
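One common way to generate a suitably random key is with `openssl` (a sketch; any cryptographically random string works):

```shell
# Generate a random 32-byte, base64-encoded value for ENCRYPTION_KEY.
ENCRYPTION_KEY=$(openssl rand -base64 32)
echo "$ENCRYPTION_KEY"
```

Store this key securely and back it up: data encrypted with it cannot be recovered if the key is lost.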
3. Configure CloudFormation service
Next, log into the AWS Management Console to configure the CloudFormation service:
- Navigate to the CloudFormation service and create a new stack.
- Upload the `fargate.yaml` file.
- Set the following parameters.
Parameter | Value |
---|---|
cluster | The name of your ECS cluster. |
desiredCount | 2 |
environment | Staging |
force | false |
image | The Docker tag for the version of Retool to install, such as tryretool/backend:3.114.3-stable . |
maximumPercent | 250 |
minimumPercent | 50 |
subnetID | Two subnets you identified in the networking requirements section. |
vpcID | The VPC ID to use. |
After creating the stack, verify its status is `CREATE_COMPLETE`.
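If you prefer the CLI to the console, the stack can also be created with the AWS CLI. This is a sketch: the stack name and parameter values are placeholders, and the parameter key names may differ between template versions, so check the `Parameters` section of `fargate.yaml` before running it.

```shell
aws cloudformation create-stack \
  --stack-name retool \
  --template-body file://fargate.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=Cluster,ParameterValue=my-ecs-cluster \
    ParameterKey=Image,ParameterValue=tryretool/backend:3.114.3-stable \
    ParameterKey=SubnetId,ParameterValue='subnet-aaa\,subnet-bbb' \
    ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0

# Block until the stack reaches CREATE_COMPLETE (fails if creation errors).
aws cloudformation wait stack-create-complete --stack-name retool
```

Note the escaped comma (`\,`) in the subnet list: unescaped commas would be parsed as separate parameters by the CLI.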
4. Start the instance
The Outputs tab of the CloudFormation stack contains the URL of the load balancer running Self-hosted Retool. Once running, your instance is available at `http://{load-balancer-url}:3000/auth/signup`. When you first visit the page, you must create an admin account.
Additional steps
Retool strongly recommends you externalize your database, configure SSL, and keep up-to-date with the latest version of Self-hosted Retool. Setting environment variables is often necessary to configure SSO, source control, and other self-hosted features.
Externalize database
By default, the Retool ECS template creates a new RDS instance to serve as the PostgreSQL database. To set up the ECS deployment to use a different database, update the CloudFormation template by modifying the environment variables for both the `RetoolTask` and the `RetoolJobsRunnerTask`.
- Modify `fargate.yaml`, setting the following environment variables for both the `RetoolTask` and the `RetoolJobsRunnerTask`.
Variable | Description |
---|---|
POSTGRES_DB | The name of the external database. |
POSTGRES_HOST | The hostname of your external database instance. |
POSTGRES_PORT | The port number for your external database instance. |
POSTGRES_USER | The external database username. |
POSTGRES_PASSWORD | The external database password. |
- In the AWS Management Console, navigate to the CloudFormation service. Select the Retool stack.
- Update the stack. Replace the current template by uploading the new version of `fargate.yaml`.
- Submit changes, and wait for the stack to redeploy. Verify that the CloudFormation stack has a status of `UPDATE_COMPLETE` before continuing.
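In the task definitions, the variables above follow the same `Environment` pattern used for `LICENSE_KEY`. A sketch, where the hostname and credentials are placeholders for your external database:

```yaml
Environment:
  - Name: POSTGRES_DB
    Value: "retool"
  - Name: POSTGRES_HOST
    Value: "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"
  - Name: POSTGRES_PORT
    Value: "5432"
  - Name: POSTGRES_USER
    Value: "retool_user"
  - Name: POSTGRES_PASSWORD
    Value: "replace-with-a-secret"
```

For production, consider injecting the password through AWS Secrets Manager rather than storing it in the template.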
Update Retool
- Back up your database. Amazon RDS provides two different methods for backing up and restoring your DB instances: automated backups and database snapshots.
- Identify the appropriate release version on Docker Hub. See Retool's self-hosted release notes to learn about version-specific features.
- In the AWS Management Console, navigate to the CloudFormation service. Select the Retool stack.
- Update the stack. Use the current template. On the parameters screen, change the value of `Image` to the Docker tag for the version of Retool to use, such as `tryretool/backend:3.114.3-stable`.
- Submit changes, and wait for the stack to redeploy. Verify that the CloudFormation stack has a status of `UPDATE_COMPLETE` before continuing.
Add environment variables
To add environment variables, follow the steps below.
- Modify `fargate.yaml`, setting the environment variables for both the `RetoolTask` and the `RetoolJobsRunnerTask`.
- In the AWS Management Console, navigate to the CloudFormation service. Select the Retool stack.
- Update the stack. Replace the current template by uploading the new version of `fargate.yaml` from your computer.
- Submit changes, and wait for the stack to redeploy. Verify that the CloudFormation stack has a status of `UPDATE_COMPLETE` before continuing.
Configure SSL
To configure SSL, you must first obtain an SSL certificate. You can either purchase an SSL certificate from a Certificate Authority (CA) or generate a free one using Let's Encrypt. AWS also provides a service called AWS Certificate Manager (ACM) for provisioning, managing, and deploying public and private SSL/TLS certificates.
To add your SSL certificate to your instance:
- In the AWS Management Console, navigate to AWS Certificate Manager (ACM).
- Import your SSL certificate into ACM.
- In the AWS Management Console, navigate to the EC2 service. Under Load Balancing, choose Load Balancers.
- Select the load balancer that is part of the Retool deployment.
- In the Listeners tab, select Add listener.
- Select HTTPS (port 443). Select the ACM certificate that you imported in step 2.
- Add the listener. You can now access Retool over an SSL connection.
Mount volumes
There are several use cases which require the use of volumes. For example, when configuring a gRPC resource, you need to mount a volume containing the protos files to the Retool deployment. Follow these instructions to create a persistent volume and copy files from your local machine to the volume.
First, choose a method to upload files to the EFS volume. You can either:
- Mount the EFS volume on an EC2 instance and use standard file operations to upload files directly to the mounted directory on the EFS volume.
- Use AWS DataSync to transfer your local files to an EFS file system by setting up a DataSync agent and configuring a data transfer task.
- Set up an SFTP server using AWS Transfer for SFTP and configure it to use an EFS file system as the storage backend, then upload files using an SFTP client to store them on the EFS volume.
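For the first option, mounting the file system on an EC2 instance with the EFS mount helper looks roughly like this. This is a sketch: the file system ID, mount path, and `./protos` directory are placeholders, and the instance needs the `amazon-efs-utils` package plus network access to an EFS mount target in its subnet.

```shell
# Amazon Linux; use your distribution's package manager otherwise.
sudo yum install -y amazon-efs-utils

# Mount the EFS file system (placeholder ID) with TLS enabled.
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/efs

# Copy files onto the volume with standard file operations.
sudo cp -r ./protos /mnt/efs/
```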
- In the AWS Management Console, navigate to the EFS service.
- Create a new file system. Select the same VPC in which you deploy Retool.
- Upload files to the EFS volume using your data transfer method of choice.
- In `fargate.yaml`, add the EFS volume as a parameter in the `Parameters` section.
Parameters:
...
EFS:
Type: AWS::EFS::FileSystem::Id
Description: Select an existing EFS volume to mount.
- In `fargate.yaml`, add an `EFSMountTarget` to the `Resources` section.
Resources:
EFSMountTarget:
Type: AWS::EFS::MountTarget
Properties:
FileSystemId: !Ref 'EFS'
SecurityGroups: [!Ref 'ALBSecurityGroup']
SubnetId: !Select [0, !Ref 'SubnetId']
- In `fargate.yaml`, add the volume to the `RetoolTask`.
RetoolTask:
Type: AWS::ECS::TaskDefinition
Properties:
...
ContainerDefinitions:
...
Volumes:
- Name: efs-volume
EFSVolumeConfiguration:
FileSystemId: !Ref 'EFS'
MountPoints:
- SourceVolume: efs-volume
ContainerPath: /efs
ReadOnly: false # Set to 'true' if you want read-only access
- In the AWS Management Console, navigate to the CloudFormation service. Select the Retool stack.
- Update the stack. Replace the current template by uploading the new version of `fargate.yaml`.
- Submit changes, and wait for the stack to redeploy.