Upgrade a legacy deployment to support Retool Workflows
Learn how to add support for Retool Workflows to an existing self-hosted deployment.
Retool Workflows was available as an optional configuration in self-hosted Retool 3.20.15 and earlier. It is part of the standard configuration of Stable and Edge releases.
If you deployed a Legacy release and did not enable Retool Workflows, you can upgrade to a Stable or Edge release and add support for it.
Requirements
Your self-hosted deployment must meet the following requirements for Retool Workflows.
VM configuration
Retool Workflows requires a Linux-based virtual machine that meets the following system requirements:
- Ubuntu 22.04 or later.
- 16GiB memory.
- 8x vCPUs.
- 60GiB storage.
- `curl` and `unzip` software packages installed.
Retool recommends you allocate more resources than the minimum requirements so that your instance can more easily scale.
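If `curl` and `unzip` are not already installed, you can add them with the VM's package manager. For example, on Ubuntu:

```shell
# Install the required curl and unzip packages
sudo apt-get update && sudo apt-get install -y curl unzip
```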
Retool version
Workflows is available in all Stable and Edge releases of self-hosted Retool, and in Legacy releases 3.20.15 or later. If you are using an older version, upgrade to either a Stable or Edge release. Unless you need to make use of the latest changes and have the ability to regularly upgrade your deployment, Retool recommends you use Stable releases.
Temporal
Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to facilitate the execution of each workflow amongst a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.
You can use a Retool-managed cluster on Temporal Cloud, which is recommended for most use cases. You can also use an existing self-managed cluster that is hosted on Temporal Cloud or in your own infrastructure. Alternatively, you can spin up a new self-hosted cluster alongside your Self-hosted Retool instance.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
You should use a Retool-managed cluster if:
- You are on a version greater than 3.6.14.
- Your organization is on the Enterprise plan.
- You don't have an existing cluster which you prefer to use.
- Your cluster only needs to be used for a Self-hosted Retool deployment.
- You don't want to manage the cluster directly.
- You have a single or multi-instance Retool deployment, where each instance requires its own namespace.
Retool admins can enable Retool-managed Temporal. To get started, navigate to the Retool Workflows page and click Enroll now. Once you update your configuration, return to the page and click Complete setup.
It can take a few minutes to initialize a namespace in Retool-managed Temporal.
Retool-managed Temporal clusters are hosted on Temporal Cloud. Your Self-hosted Retool deployment communicates with the cluster when building, deploying, and executing Retool Workflows. All orchestration data to Temporal is fully encrypted and uses the private encryption key set for your deployment.
If you want to create a new, self-managed cluster on Temporal Cloud, sign up first. Once your account is provisioned, you can then deploy Self-hosted Retool.
Temporal Cloud offers a choice of more than 10 AWS regions, 99.99% availability, and a 99.99% guarantee against service errors.
You should use an existing self-managed cluster, hosted on Temporal Cloud or in your own infrastructure, if:
- You cannot use a Retool-managed cluster.
- You are on a version greater than 3.6.14.
- Your organization is on the Free, Team, or Business plan.
- You have an existing cluster and would prefer to use another namespace within it.
- You need a cluster for uses other than a Self-hosted Retool deployment.
- You want to manage the cluster directly.
- You have a multi-instance Retool deployment, where each instance would have its own namespace in a shared Self-hosted Temporal cluster.
Self-managed cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless V2 configured with an ACU (Aurora Capacity Unit) provision ranging from 0.5 to 8 ACUs; 1 ACU can provide around 10 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
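As a sketch of that provisioning range, assuming the AWS CLI and placeholder identifiers and credentials:

```shell
# Create an Aurora Serverless v2 PostgreSQL cluster that scales between 0.5 and 8 ACUs
aws rds create-db-cluster \
  --db-cluster-identifier retool-workflows-queue \
  --engine aurora-postgresql \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=8 \
  --master-username retool_admin \
  --master-user-password '<placeholder>'

# Add a serverless instance to the cluster
aws rds create-db-instance \
  --db-instance-identifier retool-workflows-queue-1 \
  --db-cluster-identifier retool-workflows-queue \
  --db-instance-class db.serverless \
  --engine aurora-postgresql
```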
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly, 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to perform a high rate of concurrent blocks per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
You should spin up a new cluster alongside your Self-hosted Retool instance if:
- You cannot use a Retool-managed cluster.
- You are on a version greater than 3.6.14.
- Your organization is on the Free, Team, or Business plan.
- You don't have an existing cluster to use.
- You don't need a cluster for uses other than a Self-hosted Retool deployment.
- You want to test a Self-hosted Retool deployment with a local cluster first.
- You have a multi-instance Retool deployment, but each instance is in its own VPC and requires its own Self-hosted Temporal cluster.
Local cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless V2 configured with an ACU (Aurora Capacity Unit) provision ranging from 0.5 to 8 ACUs; 1 ACU can provide around 10 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly, 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to perform a high rate of concurrent blocks per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
Once you've decided which Temporal deployment option is best for you, you need to provision additional services in your existing deployment and provide sufficient resources to run them. Retool recommends providing the resource specifications listed under VM configuration above wherever you deploy your infrastructure (e.g., at the VM level if using Docker Compose, or at the node level if using Kubernetes).
System architecture changes
Support for Retool Workflows requires provisioning additional containers and services:
| Container | Image | Repository | Services |
|---|---|---|---|
| workflows-worker | backend | tryretool/backend | `WORKFLOW_TEMPORAL_WORKER` |
| workflows-backend | backend | tryretool/backend | `WORKFLOW_BACKEND`, `DB_CONNECTOR_SERVICE`, `DB_SSH_CONNECTOR_SERVICE` |
| code-executor | code-executor-service | tryretool/code-executor-service | No service type. |
The following diagrams illustrate how your deployment instance's architecture changes after you add support for Retool Workflows.
- Current architecture
- New architecture
* Only present if you are using the default PostgreSQL database for Retool Database.
1. Upgrade your deployment
Follow these instructions to update your Retool instance to a newer release version.
Retool strongly recommends that you back up the VM before performing an update. If you cannot complete a full backup, you should at least:
- Create a snapshot of your PostgreSQL database.
- Copy the environment variables in `docker.env` to a secure location outside of Retool.
To update your deployment to a newer version of Self-hosted Retool, update the `Dockerfile` and `CodeExecutor.Dockerfile` with the newer version number. For example:
- `tryretool/backend:3.33.10-stable`
- `tryretool/code-executor-service:3.33.10-stable`
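A minimal sketch, assuming each Dockerfile simply extends the corresponding Retool image:

```dockerfile
# Dockerfile
FROM tryretool/backend:3.33.10-stable
```

```dockerfile
# CodeExecutor.Dockerfile
FROM tryretool/code-executor-service:3.33.10-stable
```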
Apply the updates and restart the deployment instance. The Retool instance is temporarily stopped while the update takes place and restarts automatically. Retool recommends performing the upgrade during off-peak hours to minimize downtime for users.
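For a Docker Compose deployment, applying the update typically amounts to rebuilding the images and restarting the stack, for example:

```shell
# Rebuild the images with the new version tags and restart the containers
sudo docker compose up -d --build
```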
2. Configure new and existing services
Complete the following steps to configure Temporal.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
1. Set up code-executor
code-executor must have network access to workflows-backend and Temporal.
Provision the code-executor container using the `tryretool/code-executor-service` image. You must use the same version as `tryretool/backend`. For example:
- `tryretool/backend:3.33.10-stable`
- `tryretool/code-executor-service:3.33.10-stable`
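A minimal Docker Compose sketch for this container, with an illustrative service name and the port 3004 used in the code-executor endpoint examples below:

```yaml
# docker-compose.yaml (sketch) -- service name and port mapping are illustrative
services:
  code-executor:
    image: tryretool/code-executor-service:3.33.10-stable
    ports:
      - "3004:3004"
    restart: on-failure
```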
2. Set up workflows-worker
workflows-worker must have network access to code-executor, Temporal, postgres, and workflows-backend.
Provision the workflows-worker container. This container runs `tryretool/backend` with the following environment variables:

| Variable | Description |
|---|---|
| `SERVICE_TYPE` | Set to `WORKFLOW_TEMPORAL_WORKER`. |
| `ENCRYPTION_KEY` | Required for encrypting traffic to the Retool-managed Temporal cluster. If you've already set this, there is no need to change it. Set to the same value across all services that require it. |
| `WORKFLOW_BACKEND_HOST` | URL for the workflows-backend endpoint. Must include the protocol (e.g., `https://workflows-backend.example.com`). |
| `CODE_EXECUTOR_INGRESS_DOMAIN` | URL for the code-executor endpoint. Must include the protocol (e.g., `https://code-executor.example.com:3004`). |
| `DISABLE_DATABASE_MIGRATIONS` | Set to `true`. |
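A Docker Compose sketch of this service, assuming placeholder hostnames and secrets:

```yaml
# docker-compose.yaml (sketch) -- hostnames and the encryption key are placeholders
services:
  workflows-worker:
    image: tryretool/backend:3.33.10-stable
    environment:
      SERVICE_TYPE: WORKFLOW_TEMPORAL_WORKER
      ENCRYPTION_KEY: <same value as the other services>
      WORKFLOW_BACKEND_HOST: https://workflows-backend.example.com
      CODE_EXECUTOR_INGRESS_DOMAIN: https://code-executor.example.com:3004
      DISABLE_DATABASE_MIGRATIONS: "true"
    restart: on-failure
```

The workflows-backend container in the next step uses the same variables, with `SERVICE_TYPE` set to `WORKFLOW_BACKEND` instead.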
3. Set up workflows-backend
workflows-backend must have network access to postgres, Temporal, and workflows-worker.
Provision the workflows-backend container. This container runs the `tryretool/backend` image with the following environment variables:

| Variable | Description |
|---|---|
| `SERVICE_TYPE` | Set to `WORKFLOW_BACKEND`. |
| `ENCRYPTION_KEY` | Required for encrypting traffic to the Retool-managed Temporal cluster. If you've already set this, there is no need to change it. Set to the same value across all services that require it. |
| `WORKFLOW_BACKEND_HOST` | URL for the workflows-backend endpoint. Must include the protocol (`http` or `https`). |
| `CODE_EXECUTOR_INGRESS_DOMAIN` | URL for the code-executor endpoint. Must include the protocol (`http` or `https`). |
| `DISABLE_DATABASE_MIGRATIONS` | Set to `true`. |
4. Configure environment variables
api must have network access to code-executor and Temporal.
Update the api container to include the following environment variables:
| Variable | Description |
|---|---|
| `ENCRYPTION_KEY` | Required for encrypting traffic to the Retool-managed Temporal cluster. If you've already set this, there is no need to change it. Set to the same value across all services that require it. |
| `WORKFLOW_BACKEND_HOST` | The workflows-backend endpoint URL. Must include the protocol (`http` or `https`). |
| `CODE_EXECUTOR_INGRESS_DOMAIN` | The code-executor endpoint URL. Must include the protocol (`http` or `https`). |
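If your deployment keeps shared settings in the `docker.env` file mentioned above, the additions for the api container look roughly like this (values are placeholders):

```shell
# docker.env additions (placeholder values)
ENCRYPTION_KEY=<same value as the other services>
WORKFLOW_BACKEND_HOST=https://workflows-backend.example.com
CODE_EXECUTOR_INGRESS_DOMAIN=https://code-executor.example.com:3004
```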
If you haven't already, either sign up for Temporal Cloud or spin up a self-hosted Temporal cluster.
1. Set up code-executor
code-executor must have network access to api and Temporal.
Provision the code-executor container using the `tryretool/code-executor-service` image. You must use the same version as `tryretool/backend`. For example:
- `tryretool/backend:3.33.10-stable`
- `tryretool/code-executor-service:3.33.10-stable`
Specify the following environment variables:
| Variable | Description |
|---|---|
| `NODE_ENV` | Set to `production`. |
| `NODE_OPTIONS` | Specifies the maximum heap size for the V8 JavaScript engine. Set to `--max_old_space_size=1024`. |
2. Set up workflows-worker
workflows-worker must have network egress to code-executor, Temporal, and workflows-backend.
Provision the workflows-worker service. This is a container running `tryretool/backend` with `SERVICE_TYPE=WORKFLOW_TEMPORAL_WORKER` and the following environment variables:

| Variable | Description |
|---|---|
| `ENCRYPTION_KEY` | Required for encrypting traffic to the Retool-managed Temporal cluster. If you've already set this, there is no need to change it. Set to the same value across all services that require it. |
| `WORKFLOW_BACKEND_HOST` | URL for the workflows-backend endpoint. Must include the protocol (`http` or `https`). |
| `CODE_EXECUTOR_INGRESS_DOMAIN` | URL for the code-executor endpoint. Must include the protocol (`http` or `https`). |
| `DISABLE_DATABASE_MIGRATIONS` | Set to `true`. |
| `WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST` | The frontend host of the Temporal cluster. |
| `WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT` | The port with which to connect to the Temporal cluster. Defaults to `7233`. |
| `WORKFLOW_TEMPORAL_TLS_ENABLED` | (Optional) Whether to enable mTLS. Required if using Temporal Cloud. |
| `WORKFLOW_TEMPORAL_TLS_CRT` | (Optional) The base64-encoded mTLS certificate. Required if using Temporal Cloud. |
| `WORKFLOW_TEMPORAL_TLS_KEY` | (Optional) The base64-encoded mTLS key. Required if using Temporal Cloud. |
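A Docker Compose sketch of these settings, assuming a Temporal Cloud namespace and placeholder hostnames and credentials (for a cluster in your own infrastructure, you can omit the TLS variables or set `WORKFLOW_TEMPORAL_TLS_ENABLED` to `false`):

```yaml
# docker-compose.yaml (sketch) -- hostnames, credentials, and the Temporal frontend address are placeholders
services:
  workflows-worker:
    image: tryretool/backend:3.33.10-stable
    environment:
      SERVICE_TYPE: WORKFLOW_TEMPORAL_WORKER
      ENCRYPTION_KEY: <same value as the other services>
      WORKFLOW_BACKEND_HOST: https://workflows-backend.example.com
      CODE_EXECUTOR_INGRESS_DOMAIN: https://code-executor.example.com:3004
      DISABLE_DATABASE_MIGRATIONS: "true"
      WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST: <namespace>.<account>.tmprl.cloud
      WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT: "7233"
      WORKFLOW_TEMPORAL_TLS_ENABLED: "true"
      WORKFLOW_TEMPORAL_TLS_CRT: <base64-encoded mTLS certificate>
      WORKFLOW_TEMPORAL_TLS_KEY: <base64-encoded mTLS key>
    restart: on-failure
```

The workflows-backend and api containers in the following steps take the same Temporal connection variables.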
3. Set up workflows-backend
workflows-backend must have:
- Ingress from code-executor.
- Egress to code-executor, postgres, and workflows-worker.
Provision a workflows-backend service. This is a container running `tryretool/backend` with `SERVICE_TYPE=WORKFLOW_BACKEND` and the following environment variables:

| Variable | Description |
|---|---|
| `ENCRYPTION_KEY` | Required for encrypting traffic to the Retool-managed Temporal cluster. If you've already set this, there is no need to change it. Set to the same value across all services that require it. |
| `WORKFLOW_BACKEND_HOST` | URL for the workflows-backend endpoint. Must include the protocol (`http` or `https`). |
| `CODE_EXECUTOR_INGRESS_DOMAIN` | URL for the code-executor endpoint. Must include the protocol (`http` or `https`). |
| `DISABLE_DATABASE_MIGRATIONS` | Set to `true`. |
| `WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST` | The frontend host of the Temporal cluster. |
| `WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT` | The port with which to connect to the Temporal cluster. Defaults to `7233`. |
| `WORKFLOW_TEMPORAL_TLS_ENABLED` | (Optional) Whether to enable mTLS. Required if using Temporal Cloud. |
| `WORKFLOW_TEMPORAL_TLS_CRT` | (Optional) The base64-encoded mTLS certificate. Required if using Temporal Cloud. |
| `WORKFLOW_TEMPORAL_TLS_KEY` | (Optional) The base64-encoded mTLS key. Required if using Temporal Cloud. |
4. Configure API environment variables
Update the existing api container (`SERVICE_TYPE=MAIN_BACKEND`) with the following environment variables:

| Variable | Description |
|---|---|
| `ENCRYPTION_KEY` | Required for encrypting traffic to the Retool-managed Temporal cluster. If you've already set this, there is no need to change it. Set to the same value across all services that require it. |
| `WORKFLOW_BACKEND_HOST` | The workflows-backend endpoint URL. Must include the protocol (`http` or `https`). |
| `CODE_EXECUTOR_INGRESS_DOMAIN` | The code-executor endpoint URL. Must include the protocol (`http` or `https`). |
| `WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST` | The frontend host of the Temporal cluster. |
| `WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT` | The port with which to connect to the Temporal cluster. Defaults to `7233`. |
| `WORKFLOW_TEMPORAL_TLS_ENABLED` | (Optional) Whether to enable mTLS. Required if using Temporal Cloud. |
| `WORKFLOW_TEMPORAL_TLS_CRT` | (Optional) The base64-encoded mTLS certificate. Required if using Temporal Cloud. |
| `WORKFLOW_TEMPORAL_TLS_KEY` | (Optional) The base64-encoded mTLS key. Required if using Temporal Cloud. |
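If you connect to Temporal Cloud with mTLS, the certificate and key must be supplied as base64-encoded strings. A sketch using GNU coreutils, with hypothetical file names:

```shell
# Encode the mTLS client certificate and key as single-line base64 strings
base64 -w 0 client.crt   # value for WORKFLOW_TEMPORAL_TLS_CRT
base64 -w 0 client.key   # value for WORKFLOW_TEMPORAL_TLS_KEY
```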
Retool does not recommend deploying a local Temporal cluster alongside your Retool deployment instance.