Temporal clusters for Self-hosted Retool
Learn about Temporal clusters for Self-hosted Retool deployments.
Temporal
Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to coordinate the execution of each workflow among a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.
You can use a Retool-managed cluster on Temporal Cloud, which is recommended for most use cases. You can also use an existing self-managed cluster that is hosted on Temporal Cloud or in your own infrastructure. Alternatively, you can spin up a new self-hosted cluster alongside your Self-hosted Retool instance.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Recommended
You should use a Retool-managed cluster if:
- You are on version 3.6.14 or later.
- Your organization is on the Enterprise plan.
- You don't have an existing cluster that you prefer to use.
- Your cluster only needs to be used for a Self-hosted Retool deployment.
- You don't want to manage the cluster directly.
- You have a single or multi-instance Retool deployment, where each instance requires its own namespace.
Retool admins can enable Retool-managed Temporal. To get started, navigate to the Retool Workflows page and click Enroll now. Once you update your configuration, return to the page and click Complete setup.
It can take a few minutes to initialize a namespace in Retool-managed Temporal.
Retool-managed Temporal clusters are hosted on Temporal Cloud. Your Self-hosted Retool deployment communicates with the cluster when building, deploying, and executing Retool Workflows. All orchestration data sent to Temporal is fully encrypted using the private encryption key set for your deployment.
If you want to create a new, self-hosted cluster on Temporal Cloud, sign up first. Once your account is provisioned, you can then deploy Self-hosted Retool.
Temporal Cloud offers more than 10 AWS regions to choose from, 99.99% availability, and a 99.99% guarantee against service errors.
You should use an existing self-managed cluster, hosted on Temporal Cloud or in your own infrastructure, if:
- You cannot use a Retool-managed cluster.
- You are on version 3.6.14 or later.
- Your organization is on the Free, Team, or Business plan.
- You have an existing cluster and would prefer to use another namespace within it.
- You need a cluster for uses other than a Self-hosted Retool deployment.
- You want to manage the cluster directly.
- You have a multi-instance Retool deployment, where each instance would have its own namespace in a shared Self-hosted Temporal cluster.
Self-managed cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless v2 configured with a capacity range of 0.5 to 8 ACUs (Aurora capacity units); 1 ACU can provide around 10 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
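If you provision this with infrastructure-as-code, a minimal sketch using the AWS SDK for JavaScript v3 might look like the following. The identifiers, region, and Aurora PostgreSQL engine choice are placeholder assumptions, not Retool requirements.

// Hypothetical provisioning sketch for an Aurora Serverless v2 cluster sized
// for the Workflows Queue. All identifiers and the region are placeholders.
import {
  RDSClient,
  CreateDBClusterCommand,
  CreateDBInstanceCommand,
} from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "us-east-1" });

async function provisionWorkflowsQueue() {
  // Create the cluster with a 0.5-8 ACU scaling range, per the guidance above.
  await rds.send(
    new CreateDBClusterCommand({
      DBClusterIdentifier: "retool-workflows-queue", // placeholder name
      Engine: "aurora-postgresql",
      MasterUsername: "retool",
      ManageMasterUserPassword: true, // let RDS store the password in Secrets Manager
      ServerlessV2ScalingConfiguration: { MinCapacity: 0.5, MaxCapacity: 8 },
    })
  );

  // Serverless v2 clusters still need at least one db.serverless instance.
  await rds.send(
    new CreateDBInstanceCommand({
      DBInstanceIdentifier: "retool-workflows-queue-1", // placeholder name
      DBClusterIdentifier: "retool-workflows-queue",
      DBInstanceClass: "db.serverless",
      Engine: "aurora-postgresql",
    })
  );
}

provisionWorkflowsQueue().catch(console.error);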
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly 5-10 concurrent block executions per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to sustain a high rate of concurrent block executions per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
You should spin up a new cluster alongside your Self-hosted Retool instance if:
- You cannot use a Retool-managed cluster.
- You are on version 3.6.14 or later.
- Your organization is on the Free, Team, or Business plan.
- You don't have an existing cluster to use.
- You don't need a cluster for uses other than a Self-hosted Retool deployment.
- You want to test a Self-hosted Retool deployment with a local cluster first.
- You have a multi-instance Retool deployment, but each instance is in its own VPC and requires its own Self-hosted Temporal cluster.
Local cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless v2 configured with a capacity range of 0.5 to 8 ACUs (Aurora capacity units); 1 ACU can provide around 10 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly 5-10 concurrent block executions per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to sustain a high rate of concurrent block executions per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
Compare options
In general, Retool recommends using a Retool-managed Temporal cluster. You can also use either:
- A self-managed external cluster: Either Temporal Cloud or self-hosted within your infrastructure.
- A local cluster: A self-hosted cluster within your Self-hosted Retool instance.
| | Retool-managed | Self-managed external | Local |
|---|---|---|---|
| Hosting | Externally on Temporal Cloud | Externally on Temporal Cloud or self-hosted within your own VPC | Locally, as part of the Self-hosted Retool instance |
| Availability | All Self-hosted Retool instances (3.6.14+) on an Enterprise plan | All Self-hosted Retool instances, and available for purposes outside of Retool | Only the Self-hosted Retool instance |
| Management | Managed by Retool | Self-managed on Temporal Cloud; self-managed and self-hosted if within your own VPC | Self-managed and self-hosted |
| Scale and performance | At least 4 AWS regions to choose from | If using Temporal Cloud, at least 10 AWS regions to choose from; if self-hosted, low latency due to hosting within your infrastructure | Low latency due to local deployment alongside Self-hosted Retool |
| Uptime | 99.99% uptime | 99.99% uptime | Dependent on your DevOps practices |
| Configuration | Minimal | Moderate | High |
| Cost | No minimum contract | See Temporal Cloud pricing | No contract |
| Security | Resource-isolated namespace in a Temporal Cloud cluster that stores only orchestration data (workflow IDs and block names); all data encrypted with your private key | Only orchestration data (workflow IDs and block names), encrypted with your private key, is stored in Temporal Cloud; all other data remains internal to your VPC if self-hosted | All data remains local to the Self-hosted Retool instance |
Egress
Whether you use a Retool-managed or self-managed cluster, egress is required for Self-hosted Retool to use Temporal. Both the `MAIN_BACKEND` and `WORKFLOW_TEMPORAL_WORKER` containers must be able to connect.
If you're using a Retool-managed cluster or a self-managed cluster on Temporal Cloud, your instance must have egress to the internet (public network) on the following ports and domains:
| Port | Domains |
|---|---|
| 443 | *.retool.com, *.tryretool.com, *.temporal.io |
| 7233 | *.tmprl.cloud |
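To verify egress before deploying, you can run a quick TCP connectivity check from the host that runs these containers. Below is a minimal Node.js sketch; the tmprl.cloud host is a placeholder, since real Temporal Cloud endpoints take the form namespace.account.tmprl.cloud.

// Minimal egress check, assuming Node.js. Substitute your own Temporal Cloud
// namespace endpoint for the placeholder host below.
import { createConnection } from "node:net";

function checkTcp(host: string, port: number): Promise<void> {
  return new Promise((resolve, reject) => {
    const socket = createConnection({ host, port, timeout: 5000 });
    socket.once("connect", () => {
      socket.end();
      resolve();
    });
    socket.once("timeout", () => {
      socket.destroy();
      reject(new Error("connection timed out"));
    });
    socket.once("error", reject);
  });
}

async function main() {
  const targets: Array<[string, number]> = [
    ["temporal.io", 443],
    ["my-namespace.a1b2c.tmprl.cloud", 7233], // placeholder endpoint
  ];
  for (const [host, port] of targets) {
    try {
      await checkTcp(host, port);
      console.log(`OK   ${host}:${port}`);
    } catch (err) {
      console.error(`FAIL ${host}:${port} (${(err as Error).message})`);
    }
  }
}

main();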
Lifecycle
When the Temporal cluster is needed:
- The `MAIN_BACKEND` service enqueues workflows with Temporal.
- A `WORKFLOWS_TEMPORAL_WORKER` receives instructions from Temporal for the specific tasks required to run a workflow, leveraging the Self-hosted Retool backend.
- `DB_CONNECTOR` makes a request to your network-protected resources, or `CODE_EXECUTOR` executes custom Python.
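Conceptually, this follows Temporal's standard client/worker split. The sketch below uses the public Temporal TypeScript SDK to show the generic pattern; it is not Retool's internal code, and the workflow type, task queue, and IDs are placeholder assumptions.

// Generic Temporal pattern: a client enqueues a workflow run on a task queue,
// and workers polling that queue execute its steps.
import { Connection, Client } from "@temporalio/client";

async function enqueue() {
  // Address of the Temporal frontend service; placeholder value.
  const connection = await Connection.connect({ address: "localhost:7233" });
  const client = new Client({ connection });

  // "runWorkflow", the task queue, and the workflow ID are all placeholders.
  const handle = await client.workflow.start("runWorkflow", {
    taskQueue: "workflows-task-queue",
    workflowId: "workflow-run-123",
    args: [{ blocks: ["query1", "code1"] }],
  });
  console.log(`Enqueued workflow ${handle.workflowId}`);
}

enqueue().catch(console.error);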
Data and encryption
By default, Retool-managed clusters store only encrypted names and IDs of objects in Temporal Cloud, so only minimal customer data is sent to the cluster. All encrypted data in a Retool-managed cluster is retained for 14 days. Please reach out if you would like to change this retention period.
All data sent between your deployment and Temporal Cloud is encrypted with your `ENCRYPTION_KEY`.
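If you need to generate a key, any long random string works. A minimal sketch in Node.js, assuming a Base64-encoded 32-byte value is acceptable for your deployment:

// Generates a random Base64 string suitable for use as a long secret.
// The 32-byte length is an assumption; adjust to your own policy.
import { randomBytes } from "node:crypto";

console.log(randomBytes(32).toString("base64"));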
Data processing, including query and code execution, occurs within your VPC by workers that you self-host. All coordination and orchestration happens in an isolated namespace within the Retool-managed cluster. No code results or query data is sent outside of your VPC by default.
Retool-managed clusters perform automatic mTLS certificate rotation. If necessary, you can also trigger a rotation manually.
Local clusters
If you deploy multiple instances of Self-hosted Retool with Workflows (e.g., staging and production instances), they must each have their own local cluster namespace.
A local cluster is one that is created when deploying Self-hosted Retool with Workflows. Unlike a self-hosted cluster in your VPC, a local cluster is only used by its Self-hosted Retool instance.
Local clusters enable you to get up and running quickly without committing to a dedicated Temporal cluster, but they can become complex to scale and manage. A Retool-managed cluster, or a self-managed cluster on Temporal Cloud or in your VPC, centralizes orchestration: each instance uses the shared cluster through its own resource-isolated namespace.
Large-scale deployments
Workflows can be deployed to scale to hundreds of concurrent block runs per second. There are three options for running more than 10 workflows per second:
- Use a Retool-managed cluster.
- Use a self-managed cluster.
- Use Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
Telemetry
Retool uses a Prometheus Metrics exporter to expose Temporal Worker metrics. This is specified in the Temporal worker's runtime options.
telemetryOptions: {
metrics: {
prometheus: { bindAddress: '0.0.0.0:9090' }
}
}
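For reference, this is a minimal sketch of how the same options are installed with the Temporal TypeScript SDK's Runtime.install. Retool's worker performs the equivalent setup internally, so this is illustrative rather than something you need to configure yourself.

// Illustrative only: installing Prometheus metrics in the Temporal TypeScript
// SDK. Runtime.install must be called before any Worker is created.
import { Runtime } from "@temporalio/worker";

Runtime.install({
  telemetryOptions: {
    metrics: {
      prometheus: { bindAddress: "0.0.0.0:9090" },
    },
  },
});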
If you prefer to use an OpenTelemetry collector, you can optionally specify its endpoint with the `WORKFLOW_TEMPORAL_OPENTELEMETRY_COLLECTOR` environment variable.
Metrics are available for scraping at the `/metrics` route.
For more information about telemetry, refer to Collect self-hosted telemetry data.