
Temporal clusters for Self-hosted Retool

Learn about Temporal clusters for Self-hosted Retool deployments.

Temporal

Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to facilitate the execution of each workflow amongst a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.

You can use a Retool-managed cluster on Temporal Cloud, which is recommended for most use cases. You can also use an existing self-managed cluster that is hosted on Temporal Cloud or in your own infrastructure. Alternatively, you can spin up a new self-hosted cluster alongside your Self-hosted Retool instance.

Recommended

You should use a Retool-managed cluster if:

  • You are on version 3.6.14 or later.
  • Your organization is on the Enterprise plan.
  • You don't have an existing cluster which you prefer to use.
  • Your cluster only needs to be used for a Self-hosted Retool deployment.
  • You don't want to manage the cluster directly.
  • You have a single or multi-instance Retool deployment, where each instance requires its own namespace.

Retool admins can enable Retool-managed Temporal. To get started, navigate to the Retool Workflows page and click Enroll now. Once you update your configuration, return to the page and click Complete setup.

It can take a few minutes to initialize a namespace in Retool-managed Temporal.

Retool-managed Temporal clusters are hosted on Temporal Cloud. Your Self-hosted Retool deployment communicates with the cluster when building, deploying, and executing Retool Workflows. All orchestration data sent to Temporal is fully encrypted using the private encryption key set for your deployment.

Compare options

In general, Retool recommends using a Retool-managed Temporal cluster. You can also use either:

  • A self-managed external cluster: Either Temporal Cloud or self-hosted within your infrastructure.
  • A local cluster: A self-hosted cluster within your Self-hosted Retool instance.
| | Retool-managed | Self-managed external | Local |
|---|---|---|---|
| Hosting | Externally on Temporal Cloud | Either externally on Temporal Cloud or self-hosted internally within your own VPC | Locally as part of the Self-hosted Retool instance |
| Availability | All Self-hosted Retool instances (3.6.14+) on an Enterprise plan | All Self-hosted Retool instances, and for purposes outside of Retool | Only to the Self-hosted Retool instance |
| Management | Managed by Retool | Self-managed on Temporal Cloud; self-managed and self-hosted if within your own VPC | Self-managed and self-hosted |
| Scale and performance | At least 4 AWS regions to choose from | If using Temporal Cloud, at least 10 AWS regions to choose from; if self-hosted, low latency due to hosting within your infrastructure | Low latency due to local deployment alongside Self-hosted Retool |
| Uptime | 99.99% uptime | 99.99% uptime | Dependent on DevOps |
| Configuration | Minimal | Moderate | High |
| Cost | No minimum contract | See Temporal Cloud pricing | No contract |
| Security | Resource-isolated namespace in a Temporal Cloud cluster; only orchestration data (workflow IDs and block names) is stored, encrypted with your private key | Only orchestration data (workflow IDs and block names), encrypted with your private key, is stored in Temporal Cloud; all other data remains within your VPC if self-hosted | All data remains local to the Self-hosted Retool instance |

Egress

Whether you use a Retool-managed or self-managed cluster, egress is required for Self-hosted Retool to use Temporal. Both MAIN_BACKEND and WORKFLOW_TEMPORAL_WORKER containers must be able to connect.

If you're using a Retool-managed cluster or a self-managed cluster on Temporal Cloud, your instance must have egress to the internet (public network) on ports for the following domains:

| Port | Domains |
|---|---|
| 443 | *.retool.com, *.tryretool.com, *.temporal.io |
| 7233 | *.tmprl.cloud |
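If you manage firewall rules as code, the egress requirements above can be expressed as data. As a minimal sketch (the domains and ports come from the table above; the helper itself is illustrative, not part of Retool):

```typescript
// Egress rules required for a Retool-managed or Temporal Cloud cluster,
// expressed as data so an allowlist can be generated from them.
type EgressRule = { port: number; domains: string[] };

const TEMPORAL_EGRESS: EgressRule[] = [
  { port: 443, domains: ["*.retool.com", "*.tryretool.com", "*.temporal.io"] },
  { port: 7233, domains: ["*.tmprl.cloud"] },
];

// Render each rule as a "domain:port" allowlist entry.
function allowlistEntries(rules: EgressRule[]): string[] {
  return rules.flatMap((r) => r.domains.map((d) => `${d}:${r.port}`));
}

console.log(allowlistEntries(TEMPORAL_EGRESS));
```

Both the MAIN_BACKEND and WORKFLOW_TEMPORAL_WORKER containers need these entries in any egress allowlist.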

Lifecycle

When the Temporal cluster is needed:

  1. The MAIN_BACKEND service enqueues workflows with Temporal.
  2. A WORKFLOW_TEMPORAL_WORKER container receives instructions from Temporal for the specific tasks required to run a workflow, leveraging the Self-hosted Retool backend.
  3. DB_CONNECTOR makes a request to your network-protected resources, or CODE_EXECUTOR executes custom Python.
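The hand-off above can be sketched as a simplified in-memory queue. This is only an illustration of the enqueue → worker → executor flow, not Temporal's actual API (a real cluster adds durable queues, retries, and timers), and the block names are hypothetical:

```typescript
// A workflow is an ordered list of blocks; each block is a query or custom code.
type Block = { name: string; kind: "query" | "code" };

const taskQueue: Block[][] = [];

// 1. MAIN_BACKEND enqueues a workflow with Temporal.
function enqueueWorkflow(blocks: Block[]): void {
  taskQueue.push(blocks);
}

// 2./3. A worker drains the queue, dispatching each block to the right executor
// so blocks run in the correct control-flow order.
function runWorker(): string[] {
  const log: string[] = [];
  while (taskQueue.length > 0) {
    const workflow = taskQueue.shift()!;
    for (const block of workflow) {
      // DB_CONNECTOR handles resource queries; CODE_EXECUTOR runs custom code.
      const executor = block.kind === "query" ? "DB_CONNECTOR" : "CODE_EXECUTOR";
      log.push(`${executor}:${block.name}`);
    }
  }
  return log;
}

enqueueWorkflow([
  { name: "getUsers", kind: "query" },
  { name: "transform", kind: "code" },
]);
console.log(runWorker());
```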

Data and encryption

By default, Retool-managed clusters store only encrypted names and IDs of objects in Temporal Cloud, so only minimal customer data is sent to the cluster. All encrypted data in a Retool-managed cluster is retained for 14 days. Contact Retool if you would like to change this retention period.

All data sent between your deployment and Temporal Cloud is encrypted with your ENCRYPTION_KEY.
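Retool's exact wire-encryption scheme isn't specified here, but the general shape of encrypting orchestration metadata with a deployment-wide symmetric key (like ENCRYPTION_KEY) can be sketched with AES-256-GCM from Node's built-in crypto module. This is an illustration only, not Retool's implementation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a string with a 32-byte key using AES-256-GCM.
function encrypt(key: Buffer, plaintext: string): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // unique nonce per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

// Decrypt and authenticate; throws if the ciphertext or tag was tampered with.
function decrypt(key: Buffer, msg: { iv: Buffer; tag: Buffer; data: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, msg.iv);
  decipher.setAuthTag(msg.tag);
  return Buffer.concat([decipher.update(msg.data), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // stands in for the deployment's ENCRYPTION_KEY
const msg = encrypt(key, "workflow:abc123/block:getUsers"); // hypothetical IDs
console.log(decrypt(key, msg)); // round-trips to the orchestration metadata
```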

Data processing, including query and code execution, occurs within your VPC by workers that you self-host. All coordination and orchestration happens in an isolated namespace within the Retool-managed cluster. No code results or query data is sent outside of your VPC by default.

Retool-managed clusters rotate mTLS certificates automatically. If necessary, you can also trigger a rotation manually.

Local clusters

A local cluster is one that is created when deploying Self-hosted Retool with Workflows. Unlike a self-hosted cluster in your VPC, a local cluster is only used by its Self-hosted Retool instance.

If you deploy multiple instances of Self-hosted Retool with Workflows (e.g., staging and production instances), each must have its own local cluster namespace.

Local clusters enable you to get up and running quickly without needing to commit to a dedicated Temporal cluster, but can become complex to scale and manage. A Retool-managed cluster, or a self-managed cluster on Temporal Cloud or in your VPC, centralizes the cluster as each instance can use it via a resource-isolated namespace.

Large-scale deployments

Workflows can be deployed to scale to hundreds of concurrent block runs per second. There are three options for running more than 10 workflows per second:

  • Use a Retool-managed cluster.
  • Use a self-managed cluster.
  • Use Cassandra as the Temporal datastore. If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.

Telemetry

Retool uses a Prometheus metrics exporter to expose Temporal worker metrics. This is specified in the Temporal worker's runtime options.

telemetryOptions: {
  metrics: {
    prometheus: { bindAddress: '0.0.0.0:9090' }
  }
}

If you prefer to use an OpenTelemetry collector, you can specify its endpoint with the WORKFLOW_TEMPORAL_OPENTELEMETRY_COLLECTOR environment variable.
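A sketch of how such a switch might look at startup: when the environment variable is set, telemetry goes to the collector; otherwise the Prometheus exporter above is used. The `otel: { url }` exporter shape is an assumption here, not confirmed by this page:

```typescript
// Choose the metrics exporter at startup. The collector endpoint comes from
// the environment variable named above; the option shapes are illustrative.
const collector = process.env.WORKFLOW_TEMPORAL_OPENTELEMETRY_COLLECTOR;

const telemetryOptions = collector
  ? { metrics: { otel: { url: collector } } } // assumed OTLP exporter shape
  : { metrics: { prometheus: { bindAddress: "0.0.0.0:9090" } } };

console.log(telemetryOptions);
```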

Metrics are available for scraping using the /metrics route.
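A scrape of the /metrics route returns Prometheus text exposition format. As a minimal sketch of consuming such output, assuming a hypothetical sample payload (the metric name below is an example, not a guaranteed part of Retool's output):

```typescript
// Parse Prometheus text-format metrics into a name -> value map.
function parseMetrics(body: string): Map<string, number> {
  const out = new Map<string, number>();
  for (const line of body.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip HELP/TYPE comments
    const idx = trimmed.lastIndexOf(" ");
    out.set(trimmed.slice(0, idx), Number(trimmed.slice(idx + 1)));
  }
  return out;
}

// Hypothetical sample of what a worker scrape might contain.
const sample = [
  "# HELP temporal_worker_task_slots_available Example metric",
  "# TYPE temporal_worker_task_slots_available gauge",
  'temporal_worker_task_slots_available{worker_type="WorkflowWorker"} 98',
].join("\n");

console.log(parseMetrics(sample));
```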

For more information about telemetry, refer to Collect self-hosted telemetry data.