Deploy Retool on Kubernetes with Helm
Learn how to deploy Retool on Kubernetes with the Helm package manager.
You can deploy Self-hosted Retool on Kubernetes with Helm 3.3.1 or later.
Requirements
To deploy Self-hosted Retool with Workflows, you need:
- A Retool license key, which you can obtain from the Retool Self-hosted Portal or your Retool account manager.
- A domain you own, to which you can add a DNS record.
- A Kubernetes cluster. To create a cluster, see documentation on Google Cloud Platform, AWS, and Azure.
- A working installation of kubectl. To install kubectl, see documentation on Google Cloud Platform, AWS, and Azure.
Temporal
Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to facilitate the execution of each workflow amongst a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.
You can use a Retool-managed cluster on Temporal Cloud, which is recommended for most use cases. You can also use an existing self-managed cluster that is hosted on Temporal Cloud or in your own infrastructure. Alternatively, you can spin up a new self-hosted cluster alongside your Self-hosted Retool instance.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Recommended
You should use a Retool-managed cluster if:
- You are on a version greater than 3.6.14.
- Your organization is on the Enterprise plan.
- You don't have an existing cluster which you prefer to use.
- Your cluster only needs to be used for a Self-hosted Retool deployment.
- You don't want to manage the cluster directly.
- You have a single or multi-instance Retool deployment, where each instance requires its own namespace.
Retool admins can enable Retool-managed Temporal. To get started, navigate to the Retool Workflows page and click Enroll now. Once you update your configuration, return to the page and click Complete setup.
It can take a few minutes to initialize a namespace in Retool-managed Temporal.
Retool-managed Temporal clusters are hosted on Temporal Cloud. Your Self-hosted Retool deployment communicates with the cluster when building, deploying, and executing Retool Workflows. All orchestration data sent to Temporal is fully encrypted using the private encryption key set for your deployment.
If you want to create a new, self-hosted cluster on Temporal Cloud, sign up first. Once your account is provisioned, you can then deploy Self-hosted Retool.
Temporal Cloud offers more than 10 AWS regions to choose from, 99.99% availability, and a 99.99% guarantee against service errors.
You should use an existing self-managed cluster, hosted on Temporal Cloud or in your own infrastructure, if:
- You cannot use a Retool-managed cluster.
- You are on a version greater than 3.6.14.
- Your organization is on the Free, Team, or Business plan.
- You have an existing cluster and would prefer to use another namespace within it.
- You need a cluster for uses other than a Self-hosted Retool deployment.
- You want to manage the cluster directly.
- You have a multi-instance Retool deployment, where each instance would have its own namespace in a shared Self-hosted Temporal cluster.
Self-managed cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless V2 configured with an Aurora Capacity Unit (ACU) provision ranging from 0.5 to 8 ACUs; one ACU provides around 10 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations) and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly, 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale workflow-related services in Self-hosted Retool to execute a high rate of concurrent blocks per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
You should spin up a new cluster alongside your Self-hosted Retool instance if:
- You cannot use a Retool-managed cluster.
- You are on a version greater than 3.6.14.
- Your organization is on the Free, Team, or Business plan.
- You don't have an existing cluster to use.
- You don't need a cluster for uses other than a Self-hosted Retool deployment.
- You want to test a Self-hosted Retool deployment with a local cluster first.
- You have a multi-instance Retool deployment, but each instance is in its own VPC and requires its own Self-hosted Temporal cluster.
Local cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless V2 configured with an Aurora Capacity Unit (ACU) provision ranging from 0.5 to 8 ACUs; one ACU provides around 10 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations) and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly, 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale workflow-related services in Self-hosted Retool to execute a high rate of concurrent blocks per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
Cluster size
The cluster must have at least one node with 8 vCPUs and 16 GB of memory. Use the following command to retrieve the capacity of your nodes.
$ kubectl describe nodes
In the Capacity section, verify the cpu and memory values meet the above requirements.
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 8
ephemeral-storage: 83873772Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7931556Ki
pods: 29
Cluster storage class
If you want to mount volumes, ensure the volume supplied by your cloud provider can be mounted to multiple nodes. To identify your cluster's storage class, run the following command:
$ kubectl get storageclass
Reference your cloud provider's documentation to verify that this storage class supports the ReadWriteMany access mode.
1. Add the Retool Helm chart repository
Use the following command to add the Retool Helm repository.
$ helm repo add retool https://charts.retool.com
Run helm search repo retool/retool to confirm you can access the Retool chart.
NAME CHART VERSION APP VERSION DESCRIPTION
retool/retool 6.2.5 A Helm chart for Kubernetes
retool/retool-wf 4.13.0 A Helm chart for Kubernetes
2. Download Helm configuration file
Retool's Helm chart is configured using a values.yaml file. Download a copy of values.yaml to your local machine using the command below, then open values.yaml in a text editor or IDE to follow along with the steps below.
curl -L -o values.yaml https://raw.githubusercontent.com/tryretool/retool-helm/main/values.yaml
3. Update Helm configuration
In Kubernetes, you can store configuration options in plain text or use Kubernetes secrets. The following example sets config.licenseKey as plain text.
config:
  licenseKey: "XXX-XXX-XXX"
A Kubernetes secret is an object that contains multiple key-value pairs. You need both the secret name and the key to configure the values.yaml file. The example below uses a value stored in the secret license-key-secret under the key license-key.
config:
  licenseKeySecretName: license-key-secret
  licenseKeySecretKey: license-key
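You can create this secret with kubectl. A minimal sketch, assuming you substitute your actual license key:

kubectl create secret generic license-key-secret --from-literal=license-key=<YOUR-LICENSE-KEY>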
Retool recommends storing sensitive data (for example, passwords and credentials) as Kubernetes secrets, especially if you commit values.yaml to source control.
Set the following values in values.yaml. To generate random strings for config.encryptionKey and config.jwtSecret, run the command openssl rand -base64 36.
Setting | Description |
---|---|
config.licenseKey | License key, in plain text or as a secret value. |
config.encryptionKey | Key used to encrypt the database. Generate a random string with openssl. |
config.jwtSecret | Secret used to sign authentication requests from Retool's server. Generate a random string with openssl. |
image.tag | Version of Retool to install, in the format X.Y.Z. Self-hosted Retool with Workflows requires 2.108.4 or later when using a local Temporal cluster, or 3.6.14 or later for Retool-managed Temporal. |
config.useInsecureCookies | Whether to allow insecure cookies. Set to true if you have not configured SSL. Set to false if you use HTTPS to connect to the instance. |
workflows.enabled | Whether to enable Retool Workflows. Set to true. Defaults to true for Retool version 3.6.14 or later. |
codeExecutor.enabled | Whether to enable the Code Executor service. Set to true. Defaults to true for Retool version 3.20.15 or later, and automatically sets the image tag for tryretool/code-executor-service to match tryretool/backend. |
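Taken together, these settings correspond to a values.yaml fragment like the following sketch. The image tag shown is a placeholder; substitute the release you intend to install.

config:
  licenseKey: "XXX-XXX-XXX"
  encryptionKey: "<output of openssl rand -base64 36>"
  jwtSecret: "<output of openssl rand -base64 36>"
  useInsecureCookies: true  # set to false once SSL is configured
image:
  tag: "3.20.15"  # placeholder version; use the release you intend to install
workflows:
  enabled: true
codeExecutor:
  enabled: true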
Each workflow worker can process approximately 10 queries per second (QPS). Increase the workflow replicaCount if this is not high enough for your needs, as sketched below. You should be able to scale this to approximately 40 QPS, using four workflow workers, before you need to make infrastructure changes.
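A sketch of scaling to four workers. The exact key path for the worker replica count is an assumption here; confirm it against the chart's values.yaml.

workflows:
  enabled: true
  replicaCount: 4  # assumed key path; confirm in the chart's values.yaml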
Configure Temporal
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Recommended
Allow your deployment to connect to Temporal
Open up egress to the public internet on ports 443 and 7233 to allow outbound-only connections to Temporal Cloud from your deployment. This allows services to enqueue work to, and poll work from, Temporal.
Temporal Cloud does not have a static IP range to allowlist. If more specificity is required, allow egress on the following ports and domains:
Port | Domains |
---|---|
443 | *.retool.com, *.tryretool.com, *.temporal.io |
7233 | *.tmprl.cloud |
Kubernetes pods are non-isolated for egress by default, which allows all outbound connections. If the Retool backend or workers cannot connect to Temporal Cloud, check your egress NetworkPolicy for any issues.
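If your cluster does enforce egress policies, a minimal NetworkPolicy that permits the required outbound ports might look like the following sketch. The policy name is illustrative.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-temporal-egress  # illustrative name
spec:
  podSelector: {}  # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    # allow outbound HTTPS and Temporal gRPC traffic to any destination
    - ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 7233
    # allow DNS lookups
    - ports:
        - protocol: UDP
          port: 53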
Configure Helm for Temporal cluster
Update the values.yaml configuration file to specify whether to use a Retool-managed cluster or a self-managed one.
Variable | Description |
---|---|
.Values.workflows.enabled | Whether to enable workers and workflow backend pods. Set to true. |
.Values.codeExecutor.enabled | Whether to enable code executor pods. Set to true. |
.Values.retool-temporal-services-helm.enabled | Whether to use a local Retool-deployed Temporal cluster. Set to false. |
.Values.workflows.temporal.enabled | Whether to use a self-managed Temporal cluster. Set to false. |
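Taken together, the settings in this table correspond to a values.yaml fragment like the following sketch:

workflows:
  enabled: true
  temporal:
    enabled: false
codeExecutor:
  enabled: true
retool-temporal-services-helm:
  enabled: false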
Follow the steps for configuring either a Temporal Cloud cluster or a self-hosted cluster in your VPC.
Temporal Cloud
Allow your deployment to connect to Temporal
Open up egress to the public internet on ports 443 and 7233 to allow outbound-only connections to Temporal Cloud from your deployment. This allows services to enqueue work to, and poll work from, Temporal.
Temporal Cloud does not have a static IP range to allowlist. If more specificity is required, allow egress on the following ports and domains:
Port | Domains |
---|---|
443 | *.retool.com, *.tryretool.com, *.temporal.io |
7233 | *.tmprl.cloud |
Kubernetes pods are non-isolated for egress by default, which allows all outbound connections. If the Retool backend or workers cannot connect to Temporal Cloud, check your egress NetworkPolicy for any issues.
Configure Helm for Temporal cluster
Update the values.yaml configuration file to specify whether to use a Retool-managed cluster or a self-managed one. You must also configure mTLS.
Variable | Description |
---|---|
.Values.workflows.enabled | Whether to enable workers and workflow backend pods. Set to true. |
.Values.codeExecutor.enabled | Whether to enable code executor pods. Set to true. |
.Values.retool-temporal-services-helm.enabled | Whether to use a local Retool-deployed Temporal cluster. Set to false. |
.Values.workflows.temporal.enabled | Whether to use a self-managed Temporal cluster. Set to true. |
.Values.workflows.temporal.* | The configuration for your Temporal cluster. |
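A sketch of the corresponding values.yaml fragment follows. The subkeys under workflows.temporal are assumptions; confirm the exact names in the chart's values.yaml.

workflows:
  enabled: true
  temporal:
    enabled: true
    # assumed subkeys for the cluster connection; confirm the exact
    # names under workflows.temporal in the chart's values.yaml
    host: "<namespace>.<account-id>.tmprl.cloud"
    port: 7233
    namespace: "<namespace>"
codeExecutor:
  enabled: true
retool-temporal-services-helm:
  enabled: false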
Self-hosted
Configure Helm for Temporal cluster
Update the following variables in values.yaml to configure the Temporal cluster. You can optionally use mTLS to secure traffic between services within your VPC.
Variable | Description |
---|---|
.Values.workflows.enabled | Whether to enable workers and workflow backend pods. Set to true. |
.Values.codeExecutor.enabled | Whether to enable code executor pods. Set to true. |
.Values.retool-temporal-services-helm.enabled | Whether to use a local Retool-deployed Temporal cluster. Set to false. |
.Values.workflows.temporal.enabled | Whether to use a self-managed Temporal cluster. Set to true. |
.Values.workflows.temporal.* | The configuration for your Temporal cluster. |
Configure Helm for Temporal cluster
Update the following variables in values.yaml to configure the Temporal cluster. You can optionally use mTLS to secure traffic between services within your VPC.
Variable | Description |
---|---|
.Values.workflows.enabled | Whether to enable workers and workflow backend pods. Set to true. |
.Values.codeExecutor.enabled | Whether to enable code executor pods. Set to true. |
.Values.retool-temporal-services-helm.enabled | Whether to use a local Retool-deployed Temporal cluster. Set to true. |
.Values.retool-temporal-services-helm.persistence | Add PostgreSQL or AWS Aurora credentials to both the default and visibility stores. |
.Values.workflows.temporal.enabled | Whether to use a self-managed Temporal cluster. Set to false. |
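As a sketch in values.yaml form; the persistence schema is left as a comment because it is defined by the subchart:

workflows:
  enabled: true
  temporal:
    enabled: false
codeExecutor:
  enabled: true
retool-temporal-services-helm:
  enabled: true
  # persistence: supply PostgreSQL or AWS Aurora credentials for both the
  # default and visibility stores; the exact schema is defined in the
  # subchart's values.yaml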
4. Install Self-hosted Retool
After updating the configuration, install Self-hosted Retool.
helm install my-retool retool/retool -f values.yaml
After installing Retool, run kubectl get pods to verify you have pods for the main service and jobs-runner. If you use the PostgreSQL subchart, there is also a postgresql pod. If you have enabled Workflows, there are also workflow-worker and workflow-backend pods.
my-retool-7898474bbd-pr8n6 1/1 Running 1 (8h ago) 8h
my-retool-jobs-runner-74796ddd99-dd856 1/1 Running 0 8h
my-retool-postgresql-0 1/1 Running 0 8h
Once the main service is running, verify the installation by port forwarding to localhost.
kubectl port-forward my-retool-7898474bbd-pr8n6 3000:3000
You can then access Retool at http://localhost:3000/.
Additional configuration
The following configuration steps are optional but strongly recommended for using Retool in a production environment.
Whenever you run helm upgrade, use the --version flag to specify the chart's version number. Otherwise, Helm upgrades to the latest chart version, which may cause compatibility issues. You can check the release version of your deployment with the command helm list.
Externalize database
By default, the Retool Helm chart uses the PostgreSQL subchart to create a containerized instance of PostgreSQL. This is not suitable for production use cases, and the Retool storage database should be hosted on an external, managed database. Managed databases are more maintainable, scalable, and reliable than containerized PostgreSQL instances. These instructions explain how to set up Retool with an external database.
1. Export data
If you have already populated the PostgreSQL pod, export its data.
kubectl exec -it <POSTGRES-POD-NAME> -- bash -c 'pg_dump hammerhead_production --no-acl --no-owner --clean -U postgres' > retool_db_dump.sql
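Once your external database is reachable (after the steps below), you can load this dump into it with psql. A sketch with placeholder connection details:

psql -h <EXTERNAL-DB-HOST> -U <DB-USER> -d hammerhead_production -f retool_db_dump.sql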
2. Disable PostgreSQL chart
In values.yaml, set postgresql.enabled to false to disable the included PostgreSQL chart. This prevents the containerized PostgreSQL from starting.
3. Update PostgreSQL configuration
In values.yaml, set the config.postgresql properties with settings for your external database. This specifies the externalized database that backs your Retool instance.
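For example, a sketch with placeholder connection details. The property names shown are assumptions; check the config.postgresql block in the chart's values.yaml.

config:
  postgresql:
    host: "<EXTERNAL-DB-HOST>"
    port: 5432
    db: "hammerhead_production"
    user: "<DB-USER>"
    password: "<DB-PASSWORD>"  # prefer a Kubernetes secret over plain text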
4. Upgrade Helm chart version
Use helm to perform the upgrade and include the Helm chart version number. Retool requires version 6.1.1 or later.
helm upgrade -f values.yaml my-retool retool/retool --version 6.1.1
Add environment variables
Environment variables provide ways to configure a Retool instance. The values.yaml file has three locations to add environment variables.
Object | Type |
---|---|
env | Plain text key-value pairs. |
environmentSecrets | Plain text or Kubernetes secrets. |
environmentVariables | Plain text or Kubernetes secrets. |
Do not store sensitive information, such as access tokens, in env. Use environmentSecrets or environmentVariables, as they can populate environment variables from Kubernetes secrets.
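For illustration, a sketch using a hypothetical variable in each location. The environmentSecrets list schema is an assumption; verify it against the chart's values.yaml.

env:
  SOME_SETTING: "value"  # hypothetical plain-text variable
environmentSecrets:
  # hypothetical entry populated from an existing Kubernetes secret;
  # confirm the list schema in the chart's values.yaml
  - name: SOME_TOKEN
    secretKeyRef:
      name: my-secret
      key: token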
Mount volumes
There are several use cases which require the use of volumes. For example, when configuring a gRPC resource, you need to mount a volume containing the protos files to the Retool deployment. Follow these instructions to create a persistent volume and copy files from your local machine to the volume.
1. Enable PersistentVolumeClaim
The Helm chart defines a PersistentVolumeClaim (PVC) which is automatically mounted to the Retool pods, enabling Retool to access files within this volume. The PVC is disabled by default. To enable the persistentVolumeClaim, modify your values.yaml file:
persistentVolumeClaim:
  enabled: true
  existingClaim: ""
If you have an existing PVC in your Kubernetes cluster to use, you can specify its name in existingClaim. Otherwise, leave existingClaim blank.
2. Set security context
In a later step, you use kubectl cp to copy files from your local machine to the Kubernetes cluster, which requires the pod to run with root privileges. Modify your deployment so the pods run as root by changing the securityContext in your values.yaml file:
securityContext:
  enabled: true
  runAsUser: 0
Use helm to perform the upgrade and include the Helm chart version number. Retool requires version 6.1.1 or later.
helm upgrade -f values.yaml my-retool retool/retool --version 6.1.1
3. Verify pods
Run kubectl get pods to verify pods are running.
my-retool-7898474bbd-pr8n6 1/1 Running 1 (8h ago) 8h
my-retool-jobs-runner-74796ddd99-dd856 1/1 Running 0 8h
my-retool-postgresql-0 1/1 Running 0 8h
4. Copy files
Next, copy the protos files from your local machine to the PVC. Note from kubectl get pods the three pods in the deployment: the main, jobs-runner, and postgresql pods. Identify the name of the main pod.
Ensure your local machine has a folder named protos, then run the following command, replacing my-retool-7c4c89798-fqbh7 with the name of your Retool pod.
kubectl cp protos/ my-retool-7c4c89798-fqbh7:/retool_backend/pv-data/protos
5. Set env
If you're configuring gRPC, you need to specify the location of the protos directory. In values.yaml, set the PROTO_DIRECTORY_PATH environment variable.
env:
  PROTO_DIRECTORY_PATH: "/retool_backend/pv-data/protos"
6. Reset security context
Revert the security context of your deployment back to a disabled state.
securityContext:
  enabled: false
  runAsUser: 1000
Use helm to perform the upgrade and include the Helm chart version number. Retool requires version 6.1.1 or later.
helm upgrade -f values.yaml my-retool retool/retool --version 6.1.1
Configure SSL
When configuring SSL, you can use Let's Encrypt to provision a certificate, or provide your own. See Configure SSL and custom certificates for more detail on certificates.
1. Install cert-manager
First, add the jetstack Helm repository if you haven't already.
helm repo add jetstack https://charts.jetstack.io
Next, run the following command to install cert-manager.
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.11.0 \
--set installCRDs=true --set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io
2. Configure certificate issuer
Create a file called production-issuer.yml. Copy the following configuration, replace the example email with your email, and paste it into the new file.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: example@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
3. Create the certificate manager
First, use kubectl to create the certificate manager as a ClusterIssuer.
kubectl apply -f production-issuer.yml
4. Verify issuer
Run kubectl get clusterissuer to verify that the ClusterIssuer is ready.
NAME READY AGE
letsencrypt-prod True 10m
5. Update ingress configuration
Add the annotations section to your ingress and modify the host and hosts placeholders accordingly.
ingress:
  ...
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: example.example.com
      paths:
        - path: /
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - example.example.com
6. Apply changes
Apply the updated configuration with helm upgrade, as shown below. After the pods restart, you can access the page in your browser using TLS.
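For example, reusing the upgrade command and chart version from the earlier steps:

helm upgrade -f values.yaml my-retool retool/retool --version 6.1.1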
Update your Kubernetes instance
Follow these instructions to update your Retool instance to a newer release version.
1. Back up your database
If you use a managed database service, your database provider may have a feature to take snapshots or otherwise back up your database. If you use the PostgreSQL subchart, run the following command to export data from the PostgreSQL pod to a .sql file.
kubectl exec -it <POSTGRES-POD-NAME> -- bash -c 'pg_dump hammerhead_production --no-acl --no-owner --clean -U postgres' > retool_db_dump.sql
2. Select a new version
Update the image.tag value in values.yaml to the tag of the Retool version to install, such as 3.114.3-stable for the tryretool/backend:3.114.3-stable image.
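In values.yaml, that looks like:

image:
  tag: "3.114.3-stable"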
3. Upgrade instance and apply changes
Run helm search repo retool/retool to check the current version of Retool's Helm chart that is installed. Use helm upgrade to then upgrade the Helm chart version, if required, passing the target chart version to --version.
helm upgrade -f values.yaml my-retool retool/retool --version <CHART-VERSION>
4. Verify instance
Run kubectl get pods to verify that the update has completed.
my-retool-7898474bbd-pr8n6 1/1 Running 1 (8h ago) 8h
my-retool-jobs-runner-74796ddd99-dd856 1/1 Running 0 8h
my-retool-postgresql-0 1/1 Running 0 8h
You should see additional services for workflows, such as workflows-worker, workflows-backend, and code-executor.