Deploy Retool with Helm
Learn how to deploy Retool with Helm.
You can deploy self-hosted Retool with Helm following the instructions in this guide.
Requirements
To deploy Retool with Helm, you need:
- A Retool license key, which you can obtain from my.retool.com.
- A domain you own, to which you can add a DNS record.
- A Kubernetes cluster. To create a cluster, see documentation on Google Cloud Platform, AWS, and Azure.
- A working installation of kubectl. To install kubectl, see documentation on Google Cloud Platform, AWS, and Azure.
- A working installation of Helm, version 3.3.1 or later.
Cluster size
The cluster must have at least one node with 2 vCPUs and 8 GB of memory. Use the following command to retrieve the capacity of your nodes.
$ kubectl describe nodes
In the Capacity section, verify the cpu and memory values meet the above requirements.
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 83873772Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7931556Ki
pods: 29
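If your cluster has many nodes, a quicker way to check capacity across all of them is a custom-columns query, which prints one line per node. This is a sketch and assumes kubectl access to a running cluster.

```shell
# Print each node's name, CPU count, and memory capacity on one line.
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory
```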
Cluster storage class
If you want to mount volumes, ensure the volume supplied by your cloud provider can be mounted to multiple nodes. To identify your cluster's storage class, run the following command.
$ kubectl get storageclass
Reference your cloud provider's documentation to verify that this storage class supports the ReadWriteMany access mode.
1. Add the Retool Helm repository
Use the following command to add the Retool Helm repository.
$ helm repo add retool https://charts.retool.com
"retool" has been added to your repositories
Search for the repo to confirm you can access the Retool chart.
$ helm search repo retool/retool
NAME           CHART VERSION   APP VERSION   DESCRIPTION
retool/retool  5.0.0                         A Helm chart for Kubernetes
2. Modify values.yaml
Retool's Helm chart is configured using a values.yaml file. To retrieve a copy of values.yaml, clone the retool-helm repository to your local machine, then open values.yaml in a text editor or IDE to follow along with the steps below.
git clone https://github.com/tryretool/retool-helm.git
In Kubernetes, you can store configuration options in plain text or as Kubernetes secrets. The following example sets config.licenseKey as plain text.
config:
  licenseKey: "XXX-XXX-XXX"
A Kubernetes secret is an object that contains one or more key-value pairs. You need both the secret name and the key to configure the values.yaml file. The example below uses a value stored in the license-key-secret secret under the key license-key.
config:
  licenseKeySecretName: license-key-secret
  licenseKeySecretKey: license-key
Retool recommends storing sensitive data, such as passwords and credentials, as Kubernetes secrets, especially if you commit values.yaml to source control.
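To create the license-key-secret referenced above, you can use kubectl directly. This is a sketch; replace the placeholder license key with your own.

```shell
# Create a Kubernetes secret named license-key-secret with a single key, license-key.
# Replace XXX-XXX-XXX with your actual Retool license key.
kubectl create secret generic license-key-secret \
  --from-literal=license-key='XXX-XXX-XXX'
```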
Required values
Set the following values in values.yaml.
Setting | Description |
---|---|
config.licenseKey | License key, in plain text or as a secret value. |
config.encryptionKey | Key used to encrypt the database. Generate a random string with openssl . |
config.jwtSecret | Secret used to sign authentication requests from Retool's server. Generate a random string with openssl . |
image.tag | Version of Retool to install, in the format X.Y.Z. For example, 2.115.2. |
config.useInsecureCookies | Set to true if you have not configured SSL. Leave as false if you use HTTPS to connect to the instance. |
Use openssl for random string generation
To generate random strings for config.encryptionKey and config.jwtSecret, run the command $ openssl rand -base64 36.
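As an example, you can generate both values in one shell session. Note that openssl rand -base64 36 draws 36 random bytes and base64-encodes them, producing a 48-character string.

```shell
# Generate random strings suitable for config.encryptionKey and config.jwtSecret.
ENCRYPTION_KEY=$(openssl rand -base64 36)
JWT_SECRET=$(openssl rand -base64 36)

# Print them so they can be pasted into values.yaml.
echo "encryptionKey: $ENCRYPTION_KEY"
echo "jwtSecret: $JWT_SECRET"
```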
3. Install Retool
After updating the configuration, install Retool.
$ helm install my-retool retool/retool -f values.yaml
After installing Retool, verify you have pods for the main service and jobs-runner. If you use the PostgreSQL subchart, there is also a postgresql pod.
$ kubectl get pods
my-retool-7898474bbd-pr8n6 1/1 Running 1 (8h ago) 8h
my-retool-jobs-runner-74796ddd99-dd856 1/1 Running 0 8h
my-retool-postgresql-0 1/1 Running 0 8h
Once the main service is running, verify the installation by port forwarding to localhost. Replace <RETOOL-POD-NAME> with the name of your main Retool pod.
kubectl port-forward <RETOOL-POD-NAME> 3000:3000
You can then access Retool at http://localhost:3000/.
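While the port-forward is active, you can confirm the instance responds from another terminal. A sketch, assuming curl is installed locally.

```shell
# Print the HTTP status code returned by the Retool instance.
# Expect 200 or a 3xx redirect once the main service is healthy.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/
```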
Additional steps
The following steps are optional. On production instances, Retool strongly recommends you externalize your database, configure SSL, and keep up-to-date with the latest version of Retool. Setting environment variables is often necessary to configure SSO, source control, and other self-hosted features.
Set --version in Helm upgrades
Whenever you run helm upgrade, use the --version flag to specify the chart's version number. Otherwise, Helm upgrades to the latest chart version, which may cause compatibility issues. You can check the release version of your deployment with the command helm list.
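For example, to look up the installed chart version and pin it during an upgrade (a sketch, assuming a release named my-retool):

```shell
# Show the installed chart version for the my-retool release.
helm list --filter '^my-retool$'

# Upgrade while pinning the chart version to avoid an unintended chart bump.
helm upgrade my-retool retool/retool -f values.yaml --version 5.0.1
```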
Externalize database
By default, the Retool Helm chart uses the PostgreSQL subchart to create a containerized instance of PostgreSQL. This is not suitable for production use cases, and the Retool storage database should be hosted on an external, managed database. Managed databases are more maintainable, scalable, and reliable than containerized PostgreSQL instances. These instructions explain how to set up Retool with an external database.
- If you have already populated the containerized Postgres instance, export its data.
kubectl exec -it <POSTGRES-POD-NAME> -- bash -c 'pg_dump hammerhead_production --no-acl --no-owner --clean -U postgres' > retool_db_dump.sql
- In values.yaml, set postgresql.enabled to false to disable the included PostgreSQL chart. This prevents the containerized PostgreSQL from starting.
- In values.yaml, set the config.postgresql properties with settings for your external database. This specifies the externalized database that backs your Retool instance.
- Upgrade your release. Replace 5.0.1 with the version number of your Helm chart.
helm upgrade -f values.yaml my-retool retool/retool --version 5.0.1
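If you exported data in the first step, you can import the dump into the external database before upgrading the release. A sketch; the host, user, and database name are placeholders for your managed database's connection settings.

```shell
# Import the dump into the external PostgreSQL database.
# Replace the placeholders with your database's connection details.
psql -h <EXTERNAL-DB-HOST> -U <DB-USER> -d <DB-NAME> -f retool_db_dump.sql
```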
Update Retool
- Back up your database. If you use a managed database service, your database provider may have a feature to take snapshots or otherwise back up your database. If you use the PostgreSQL subchart, run the following command to export data from the PostgreSQL pod to a .sql file.
kubectl exec -it <POSTGRES-POD-NAME> -- bash -c 'pg_dump hammerhead_production --no-acl --no-owner --clean -U postgres' > retool_db_dump.sql
- Identify the appropriate release version on Docker Hub. See Retool's self-hosted release notes to learn about version-specific features.
- In values.yaml, update image.tag to the desired Retool version.
- Upgrade your release. Replace 5.0.1 with the version number of your Helm chart.
helm upgrade -f values.yaml my-retool retool/retool --version 5.0.1
- Verify that your pods are running.
$ kubectl get pods
my-retool-7898474bbd-pr8n6 1/1 Running 1 (8h ago) 8h
my-retool-jobs-runner-74796ddd99-dd856 1/1 Running 0 8h
my-retool-postgresql-0 1/1 Running 0 8h
Add environment variables
The values.yaml file has three locations to add environment variables.
Object | Type |
---|---|
env | Plain text key-value pairs |
environmentSecrets | Plain text or Kubernetes secrets |
environmentVariables | Plain text or Kubernetes secrets |
If you commit values.yaml to a repository, do not store sensitive information, such as credentials or tokens, in env. Instead, use environmentSecrets or environmentVariables, as both can populate environment variables from Kubernetes secrets.
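As with the license key, you can create the underlying Kubernetes secret with kubectl and then reference it from values.yaml. A sketch; the secret name and key below are illustrative examples, and the exact structure expected by environmentSecrets is defined in the chart's values.yaml.

```shell
# Create a secret holding a sensitive environment variable value.
# The names github-token-secret and token are examples, not chart requirements.
kubectl create secret generic github-token-secret \
  --from-literal=token='<YOUR-TOKEN>'
```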
Mount volumes
There are several use cases which require the use of volumes. For example, when configuring a gRPC resource, you need to mount a volume containing the protos files to the Retool deployment. Follow these instructions to create a persistent volume and copy files from your local machine to the volume.
1. Enable PersistentVolumeClaim
The Helm chart defines a PersistentVolumeClaim (PVC) which is automatically mounted to the Retool pods, enabling Retool to access files within this volume. The PVC is disabled by default. To enable the persistentVolumeClaim, modify your values.yaml file:
persistentVolumeClaim:
  enabled: true
  existingClaim: ""
If you have an existing PVC in your Kubernetes cluster to use, specify its name in existingClaim. Otherwise, leave existingClaim blank.
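After upgrading the release with the PVC enabled, you can confirm the claim was created and bound. A sketch; the claim's name depends on your release name.

```shell
# List PersistentVolumeClaims; the Retool PVC should report STATUS Bound.
kubectl get pvc
```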
2. Set security context
In a later step, you use kubectl cp to copy files from your local machine to the Kubernetes cluster, which requires the pod to run with root privileges. Modify your deployment so the pods run as root by changing the securityContext in your values.yaml file:
securityContext:
  enabled: true
  runAsUser: 0
Upgrade your release. Replace 5.0.1 with the version number of your Helm chart.
helm upgrade my-retool retool/retool -f values.yaml --version 5.0.1
Verify that your pods are in a ready state before continuing.
$ kubectl get pods
my-retool-7898474bbd-pr8n6 1/1 Running 1 (8h ago) 8h
my-retool-jobs-runner-74796ddd99-dd856 1/1 Running 0 8h
my-retool-postgresql-0 1/1 Running 0 8h
3. Copy files
Now, copy the protos files from your local machine to the PVC. From kubectl get pods, note the three pods in the deployment: the main, jobs-runner, and postgresql containers. Identify the name of the main container. Ensure your local machine has a folder named protos, then run the following command, replacing my-retool-7c4c89798-fqbh7 with the name of your Retool container.
kubectl cp protos/ my-retool-7c4c89798-fqbh7:/retool_backend/pv-data/protos
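To confirm the copy succeeded, you can list the directory inside the pod. A sketch; replace the pod name with your own.

```shell
# List the copied protos files inside the main Retool container.
kubectl exec my-retool-7c4c89798-fqbh7 -- ls /retool_backend/pv-data/protos
```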
4. Set env
If you're configuring gRPC, you need to specify the location of the protos directory. In values.yaml, set the PROTO_DIRECTORY_PATH environment variable.
env:
  PROTO_DIRECTORY_PATH: "/retool_backend/pv-data/protos"
5. Reset security context
Now, revert the security context of your deployment.
securityContext:
  enabled: false
  runAsUser: 1000
Upgrade your deployment. Remember to replace 5.0.1 with the version number of your Helm chart.
helm upgrade my-retool retool/retool -f values.yaml --version 5.0.1
Configure SSL
- Run the following command to install cert-manager.
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.11.0 \
--set installCRDs=true --set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io
- Create a file called production-issuer.yml. Copy the following configuration into the new file, replacing the example email with your own.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
- Apply the file to create the ClusterIssuer.
kubectl apply -f production-issuer.yml
- Verify that the ClusterIssuer is ready.
$ kubectl get clusterissuer
NAME READY AGE
letsencrypt-prod True 10m
- Add the annotations section to your ingress and modify the host and hosts placeholders accordingly.
ingress:
  ...
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: example.example.com
      paths:
        - path: /
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - example.example.com
- Apply the changes. After the pods restart, you can access the page in your browser using TLS. Remember to replace 5.0.1 with the version number of your Helm chart.
helm upgrade my-retool retool/retool -f values.yaml --version 5.0.1
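Once the upgrade completes, you can check that cert-manager issued the certificate. A sketch; the Certificate resource is named after the secretName in your ingress TLS block.

```shell
# The certificate should report READY True once issuance completes.
kubectl get certificate

# Inspect issuance events if the certificate is not yet ready.
kubectl describe certificate letsencrypt-prod
```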