Amazon EKS Helm Installation
Retool does not recommend deploying to physical machines for team development and production deployments. Instead, consider a managed Kubernetes service such as Amazon EKS, Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE).
The following example focuses on using Amazon EKS for educational purposes.
Requirements
- An AWS account with an IAM user that has administrator access
- Retool Enterprise License Key
- Install Terraform
- Install Helm
- Install Docker Desktop (includes kubectl)
- Install the AWS CLI
- A registered domain available within Route53
These steps are outlined in the Retool documentation. This lab covers:
- Spinning up an Amazon EKS Cluster using Terraform
- Deploying Retool without an Ingress via Helm
- Deploying Let’s Encrypt infrastructure for SSL/TLS certificates
- Deploying NGINX Ingress Controller with an Ingress manifest that uses Let’s Encrypt
- Updating Amazon Route53 to have an A Record with a domain
- Updating Amazon Route53 A Record to direct traffic to ELB
Spin up Amazon EKS cluster
Amazon EKS clusters can be provisioned with Terraform; a generic example is available in the HashiCorp tutorial:
https://developer.hashicorp.com/terraform/tutorials/kubernetes/eks
Edit the main.tf file to provide sufficient resources for the lab. Retool requires 2 vCPUs and 8 GB of RAM, so this example uses t3.large instances:
eks_managed_node_groups = {
  one = {
    name           = "node-group-1"
    instance_types = ["t3.large"]
    min_size       = 1
    max_size       = 2
    desired_size   = 2
  }
  two = {
    name           = "node-group-2"
    instance_types = ["t3.large"]
    min_size       = 1
    max_size       = 2
    desired_size   = 2
  }
}
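For reference, this block sits inside the eks module of the tutorial's main.tf. A minimal sketch of the surrounding structure, assuming the terraform-aws-modules/eks/aws module used by the HashiCorp tutorial (all other module arguments omitted):

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...cluster name, Kubernetes version, and VPC/subnet settings from the tutorial...

  eks_managed_node_groups = {
    # node groups shown above
  }
}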
This requires that you have created an access key and secret access key for your IAM user and configured them locally using the following command. Enter the values that AWS Identity and Access Management (IAM) provides for your IAM user and specify the region us-east-2.
aws configure --profile retool
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
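For reference, aws configure stores these values in your local AWS configuration files. With the retool profile, it writes entries similar to the following (the key values are placeholders):

# ~/.aws/credentials
[retool]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

# ~/.aws/config
[profile retool]
region = us-east-2
output = json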
Once this is defined, you can use the retool profile to refer to these credentials. The following series of commands runs the Terraform template to create an Amazon EKS cluster with the managed EC2 node groups defined above. The first command initializes Terraform, the second performs a planning phase and writes the result to tfplan, and the final command applies that plan using the retool profile, deploying the Amazon EKS cluster in us-east-2.
AWS_PROFILE=retool AWS_REGION=us-east-2 terraform init
AWS_PROFILE=retool AWS_REGION=us-east-2 terraform plan -out tfplan
AWS_PROFILE=retool AWS_REGION=us-east-2 terraform apply tfplan
Terraform takes approximately 20-30 minutes to complete the setup of the Amazon EKS cluster, including networking, security, compute, storage, and other tasks. When complete, a summary of the outputs is provided:
Apply complete! Resources: 63 added, 0 changed, 0 destroyed.
Outputs:
cluster_endpoint = "https://35434C6C2B1BE7340E8413E3B5AD54DD.gr7.us-east-2.eks.amazonaws.com"
cluster_name = "education-eks-t56pyDqT"
cluster_security_group_id = "sg-0406fd6724d40a4a4"
region = "us-east-2"
To access this cluster, run the following AWS CLI command to update the .kube/config file, in the same shell that was used to run the Terraform commands above:
AWS_PROFILE=retool aws eks --region $(terraform output -raw region) update-kubeconfig \
  --name $(terraform output -raw cluster_name)
Added new context arn:aws:eks:us-east-2:<awsaccountid>:cluster/education-eks-t56pyDqT to /Users/criley/.kube/config
Once this is complete, access the Amazon EKS cluster using the following command:
kubectl get nodes
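If kubectl cannot reach the cluster, first confirm that the new context is active and that the nodes are visible. A quick check using standard kubectl commands:

kubectl config current-context
kubectl get nodes -o wide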
Download Helm values.yaml and update
- Download the Helm values.yaml using the following command:
curl -L -o values.yaml https://raw.githubusercontent.com/tryretool/retool-helm/main/values.yaml
Add the retool and jetstack repos to Helm:
helm repo add retool https://charts.retool.com
helm repo add jetstack https://charts.jetstack.io
Confirm that you can access the retool and jetstack charts:
helm search repo retool/retool
helm search repo jetstack/cert-manager
Use the following commands to generate random, unique base64 values for jwtSecret and encryptionKey.
openssl rand -base64 16 # take result and put into jwtSecret line
openssl rand -base64 16 # take result and put into encryptionKey line
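As an optional convenience, you can generate both values and write them into values.yaml in one step. A minimal sketch, assuming yq v4 is installed and that the keys live under config: as shown in the next snippet:

# Generate the secrets and update values.yaml in place (requires yq v4)
JWT_SECRET=$(openssl rand -base64 16)
ENCRYPTION_KEY=$(openssl rand -base64 16)
yq -i ".config.jwtSecret = \"$JWT_SECRET\"" values.yaml
yq -i ".config.encryptionKey = \"$ENCRYPTION_KEY\"" values.yaml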
Update the values.yaml to replace the licenseKey, encryptionKey, and jwtSecret as identified below.
config:
  licenseKey: "ENTER RETOOL LICENSE KEY HERE"
  # licenseKeySecretName is the name of the secret where the Retool license key is stored (can be used instead of licenseKey)
  # licenseKeySecretName:
  # licenseKeySecretKey is the key in the k8s secret, default: license-key
  # licenseKeySecretKey:
  useInsecureCookies: true
  # Timeout for queries, in ms.
  # dbConnectorTimeout: 120000
  auth:
    google:
      clientId:
      clientSecret:
      # clientSecretSecretName is the name of the secret where the google client secret is stored (can be used instead of clientSecret)
      # clientSecretSecretName:
      # clientSecretSecretKey is the key in the k8s secret, default: google-client-secret
      # clientSecretSecretKey:
      domain:
  encryptionKey: "ENTER encryptionKey from openssl command"
  # encryptionKeySecretName is the name of the secret where the encryption key is stored (can be used instead of encryptionKey)
  # encryptionKeySecretName:
  # encryptionKeySecretKey is the key in the k8s secret, default: encryption-key
  # encryptionKeySecretKey:
  jwtSecret: "ENTER jwtSecret from openssl command"
  # jwtSecretSecretName is the name of the secret where the jwt secret is stored (can be used instead of jwtSecret)
  # jwtSecretSecretName:
  # jwtSecretSecretKey is the key in the k8s secret, default: jwt-secret
  # jwtSecretSecretKey:
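Alternatively, the commented licenseKeySecretName, encryptionKeySecretName, and jwtSecretSecretName fields let you reference a Kubernetes Secret instead of placing the values directly in values.yaml. A minimal sketch, assuming a secret named retool-secrets (an example name) and the chart's default key names noted in the comments above:

# retool-secrets is an example secret name; match it to the *SecretName values below
kubectl create secret generic retool-secrets \
  --from-literal=license-key='<YOUR LICENSE KEY>' \
  --from-literal=encryption-key="$(openssl rand -base64 16)" \
  --from-literal=jwt-secret="$(openssl rand -base64 16)"

config:
  licenseKeySecretName: retool-secrets
  encryptionKeySecretName: retool-secrets
  jwtSecretSecretName: retool-secrets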
Update limits
The Retool values.yaml has limits and requests for the Retool app and Workflows pods. This lab focuses on the Retool app pods. Update values.yaml as shown below to provide sufficient headroom.
resources:
  # If you have more than 1 replica, the minimum recommended resources configuration is as follows:
  #   - cpu: 2048m
  #   - memory: 4096Mi
  # If you only have 1 replica, please double the above numbers.
  limits:
    cpu: 2048m
    memory: 8192Mi
  requests:
    cpu: 256m
    memory: 1024Mi
Specify Retool version to install
Update image.tag in values.yaml to specify the Docker Hub tag to use (for example, 3.75.11-stable). Learn more about self-hosted versioning in the Retool docs.
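For reference, the corresponding snippet in values.yaml would look something like this (use whichever version you selected):

image:
  tag: "3.75.11-stable"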
Deploy Retool via Helm
Next, deploy Retool via Helm using the following command:
helm install my-retool retool/retool -f values.yaml
Check the status of the deployment:
kubectl get pods
Verify that the api pod logs show successful health checks by examining the log with the following command:
kubectl logs -f <api pod id>
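If you prefer not to copy the pod name by hand, a small sketch like the following tails the first pod from the release (assuming the release name my-retool used above; adjust the grep pattern if your pod names differ):

API_POD=$(kubectl get pods -o name | grep my-retool | head -n 1)
kubectl logs -f "$API_POD"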
SSL Configuration
To support SSL, this section covers installing an NGINX Ingress Controller, configuring a Kubernetes Ingress (which tells the NGINX Ingress Controller what to do), and installing a cluster issuer/certificate manager that uses Let’s Encrypt.
Install Cert Manager / Cluster Issuer
Run the following helm command to install the cert-manager into Kubernetes:
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.11.0 \
  --set installCRDs=true \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer \
  --set ingressShim.defaultIssuerGroup=cert-manager.io
- Create the following manifest called production-issuer.yaml.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: example@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
- Run the following command to apply it to the Kubernetes cluster:
kubectl apply -f production-issuer.yaml
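To confirm the issuer registered with Let’s Encrypt, check that the ClusterIssuer reports Ready (these are standard cert-manager resources, so kubectl can query them directly):

kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod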
NGINX Ingress Controller
- Deploy an NGINX Ingress Controller to the Kubernetes cluster. This requests an Elastic Load Balancer from AWS and routes the resulting traffic through the NGINX Ingress Controller.
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
- Confirm that it is running by verifying that the ingress-nginx namespace is created and that a pod is running:
kubectl get pods -n ingress-nginx
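You can also confirm that AWS provisioned the Elastic Load Balancer by inspecting the controller's Service of type LoadBalancer; the EXTERNAL-IP column shows the ELB's DNS name once it is ready:

kubectl get svc -n ingress-nginx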
Update Helm App (values.yaml) with Kubernetes Ingress
To route HTTPS traffic to the NGINX Ingress Controller, a Kubernetes Ingress needs to be defined to configure the traffic routing and SSL parameters. The following shows the configuration update in values.yaml.
...
  useInsecureCookies: false
...
ingress:
  enabled: true
  # For k8s 1.18+
  ingressClassName: nginx
  labels: {}
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: <domain>
      paths:
        - path: /
          # servicePort: service-port
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - <domain>
  pathType: ImplementationSpecific
Apply these changes by upgrading the Retool Helm release, which creates the Ingress for the NGINX Ingress Controller to pick up:
helm upgrade my-retool retool/retool -f values.yaml
Once this is applied, an ingress should exist and can be viewed using the following command:
kubectl get ingress
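The ADDRESS column of the Ingress should show the ELB's DNS name. With the cert-manager annotations in place, a Certificate resource is also created (named after the tls secretName in this configuration), and you can confirm it reports Ready before testing the domain:

kubectl get certificate
kubectl describe certificate letsencrypt-prod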
Next, confirm that an Elastic Load Balancer has been created in Amazon EC2 and that its target group health checks are passing.
The final step is to update Amazon Route 53 to direct traffic to the ELB. Go to Amazon Route 53, edit or create an A record (alias) for the domain of interest, and point it to the ELB's DNS name. After a few minutes, try to access the Retool platform using the following URL:
https://your_domain
Debugging: Examine the logs for cert-manager (the ClusterIssuer), the Retool api pod, and the NGINX Ingress Controller to see whether certificates are being issued and traffic is flowing to the Retool platform (api pod).
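A few starting points, assuming the release and namespace names used earlier in this lab (deployment names may vary slightly by chart version):

kubectl describe clusterissuer letsencrypt-prod
kubectl logs -n cert-manager deploy/cert-manager
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller
kubectl logs -f <api pod id>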
Cleanup of Retool Installation
The following steps can be used to clean up the Retool platform Kubernetes artifacts.
Uninstall the Helm charts
helm uninstall my-retool
helm uninstall ingress-nginx -n ingress-nginx
Delete the Secret
kubectl delete secret retoolsecrets
Delete the Persistent Volume Claim
kubectl get pvc
kubectl delete pvc <pvc_name>
Destroy the Amazon EKS environment
AWS_PROFILE=retool AWS_REGION=us-east-2 terraform destroy
...
[Enter y] y