Deploy Retool on Kubernetes with manifests
Learn how to deploy Retool on Kubernetes with manually configured manifests.
You can deploy Self-hosted Retool on Kubernetes using manually configured manifests. This approach is useful if you do not manage packages with Helm.
Requirements
To deploy Self-hosted Retool with Workflows, you need:
- A Retool license key, which you can obtain from the Retool Self-hosted Portal or your Retool account manager.
- A domain you own, to which you can add a DNS record.
- A Kubernetes cluster. To create a cluster, see documentation on Google Cloud Platform, AWS, and Azure.
- A working installation of kubectl. To install kubectl, see documentation on Google Cloud Platform, AWS, and Azure.
Temporal
Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to coordinate the execution of each workflow across a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.
You can use a Retool-managed cluster on Temporal Cloud, which is recommended for most use cases. You can also use an existing self-managed cluster that is hosted on Temporal Cloud or in your own infrastructure. Alternatively, you can spin up a new self-hosted cluster alongside your Self-hosted Retool instance.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Recommended
You should use a Retool-managed cluster if:
- You are on version 3.6.14 or later.
- Your organization is on the Enterprise plan.
- You don't have an existing cluster which you prefer to use.
- Your cluster only needs to be used for a Self-hosted Retool deployment.
- You don't want to manage the cluster directly.
- You have a single or multi-instance Retool deployment, where each instance requires its own namespace.
Retool admins can enable Retool-managed Temporal. To get started, navigate to the Retool Workflows page and click Enroll now. Once you update your configuration, return to the page and click Complete setup.
It can take a few minutes to initialize a namespace in Retool-managed Temporal.
Retool-managed Temporal clusters are hosted on Temporal Cloud. Your Self-hosted Retool deployment communicates with the cluster when building, deploying, and executing Retool Workflows. All orchestration data sent to Temporal is encrypted using the private encryption key set for your deployment.
If you want to create a new, self-hosted cluster on Temporal Cloud, sign up first. Once your account is provisioned, you can then deploy Self-hosted Retool.
Temporal Cloud offers more than 10 AWS regions to choose from, 99.99% availability, and a 99.99% guarantee against service errors.
You should use an existing self-managed cluster, hosted on Temporal Cloud or in your own infrastructure, if:
- You cannot use a Retool-managed cluster.
- You are on version 3.6.14 or later.
- Your organization is on the Free, Team, or Business plan.
- You have an existing cluster and would prefer to use another namespace within it.
- You need a cluster for uses other than a Self-hosted Retool deployment.
- You want to manage the cluster directly.
- You have a multi-instance Retool deployment, where each instance would have its own namespace in a shared Self-hosted Temporal cluster.
Self-managed cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless v2 configured with a capacity range of 0.5 to 8 ACUs (Aurora capacity units). One ACU provides around 10 QPS, so the top of that range supports roughly 80 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to sustain a high rate of concurrent block executions per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
You should spin up a new cluster alongside your Self-hosted Retool instance if:
- You cannot use a Retool-managed cluster.
- You are on version 3.6.14 or later.
- Your organization is on the Free, Team, or Business plan.
- You don't have an existing cluster to use.
- You don't need a cluster for uses other than a Self-hosted Retool deployment.
- You want to test a Self-hosted Retool deployment with a local cluster first.
- You have a multi-instance Retool deployment, but each instance is in its own VPC and requires its own Self-hosted Temporal cluster.
Local cluster considerations
Retool recommends using a separate datastore for the Workflows Queue in production. Consider using AWS Aurora Serverless v2 configured with a capacity range of 0.5 to 8 ACUs (Aurora capacity units). One ACU provides around 10 QPS, so the top of that range supports roughly 80 QPS. The Workflows Queue is write-heavy (around 100:1 write-to-read operations), and Aurora Serverless can scale to accommodate spikes in traffic without any extra configuration.
Environments
For test environments, Retool recommends using the same database for the Retool Database and Workflows Queue. Without any extra configuration, Retool Workflows can process approximately 5-10 QPS (roughly 5-10 concurrent blocks executed per second).
Workflows at scale
You can scale Self-hosted Retool Workflow-related services to sustain a high rate of concurrent block executions per second. If your deployment needs to process more than 10 workflows per second, you can use:
- A Retool-managed cluster.
- A self-managed cluster on Temporal Cloud.
- Apache Cassandra as the Temporal datastore.
If you anticipate running workflows at a higher scale, please reach out to us to work through a deployment strategy that is best for your use case.
Cluster size
The cluster must have at least one node with 8 vCPUs and 16 GB of memory. Use the following command to retrieve the capacity of your nodes:
$ kubectl describe nodes
In the Capacity section, verify that the cpu and memory values meet the requirements above.
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 8
ephemeral-storage: 83873772Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7931556Ki
pods: 29
Cluster storage class
If you want to mount volumes, ensure the volume supplied by your cloud provider can be mounted to multiple nodes. To identify your cluster's storage class, run the following command:
$ kubectl get storageclass
Reference your cloud provider's documentation to verify that this storage class supports the ReadWriteMany access mode.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Self-hosted Retool with Workflows deployments on Kubernetes are configured using a set of manifests. To retrieve a copy of the manifests, download the retool-onpremise repository to your local machine. Open the kubernetes directory in an IDE to follow along with the steps below.
curl -L -O https://github.com/tryretool/retool-onpremise/archive/master.zip && unzip master.zip \
&& cd retool-onpremise-master/kubernetes
Self-hosted Retool with Workflows deployments on Kubernetes are configured using a set of manifests. To retrieve a copy of the manifests, download the retool-onpremise repository to your local machine. Open the kubernetes directory in an IDE to follow along with the steps below.
curl -L -O https://github.com/tryretool/retool-onpremise/archive/master.zip && unzip master.zip \
&& cd retool-onpremise-master/kubernetes
Self-hosted Retool with Workflows deployments on Kubernetes are configured using a set of manifests. To retrieve a copy of the manifests, download the retool-onpremise repository to your local machine. Open the kubernetes-with-temporal directory in an IDE to follow along with the steps below.
curl -L -O https://github.com/tryretool/retool-onpremise/archive/master.zip && unzip master.zip \
&& cd retool-onpremise-master/kubernetes-with-temporal
1. Configure version
Self-hosted Retool 3.6.14 or later is required to use a Retool-managed Temporal cluster.
Set the image value in the following files to the Docker tag for the version of Retool to install, such as tryretool/backend:3.114.3-stable.
retool-container.yaml
retool-jobs-runner.yaml
retool-workflows-worker-container.yaml
retool-workflows-backend-container.yaml
In retool-code-executor-container.yaml, change the image tag to indicate the version of Retool's code executor service to install. This must match the version of Retool you're deploying.
image: tryretool/code-executor-service:3.114.3-stable
2. Update configuration
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Copy retool-secrets.template.yaml to a new file named retool-secrets.yaml. This file sets the configuration options for your deployment and stores them as Kubernetes secrets.
cp retool-secrets.template.yaml retool-secrets.yaml
Set the following secrets for Retool. These values must be Base64-encoded.
To generate Base64-encoded random strings (36 bytes of randomness) for data.encryption_key and data.jwt_secret, run openssl rand -base64 36.
Setting | Description |
---|---|
data.jwt_secret | Secret used to sign authentication requests from Retool's server. Generate a Base64-encoded string with openssl. |
data.encryption_key | Key used to encrypt the database. Generate a Base64-encoded string with openssl. |
data.license_key | License key. Encode your license key with echo -n \<licensekey\> \| base64. |
data.postgres_password | Password for Retool's internal database. Generate a Base64-encoded string with openssl. |
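For example, the following commands produce suitable values; the license key and password shown are placeholders:
# Random Base64 strings for jwt_secret and encryption_key (run once per secret)
openssl rand -base64 36
# Base64-encode the license key and the internal database password
echo -n "<licensekey>" | base64
echo -n "<password>" | base64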
Copy retool-secrets.template.yaml to a new file named retool-secrets.yaml. This file sets the configuration options for your deployment and stores them as Kubernetes secrets.
cp retool-secrets.template.yaml retool-secrets.yaml
Set the following secrets for Retool. These values must be Base64-encoded.
To generate Base64-encoded random strings (36 bytes of randomness) for data.encryption_key and data.jwt_secret, run openssl rand -base64 36.
Setting | Description |
---|---|
data.jwt_secret | Secret used to sign authentication requests from Retool's server. Generate a Base64-encoded string with openssl. |
data.encryption_key | Key used to encrypt the database. Generate a Base64-encoded string with openssl. |
data.license_key | License key. Encode your license key with echo -n \<licensekey\> \| base64. |
data.postgres_password | Password for Retool's internal database. Generate a Base64-encoded string with openssl. |
Copy retool-secrets.template.yaml to a new file named retool-secrets.yaml. This file sets the configuration options for your deployment and stores them as Kubernetes secrets.
cp retool-secrets.template.yaml retool-secrets.yaml
Set the following secrets for Retool. These values must be Base64-encoded.
To generate Base64-encoded random strings (36 bytes of randomness) for data.encryption_key and data.jwt_secret, run openssl rand -base64 36.
Setting | Description |
---|---|
data.jwt_secret | Secret used to sign authentication requests from Retool's server. Generate a Base64-encoded string with openssl. |
data.encryption_key | Key used to encrypt the database. Generate a Base64-encoded string with openssl. |
data.license_key | License key. Encode your license key with echo -n \<licensekey\> \| base64. |
data.postgres_password | Password for Retool's internal database. Generate a Base64-encoded string with openssl. |
Copy retool-temporal-secrets.template.yaml to a new file named retool-temporal-secrets.yaml. This file sets the configuration options for Temporal in your deployment and stores them as Kubernetes secrets. These values must be Base64-encoded.
cp retool-temporal-secrets.template.yaml retool-temporal-secrets.yaml
Setting | Description |
---|---|
data.postgres_password | The same database password used above. |
- Retool-managed cluster
- Self-managed cluster
- Local cluster
Allow your deployment to connect to Temporal
Open up egress to the public internet on ports 443 and 7233 to allow outbound-only connections to Temporal Cloud from your deployment. This allows services to enqueue work to, and poll work from, Temporal.
Temporal Cloud does not have a static IP range to allowlist. If more specificity is required, allow egress to the following domains on these ports:
Port | Domains |
---|---|
443 | *.retool.com, *.tryretool.com, *.temporal.io |
7233 | *.tmprl.cloud |
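If your cluster restricts egress by default, a NetworkPolicy along the following lines opens the required ports. This is a minimal sketch: the policy name and the app: retool label selector are assumptions, and NetworkPolicies match ports and IPs rather than domains, so you may also need to allow DNS. Adjust it to your deployment's labels.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: retool-temporal-egress
spec:
  podSelector:
    matchLabels:
      app: retool
  policyTypes:
    - Egress
  egress:
    # Outbound-only connections to Temporal Cloud and Retool
    - ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 7233
    # DNS resolution
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53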
Follow the steps for configuring either a Temporal Cloud cluster or a self-hosted cluster in your VPC.
Temporal Cloud
Allow your deployment to connect to Temporal
Open up egress to the public internet on ports 443 and 7233 to allow outbound-only connections to Temporal Cloud from your deployment. This allows services to enqueue work to, and poll work from, Temporal.
Temporal Cloud does not have a static IP range to allowlist. If more specificity is required, allow egress to the following domains on these ports:
Port | Domains |
---|---|
443 | *.retool.com, *.tryretool.com, *.temporal.io |
7233 | *.tmprl.cloud |
Configure environment variables for Temporal cluster
Set the following environment variables in the relevant configuration files: retool-container.yaml, retool-workflows-worker-container.yaml, and retool-workflows-backend-container.yaml.
Temporal Cloud requires security certificates for secure access.
Variable | Description |
---|---|
WORKFLOW_TEMPORAL_CLUSTER_NAMESPACE | The namespace in your Temporal cluster for each Retool deployment you have (e.g., retool-prod). Default is workflows. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST | The frontend host of the cluster. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT | The port with which to connect. Default is 7233. |
WORKFLOW_TEMPORAL_TLS_ENABLED | Whether to enable mTLS. Set to true. |
WORKFLOW_TEMPORAL_TLS_CRT | The Base64-encoded mTLS certificate. |
WORKFLOW_TEMPORAL_TLS_KEY | The Base64-encoded mTLS key. |
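To produce the WORKFLOW_TEMPORAL_TLS_CRT and WORKFLOW_TEMPORAL_TLS_KEY values, Base64-encode your Temporal Cloud client certificate and key files. The file names below are placeholders; -w 0 disables line wrapping in GNU base64 (omit it on macOS):
# Encode the client certificate and key as single-line Base64 strings
base64 -w 0 client.pem
base64 -w 0 client.key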
Self-hosted
If you use a PostgreSQL database as a persistence store, the PostgreSQL user must have permissions to CREATE DATABASE. If this is not possible, you can manually create the required databases in your PostgreSQL cluster: temporal and temporal_visibility.
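For example, you can create them manually with psql; the host and user flags below are placeholders for your PostgreSQL instance:
psql -h <POSTGRES_HOST> -U <POSTGRES_USER> -c 'CREATE DATABASE temporal;'
psql -h <POSTGRES_HOST> -U <POSTGRES_USER> -c 'CREATE DATABASE temporal_visibility;'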
Configure environment variables for Temporal cluster
Set the following environment variables for the MAIN_BACKEND and WORKFLOW_TEMPORAL_WORKER services, if not already configured.
Variable | Description |
---|---|
WORKFLOW_TEMPORAL_CLUSTER_NAMESPACE | The namespace in your Temporal cluster for each Retool deployment you have (e.g., retool-prod). Default is workflows. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST | The frontend host of the Temporal cluster. |
WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT | The port with which to connect to the Temporal cluster. Defaults to 7233. |
WORKFLOW_TEMPORAL_TLS_ENABLED | (Optional) Whether to enable mTLS. |
WORKFLOW_TEMPORAL_TLS_CRT | (Optional) The Base64-encoded mTLS certificate. |
WORKFLOW_TEMPORAL_TLS_KEY | (Optional) The Base64-encoded mTLS key. |
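As an illustration, these variables sit in the env section of each relevant container spec; the namespace and host values below are placeholders for your own cluster:
env:
  - name: WORKFLOW_TEMPORAL_CLUSTER_NAMESPACE
    value: "workflows"
  - name: WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_HOST
    value: "temporal-frontend.mycompany.internal"
  - name: WORKFLOW_TEMPORAL_CLUSTER_FRONTEND_PORT
    value: "7233"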
Use the default values in the configuration.
Additional configuration
The following configuration steps are optional but strongly recommended for using Retool in a production environment.
Externalize database
By default, the Retool Kubernetes installation uses a PostgreSQL pod to create a containerized instance of PostgreSQL. This is not suitable for production use cases, and the Retool storage database should be hosted on an external, managed database. Managed databases are more maintainable, scalable, and reliable than containerized PostgreSQL instances. These instructions explain how to set up Retool with an external database.
1. Export data
If you have already populated the PostgreSQL pod, export its data.
kubectl exec -it <POSTGRES-POD-NAME> -- bash -c 'pg_dump hammerhead_production --no-acl --no-owner --clean -U postgres' > retool_db_dump.sql
2. Encode the database password
echo -n <password> | base64
3. Set the PostgreSQL credentials
- Retool-managed cluster
- Self-managed cluster
- Local cluster
In retool-secrets.yaml, set the value of postgres_password to the Base64-encoded password. Set the remaining environment variables for the managed PostgreSQL instance as follows:
Setting | Description |
---|---|
POSTGRES_DB | The database name created for Retool state. This applies to all retool-* containers (except retool-code-executor-container). |
POSTGRES_HOST | The database host for the PostgreSQL instance. This applies to all retool-* containers (except retool-code-executor-container). |
POSTGRES_PORT | The database port. Defaults to 5432. |
POSTGRES_USER | The user for the PostgreSQL instance. This applies to all retool-* containers (except retool-code-executor-container). |
In retool-secrets.yaml, set the value of postgres_password to the Base64-encoded password. Set the remaining environment variables for the managed PostgreSQL instance as follows:
Setting | Description |
---|---|
POSTGRES_DB | The database name created for Retool state. This applies to all retool-* containers (except retool-code-executor-container). |
POSTGRES_HOST | The database host for the PostgreSQL instance. This applies to all retool-* containers (except retool-code-executor-container). |
POSTGRES_PORT | The database port. Defaults to 5432. |
POSTGRES_USER | The user for the PostgreSQL instance. This applies to all retool-* containers (except retool-code-executor-container). |
In retool-secrets.yaml and in temporal/retool-temporal-secrets.yaml, set the value of postgres_password to the Base64-encoded password.
Set the remaining environment variables for the managed PostgreSQL instance as follows:
Setting | Description |
---|---|
POSTGRES_DB | The database name created for Retool state. This applies to all retool-* containers (except retool-code-executor-container). |
POSTGRES_HOST | The database host for the PostgreSQL instance. This applies to all retool-* (except retool-code-executor-container) and temporal-* containers. You must also configure this in temporal-configmaps.yaml. |
POSTGRES_PORT | The database port. Defaults to 5432. |
POSTGRES_USER | The user for the PostgreSQL instance. This applies to all retool-* (except retool-code-executor-container) and temporal-* containers. You must also configure this in temporal-configmaps.yaml. |
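4. Restore data
If you exported data from the PostgreSQL pod in step 1, import the dump into the external database before applying the new configuration. A sketch, assuming psql can reach the managed instance and the target database already exists; the host, user, and database name are placeholders:
psql -h <POSTGRES_HOST> -U <POSTGRES_USER> -d <POSTGRES_DB> -f retool_db_dump.sql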
5. Apply changes to the manifests
Use kubectl to apply manifest changes.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
kubectl apply -R -f kubernetes
kubectl apply -R -f kubernetes
kubectl apply -R -f kubernetes-with-temporal
Add environment variables
Environment variables provide ways to configure a Retool instance.
1. Update manifests
Configure environment variables in the following files:
retool-container.yaml
retool-jobs-runner.yaml
retool-workflows-worker-container.yaml
retool-workflows-backend-container.yaml
The following example configures the DBCONNECTOR_QUERY_TIMEOUT_MS variable, but this pattern applies to other environment variables as well.
env:
  - name: DBCONNECTOR_QUERY_TIMEOUT_MS
    value: "360000"
2. Apply changes to the manifests
Use kubectl to apply manifest changes.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
kubectl apply -R -f kubernetes
kubectl apply -R -f kubernetes
kubectl apply -R -f kubernetes-with-temporal
3. Verify pods
Run kubectl get pods to verify pods are running.
NAME READY STATUS RESTARTS AGE
api-76464f5576-vc5f4 1/1 Running 1 (8h ago) 8h
jobs-runner-5cfb79cbfd-b49rd 1/1 Running 0 8h
postgres-69c485649c-lkjgc 1/1 Running 0 8h
...
Mount volumes
Several use cases require mounting volumes. For example, when configuring a gRPC resource, you need to mount a volume containing the protos files to the Retool deployment. Follow these instructions to create a persistent volume and copy files from your local machine to the volume.
1. Set security context
In a later step, you use kubectl cp to copy files from your local machine to the Kubernetes cluster, which requires the pod to run with root privileges.
Modify your deployment so the pods run as root by adding the securityContext in the retool-container.yaml file:
spec:
  securityContext:
    runAsUser: 0
    fsGroup: 2000
2. Apply changes to the manifest
Use kubectl to apply manifest changes.
kubectl apply -f retool-container.yaml
3. Verify pods
Run kubectl get pods to verify that pods are ready.
NAME READY STATUS RESTARTS AGE
api-76464f5576-vc5f4 1/1 Running 1 (8h ago) 8h
jobs-runner-5cfb79cbfd-b49rd 1/1 Running 0 8h
postgres-69c485649c-lkjgc 1/1 Running 0 8h
...
4. Copy protos files
Next, copy the protos files from your local machine to the PVC. Ensure your local machine has a folder named protos and run the following command. Replace api-76464f5576-vc5f4 with the name of your main Retool pod, retrieved from kubectl get pods.
kubectl cp protos/ api-76464f5576-vc5f4:/retool_backend/pv-data/protos
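To confirm the copy succeeded, you can list the directory inside the pod; the pod name matches the example above:
kubectl exec api-76464f5576-vc5f4 -- ls /retool_backend/pv-data/protos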
5. Set directory path
If you're configuring gRPC, specify the location of the protos directory. In retool-container.yaml, set the PROTO_DIRECTORY_PATH environment variable.
env:
  - name: PROTO_DIRECTORY_PATH
    value: "/retool_backend/pv-data/protos"
6. Reset security context
Reset the security context of your deployment by removing the securityContext field, or by defining a non-root user.
Apply changes to the manifest.
kubectl apply -f retool-container.yaml
Configure SSL
When configuring SSL, you can use Let's Encrypt to provision a certificate, or provide your own. See Configure SSL and custom certificates for more detail on certificates.
- Provide your own CA
- Use Let's Encrypt
1. Generate self-signed certificate
Generate a self-signed certificate and private key using openssl. Replace KEY_FILE, CERT_FILE, and HOST with your key file name, certificate file name, and hostname, or export environment variables with those names.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}" -addext "subjectAltName = DNS:${HOST}"
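If you use environment variables, export them before running the commands in this step and the next. The values shown are placeholders consistent with the rest of this guide:
export HOST=retool.example.com
export KEY_FILE=tls.key
export CERT_FILE=tls.crt
export CERT_NAME=testsecret-tls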
2. Create a TLS secret
Create a TLS secret using kubectl. Replace CERT_NAME, KEY_FILE, and CERT_FILE with your secret name, key file name, and certificate file name.
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
3. Install the Ingress-Nginx Controller
If you haven't already, install the Ingress-Nginx Controller using the instructions for your environment. To confirm the installation was successful, run the following command. Its output should contain an entry for the ingress-nginx-controller pod.
kubectl get pods -n ingress-nginx
4. Create an ingress resource
Create an ingress resource with the following manifest. Replace retool.example.com with your domain, and testsecret-tls with your TLS secret. See the TLS section of the Kubernetes Ingress documentation for more information.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - retool.example.com
      secretName: testsecret-tls
  rules:
    - host: retool.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3000
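Save the manifest and apply it with kubectl; the file name here is an assumption:
kubectl apply -f tls-example-ingress.yaml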
1. Install cert-manager
Use kubectl to install cert-manager.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
2. Configure certificate issuer
Create a file called production-issuer.yml. Copy the following configuration into the new file, replacing the example email with your own.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: example@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
3. Create the ClusterIssuer
Use kubectl to create the certificate issuer as a ClusterIssuer resource.
kubectl apply -f production-issuer.yml
4. Verify the issuer
Run kubectl get clusterissuer to verify that the ClusterIssuer is ready.
NAME READY AGE
letsencrypt-prod True 10m
5. Update ingress configuration
Add the annotations below to your ingress manifest, and modify the host, hosts, and secretName placeholders accordingly. Note that cert-manager v1 and later uses the cert-manager.io annotation prefix.
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: example.example.com
      ...
  tls:
    - hosts:
        - example.example.com
      secretName: letsencrypt-prod
6. Apply changes to the manifests
Use kubectl to apply manifest changes. After the pods restart, you can access the page in your browser using TLS.
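For example, assuming your updated ingress manifest is saved as tls-example-ingress.yaml (file name assumed):
kubectl apply -f tls-example-ingress.yaml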
Update your Retool instance
Follow these instructions to update your Retool instance to a newer release version.
1. Back up your database
If you use a managed database service, your database provider may have a feature to take snapshots or otherwise back up your database. If you use the containerized PostgreSQL pod, run the following command to export data from the pod to a .sql file.
kubectl exec -it <POSTGRES-POD-NAME> -- bash -c 'pg_dump hammerhead_production --no-acl --no-owner --clean -U postgres' > retool_db_dump.sql
2. Select a new version
Set the image value in the following files to the Docker tag for the version of Retool to install, such as tryretool/backend:3.114.3-stable.
retool-container.yaml
retool-jobs-runner.yaml
retool-workflows-worker-container.yaml
retool-workflows-backend-container.yaml
image: tryretool/backend:3.114.3-stable
3. Apply changes to the manifests
Use kubectl to apply manifest changes.
- Retool-managed cluster
- Self-managed cluster
- Local cluster
kubectl apply -R -f kubernetes
kubectl apply -R -f kubernetes
kubectl apply -R -f kubernetes-with-temporal
4. Verify pods
Run kubectl get pods to verify that the update has completed.
NAME READY STATUS RESTARTS AGE
api-76464f5576-vc5f4 1/1 Running 1 (8h ago) 8h
jobs-runner-5cfb79cbfd-b49rd 1/1 Running 0 8h
postgres-69c485649c-lkjgc 1/1 Running 0 8h
...