Self-hosted Retool quickstart
Learn about the fundamental concepts of self-hosted Retool.
This guide serves as an introduction to self-hosted Retool. It covers the concepts and terminology you'll encounter when deploying and operating a self-hosted instance of Retool. After reading this page, you should have a good understanding of how self-hosted Retool works.
You can deploy a self-hosted Retool instance in your virtual private cloud (VPC) or behind your virtual private network (VPN). Businesses in industries such as healthcare and finance often run on-premise deployments to meet compliance requirements.
Each deployment instance uses a distributed set of containers with services for different functions. These work together to securely run the platform within your VPN or VPC.
Read the system architecture guide to learn more about the containers, services, and dependencies for self-hosted deployment instances.
Requirements
You can deploy self-hosted Retool on almost any Linux-based VM cloud provider using Docker Compose. The available deployment methods vary in complexity and scalability, so you should choose an option that lets you get started quickly, is provisioned appropriately, and sets you up for long-term success.
VM configuration
Choose between a single VM deployment or an orchestrated container deployment method based on your background and use case.
Use a Docker Compose-based method to deploy Retool if you and your team:
- Are currently evaluating Retool or deploying Retool for the first time.
- Have less experience with Docker or DevOps concepts.
- Need a lightweight, low cost, and low maintenance deployment method.
- Want to deploy Retool on a small-scale or single-server environment.
Retool images run on Linux machines using x86 processors; arm64 is not supported. You must ensure the VM meets the following recommended requirements:
- Ubuntu 22.04 or later.
- 16 GiB memory.
- 8 vCPUs.
- 60 GiB storage.
- curl and unzip software packages installed.
The 60 GiB of storage is required to support the PostgreSQL container included by default in Retool's deployment configuration files.
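For example, you can sanity-check a fresh Ubuntu VM against these requirements before installing Retool. A minimal sketch (commands assume Ubuntu; adjust for your distribution and provisioning tooling):

```bash
# Confirm the CPU architecture is x86_64; arm64 is not supported.
uname -m

# Check memory, vCPU count, and disk space against the recommended
# 16 GiB / 8 vCPU / 60 GiB minimums.
free -h
nproc
df -h /

# Install the required packages on Ubuntu.
sudo apt-get update && sudo apt-get install -y curl unzip
```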
If you're evaluating a large production use case or need any of our Enterprise plan features, please book a call.
More complex and scalable deployment methods, such as Kubernetes or Elastic Container Service (ECS), might be appropriate if you and your team:
- Already use the chosen deployment method.
- Have experience with Docker or DevOps concepts.
- Require scalability, high availability, and resilience.
When deploying Retool using container orchestration tools such as Kubernetes, your cluster should contain at least one node that matches the specifications above. Refer to deployment guides and your provider's documentation for more detail.
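If you already operate a cluster, you can confirm node capacity meets these specifications before deploying. A minimal sketch for Kubernetes (assumes kubectl access to the target cluster):

```bash
# List each node's allocatable CPU and memory to confirm at least one
# node meets the recommended 8 vCPU / 16 GiB specification.
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```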
Storage database
Many of the deployment guides include a containerized instance of PostgreSQL alongside Retool by default, but Retool recommends externalizing this database to support a stateless deployment.
The minimum recommended version for the PostgreSQL database is version 13. Your PostgreSQL database must also enable the uuid-ossp module and use the Read Committed isolation level.
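If you plan to use an external database, you can verify these requirements with psql. A minimal sketch (assumes connection details are supplied through standard libpq environment variables such as PGHOST and PGUSER):

```bash
# Check the server version; 13 or later is recommended.
psql -c "SHOW server_version;"

# Enable the uuid-ossp module if it is not already enabled.
psql -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'

# Confirm the default transaction isolation level is read committed.
psql -c "SHOW default_transaction_isolation;"
```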
Network
Deployments must be able to reach Retool's IP addresses or domains. If you use outbound firewall rules, include the following IP addresses or domains in your allowlist. This allows your deployment to connect to Retool's license check, user authentication, and usage reporting services.
IP addresses
35.92.202.168/29
44.211.178.248/29
35.92.202.168
35.92.202.169
35.92.202.170
35.92.202.171
35.92.202.172
35.92.202.173
35.92.202.174
35.92.202.175
44.211.178.248
44.211.178.249
44.211.178.250
44.211.178.251
44.211.178.252
44.211.178.253
44.211.178.254
44.211.178.255
Domains
licensing.tryretool.com
invites.tryretool.com
email-service.retool.com
p.tryretool.com
specs.tryretool.com
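To verify outbound connectivity from your deployment environment, you can check that each domain is reachable over HTTPS. A sketch that only tests reachability (response codes vary by service):

```bash
# Confirm each Retool service domain is reachable over HTTPS from the VM.
for domain in licensing.tryretool.com invites.tryretool.com \
  email-service.retool.com p.tryretool.com specs.tryretool.com; do
  curl -sS -o /dev/null -w "%{http_code}  ${domain}\n" "https://${domain}" \
    || echo "FAILED  ${domain}"
done
```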
Each data source you use with Retool, such as an API or external database, is represented by a resource. Your deployment must also have network access to its resources to interact with their data.
Refer to the requirements documentation for more details on network requirements.
Architecture
Each deployment instance uses a distributed set of containers with services for different functions. These work together to securely run the platform within your VPN or VPC.
Containers and services
A standard deployment instance uses the following containers and services, which interact with each other. The SERVICE_TYPE environment variable determines which services a container runs.
Container | Image | Repository | Services |
---|---|---|---|
api | backend | tryretool/backend | MAIN_BACKEND DB_CONNECTOR DB_SSH_CONNECTOR |
jobs-runner | backend | tryretool/backend | JOBS_RUNNER |
workflows-worker | backend | tryretool/backend | WORKFLOW_TEMPORAL_WORKER |
workflows-backend | backend | tryretool/backend | WORKFLOW_BACKEND DB_CONNECTOR DB_SSH_CONNECTOR |
code-executor | code-executor-service | tryretool/code-executor-service | No service type. |
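As an illustration, the following sketch starts a backend container with the services that make up api, assuming SERVICE_TYPE accepts a comma-separated list grouped as in the table above. Your deployment's Docker Compose file or Helm chart normally sets this for you, along with the other required environment variables:

```bash
# Select the services a backend container runs via SERVICE_TYPE.
# Database connection settings and other required variables are omitted here.
docker run -d \
  -e SERVICE_TYPE=MAIN_BACKEND,DB_CONNECTOR,DB_SSH_CONNECTOR \
  tryretool/backend:3.114.3-stable
```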
Topology
api
The api container manages the core functionality for a Retool deployment instance, such as:
- Frontend interactions (e.g., building apps and workflows).
- Hosting apps and workflows.
- Managing users and permissions.
api runs the following services:
MAIN_BACKEND
The core service for a Retool deployment instance. It handles most logic for frontend interactions.
DB_CONNECTOR
Handles query requests to resources (databases, APIs, etc.).
DB_SSH_CONNECTOR
Manages connections for DB_CONNECTOR.
api requires ingress from client interactions, such as loading an app in a browser. These are typically handled by a load balancer (e.g., nginx) which proxies the requests.
Container | Network ingress | Network egress |
---|---|---|
code-executor | ||
postgres | ||
retooldb-postgres | ||
Temporal | ||
Resources |
You can replicate api as you scale to manage higher volumes of app and workflow traffic.
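For example, in a Docker Compose deployment you might scale the api service horizontally, assuming the compose service is named api (a sketch; larger production deployments typically scale through an orchestrator instead):

```bash
# Run three replicas of the api service behind the load balancer.
docker compose up -d --scale api=3
```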
jobs-runner
The jobs-runner container manages background tasks, runs database migrations for Retool version upgrades, and manages Source Control.
jobs-runner runs the following service:
JOBS_RUNNER
Performs database migrations and manages Source Control.
Container | Network ingress | Network egress |
---|---|---|
postgres |
You must not replicate jobs-runner. It performs tasks and migrations that must run in a single container.
workflows-worker
The workflows-worker container continuously polls the Temporal cluster for tasks required to either start or execute blocks within a workflow. It makes requests to code-executor to execute blocks and process results, then reports back to Temporal to continue or complete workflow execution.
workflows-worker runs the following service:
WORKFLOW_TEMPORAL_WORKER
Polls the Temporal cluster for tasks required to start or execute workflow blocks.
Container | Network ingress | Network egress |
---|---|---|
code-executor | ||
postgres | ||
retooldb-postgres | ||
Temporal | ||
Resources |
You can replicate workflows-worker as you scale to manage higher volumes of workflow traffic.
workflows-backend
The workflows-backend container receives and processes requests from code-executor during workflow runs. It also handles the upload of workflow logs, block results, and run status updates.
workflows-backend runs the following services:
WORKFLOW_BACKEND
Handles query, retry, and asynchronous logic.
DB_CONNECTOR
Handles query requests to resources (databases, APIs, etc.).
DB_SSH_CONNECTOR
Manages connections for DB_CONNECTOR.
Container | Network ingress | Network egress |
---|---|---|
code-executor | ||
postgres | ||
retooldb-postgres | ||
Temporal | ||
Resources |
You can replicate workflows-backend as you scale to manage higher volumes of workflow traffic.
code-executor
The code-executor container executes single block runs from the Workflows IDE, multiple blocks as part of a Workflow execution, and any arbitrary user code written in a Workflow. It executes JavaScript and Python code directly, or makes requests to workflows-backend to run resource queries against databases, APIs, etc.
By default, code-executor uses nsjail to sandbox code execution. nsjail requires privileged container access. If your deployment does not support privileged access, code-executor automatically detects this and runs code without sandboxing. Without sandboxing, custom JS libraries and custom Python libraries cannot be used.
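For example, with plain Docker you would grant privileged access so nsjail can sandbox execution. A minimal sketch (your Docker Compose file or Helm chart exposes an equivalent privileged setting):

```bash
# Run code-executor with privileged access so nsjail sandboxing is available.
# Without --privileged, code runs unsandboxed and custom libraries are unavailable.
docker run -d --privileged tryretool/code-executor-service:3.114.3-stable
```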
Container | Network ingress | Network egress |
---|---|---|
api | ||
workflows-backend | ||
workflows-worker |
You can replicate code-executor as you scale to manage higher volumes of workflow traffic.
postgres
postgres manages the internal PostgreSQL database in which an instance stores its apps, resources, users, permissions, and internal data. All deployment instances contain a default postgres container for testing purposes. You must externalize this database before using the deployment instance in production.
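To externalize it, you typically point the instance at your managed PostgreSQL database through connection settings in the deployment's environment file. A sketch with illustrative values, assuming an environment file named docker.env as used by some Docker Compose setups; confirm the exact file and variable names against your deployment guide:

```bash
# Illustrative only: external database connection settings for the instance.
# Verify the variable names against your deployment's configuration reference.
cat >> docker.env <<'EOF'
POSTGRES_HOST=your-db-host.example.com
POSTGRES_PORT=5432
POSTGRES_DB=retool
POSTGRES_USER=retool_admin
POSTGRES_PASSWORD=<your-password>
EOF
```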
retooldb-postgres
You can optionally configure your deployment to use Retool Database with a separate PostgreSQL database. You can use retooldb-postgres to get started, but Retool recommends migrating to an externally hosted database for production use.
Other dependencies
Self-hosted deployment instances have additional dependencies.
Resources
Each data source you use with Retool, such as an API or external database, is represented by a resource. Your deployment instance must have access to its resources to interact with their data.
Temporal
Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to facilitate the execution of each workflow amongst a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.
Retool recommends a Retool-managed cluster on Temporal Cloud for most use cases. Alternatively, you can use either:
- A self-managed external cluster: Either Temporal Cloud or self-hosted within your infrastructure.
- A local cluster: A self-hosted cluster within your Self-hosted Retool instance.
| | Retool-managed | Self-managed external | Local |
---|---|---|---|
Hosting | Externally on Temporal Cloud | Either externally on Temporal Cloud or self-hosted internally within your own VPC | Locally as part of the Self-hosted Retool instance |
Availability | All Self-hosted Retool instances (3.6.14+) on an Enterprise plan | All Self-hosted Retool instances; can also be used for purposes outside of Retool | Only the Self-hosted Retool instance |
Management | Managed by Retool | Self-managed on Temporal Cloud, or self-managed and self-hosted within your own VPC | Self-managed and self-hosted |
Scale and performance | At least 4 AWS regions are available to choose from | If using Temporal Cloud, at least 10 AWS regions are available to choose from; if self-hosted, low latency due to hosting within your infrastructure | Low latency due to local deployment alongside self-hosted Retool |
Uptime | 99.99% uptime | 99.99% uptime | Dependent on DevOps |
Configuration | Minimal | Moderate | High |
Cost | No minimum contract | See Temporal Cloud pricing | No contract |
Security | Resource-isolated namespace in a Temporal Cloud cluster, in which only orchestration data (workflow IDs and block names) is stored; all data encrypted with private key | Only orchestration data (workflow IDs and block names), encrypted with private key, is stored in Temporal Cloud. All other data remains internal on VPC for self-hosted. | All data remains local to the Self-hosted Retool instance |
Docker images
Retool uses Docker images that configure the containers and their respective services. Each container runs one or more services to distribute functionality. This approach allows for efficient scaling of a Retool deployment instance as needs grow.
Retool maintains two image repositories on Docker Hub:
- The backend image is used for most containers and services.
- The code-executor-service image is used to execute blocks of code in Retool Workflows.
Each Docker Hub repository tag corresponds to a particular version of Retool. You specify the version to use when deploying or upgrading a release. For instance:
- tryretool/backend:3.114.3-stable
- tryretool/code-executor-service:3.114.3-stable
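For example, to pull the images for a specific Stable release (version shown for illustration):

```bash
# Pull matching backend and code-executor images for one release version.
docker pull tryretool/backend:3.114.3-stable
docker pull tryretool/code-executor-service:3.114.3-stable
```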
Releases
Retool maintains two release channels for self-hosted Retool: Stable and Edge.
Retool releases a version on the Stable channel every 13 weeks (quarterly). A Stable release is generally four versions behind the cloud-hosted version at the time.
Preparation and testing of a Stable version occurs approximately four weeks prior to its release. Stable releases are rigorously tested before they are published. As the release cycle is less frequent, administrators can more easily maintain and upgrade deployments.
Retool supports each Stable release for six months. During this time, Retool will release patch updates that contain bug fixes or security updates. Patch updates do not contain functionality changes and can be applied more quickly than performing a full version upgrade.
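For a Docker Compose deployment, applying a patch update is typically a matter of bumping the image tags in your deployment configuration and restarting the containers. A sketch that assumes the tags have already been updated:

```bash
# Pull the updated patch images and recreate the containers.
docker compose pull
docker compose up -d
```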
Retool provides versioned product documentation for supported Stable releases. When browsing Retool Docs, use the version dropdown menu in the navbar to switch to a relevant version.
After six months, a Stable release is considered deprecated. You can continue using a deprecated release but it will no longer receive updates. At this time, you should upgrade to the latest Stable release.
Releases on the Edge channel occur weekly. Each release occurs one week after the equivalent release for cloud-hosted Retool.
Edge releases are available for organizations that want the latest features or to use private beta functionality. Retool recommends most organizations use Stable releases unless you have a specific need for Edge releases and can keep your deployment up-to-date.
Retool supports only the most recent release on the Edge channel. As Edge releases are weekly, bug fixes and improvements are included in the next release. All previous releases are then considered deprecated.