
Self-hosted Retool quickstart

Learn about the fundamental concepts of self-hosted Retool.

This guide introduces self-hosted Retool. It covers the concepts and terminology you'll encounter when deploying and operating a self-hosted instance of Retool. After reading this page, you should have a good understanding of self-hosted Retool.

You can deploy a self-hosted Retool instance in your virtual private cloud (VPC) or behind your virtual private network (VPN). Businesses in the healthcare and finance industries often deploy Retool to remain compliant by running on-premise deployments.

Each deployment instance uses a distributed set of containers with services for different functions. These work together to securely run the platform within your VPN or VPC.

Read the system architecture guide to learn more about the containers, services, and dependencies for self-hosted deployment instances.

Requirements

You can deploy self-hosted Retool with Docker Compose on Linux-based VMs from almost any cloud provider. The available deployment methods vary in complexity and scalability, so choose an option that lets you get started quickly, is provisioned appropriately, and sets you up for long-term success.

VM configuration

Choose between a single VM deployment or an orchestrated container deployment method based on your background and use case.

Use a Docker Compose-based method to deploy Retool if you and your team:

  • Are currently evaluating Retool or deploying Retool for the first time.
  • Have less experience with Docker or DevOps concepts.
  • Need a lightweight, low cost, and low maintenance deployment method.
  • Want to deploy Retool on a small-scale or single-server environment.

Retool images run on Linux machines with x86 processors; arm64 is not supported. Ensure the VM meets the following recommended specifications:

  • Ubuntu 22.04 or later.
  • 16 GiB memory.
  • 8 vCPUs.
  • 60 GiB storage.
  • The curl and unzip packages installed.

The 60 GiB of storage is required to support the PostgreSQL container included by default in Retool's deployment configuration files.
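As a quick sanity check before installing, the architecture and package requirements above can be verified with a short script. This is an illustrative sketch written for this guide, not a Retool-provided tool; the helper names are our own.

```shell
#!/bin/sh
# Illustrative preflight check for the recommended VM specs above.

check_arch() {
  # Retool images require x86 (x86_64); arm64 is not supported.
  [ "$1" = "x86_64" ]
}

check_tools() {
  # The curl and unzip packages must be installed.
  command -v curl >/dev/null 2>&1 && command -v unzip >/dev/null 2>&1
}

if check_arch "$(uname -m)"; then echo "architecture: ok"; else echo "architecture: unsupported"; fi
if check_tools; then echo "curl/unzip: ok"; else echo "curl/unzip: missing"; fi
```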

Storage database

By default, the deployment guides include a containerized instance of PostgreSQL alongside Retool, but Retool recommends externalizing this database to support a stateless deployment.

The minimum recommended PostgreSQL version is 13. Your PostgreSQL database must also enable the uuid-ossp module and use the Read Committed isolation level.
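For an externalized database, these requirements can be checked from psql. A minimal sketch using standard PostgreSQL commands:

```sql
-- Enable the uuid-ossp module (requires sufficient privileges).
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Confirm the server version (must be 13 or later).
SHOW server_version;

-- Confirm the default isolation level is Read Committed.
SHOW default_transaction_isolation;
```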

Network

Deployments must be allowed access to Retool's IP addresses or domains. If you use outbound firewall rules, include the following IP addresses or domains in your allowlist. These allow your deployment to connect to Retool's license check, user authentication, and usage reporting services.

IP addresses

CIDR blocks:

  • 35.92.202.168/29
  • 44.211.178.248/29

Individual IP addresses:

  • 35.92.202.168
  • 35.92.202.169
  • 35.92.202.170
  • 35.92.202.171
  • 35.92.202.172
  • 35.92.202.173
  • 35.92.202.174
  • 35.92.202.175
  • 44.211.178.248
  • 44.211.178.249
  • 44.211.178.250
  • 44.211.178.251
  • 44.211.178.252
  • 44.211.178.253
  • 44.211.178.254
  • 44.211.178.255

Domains

  • licensing.tryretool.com
  • invites.tryretool.com
  • email-service.retool.com
  • p.tryretool.com
  • specs.tryretool.com
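If you manage egress with iptables-style rules, a loop over the two CIDR blocks covers all 16 individual addresses. This is an illustrative sketch, not Retool-provided tooling; adapt the emitted commands to your firewall.

```shell
#!/bin/sh
# Illustrative sketch: emit egress allowlist rules for Retool's CIDR blocks.
# The two /29 blocks cover all 16 individual addresses listed above.
RETOOL_CIDRS="35.92.202.168/29 44.211.178.248/29"

generate_rules() {
  for cidr in $RETOOL_CIDRS; do
    echo "iptables -A OUTPUT -d $cidr -j ACCEPT"
  done
}

generate_rules   # review the output, then pipe to sh as root to apply
```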

In addition to Retool's domains, your deployment must have network access to every resource it connects to, such as external APIs and databases, to interact with data.

Refer to the requirements documentation for more details on network requirements.

Architecture

Containers and services

A standard deployment instance uses the following containers and services, which interact with each other. The SERVICE_TYPE environment variable determines which services a container runs.

| Container | Image | Repository | Services |
| --- | --- | --- | --- |
| api | backend | tryretool/backend | MAIN_BACKEND, DB_CONNECTOR, DB_SSH_CONNECTOR |
| jobs-runner | backend | tryretool/backend | JOBS_RUNNER |
| workflows-worker | backend | tryretool/backend | WORKFLOW_TEMPORAL_WORKER |
| workflows-backend | backend | tryretool/backend | WORKFLOW_BACKEND, DB_CONNECTOR, DB_SSH_CONNECTOR |
| code-executor | code-executor-service | tryretool/code-executor-service | No service type. |

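
As an illustration of how SERVICE_TYPE maps onto these containers, a Docker Compose sketch might look like the following. This is not Retool's official configuration file; the image tags are placeholders, and the comma-separated SERVICE_TYPE values are an assumption based on the table above.

```yaml
services:
  api:
    image: tryretool/backend:<version>
    environment:
      SERVICE_TYPE: MAIN_BACKEND,DB_CONNECTOR,DB_SSH_CONNECTOR
  jobs-runner:
    image: tryretool/backend:<version>
    environment:
      SERVICE_TYPE: JOBS_RUNNER
  workflows-worker:
    image: tryretool/backend:<version>
    environment:
      SERVICE_TYPE: WORKFLOW_TEMPORAL_WORKER
  workflows-backend:
    image: tryretool/backend:<version>
    environment:
      SERVICE_TYPE: WORKFLOW_BACKEND,DB_CONNECTOR,DB_SSH_CONNECTOR
  code-executor:
    image: tryretool/code-executor-service:<version>
    # No SERVICE_TYPE: this image runs a single service.
```
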
Topography

api

The api container manages the core functionality for a Retool deployment instance, such as:

  • Frontend interactions (e.g., building apps and workflows).
  • Hosting apps and workflows.
  • Managing users and permissions.

MAIN_BACKEND

The core service for a Retool deployment instance. It handles most logic for frontend interactions.

DB_CONNECTOR

Handles query requests to resources (databases, APIs, etc.).

DB_SSH_CONNECTOR

Manages connections for DB_CONNECTOR.

jobs-runner

The jobs-runner container manages background tasks, such as running database migrations during Retool version upgrades and handling Source Control operations.

JOBS_RUNNER

Performs database migrations and manages Source Control.

workflows-worker

The workflows-worker container continuously polls the Temporal cluster for tasks required to either start or execute blocks within a workflow. It makes requests to code-executor to execute blocks and process results, then reports back to Temporal to continue or complete workflow execution.

WORKFLOW_TEMPORAL_WORKER

Polls the Temporal cluster for tasks required to start or execute workflow blocks.

workflows-backend

The workflows-backend container receives and processes requests from code-executor during workflow runs. It also handles uploading workflow logs, block results, and run status updates.

WORKFLOW_BACKEND

Handles query, retry, and asynchronous logic.

DB_CONNECTOR

Handles query requests to resources (databases, APIs, etc.).

DB_SSH_CONNECTOR

Manages connections for DB_CONNECTOR.

code-executor

The code-executor container executes single block runs from the Workflows IDE, multiple blocks as part of a workflow execution, and any arbitrary user code written in a workflow. It executes JavaScript and Python code directly, or makes requests to workflows-backend to run resource queries against databases, APIs, and other resources.

By default, code-executor uses nsjail to sandbox code execution. nsjail requires privileged container access. If your deployment does not support privileged access, code-executor automatically detects this and runs code without sandboxing. Without sandboxing, custom JavaScript and Python libraries are not supported.
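
If your platform supports it, privileged access is typically granted in Docker Compose as shown below. This is a generic Docker sketch rather than Retool's official configuration; the image tag is a placeholder.

```yaml
services:
  code-executor:
    image: tryretool/code-executor-service:<version>
    privileged: true   # required for nsjail sandboxing
```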

postgres

postgres manages the internal PostgreSQL database in which an instance stores its apps, resources, users, permissions, and internal data. All deployment instances contain a default postgres container for testing purposes. You must externalize this database before using the deployment instance in production.
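
Externalizing usually means pointing the instance at your own database through environment variables. The entries below are a hypothetical docker.env sketch; variable names vary by deployment method, so confirm them against the documentation for your configuration files.

```
POSTGRES_HOST=your-db-host.example.com
POSTGRES_PORT=5432
POSTGRES_DB=retool
POSTGRES_USER=retool_user
POSTGRES_PASSWORD=<secret>
```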

retooldb-postgres

You can optionally configure your deployment to use Retool Database with a separate PostgreSQL database. You can use retooldb-postgres to get started, but Retool recommends migrating to an externally hosted database for production use.

Other dependencies

Self-hosted deployment instances have additional dependencies.

Resources

Each data source you use with Retool is represented by a resource, such as APIs and external databases. Your deployment instance must have access to any of its resources to interact with data.

Temporal

Temporal is a distributed system used to schedule and run asynchronous tasks for Retool Workflows. A Self-hosted Retool instance uses a Temporal cluster to facilitate the execution of each workflow amongst a pool of self-hosted workers that make queries and execute code in your VPC. Temporal manages the queueing, scheduling, and orchestration of workflows to guarantee that each workflow block executes in the correct order of the control flow. It does not store any block results by default.

In general, Retool recommends using a Retool-managed cluster on Temporal Cloud. You can also use either:

  • A self-managed external cluster: hosted on Temporal Cloud or self-hosted within your own infrastructure.
  • A local cluster: a self-hosted cluster that runs alongside your Self-hosted Retool instance.
| | Retool-managed | Self-managed external | Local |
| --- | --- | --- | --- |
| Hosting | Externally on Temporal Cloud | Externally on Temporal Cloud, or self-hosted within your own VPC | Locally, as part of the Self-hosted Retool instance |
| Availability | All Self-hosted Retool instances (3.6.14+) on an Enterprise plan | All Self-hosted Retool instances; can also serve purposes outside of Retool | Only the Self-hosted Retool instance |
| Management | Managed by Retool | Self-managed on Temporal Cloud; self-managed and self-hosted if within your own VPC | Self-managed and self-hosted |
| Scale and performance | At least 4 AWS regions to choose from | At least 10 AWS regions if using Temporal Cloud; low latency if self-hosted within your infrastructure | Low latency due to local deployment alongside self-hosted Retool |
| Uptime | 99.9% uptime | 99.9% uptime | Dependent on DevOps |
| Configuration | Minimal | Moderate | High |
| Cost | No minimum contract | See Temporal Cloud pricing | No contract |
| Security | Resource-isolated namespace in a Temporal Cloud cluster; only orchestration data (workflow IDs and block names) is stored, encrypted with a private key | Only orchestration data (workflow IDs and block names), encrypted with a private key, is stored in Temporal Cloud; all other data remains in your VPC if self-hosted | All data remains local to the Self-hosted Retool instance |

Docker images

Retool uses Docker images that configure the containers and their respective services. Each container runs one or more services to distribute functionality. This approach allows for efficient scaling of a Retool deployment instance as needs grow.

Retool maintains two image repositories on Docker Hub:

  • The backend image is used for most containers and services.
  • The code-executor-service image is used to execute blocks of code in Retool Workflows.

Each Docker Hub repository tag corresponds to a particular version of Retool. You specify the version to use when deploying or upgrading a release. For instance, Retool 3.33.10 Stable has a corresponding tag in each repository.
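
In practice, pinning both images to the same release keeps a deployment consistent. The sketch below is hypothetical; the "<version>-stable" tag form is an assumption, so confirm the exact tags on Docker Hub before pulling.

```shell
#!/bin/sh
# Hypothetical sketch: derive both image references from one release tag.
RETOOL_VERSION="3.33.10-stable"   # assumed tag form; verify on Docker Hub

backend_image()       { echo "tryretool/backend:${RETOOL_VERSION}"; }
code_executor_image() { echo "tryretool/code-executor-service:${RETOOL_VERSION}"; }

echo "docker pull $(backend_image)"
echo "docker pull $(code_executor_image)"
```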

Releases

Retool maintains two release channels for self-hosted Retool: Stable and Edge.

Retool releases a version on the Stable channel every 13 weeks (quarterly). A Stable release is generally four versions behind the cloud-hosted version at the time.

Preparation and testing of a Stable version occurs approximately four weeks prior to its release. Stable releases are rigorously tested before they are published. As the release cycle is less frequent, administrators can more easily maintain and upgrade deployments.

Retool supports each Stable release for six months. During this time, Retool will release patch updates that contain bug fixes or security updates. Patch updates do not contain functionality changes and can be applied more quickly than performing a full version upgrade.

Retool provides versioned product documentation for supported Stable releases. When browsing Retool Docs, use the version dropdown menu in the navbar to switch to a relevant version.

After six months, a Stable release is considered deprecated. You can continue using a deprecated release but it will no longer receive updates. At this time, you should upgrade to the latest Stable release.