Caching in Retool


Are you updated?

This documentation only reflects the most current version of Retool. If you are a cloud user, you are already on the most current version and do not have to worry!

For on-premise deployments, speak with your admin to update your Retool instance.

How caching works with Retool

Retool supports caching in several ways, helping users manage their data more efficiently. By caching your query and request data, you can dramatically improve an application's performance and load times.

Queries are cached server side inside the Retool database connector. Whether a query is routed to your database or returns cached results depends on its inputs: if the exact same query runs with the same inputs within the Cache duration, the previously saved result is returned from Retool's routing server instead of being sent to the resource.
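As a rough illustration of that lookup, here is a minimal sketch (purely hypothetical, not Retool's actual implementation) of a TTL cache keyed on the query text plus its inputs:

```python
import time

class QueryCache:
    """Hypothetical sketch of server-side query caching keyed by query + inputs."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (result, timestamp)

    def run(self, query, inputs, execute):
        # The cache key is the query plus its exact inputs.
        key = (query, tuple(sorted(inputs.items())))
        hit = self._store.get(key)
        if hit is not None and time.time() - hit[1] < self.ttl:
            return hit[0]  # cache hit: skip the database entirely
        result = execute(query, inputs)  # cache miss: hit the resource
        self._store[key] = (result, time.time())
        return result
```

With a Cache duration of 3600 seconds, a second run of the same query with the same inputs within the hour returns the stored result; changing any input produces a different key and goes back to the resource.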

A common example of caching in Retool is a SQL or database query that returns a very large number of rows. Querying your database for 5,000+ rows of data can be very taxing on your application's memory and resources. By caching the results of this query, you free up the time and resources the query would otherwise consume.

When used correctly, caching a query can dramatically reduce your application's load times.


Because Retool's JavaScript editor runs in a sandbox, the window object is not accessible from within a query or function.

How to start using the cache for queries

At the bottom of the Advanced tab, there is a checkbox to enable caching for the selected query. Check this box, then enter the number of seconds you would like the query's results to be cached (e.g., 3600 for 1 hour).

To invalidate or clear the cache, click the Invalidate cache text shown above. At this time, there is no way to programmatically invalidate the cache.

On-Premise Deployment

Redis caching

Without a Redis connected resource, the cache will be stored in local memory on the Retool server.

With a Redis connected resource, the cache is additionally stored within Redis (set up automatically once connected). This enables greater flexibility around caching behavior and also gives you visibility into how data is stored.
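The fallback behavior described above can be sketched as a small tiered cache: use Redis when a client is configured, otherwise keep entries in local process memory. This is a hypothetical illustration (assuming a redis-py-style client with `setex`/`get`), not Retool's internal code:

```python
import time

class TieredCache:
    """Hypothetical sketch: Redis-backed cache with an in-memory fallback."""

    def __init__(self, redis_client=None):
        self.redis = redis_client
        self.memory = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds):
        if self.redis is not None:
            # redis-py's setex stores the value with a server-side expiry
            self.redis.setex(key, ttl_seconds, value)
        else:
            self.memory[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        if self.redis is not None:
            return self.redis.get(key)
        entry = self.memory.get(key)
        if entry is not None and time.time() < entry[1]:
            return entry[0]
        return None  # missing or expired
```

One practical difference this highlights: the in-memory tier disappears if the server process restarts, while Redis keeps cached entries (and their TTLs) across restarts.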

To view the Redis-specific environment variables, visit our Environment Variables docs. Once the Redis cluster is set up and connected to Retool, Retool automatically leverages the Redis instance as a caching layer for queries with caching enabled.

Connecting Retool to Redis on AWS ElastiCache

  1. Log in to your AWS console in the appropriate region.
  2. Create an Amazon ElastiCache cluster.
  3. Choose Redis as your cluster engine. Do not enable cluster mode; cluster mode spreads keys across multiple shards. By default, a Redis shard can store 13 GB of data, which is substantial.
  4. Enter the name and the description for the cluster, and leave the rest of the fields as is:
  • Port: 6379 by default. A custom port can be specified as an additional layer of protection, though for most use cases it is not required. See more here.
  • Node type: Default large. Automatically sized to store 13 GB of data.
  • Multi-AZ: True by default. Multi-AZ is especially useful when you have strong high-availability requirements (many users, critical workflows). Redis is usually highly available even without this feature as long as you replicate to at least 2 nodes. Multi-AZ ensures that if the cluster's primary node fails, a new primary is assigned immediately (the read replica with the fastest recorded latency times). For small-to-medium use cases or testing purposes, it is easier to set this to false; it can be changed retroactively by modifying the replication group.
  • Number of replicas: 2 by default. This replicates the data to 2 other nodes as well.
  5. Advanced Redis settings: Ensure this Redis instance is in the same VPC as the instance where Retool is deployed. Otherwise, the two will not be able to communicate. For Multi-AZ, you will need to provide two subnets.
  6. Data management: (Optional, but recommended) Enable automatic backups. If you experience unexplained issues, you can retrieve the previous backup.
  7. Create the cluster.

Monitoring the Redis instance

  1. Log into the Retool instance (or create a new one in the same VPC as both Retool and Redis) and enter the following:
$sudo wget https://download.redis.io/redis-stable.tar.gz
$sudo tar xvzf redis-stable.tar.gz
$cd redis-stable
$sudo CC=clang make BUILD_TLS=yes # only if TLS is enabled
  2. Connect to the Redis instance by running either of the two commands below. The cluster endpoint should be the reader hostname, and the port number should be the one entered above (6379 by default).
# If no password is set up, run:
redis-cli -h <cluster-endpoint> -c -p <port number>

# Otherwise:
redis-cli -h <cluster-endpoint> --tls -a <your-password> -p <port number>
  3. Once in the Redis CLI, run some GET and SET commands to verify expected behavior. Running the INFO command displays stats for the Redis cluster.


Is the cache shared between users in the same organization?

By default, the cache is shared across users. That is, if caching is enabled and two users run the same query with identical inputs from the Retool frontend (even with different auth tokens, a fairly common scenario), the cached results are returned for both.

To disable this behavior and cache on a per-user basis instead, Retool admins should disable Cache queries across all users under Organization Settings -> Advanced.
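One way to picture the effect of that setting is that the per-user mode adds the user's identity to the cache key, so identical queries from different users no longer collide. The function below is a hypothetical sketch of that idea, not Retool's real key scheme:

```python
def cache_key(query, inputs, user_id, share_across_users=True):
    """Hypothetical cache-key sketch: per-user caching appends the user id."""
    base = (query, tuple(sorted(inputs.items())))
    if share_across_users:
        return base          # shared mode: same query + inputs -> same key for everyone
    return base + (user_id,)  # per-user mode: each user gets their own entry
```

In shared mode, two users running the same query hit the same cache entry; in per-user mode, each user's first run still goes to the resource.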

How is the cache invalidated?

Right now you can't programmatically invalidate the cache; you can only press the Invalidate cache button in the Query editor. Beyond that, cached results are invalidated when:

  • TTL (Time to Live): the time limit runs out
  • If you aren't backed by Redis and the Retool worker crashes or goes down for any reason, the in-memory and file system caches might get wiped
