Testing best practices

Retool does not currently provide a native testing framework. Instead, customers use a range of approaches depending on their risk tolerance and organizational maturity.

In practice, the majority of Retool apps in production today are validated through manual testing, and for many internal tools this works well. Teams rely on careful review, peer sign-off, and real usage to catch issues, and they benefit from being able to iterate and fix problems quickly.

Some teams take a more structured approach to manual testing by formalizing the process rather than relying on ad hoc validation. This can include maintaining test checklists for critical workflows, using separate resource environments or instances (e.g., dev/staging/prod), limiting publish permissions, using Git-based Source Control with required reviews before merges, and requiring explicit sign-off before changes go live. While this is still manual, the repeatability and discipline of the process can make it a reliable approach for many organizations.
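One way to make such a checklist repeatable is to keep it in a machine-readable form in version control alongside the app and review it before each publish. A minimal sketch, where the workflow names and the refunds app are purely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    workflow: str      # e.g., "Submit refund request"
    critical: bool     # must be signed off before publishing
    passed: bool = False

def blocking_items(checklist: list[ChecklistItem]) -> list[str]:
    """Return critical workflows that have not been signed off yet."""
    return [item.workflow for item in checklist
            if item.critical and not item.passed]

# Hypothetical checklist for an internal refunds app.
checklist = [
    ChecklistItem("Submit refund request", critical=True, passed=True),
    ChecklistItem("Approve refund", critical=True),
    ChecklistItem("Export monthly report", critical=False),
]

print(blocking_items(checklist))  # "Approve refund" still blocks publish
```

Whether the checklist lives in code, a spreadsheet, or a ticket template matters less than the discipline of running it every time.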

Other teams supplement manual testing with external UI or integration testing tools such as browser automation frameworks (e.g., Playwright, Cypress, Selenium). These approaches can provide additional rigor, but they are not Retool-specific and often require engineering investment to set up and maintain, particularly in environments with SSO, multiple instances, or complex permission models. Because of this, they tend to be most common among teams with dedicated engineering support and higher testing maturity.
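As one illustration, a Playwright-based smoke test can open an app against a non-production environment and assert that a key component renders. The sketch below uses Playwright's Python sync API and assumes the `playwright` package is installed; the instance URL, app name, and selector are hypothetical placeholders, and it relies on Retool's `_environment` query parameter to pin a resource environment. Authentication (e.g., SSO) is deliberately omitted and would need to be handled separately.

```python
# Hypothetical Retool instance URL; replace with your own.
RETOOL_BASE = "https://example.retool.com"

def app_url(app_name: str, environment: str = "staging") -> str:
    """Build an app URL pinned to a resource environment via the
    _environment query parameter."""
    return f"{RETOOL_BASE}/apps/{app_name}?_environment={environment}"

def run_smoke_test() -> None:
    # Deferred import so the helper above is usable without Playwright.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # NOTE: a real test would first authenticate (SSO, session cookie,
        # or a dedicated test account) before navigating to the app.
        page.goto(app_url("order-dashboard"))
        # Hypothetical assertion: wait for a known component to render.
        page.wait_for_selector("text=Orders")
        browser.close()

if __name__ == "__main__":
    run_smoke_test()
```

Even a single smoke test like this, run on a schedule or before publishing, can catch a broken query or a missing resource without requiring a full test suite.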

The goal for most teams is to choose a level of rigor that matches the criticality of their apps and their operational constraints.