
Collect self-hosted telemetry data

Learn how to collect telemetry data from your self-hosted deployment instance.

Requirements and limitations

This feature is in development. It is currently available for self-hosted instances deployed using retool-helm 6.20 or above, and running the most recent Edge or Stable release.

Organizations with self-hosted deployment instances can collect telemetry data and forward it either to Retool or to a custom destination.

When data is forwarded to Retool, Retool continually monitors your deployment's health. This gives Retool more insight into potential issues and improves the level of support when diagnosing them.

Telemetry data collection is not enabled by default. You must configure your deployment instance to start collecting and forwarding telemetry data.

Types of telemetry data

When enabled, your deployment produces the following types of telemetry data:

  • Container Metrics (CPU, memory, network usage)
  • Retool Runtime Metrics (frontend performance, backend request counts and latency)
  • Container Logs (request logs, error logs, info logs)
Each source of telemetry data is listed below, along with whether it is sent to Retool when forwarding is enabled:

  • metrics_statsd (sent to Retool): Retool internal metrics, including frontend performance, backend request counts, latency, etc.
  • metrics_statsd_raw (not sent to Retool): Same as metrics_statsd, but without any identifying tags added by the telemetry collector.
  • metrics (sent to Retool): All collected metrics, including container health metrics and all metrics from metrics_statsd.
  • metrics_raw (not sent to Retool): Same as metrics, but without any identifying tags added by the telemetry collector.
  • container_logs (sent to Retool): All logs from the containers in the Retool deployment, with audit_logs and debug_logs excluded and deployment-identifying tags added.
  • container_logs_raw (not sent to Retool): All logs from the containers in the Retool deployment, without any exclusion, tagging, or other processing.
  • audit_logs (not sent to Retool): Retool audit logs which are printed to container stdout, if any. Requires the relevant configuration to enable that feature.
  • debug_logs (not sent to Retool): Debug-level logs, if any. These are separated to avoid accidentally forwarding high volumes of debug logs to destinations.

Collection and forwarding

The telemetry collector container contains two services: grafana-agent and vector. You can configure vector to send data to either Retool or to custom destinations.

The telemetry collector uses a secure TLS connection with short-lived client certificates when sending data to Retool. Data is securely stored in Amazon S3 buckets in us-west-2 and is not shared with any other third parties or subprocessors.

Enable telemetry data collection

Use the Helm CLI or update your Helm configuration file to enable telemetry collection.

helm upgrade --set telemetry.enabled=true ...
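If you prefer to keep settings in a Helm values file rather than passing --set flags, the equivalent configuration is a single key. The filename below is illustrative:

# values.yaml
telemetry:
  enabled: true

Then apply it with helm upgrade -f values.yaml ... as usual.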
Specify telemetry version

The telemetry image uses the same release version as the main backend by default. If necessary, you can pin a specific version with the image.tag option:

image:
  tag: 3.52.0-stable

If set, the telemetry image is fixed to the specified tag. Retool does not recommend including a tag unless you have a specific use case.

Send telemetry data to Retool

Once you enable telemetry data collection, data is sent to Retool by default. You can control this behavior with the sendToRetool variable; set it to false if you do not want to send data to Retool.

telemetry:
  enabled: true
  sendToRetool: false

Send telemetry data to custom destinations

Specify custom destinations by defining sink configurations under the customVectorConfig variable.

sinks: ...

Retool supports sending to any custom destination supported by Vector. Refer to the Vector sinks reference documentation for a complete list of supported sink types and configuration.

Each sink must include an inputs list naming the telemetry sources it forwards.
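For instance, a sink that forwards container logs to an Amazon S3 bucket might look like the following sketch. The sink name, bucket, and region are placeholders, not defaults; refer to the Vector aws_s3 sink documentation for the full set of required options.

telemetry:
  enabled: true
  customVectorConfig:
    sinks:
      example_s3_sink:               # placeholder sink name
        type: aws_s3
        inputs:
          - container_logs           # telemetry sources this sink forwards
        bucket: example-telemetry-bucket   # placeholder bucket
        region: us-west-2
        encoding:
          codec: json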

Example configuration

The following example illustrates a completed telemetry configuration.

telemetry:
  enabled: true
  customVectorConfig:
    transforms:
      # Drop Express request metrics from the raw statsd stream
      metrics_statsd_filtered:
        type: filter
        inputs:
          - metrics_statsd_raw
        condition: '!match(string!(.name), r''retool\.express\.request\..+'')'
    sinks:
      # Forward the filtered metrics to a statsd agent on the node
      metrics_statsd_sink:
        type: statsd
        inputs:
          - metrics_statsd_filtered
        address: ${NODE_IP}:8125
        mode: udp
        buffer:
          when_full: drop_newest
  env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
  resources:
    limits:
      cpu: 2000m
      memory: 4Gi