Connect to Amazon S3

Amazon S3 (Simple Storage Service) is a scalable object storage service for storing and retrieving data. With Retool's Amazon S3 integration, you can build apps and automations that upload files, download data, generate signed URLs, and manage objects in your S3 buckets.

Retool's S3 integration also works with S3-compatible providers such as DigitalOcean Spaces, MinIO, and Wasabi.

What you can do with Amazon S3

  • List and browse files: Display files and folders from S3 buckets in tables or file browsers.
  • Upload files: Accept file uploads from users and store them in S3.
  • Download files: Retrieve files from S3 for display or processing.
  • Generate signed URLs: Create temporary URLs for secure file access without exposing credentials.
  • Manage files: Copy, delete, and organize files across buckets with tag support.

Before you begin

To connect Amazon S3 to Retool, you need the following:

  • AWS account: Access to an AWS account with S3 buckets.
  • AWS IAM credentials: Access key ID and secret access key with S3 permissions, or an IAM role to assume.
  • S3 bucket: At least one S3 bucket to connect to.
  • CORS configuration: Your S3 bucket must have a CORS policy that allows access from Retool. This is required for Retool Cloud instances.
  • Retool permissions: Ability to create and manage resources in your organization.

Configure CORS policy

Amazon S3 requires CORS (Cross-Origin Resource Sharing) configuration to allow Retool to access your buckets from the browser.

Add the following CORS configuration to your Amazon S3 bucket to allow access from Retool. In the AWS console, navigate to your bucket's Permissions tab, scroll to Cross-origin resource sharing (CORS), and add this configuration:

CORS configuration for cloud instances
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": [
      "https://retool.com",
      "https://*.retool.com"
    ],
    "ExposeHeaders": ["ETag", "x-amz-meta-custom-header"]
  }
]

Replace *.retool.com with your organization's custom domain if applicable.

Create an Amazon S3 resource

Follow these steps to create an Amazon S3 resource in your Retool organization.

1. Create a new resource

In your Retool organization, navigate to Resources in the main navigation and click Create new > Resource. Search for Amazon S3 and click the Amazon S3 tile to begin configuration.

Use folders to organize your resources by team, environment, or data source type. This helps keep your resource list manageable as your organization grows.

2. Configure connection settings

Configure the following connection settings for your Amazon S3 resource.

Resource name

Give your resource a descriptive name that indicates which bucket or environment it connects to.

Examples of descriptive resource names
production_s3
user_uploads_bucket
backup_storage_s3
marketing_assets_s3

S3 bucket

Specify the name of the S3 bucket you want to connect to. You can configure this to connect to a specific bucket or leave it empty to allow access to any bucket at query time.

static bucket name
Specific bucket
my-app-uploads
dynamic bucket with embedded expressions
Environment-based bucket selection
{{ environment === 'production' ? 'prod-bucket' : 'dev-bucket' }}

S3 region

Select the AWS region where your S3 bucket is located (e.g., us-east-1, us-west-2, eu-west-1). This ensures Retool connects to the correct regional endpoint.

Default ACL

Set the default access control list (ACL) for objects uploaded to S3. Common options include private, public-read, authenticated-read, and bucket-owner-full-control. Leave empty to use the bucket's default ACL.

Outbound region

If your organization uses outbound regions, select the region that should be used for requests to Amazon S3. This controls which geographic region your requests originate from.

3. Configure authentication

Choose an authentication method based on your deployment type and security requirements. Amazon S3 supports IAM access keys, IAM role assumption, and AWS credentials chain for self-hosted instances.

  • Access Key + Secret Key: Standard authentication with IAM user credentials. Use for development environments, when you have static credentials from AWS IAM, or when you need predictable, non-expiring credentials. This is the most common method for initial setup.
  • IAM role (assume role): Enhanced security with temporary credentials and automatic rotation. Use for production environments following AWS security best practices, for cross-account access to S3 buckets, or when you want short-lived credentials. Requires configuring trust relationships in AWS IAM.
  • AWS credentials chain (self-hosted only): Automatic credential discovery from your environment. Use when Retool runs on AWS infrastructure (EC2, ECS, EKS) with instance profiles or task roles, when you prefer environment-based credential management, or to avoid storing static credentials in Retool.

Cloud organizations can authenticate using Access Key + Secret Key or IAM role (assume role).

Option A: Access Key + Secret Key (Recommended)

Use an AWS access key ID and secret access key to authenticate with S3.

1. Create an IAM user with S3 permissions:

In the AWS IAM console, create a user or service account with S3 permissions. Attach a policy that grants the necessary permissions for your use case.

Example IAM policy with S3 permissions
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}

2. Generate access credentials:

Create an access key ID and secret access key for the IAM user. Store these credentials securely; anyone who has them gets whatever S3 access the attached policy grants.

3. Configure authentication in Retool:

In the Retool resource configuration, paste the AWS access key ID in the Access Key ID field and the AWS secret access key in the Secret Access Key field.

4. Store credentials securely:

Consider using configuration variables or secrets to store credentials instead of hardcoding them in the resource configuration.

Option B: IAM role (assume role)

Use an IAM role with temporary credentials for enhanced security and automatic credential rotation.

1. Create an IAM role with S3 permissions:

In the AWS IAM console, create a role with S3 permissions. Attach a policy that grants the necessary permissions for your use case.

2. Configure trust relationship:

Update the role's trust policy to allow Retool to assume the role. You need to add Retool's AWS account ID and optionally require an external ID for additional security.

Example trust policy for Retool to assume the role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::RETOOL_AWS_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "your-external-id"
        }
      }
    }
  ]
}

Contact Retool support for the correct AWS account ID to use in the trust policy.

3. Configure role in Retool:

In the Retool resource configuration, select IAM role as the authentication method. Paste the Role ARN (e.g., arn:aws:iam::123456789012:role/RetoolS3Access) and optionally specify an External ID if required by your trust policy.

4. Test the connection:

Click Test connection to verify Retool can assume the role and access your S3 bucket.

4. Test the connection

Click Test connection to verify Retool can authenticate with Amazon S3. If the test succeeds, you see a success message. If it fails, check the following:

  • Authentication: Verify your AWS credentials are correct and not expired.
  • Permissions: Ensure the IAM user or role has the necessary S3 permissions (s3:ListBucket at minimum for testing).
  • CORS configuration: Confirm your S3 bucket has the correct CORS policy for your Retool domain.
  • Bucket name: Check that the bucket name is correct and accessible with your credentials.
  • Region: Verify the S3 region matches your bucket's location.

After testing the connection, click View in console to open the Debug Tools console. The console displays detailed information about the test, including request details (URL, method, headers), response (status code, body), execution time, and error details if the test fails. This information is helpful for troubleshooting connection issues.

5. Save the resource

Click Create resource to save your Amazon S3 resource. You can now use it in queries across your Retool apps and automations.

Query Amazon S3 data

Once you've created an Amazon S3 resource, you can query files and objects in your Retool apps and automations.

Create a query

You can create an Amazon S3 query in a Retool app using Assist to generate queries with natural language, or manually using code.

Use Assist to generate queries from natural language prompts. Assist can create queries to list files, upload data, and manage objects in your S3 resource.

To create a query with Assist:

  1. In the Retool app IDE, click the Assist button at the bottom of the left toolbar to open the Assist panel (if not already visible).
  2. Write a prompt describing what you want to do, referencing your resource using @.
  3. Press Enter to submit the prompt.
  4. Select your Amazon S3 resource when prompted.
  5. Review the generated query and click Run query to add it to your app.
Example prompt
list all files in my-bucket using @Amazon S3

Action types

Amazon S3 queries support multiple action types for different file operations:

  • List files: List objects in a bucket with an optional prefix filter. Use it to display files in a table or file browser.
  • Read file: Read file contents into memory as text or base64. Use it to display or process file data in your app.
  • Download file: Download a file with binary data handling. Use it to let users download files to their device.
  • Upload data: Upload file data to S3 with metadata. Use it to accept file uploads from users and store them.
  • Generate signed URL: Create a temporary URL for file access. Use it to share files securely with time-limited access.
  • Copy file: Copy an object within or between buckets. Use it to duplicate or move files between locations.
  • Delete file: Remove an object from a bucket. Use it to delete unwanted files.
  • Get tags: Retrieve tags for an object. Use it to display file metadata and labels.
  • Update tags: Modify tags for an object. Use it to manage file metadata and organization.
  • Delete tags: Remove tags from an object. Use it to clean up file metadata.

Query configuration fields

Each query type has specific configuration fields:

Bucket name

Specify the S3 bucket to operate on. If configured in the resource, this defaults to the resource's bucket. You can override it per query to access different buckets.

static bucket name
Specific bucket
my-app-uploads
dynamic bucket with embedded expressions
Environment-based bucket selection
{{ environment === 'production' ? 'prod-uploads' : 'dev-uploads' }}
bucket from user selection
User-selected bucket
{{ bucketSelect.value }}

File key

The path and filename of the object in S3. Use forward slashes (/) to specify folders. S3 treats the entire key as the object identifier, including any path separators.

simple filename
File in bucket root
document.pdf
nested path
File in subfolder
uploads/2026/01/document.pdf
dynamic file key with embedded expressions
Date-based path
{{ 'uploads/' + moment().format('YYYY/MM/') + fileInput1.value[0].name }}
user-specific path
User folder organization
{{ 'users/' + currentUser.id + '/avatar.jpg' }}

Prefix

Filter objects by prefix when listing files. This acts like a folder filter, returning only objects whose keys start with the specified prefix.

folder path prefix
List files in a folder
uploads/documents/
date-based prefix
Filter by date path
uploads/2026/01/
user-specific prefix
User-specific files
{{ 'users/' + currentUser.id + '/' }}

Upload data

The data to upload to S3. Can be file data from a component, base64-encoded data, a string, or serialized JSON.

file from File Input component
Upload file from user
{{ fileInput1.value[0] }}
JSON data as file
Export table data as JSON
{{ JSON.stringify(table1.displayedData) }}
image data (base64)
Upload image component data
{{ image1.value }}
text content
Upload text editor content
{{ textEditor1.value }}

File type

MIME type for the uploaded file. S3 uses this for the Content-Type header, which affects how browsers handle the file when accessed directly.

common MIME types
Static MIME types
image/jpeg
application/pdf
text/csv
application/json
text/plain
dynamic from file input
Use file's original MIME type
{{ fileInput1.value[0].type }}

Signed URL options

Configure the signed URL operation type and expiration time when generating signed URLs. Signed URLs provide temporary access to private S3 objects without requiring AWS credentials.

download (getObject)
Generate download URL
// Signed operation name
getObject

// Expiration (seconds)
3600 // 1 hour
upload (putObject)
Generate upload URL
// Signed operation name
putObject

// Expiration (seconds)
1800 // 30 minutes
dynamic expiration
User-specified expiration
{{ expirationInput.value * 3600 }}  // Convert hours to seconds

Data types and formatting

Amazon S3 queries accept and return specific data types. Understanding these formats helps you work with S3 data effectively in Retool.

Request data types

Use embedded expressions ({{ }}) to provide dynamic values to S3 queries.

  • Strings: Bucket names, file keys, prefixes, and MIME types. Example: 'my-bucket' or {{ textInput1.value }}
  • File objects: File data from File Input components. Example: {{ fileInput1.value[0] }}
  • Binary data: Base64-encoded strings for images or files. Example: {{ image1.value }}
  • Numbers: Expiration times, max keys, and numeric metadata. Example: 3600 or {{ slider1.value }}
  • JSON strings: Serialized data for upload as JSON files. Example: {{ JSON.stringify(table1.data) }}

Response data types

S3 queries return data in formats specific to the action type.

  • List files: Array of objects. Each object contains key, size, lastModified, and etag.
  • Read file: String or base64. Text files return as strings; binary files return as base64.
  • Download file: Binary download. Triggers a browser download; no data is returned to the app.
  • Upload data: Object. Contains etag, location, bucket, and key.
  • Generate signed URL: Object. Contains signedUrl (string) with the temporary access URL.
  • Copy file: Object. Contains etag and copySource.
  • Delete file: Boolean. Returns true if successful.
  • Get tags: Object. Key-value pairs of tags.
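
For example, if a Read file query returns an image as base64, you can display it in an Image component by building a data URI. A minimal sketch, assuming a query named s3ReadQuery and a JPEG file (both are illustrative):

Image component source
{{ 'data:image/jpeg;base64,' + s3ReadQuery.data }}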

Working with file data

When working with files in Retool, use these patterns to handle different file types.

Display uploaded files in a table
Table Data property
{{ s3ListQuery.data }}

Use transformers to format the display:

Format file size
{{ (item.size / 1024).toFixed(2) + ' KB' }}
Format last modified date
{{ moment(item.lastModified).format('YYYY-MM-DD HH:mm:ss') }}
Upload and store file metadata

After uploading a file, store the S3 key in your database to reference it later.

Store upload result
// After successful upload
{
  fileKey: {{ s3UploadQuery.data.key }},
  fileName: {{ fileInput1.value[0].name }},
  fileSize: {{ fileInput1.value[0].size }},
  uploadedAt: {{ moment().toISOString() }},
  uploadedBy: {{ currentUser.id }}
}

Common use cases

These examples demonstrate the most common Amazon S3 operations in Retool apps.

List and display files in a bucket

List files from an S3 bucket and display them in a Table component with formatted columns.

1. Create list query:

  • Action type: List files
  • Bucket name: my-app-uploads
  • Prefix: uploads/
  • Max keys: 1000

2. Configure Table component:

Set the Table component's Data property to:

{{ s3ListQuery.data }}

3. Format columns:

Add transformers to format file size and dates:

File size column
{{ (item.size / 1024 / 1024).toFixed(2) + ' MB' }}
Last modified column
{{ moment(item.lastModified).fromNow() }}

Result: Users see a formatted list of files with human-readable sizes and timestamps.

Upload files with an organized folder structure

Accept file uploads from users and store them in S3 with an organized folder structure by date and user.

1. Add File Input component:

Add a File Input component to your app. Configure it to accept the file types you want (e.g., image/*, .pdf, .csv).

2. Create upload query:

  • Action type: Upload data
  • Bucket name: my-app-uploads
  • File key: {{ 'uploads/' + currentUser.id + '/' + moment().format('YYYY/MM/DD/') + fileInput1.value[0].name }}
  • Upload data: {{ fileInput1.value[0] }}
  • File type: {{ fileInput1.value[0].type }}

3. Add upload button with event handler:

Create a Button component with these event handlers (a JavaScript alternative is sketched after the list):

  1. Run query: s3UploadQuery
  2. On success: Show notification with "File uploaded successfully"
  3. On success: Run query: s3ListQuery (to refresh the file list)
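
If you prefer to run these steps from a single JavaScript query instead of chained event handlers, the sketch below shows one way to do it. It assumes the query names used in this example (s3UploadQuery, s3ListQuery) and uses Retool's utils.showNotification helper; adjust the names to match your app.

JavaScript query: upload, refresh, and notify
// Run the upload, then refresh the file list and notify the user.
const result = await s3UploadQuery.trigger();
await s3ListQuery.trigger();

utils.showNotification({
  title: 'File uploaded successfully',
  description: `Stored as ${result.key}`,
  notificationType: 'success',
});

return result;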

4. Store file metadata:

After successful upload, store the file reference in your database:

Insert file record
INSERT INTO uploaded_files (user_id, file_key, file_name, file_size, uploaded_at)
VALUES (
  {{ currentUser.id }},
  {{ s3UploadQuery.data.key }},
  {{ fileInput1.value[0].name }},
  {{ fileInput1.value[0].size }},
  {{ moment().toISOString() }}
)

Result: Files are organized by user and date, making them easy to locate and manage.

Generate signed URLs for secure file sharing

Create temporary URLs that allow users to access files securely without AWS credentials.

1. Create signed URL query:

  • Action type: Generate signed URL
  • Bucket name: my-app-uploads
  • File key: {{ table1.selectedRow.data.key }}
  • Signed operation name: getObject
  • Expiration: 3600

2. Display URL in Text component:

Show the signed URL to users:

Text component value
{{ s3SignedUrlQuery.data.signedUrl }}

3. Add copy to clipboard button:

Create a Button component that copies the URL to clipboard:

Button event handler
// Action: Copy to clipboard
{{ s3SignedUrlQuery.data.signedUrl }}

4. Add download button:

Create a Button component that opens the signed URL in a new tab:

Button event handler
// Action: Open URL
{{ s3SignedUrlQuery.data.signedUrl }}

Result: Users get a time-limited URL (valid for 1 hour) to access the file without needing AWS credentials. The URL expires automatically for security.

Bulk upload from CSV with progress tracking

Upload multiple files referenced in a CSV and track progress.

1. Parse CSV with file references:

Add a File Input component that accepts CSV files. Parse it to get file references:

Transformer to parse CSV
{{
  fileInput1.parsedValue.map(row => ({
    fileName: row.file_name,
    sourcePath: row.source_path,
    category: row.category
  }))
}}

2. Create bulk upload query:

Configure a query that uploads files in a loop:

JavaScript query for bulk upload
const files = {{ csvParsed.value }};
const results = [];

for (const file of files) {
  try {
    const uploadResult = await s3UploadQuery.trigger({
      additionalScope: {
        fileKey: `uploads/${file.category}/${file.fileName}`,
        uploadData: file.sourcePath
      }
    });
    results.push({ fileName: file.fileName, status: 'success', key: uploadResult.key });
  } catch (error) {
    results.push({ fileName: file.fileName, status: 'failed', error: error.message });
  }
}

return results;

3. Display progress:

Show upload results in a Table component with status indicators.

Result: Multiple files are uploaded efficiently with clear feedback on success or failure for each file.

Delete files with confirmation and backup

Delete files from S3 with user confirmation and the ability to restore from a backup bucket.

1. Add delete button with confirmation:

Add an action column to your Table with a delete button. Configure the button to show a confirmation modal before deleting.

2. Create backup query (optional):

Before deleting, copy the file to a backup bucket:

  • Action type: Copy file
  • Bucket name: my-app-uploads
  • File key: {{ table1.selectedRow.data.key }}
  • Copy destination: backup-bucket/{{ table1.selectedRow.data.key }}

3. Create delete query:

  • Action type: Delete file
  • Bucket name: my-app-uploads
  • File key: {{ table1.selectedRow.data.key }}

4. Chain event handlers:

Configure the delete button with these event handlers (a JavaScript alternative is sketched after the list):

  1. Show confirmation: "Are you sure you want to delete this file?"
  2. If confirmed, run query: s3BackupQuery
  3. On success, run query: s3DeleteQuery
  4. On success, show notification: "File deleted and backed up"
  5. On success, run query: s3ListQuery (refresh list)
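
If you prefer to sequence the backup and delete in a single JavaScript query that runs after the user confirms, a minimal sketch is below. It assumes the query names used in this example (s3BackupQuery, s3DeleteQuery, s3ListQuery); keep the confirmation step on the button itself.

JavaScript query: back up, then delete
// Copy the file to the backup bucket first; the delete only runs if the copy succeeds.
await s3BackupQuery.trigger();
await s3DeleteQuery.trigger();

// Refresh the file list and notify the user.
await s3ListQuery.trigger();
utils.showNotification({
  title: 'File deleted and backed up',
  notificationType: 'success',
});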

Result: Files are safely backed up before deletion, and users must confirm the action. Files can be restored from the backup bucket if needed.

Best practices

Follow these best practices when working with Amazon S3 in Retool.

Performance

  • Use pagination: Set maxKeys to limit the number of objects returned when listing large buckets. Use continuation tokens for pagination.
  • Filter with prefixes: Use the prefix field to narrow down results when listing files. This reduces data transfer and improves query performance.
  • Cache signed URLs: Generate signed URLs once and reuse them until expiration. Avoid regenerating them on every page load.
  • Batch operations: When uploading or deleting multiple files, use JavaScript queries to batch operations rather than triggering individual queries repeatedly (see the sketch after this list).
  • Stream large files: For large file downloads, use signed URLs and let users download directly from S3 rather than reading files into memory.
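
As an example of batching, the sketch below deletes every selected file with one JavaScript query instead of one query run per file. It assumes a multi-select Table named table1, a Delete file query named s3DeleteQuery that reads its File key from {{ fileKey }}, and a list query named s3ListQuery; all of these names are illustrative.

JavaScript query for batch delete
// Delete each selected file, then refresh the list once at the end.
const keys = table1.selectedRows.map(row => row.key);

for (const key of keys) {
  // The Delete file query must reference {{ fileKey }} as its File key.
  await s3DeleteQuery.trigger({ additionalScope: { fileKey: key } });
}

await s3ListQuery.trigger();
return keys.length;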

Security

  • Use least privilege IAM: Grant only the minimum S3 permissions required for your use case. Avoid using full s3:* permissions.
  • Rotate credentials: Prefer IAM role assumption over static access keys. If using access keys, rotate them regularly.
  • Encrypt sensitive data: Enable server-side encryption (SSE) on your S3 buckets to protect data at rest.
  • Limit signed URL expiration: Set short expiration times for signed URLs (e.g., 1 hour) to minimize exposure if URLs are shared.
  • Use configuration variables: Store AWS credentials in configuration variables rather than hardcoding them in queries.
  • Use resource environments: Organizations on an Enterprise plan can configure multiple resource environments to maintain separate S3 bucket configurations for production, staging, and development.
  • Validate file uploads: Check file types and sizes before uploading to prevent malicious or oversized uploads (see the sketch after this list).
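
For example, a JavaScript query can check the selected file before triggering the upload. A minimal sketch, assuming a File Input named fileInput1, an upload query named s3UploadQuery, and a 10 MB limit (all are illustrative):

JavaScript query: validate before upload
// Reject files that are too large or not an allowed type before uploading.
const file = fileInput1.value[0];
const allowedTypes = ['image/jpeg', 'image/png', 'application/pdf'];
const maxBytes = 10 * 1024 * 1024; // 10 MB

if (!file) {
  throw new Error('Select a file first.');
}
if (!allowedTypes.includes(file.type)) {
  throw new Error(`File type ${file.type} is not allowed.`);
}
if (file.size > maxBytes) {
  throw new Error('File is larger than 10 MB.');
}

return s3UploadQuery.trigger();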

Data organization

  • Use consistent naming: Establish a naming convention for file keys (e.g., user_id/category/date/filename) to keep files organized.
  • Tag objects: Use S3 object tags to categorize files by project, department, or status. This makes filtering and lifecycle management easier.
  • Implement versioning: Enable S3 versioning on buckets to protect against accidental deletions and overwrites.
  • Archive old files: Use S3 lifecycle policies to automatically move old files to cheaper storage classes (e.g., Glacier) or delete them after a retention period.
  • Separate environments: Use different buckets for development, staging, and production environments to prevent accidental data mixing. Configure different credentials for each environment using Retool environments.