Connect to Amazon S3
Learn how to connect to Amazon S3 and upload, download, and manage files in Retool.
Amazon S3 (Simple Storage Service) is a scalable object storage service for storing and retrieving data. With Retool's Amazon S3 integration, you can build apps and automations that upload files, download data, generate signed URLs, and manage objects in your S3 buckets.
Retool's S3 integration also works with S3-compatible providers such as DigitalOcean Spaces, MinIO, and Wasabi.
What you can do with Amazon S3
- List and browse files: Display files and folders from S3 buckets in tables or file browsers.
- Upload files: Accept file uploads from users and store them in S3.
- Download files: Retrieve files from S3 for display or processing.
- Generate signed URLs: Create temporary URLs for secure file access without exposing credentials.
- Manage files: Copy, delete, and organize files across buckets with tag support.
Before you begin
To connect Amazon S3 to Retool, you need the following:
Cloud-hosted organizations:
- AWS account: Access to an AWS account with S3 buckets.
- AWS IAM credentials: Access key ID and secret access key with S3 permissions, or an IAM role to assume.
- S3 bucket: At least one S3 bucket to connect to.
- CORS configuration: S3 bucket must have CORS enabled for cloud instances.
- Retool permissions: Ability to create and manage resources in your organization.
Self-hosted instances:
- AWS account: Access to an AWS account with S3 buckets.
- AWS IAM credentials: Access key ID and secret access key with S3 permissions, an IAM role to assume, or AWS credentials chain support.
- S3 bucket: At least one S3 bucket to connect to.
- CORS configuration: S3 bucket must have CORS enabled for your Retool instance's domain.
- Retool permissions: Edit all permissions for resources in your organization.
Configure CORS policy
Amazon S3 requires CORS (Cross-Origin Resource Sharing) configuration to allow Retool to access your buckets from the browser.
- Cloud-hosted organizations
- Self-hosted instances
Add the following CORS configuration to your Amazon S3 bucket to allow access from Retool. In the AWS console, navigate to your bucket's Permissions tab, scroll to Cross-origin resource sharing (CORS), and add this configuration:
CORS configuration for cloud instances
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": [
      "https://retool.com",
      "https://*.retool.com"
    ],
    "ExposeHeaders": ["ETag", "x-amz-meta-custom-header"]
  }
]
Replace *.retool.com with your organization's custom domain if applicable.
Add the following CORS configuration to your Amazon S3 bucket to allow access from your self-hosted instance. In the AWS console, navigate to your bucket's Permissions tab, scroll to Cross-origin resource sharing (CORS), and add this configuration:
CORS configuration for self-hosted
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": ["https://retool.example.com"],
    "ExposeHeaders": ["ETag", "x-amz-meta-custom-header"]
  }
]
Replace retool.example.com with your actual Retool instance domain.
Create an Amazon S3 resource
Follow these steps to create an Amazon S3 resource in your Retool organization.
1. Create a new resource
In your Retool organization, navigate to Resources in the main navigation and click Create new → Resource. Search for Amazon S3 and click the Amazon S3 tile to begin configuration.
Use folders to organize your resources by team, environment, or data source type. This helps keep your resource list manageable as your organization grows.
2. Configure connection settings
Configure the following connection settings for your Amazon S3 resource.
Resource name
Give your resource a descriptive name that indicates which bucket or environment it connects to.
examples of descriptive resource names
production_s3
user_uploads_bucket
backup_storage_s3
marketing_assets_s3
S3 bucket
Specify the name of the S3 bucket you want to connect to. You can configure this to connect to a specific bucket or leave it empty to allow access to any bucket at query time.
static bucket name
my-app-uploads
dynamic bucket with embedded expressions
{{ environment === 'production' ? 'prod-bucket' : 'dev-bucket' }}
S3 region
Select the AWS region where your S3 bucket is located (e.g., us-east-1, us-west-2, eu-west-1). This ensures Retool connects to the correct regional endpoint.
Default ACL
Set the default access control list (ACL) for objects uploaded to S3. Common options include private, public-read, authenticated-read, and bucket-owner-full-control. Leave empty to use the bucket's default ACL.
- Cloud-hosted organizations
- Self-hosted instances
Outbound region
If your organization uses outbound regions, select the region that should be used for requests to Amazon S3. This controls which geographic region your requests originate from.
Self-hosted instances do not have the outbound region option. Requests originate from your Retool instance's network location.
3. Configure authentication
Choose an authentication method based on your deployment type and security requirements. Amazon S3 resources support IAM access keys, IAM role assumption, and, on self-hosted instances, the AWS credentials chain.
| Authentication method | Use cases |
|---|---|
| Access Key + Secret Key | Standard authentication with IAM user credentials. Use for development environments, when you have static credentials from AWS IAM, or when you need predictable non-expiring credentials. Most common method for initial setup. |
| IAM role (assume role) | Enhanced security with temporary credentials and automatic rotation. Use for production environments following AWS security best practices, cross-account access to S3 buckets, or when you want short-lived credentials. Requires configuring trust relationships in AWS IAM. |
| AWS credentials chain (self-hosted only) | Automatic credential discovery from your environment. Use when Retool runs on AWS infrastructure (EC2, ECS, EKS) with instance profiles or task roles, when you prefer environment-based credential management, or to avoid storing static credentials in Retool. |
- Cloud-hosted organizations
- Self-hosted instances
Cloud organizations can authenticate using Access Key + Secret Key or IAM role (assume role).
Option A: Access Key + Secret Key (Recommended)
Use an AWS access key ID and secret access key to authenticate with S3.
1. Create an IAM user with S3 permissions:
In the AWS IAM console, create a user or service account with S3 permissions. Attach a policy that grants the necessary permissions for your use case.
example IAM policy for S3 access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
2. Generate access credentials:
Create an access key ID and secret access key for the IAM user. Store these credentials securely; they grant the same S3 access as the IAM user's attached policy.
3. Configure authentication in Retool:
In the Retool resource configuration, paste the AWS access key ID in the Access Key ID field and the AWS secret access key in the Secret Access Key field.
4. Store credentials securely:
Consider using configuration variables or secrets to store credentials instead of hardcoding them in the resource configuration.
Option B: IAM role (assume role)
Use an IAM role with temporary credentials for enhanced security and automatic credential rotation.
1. Create an IAM role with S3 permissions:
In the AWS IAM console, create a role with S3 permissions. Attach a policy that grants the necessary permissions for your use case.
2. Configure trust relationship:
Update the role's trust policy to allow Retool to assume the role. You need to add Retool's AWS account ID and optionally require an external ID for additional security.
example trust policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::RETOOL_AWS_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "your-external-id"
        }
      }
    }
  ]
}
Contact Retool support for the correct AWS account ID to use in the trust policy.
3. Configure role in Retool:
In the Retool resource configuration, select IAM role as the authentication method. Paste the Role ARN (e.g., arn:aws:iam::123456789012:role/RetoolS3Access) and optionally specify an External ID if required by your trust policy.
4. Test the connection:
Click Test connection to verify Retool can assume the role and access your S3 bucket.
Self-hosted instances can authenticate using AWS credentials chain, Access Key + Secret Key, or IAM role (assume role).
Option A: AWS credentials chain (Recommended for AWS-hosted instances)
Use the AWS credentials chain to automatically discover credentials from your environment. This method follows AWS security best practices by avoiding static credentials.
1. Configure AWS credentials in your environment:
Ensure your Retool instance has access to AWS credentials through one of these methods:
- EC2 instance profile: Attach an IAM role to your EC2 instance with S3 permissions.
- ECS task role: Assign an IAM role to your ECS task definition with S3 permissions.
- EKS service account: Use IAM roles for service accounts (IRSA) to grant S3 access.
- Environment variables: Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
- AWS credentials file: Configure credentials in ~/.aws/credentials on the Retool server.
2. Select AWS credentials chain:
In the Retool resource configuration, select AWS credentials chain as the authentication method. No additional credentials are needed as Retool automatically discovers them from your environment.
3. Verify permissions:
The credentials chain checks the standard AWS credential sources, including environment variables, the credentials file, and EC2, ECS, or EKS metadata. Ensure at least one of these methods is configured with appropriate S3 permissions.
Option B: Access Key + Secret Key
Use an AWS access key ID and secret access key to authenticate with S3.
1. Create an IAM user with S3 permissions:
In the AWS IAM console, create a user or service account with S3 permissions. Attach a policy that grants the necessary permissions for your use case.
2. Generate access credentials:
Create an access key ID and secret access key for the IAM user.
3. Configure authentication in Retool:
In the Retool resource configuration, paste the AWS access key ID in the Access Key ID field and the AWS secret access key in the Secret Access Key field.
Option C: IAM role (assume role)
Use an IAM role with temporary credentials for enhanced security and automatic credential rotation.
1. Create an IAM role with S3 permissions:
In the AWS IAM console, create a role with S3 permissions.
2. Configure trust relationship:
Update the role's trust policy to allow your Retool instance to assume the role. This typically involves trusting the IAM role or user that your Retool instance runs as.
3. Configure role in Retool:
In the Retool resource configuration, select IAM role as the authentication method. Paste the Role ARN and optionally specify an External ID if required by your trust policy.
4. Test the connection
Click Test connection to verify Retool can authenticate with Amazon S3. If the test succeeds, you see a success message. If it fails, check the following:
- Authentication: Verify your AWS credentials are correct and not expired.
- Permissions: Ensure the IAM user or role has the necessary S3 permissions (s3:ListBucket at minimum for testing).
- CORS configuration: Confirm your S3 bucket has the correct CORS policy for your Retool domain.
- Bucket name: Check that the bucket name is correct and accessible with your credentials.
- Region: Verify the S3 region matches your bucket's location.
After testing the connection, click View in console to open the Debug Tools console. The console displays detailed information about the test, including request details (URL, method, headers), response (status code, body), execution time, and error details if the test fails. This information is helpful for troubleshooting connection issues.
5. Save the resource
Click Create resource to save your Amazon S3 resource. You can now use it in queries across your Retool apps and automations.
Query Amazon S3 data
Once you've created an Amazon S3 resource, you can query files and objects in your Retool apps and automations.
Create a query
You can create an Amazon S3 query in a Retool app using Assist to generate queries with natural language, or manually using code.
- Assist
- Code
Use Assist to generate queries from natural language prompts. Assist can create queries to list files, upload data, and manage objects in your S3 resource.
To create a query with Assist:
- In the Retool app IDE, click the Assist button at the bottom of the left toolbar to open the Assist panel (if not already visible).
- Write a prompt describing what you want to do, referencing your resource using @.
- Press Enter to submit the prompt.
- Select your Amazon S3 resource when prompted.
- Review the generated query and click Run query to add it to your app.
example Assist prompt
list all files in my-bucket using @Amazon S3
To manually create an Amazon S3 query in a Retool app:
- In the Retool app IDE, open the Code tab, then click + in the page or global scope.
- Select Resource query.
- Choose your Amazon S3 resource.
- Select an Action type from the dropdown.
You can also create Amazon S3 queries in workflows and agent tools using the same resource.
Action types
Amazon S3 queries support multiple action types for different file operations:
| Action type | Description | Use case |
|---|---|---|
| List files | List objects in a bucket with optional prefix filter. | Display files in a table or file browser. |
| Read file | Read file contents into memory as text or base64. | Display or process file data in your app. |
| Download file | Download file with binary data handling. | Allow users to download files to their device. |
| Upload data | Upload file data to S3 with metadata. | Accept file uploads from users and store them. |
| Generate signed URL | Create temporary URL for file access. | Share files securely with time-limited access. |
| Copy file | Copy object within or between buckets. | Duplicate or move files between locations. |
| Delete file | Remove object from bucket. | Delete unwanted files. |
| Get tags | Retrieve tags for an object. | Display file metadata and labels. |
| Update tags | Modify tags for an object. | Manage file metadata and organization. |
| Delete tags | Remove tags from an object. | Clean up file metadata. |
Query configuration fields
Each query type has specific configuration fields:
Bucket name
Specify the S3 bucket to operate on. If configured in the resource, this defaults to the resource's bucket. You can override it per query to access different buckets.
static bucket name
my-app-uploads
dynamic bucket with embedded expressions
{{ environment === 'production' ? 'prod-uploads' : 'dev-uploads' }}
bucket from user selection
{{ bucketSelect.value }}
File key
The path and filename of the object in S3. Use forward slashes (/) to specify folders. S3 treats the entire key as the object identifier, including any path separators.
simple filename
document.pdf
nested path
uploads/2026/01/document.pdf
dynamic file key with embedded expressions
{{ 'uploads/' + moment().format('YYYY/MM/') + fileInput1.value[0].name }}
user-specific path
{{ 'users/' + currentUser.id + '/avatar.jpg' }}
Prefix
Filter objects by prefix when listing files. This acts like a folder filter, returning only objects whose keys start with the specified prefix.
folder path prefix
uploads/documents/
date-based prefix
uploads/2026/01/
user-specific prefix
{{ 'users/' + currentUser.id + '/' }}
Upload data
The data to upload to S3. Can be file data from a component, base64-encoded data, a string, or serialized JSON.
file from File Input component
{{ fileInput1.value[0] }}
JSON data as file
{{ JSON.stringify(table1.displayedData) }}
image data (base64)
{{ image1.value }}
text content
{{ textEditor1.value }}
File type
MIME type for the uploaded file. S3 uses this for the Content-Type header, which affects how browsers handle the file when accessed directly.
common MIME types
image/jpeg
application/pdf
text/csv
application/json
text/plain
dynamic from file input
{{ fileInput1.value[0].type }}
Signed URL options
Configure the signed URL operation type and expiration time when generating signed URLs. Signed URLs provide temporary access to private S3 objects without requiring AWS credentials.
download (getObject)
// Signed operation name
getObject
// Expiration (seconds)
3600 // 1 hour
upload (putObject)
// Signed operation name
putObject
// Expiration (seconds)
1800 // 30 minutes
dynamic expiration
{{ expirationInput.value * 3600 }} // Convert hours to seconds
Data types and formatting
Amazon S3 queries accept and return specific data types. Understanding these formats helps you work with S3 data effectively in Retool.
Request data types
Use embedded expressions ({{ }}) to provide dynamic values to S3 queries.
| Value type | Description | Example |
|---|---|---|
| Strings | Bucket names, file keys, prefixes, and MIME types. | 'my-bucket' or {{ textInput1.value }} |
| File objects | File data from File Input components. | {{ fileInput1.value[0] }} |
| Binary data | Base64-encoded strings for images or files. | {{ image1.value }} |
| Numbers | Expiration times, max keys, and numeric metadata. | 3600 or {{ slider1.value }} |
| JSON strings | Serialized data for upload as JSON files. | {{ JSON.stringify(table1.data) }} |
Response data types
S3 queries return data in formats specific to the action type.
| Action type | Response format | Description |
|---|---|---|
| List files | Array of objects | Each object contains key, size, lastModified, etag. |
| Read file | String or base64 | Text files return as strings. Binary files return as base64. |
| Download file | Binary download | Triggers browser download. No data returned to app. |
| Upload data | Object | Contains etag, location, bucket, key. |
| Generate signed URL | Object | Contains signedUrl (string) with temporary access URL. |
| Copy file | Object | Contains etag and copySource. |
| Delete file | Boolean | Returns true if successful. |
| Get tags | Object | Key-value pairs of tags. |
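If a Read file query returns a binary object as base64, one way to display it is to point an Image component's source at a data URL built from the query data. A minimal sketch, assuming a hypothetical query named s3ReadQuery that reads a JPEG object:
display a base64 image from a Read file query
{{ 'data:image/jpeg;base64,' + s3ReadQuery.data }}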
Working with file data
When working with files in Retool, use these patterns to handle different file types.
display uploaded files in table
{{ s3ListQuery.data }}
Use transformers to format the display:
{{ (item.size / 1024).toFixed(2) + ' KB' }}
{{ moment(item.lastModified).format('YYYY-MM-DD HH:mm:ss') }}
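If you prefer to keep formatting logic in one place, these patterns can be combined into a single transformer and the Table pointed at its output. A minimal sketch, assuming a hypothetical transformer named formattedFiles and the s3ListQuery used above:
example transformer that formats the file list
// Map the raw List files response into display-ready rows.
return {{ s3ListQuery.data }}.map(obj => ({
  key: obj.key,
  size: (obj.size / 1024).toFixed(2) + ' KB',
  lastModified: moment(obj.lastModified).format('YYYY-MM-DD HH:mm:ss'),
}));
You would then set the Table component's Data property to {{ formattedFiles.value }}.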
upload and store file metadata
After uploading a file, store the S3 key in your database to reference it later.
// After successful upload
{
  fileKey: {{ s3UploadQuery.data.key }},
  fileName: {{ fileInput1.value[0].name }},
  fileSize: {{ fileInput1.value[0].size }},
  uploadedAt: {{ moment().toISOString() }},
  uploadedBy: {{ currentUser.id }}
}
Common use cases
These examples demonstrate the most common Amazon S3 operations in Retool apps.
list and display files in a bucket
List files from an S3 bucket and display them in a Table component with formatted columns.
1. Create list query:
| Field | Value |
|---|---|
| Action type | List files |
| Bucket name | my-app-uploads |
| Prefix | uploads/ |
| Max keys | 1000 |
2. Configure Table component:
Set the Table component's Data property to:
{{ s3ListQuery.data }}
3. Format columns:
Add transformers to format file size and dates:
{{ (item.size / 1024 / 1024).toFixed(2) + ' MB' }}
{{ moment(item.lastModified).fromNow() }}
Result: Users see a formatted list of files with human-readable sizes and timestamps.
upload files with organized folder structure
Accept file uploads from users and store them in S3 with an organized folder structure by date and user.
1. Add File Input component:
Add a File Input component to your app. Configure it to accept the file types you want (e.g., image/*, .pdf, .csv).
2. Create upload query:
| Field | Value |
|---|---|
| Action type | Upload data |
| Bucket name | my-app-uploads |
| File key | {{ 'uploads/' + currentUser.id + '/' + moment().format('YYYY/MM/DD/') + fileInput1.value[0].name }} |
| Upload data | {{ fileInput1.value[0] }} |
| File type | {{ fileInput1.value[0].type }} |
3. Add upload button with event handler:
Create a Button component with these event handlers:
- Run query: s3UploadQuery
- On success: Show notification with "File uploaded successfully"
- On success: Run query: s3ListQuery (to refresh the file list)
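Alternatively, you can orchestrate these steps in a single JavaScript query wired to the button's click handler. A minimal sketch, assuming a hypothetical query named uploadAndRefresh and the query names used above:
example JavaScript query that uploads and refreshes
// Upload the selected file, notify the user, then refresh the file list.
await s3UploadQuery.trigger();
utils.showNotification({ title: 'File uploaded successfully', notificationType: 'success' });
await s3ListQuery.trigger();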
4. Store file metadata:
After successful upload, store the file reference in your database:
INSERT INTO uploaded_files (user_id, file_key, file_name, file_size, uploaded_at)
VALUES (
{{ currentUser.id }},
{{ s3UploadQuery.data.key }},
{{ fileInput1.value[0].name }},
{{ fileInput1.value[0].size }},
{{ moment().toISOString() }}
)
Result: Files are organized by user and date, making them easy to locate and manage.
generate signed URLs for secure file sharing
Create temporary URLs that allow users to access files securely without AWS credentials.
1. Create signed URL query:
| Field | Value |
|---|---|
| Action type | Generate signed URL |
| Bucket name | my-app-uploads |
| File key | {{ table1.selectedRow.data.key }} |
| Signed operation name | getObject |
| Expiration | 3600 |
2. Display URL in Text component:
Show the signed URL to users:
{{ s3SignedUrlQuery.data.signedUrl }}
3. Add copy to clipboard button:
Create a Button component that copies the URL to clipboard:
// Action: Copy to clipboard
{{ s3SignedUrlQuery.data.signedUrl }}
4. Add download button:
Create a Button component that opens the signed URL in a new tab:
// Action: Open URL
{{ s3SignedUrlQuery.data.signedUrl }}
Result: Users get a time-limited URL (valid for 1 hour) to access the file without needing AWS credentials. The URL expires automatically for security.
bulk upload from CSV with progress tracking
Upload multiple files referenced in a CSV and track progress.
1. Parse CSV with file references:
Add a File Input component that accepts CSV files. Parse it to get file references:
{{
  fileInput1.parsedValue.map(row => ({
    fileName: row.file_name,
    sourcePath: row.source_path,
    category: row.category
  }))
}}
2. Create bulk upload query:
Create a JavaScript query that triggers the upload query in a loop. This assumes csvParsed holds the parsed rows from step 1, and that s3UploadQuery references {{ fileKey }} and {{ uploadData }} in its File key and Upload data fields so additionalScope can supply them:
// Reference app values directly in JavaScript queries (no {{ }} wrappers needed).
const files = csvParsed.value;
const results = [];
for (const file of files) {
  try {
    // Trigger the S3 upload query once per row, passing per-file values via additionalScope.
    const uploadResult = await s3UploadQuery.trigger({
      additionalScope: {
        fileKey: `uploads/${file.category}/${file.fileName}`,
        uploadData: file.sourcePath
      }
    });
    results.push({ fileName: file.fileName, status: 'success', key: uploadResult.key });
  } catch (error) {
    results.push({ fileName: file.fileName, status: 'failed', error: error.message });
  }
}
return results;
3. Display progress:
Show upload results in a Table component with status indicators.
Result: Multiple files are uploaded efficiently with clear feedback on success or failure for each file.
delete files with confirmation and undo
Delete files from S3 with user confirmation and the ability to restore from a backup bucket.
1. Add delete button with confirmation:
Add an action column to your Table with a delete button. Configure the button to show a confirmation modal before deleting.
2. Create backup query (optional):
Before deleting, copy the file to a backup bucket:
| Field | Value |
|---|---|
| Action type | Copy file |
| Bucket name | my-app-uploads |
| File key | {{ table1.selectedRow.data.key }} |
| Copy destination | backup-bucket/{{ table1.selectedRow.data.key }} |
3. Create delete query:
| Field | Value |
|---|---|
| Action type | Delete file |
| Bucket name | my-app-uploads |
| File key | {{ table1.selectedRow.data.key }} |
4. Chain event handlers:
Configure the delete button with these event handlers:
- Show confirmation: "Are you sure you want to delete this file?"
- If confirmed, run query: s3BackupQuery
- On success, run query: s3DeleteQuery
- On success, show notification: "File deleted and backed up"
- On success, run query: s3ListQuery (refresh list)
Result: Files are safely backed up before deletion, and users must confirm the action. Files can be restored from the backup bucket if needed.
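If you'd rather chain the backup and delete in code instead of event handlers, a JavaScript query can run them sequentially and handle failures in one place. A minimal sketch, assuming a hypothetical query named backupAndDelete triggered after the confirmation modal, plus the query names above:
example JavaScript query that backs up, deletes, and refreshes
try {
  // Copy the file to the backup bucket before deleting the original.
  await s3BackupQuery.trigger();
  await s3DeleteQuery.trigger();
  utils.showNotification({ title: 'File deleted and backed up', notificationType: 'success' });
  await s3ListQuery.trigger();
} catch (error) {
  utils.showNotification({ title: 'Delete failed', description: error.message, notificationType: 'error' });
}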
Best practices
Follow these best practices when working with Amazon S3 in Retool.
Performance
- Use pagination: Set maxKeys to limit the number of objects returned when listing large buckets. Use continuation tokens for pagination.
- Filter with prefixes: Use the prefix field to narrow down results when listing files. This reduces data transfer and improves query performance.
- Cache signed URLs: Generate signed URLs once and reuse them until expiration. Avoid regenerating them on every page load (see the caching sketch after this list).
- Batch operations: When uploading or deleting multiple files, use JavaScript queries to batch operations rather than triggering individual queries repeatedly.
- Stream large files: For large file downloads, use signed URLs and let users download directly from S3 rather than reading files into memory.
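One way to cache signed URLs is to keep them in a state variable keyed by file key and only regenerate them after they expire. A minimal sketch, assuming a hypothetical state variable named signedUrlCache (default value {}), the s3SignedUrlQuery from earlier configured with a 1-hour expiration, and a JavaScript query that returns the URL:
example cached signed URL lookup
const key = table1.selectedRow.data.key;
const cached = signedUrlCache.value[key];
if (cached && cached.expiresAt > Date.now()) {
  return cached.url; // Still valid: reuse the cached URL.
}
const result = await s3SignedUrlQuery.trigger(); // Generate a fresh signed URL.
signedUrlCache.setValue({
  ...signedUrlCache.value,
  [key]: { url: result.signedUrl, expiresAt: Date.now() + 55 * 60 * 1000 } // Refresh slightly before the 1-hour expiration.
});
return result.signedUrl;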
Security
- Use least privilege IAM: Grant only the minimum S3 permissions required for your use case. Avoid using full s3:* permissions.
- Rotate credentials: Prefer IAM role assumption over static access keys. If using access keys, rotate them regularly.
- Encrypt sensitive data: Enable server-side encryption (SSE) on your S3 buckets to protect data at rest.
- Limit signed URL expiration: Set short expiration times for signed URLs (e.g., 1 hour) to minimize exposure if URLs are shared.
- Use configuration variables: Store AWS credentials in configuration variables rather than hardcoding them in queries.
- Use resource environments: Organizations on an Enterprise plan can configure multiple resource environments to maintain separate S3 bucket configurations for production, staging, and development.
- Validate file uploads: Check file types and sizes before uploading to prevent malicious or oversized uploads.
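For example, you can validate the selected file in a JavaScript query before triggering the upload. A minimal sketch, assuming the fileInput1 and s3UploadQuery names used earlier; adjust the allowed types and size limit to your requirements:
example pre-upload validation
const file = fileInput1.value[0];
const allowedTypes = ['image/jpeg', 'image/png', 'application/pdf']; // Assumed allowlist for illustration.
const maxSizeBytes = 10 * 1024 * 1024; // 10 MB limit for illustration.
if (!file) {
  throw new Error('Select a file before uploading.');
}
if (!allowedTypes.includes(file.type)) {
  throw new Error(`File type ${file.type} is not allowed.`);
}
if (file.size > maxSizeBytes) {
  throw new Error('File exceeds the 10 MB size limit.');
}
await s3UploadQuery.trigger(); // All checks passed: upload the file.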
Data organization
- Use consistent naming: Establish a naming convention for file keys (e.g., user_id/category/date/filename) to keep files organized.
- Tag objects: Use S3 object tags to categorize files by project, department, or status. This makes filtering and lifecycle management easier.
- Implement versioning: Enable S3 versioning on buckets to protect against accidental deletions and overwrites.
- Archive old files: Use S3 lifecycle policies to automatically move old files to cheaper storage classes (e.g., Glacier) or delete them after a retention period.
- Separate environments: Use different buckets for development, staging, and production environments to prevent accidental data mixing. Configure different credentials for each environment using Retool environments.