Retool AI platforms and models

Learn about the different AI platforms and models that Retool supports.

AI platforms use models that are trained on specific types of data to make relevant decisions and perform tasks. For example, you would interact with one type of model to generate chat responses and a different model to generate images. Models that generate text are also known as LLMs (large language models).

In general, a model is a family of model instances, each of which varies in functionality or data. For example, there are different instances of OpenAI's GPT-4 model, such as gpt-4 and gpt-4-turbo. You can specify the exact model instance when configuring Retool AI.
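
To illustrate the distinction, the following is a minimal sketch of calling OpenAI's API directly with the official Python SDK, where the model instance is selected by its exact name. The prompt and credential handling are illustrative, not Retool-specific; in Retool AI you choose the model instance in the query configuration instead.

```python
# Minimal sketch: selecting a specific model instance by name with the
# official OpenAI Python SDK (pip install openai). The prompt text is
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # exact model instance, e.g. "gpt-4" or "gpt-4-turbo"
    messages=[{"role": "user", "content": "Summarize this support ticket."}],
)
print(response.choices[0].message.content)
```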

Retool-managed OpenAI

You can interact with OpenAI models using either a Retool-managed connection or your own credentials. Retool recommends the Retool-managed connection for testing and development purposes only. Once you're ready to use AI in production, switch to your own OpenAI connection using API credentials.

The Retool-managed OpenAI connection provides a limited number of tokens to each organization per day and is rate-limited to 250,000 tokens per hour. If you use Vectors in Retool AI, you can reach this limit very quickly. You can configure AI platforms in Retool AI to use your own API credentials for production use.
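
As a back-of-the-envelope illustration of how quickly a Vectors workload can consume the hourly allowance, consider the sketch below. The document counts and sizes are hypothetical, chosen only to show the arithmetic.

```python
# Rough check against the Retool-managed connection's 250,000 tokens-per-hour
# rate limit. The document count and average length below are hypothetical.
HOURLY_TOKEN_LIMIT = 250_000

documents = 500              # hypothetical number of documents to embed
tokens_per_document = 2_000  # hypothetical average document length in tokens

total_tokens = documents * tokens_per_document
print(f"Embedding job needs ~{total_tokens:,} tokens "
      f"({total_tokens / HOURLY_TOKEN_LIMIT:.1f}x the hourly limit)")
```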

Available platforms and models

Retool supports a number of AI platforms and models that you can use.

| Platform | Models |
| --- | --- |
| OpenAI | GPT-4, GPT-3.5 Turbo, DALL·E |
| Anthropic | Claude 3, Claude 2.1, Claude 2, Claude Instant 1.2 |
| AWS Bedrock | Amazon Bedrock is a managed service that can be configured for different AI models, such as Anthropic or Cohere models. The models available for use with Retool AI depend on the models you configure for Bedrock. |
| Azure OpenAI | Rather than a predefined set of models from which you select, you create and deploy an Azure OpenAI service with a specific model. You then provide the endpoint and model name when configuring the API credentials (see the sketch after this table). |
| Google | Gemini 1.5 Pro, Gemini 1.0 Pro |
| Cohere | command, command-light |
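
For the Azure OpenAI case, the following is a minimal sketch of how an endpoint and deployment name map to an API call, using the openai Python SDK (version 1.0 or later). The endpoint, key, deployment name, and API version are placeholders, and the call is shown only to clarify which values you supply when configuring the credentials.

```python
# Minimal sketch: connecting to an Azure OpenAI deployment by endpoint and
# deployment name. All values below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-azure-openai-key>",                          # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment you created, not a fixed model ID
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```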

AI tokens

Estimate token usage

Use OpenAI's tokenizer tool to see how many tokens a passage of text produces.

AI models process text as tokens rather than characters or bytes. A token represents a common sequence of characters, and in general a single token equates to approximately four characters of English text.

The model converts the text you provide in an AI query into tokens, then generates a response from them.
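
You can also estimate token usage programmatically. The sketch below uses tiktoken (pip install tiktoken), the open-source tokenizer behind OpenAI's models; the model name and sample text are illustrative.

```python
# Minimal sketch: counting the tokens a prompt will consume before sending it.
import tiktoken

text = "Retool AI converts your prompt into tokens before the model responds."

encoding = tiktoken.encoding_for_model("gpt-4")
tokens = encoding.encode(text)

print(f"{len(tokens)} tokens for {len(text)} characters")
# Roughly four characters per token is a reasonable rule of thumb for English text.
```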

Billing and usage

AI platforms calculate billing costs based on token usage. Refer to each AI platform's documentation for pricing information and details on how to monitor usage.