Cohere Embed Multilingual Image 3

The cohere.embed-multilingual-image-v3.0 model is a multimodal model that creates embeddings from either text or image input. Image input isn't available in the Console; you must use the API. For the API, input a base64-encoded image in each run. For example, a 512 x 512 image converts to about 1,610 tokens.
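The base64 step can be sketched in a few lines of Python. The helper name and the in-memory example bytes below are illustrative assumptions, not the exact OCI request shape:

```python
import base64

def encode_image_for_embedding(path: str) -> str:
    """Read an image file and return its contents as a base64 string,
    suitable for an image-input field in an embeddings API call.
    (Hypothetical helper; the OCI request format may differ.)"""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Illustrative round trip with in-memory bytes instead of a real file:
raw = b"\x89PNG\r\n\x1a\n"  # the first eight bytes of a PNG file
encoded = base64.b64encode(raw).decode("utf-8")
assert base64.b64decode(encoded) == raw
```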

Available in These Regions

  • Brazil East (Sao Paulo) (dedicated AI cluster only)
  • Germany Central (Frankfurt) (dedicated AI cluster only)
  • India South (Hyderabad)
  • Japan Central (Osaka) (dedicated AI cluster only)
  • UAE East (Dubai) (dedicated AI cluster only)
  • UK South (London) (dedicated AI cluster only)
  • US East (Ashburn) (dedicated AI cluster only)
  • US Midwest (Chicago)

Key Features

  • Input text or image, but not both; you can't combine text and image for the same embedding.
  • For image embeddings, only one image is allowed for each run. Image input is available through the API only.
  • English or multilingual.
  • Model creates a 1,024-dimensional vector for each embedding.
  • Maximum 128,000 tokens per embedding.
  • For the API, input a base64-encoded image in each run. For example, a 512 x 512 image converts to about 1,610 tokens.
  • Use the Cohere Embed Multilingual models when your inputs include languages other than English.
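Since each embedding is a fixed-length 1,024-dimensional vector, a common way to compare two embeddings is cosine similarity. The sketch below uses small toy vectors in place of real model output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional vectors standing in for 1,024-dimensional embeddings:
v1 = [1.0, 0.0, 1.0, 0.0]
v2 = [1.0, 0.0, 1.0, 0.0]
v3 = [0.0, 1.0, 0.0, 1.0]

print(cosine_similarity(v1, v2))  # identical vectors -> 1.0
print(cosine_similarity(v1, v3))  # orthogonal vectors -> 0.0
```

The same function works unchanged on the 1,024-element lists returned by an embeddings API.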

On-Demand Mode

You can reach the pretrained foundational models in Generative AI through two modes: on-demand and dedicated. Here are key features for the on-demand mode:
  • You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.
  • Low barrier to start using Generative AI.
  • Great for experimentation, proof of concept, and model evaluation.
  • Available for the pretrained models in regions not listed as (dedicated AI cluster only).
Important

Dynamic Throttling Limit Adjustment for On-Demand Mode

OCI Generative AI dynamically adjusts the request throttling limit for each active tenancy based on model demand and system capacity to optimize resource allocation and ensure fair access.

This adjustment depends on the following factors:

  • The current maximum throughput supported by the target model.
  • Any unused system capacity at the time of adjustment.
  • Each tenancy’s historical throughput usage and any specified override limits set for that tenancy.

Note: Because of dynamic throttling, rate limits are undocumented and can change to meet system-wide demand.

Tip

Because of the dynamic throttling limit adjustment, we recommend implementing a back-off strategy, which delays requests after a rejection. Without one, repeated rapid requests can lead to further rejections, increased latency, and potential temporary blocking of the client by the Generative AI service. A back-off strategy, such as exponential back-off, distributes requests more evenly, reduces load, and improves retry success, following industry best practices and enhancing the overall stability and performance of your integration with the service.
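The recommended back-off strategy can be sketched with the standard library alone. The retry wrapper and the fake endpoint below are illustrative assumptions, not part of any OCI SDK:

```python
import random
import time

def call_with_backoff(send_request, max_retries: int = 5,
                      base_delay: float = 1.0, max_delay: float = 30.0):
    """Retry a throttled call with exponential back-off and full jitter.

    `send_request` is any zero-argument callable that raises an
    exception when the service rejects the request (e.g. HTTP 429).
    """
    for attempt in range(max_retries):
        try:
            return send_request()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Delay grows as base * 2^attempt, capped at max_delay, with
            # random jitter so many clients don't retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Example: a fake endpoint that is throttled twice, then succeeds.
attempts = {"count": 0}

def flaky_request():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(flaky_request, base_delay=0.01))  # -> ok
```

Full jitter (a uniformly random delay up to the exponential cap) is a common choice because it spreads retries from many clients across the whole window instead of synchronizing them.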

Dedicated AI Cluster for the Model

To reach a model through a dedicated AI cluster in any listed region, you must create an endpoint for that model on a dedicated AI cluster. For the cluster unit size that matches this model, see the following table.

Base Model
  • Model Name: Cohere Embed Multilingual Image 3
  • OCI Model Name: cohere.embed-multilingual-image-v3.0
Fine-Tuning Cluster
  • Not available for fine-tuning
Hosting Cluster
  • Unit Size: Embed Cohere
  • Required Units: 1
Pricing Page Information
  • Product Name: Embed Cohere - Dedicated
  • For Hosting, Multiply the Unit Price: x1
Request Cluster Limit Increase
  • Limit Name: dedicated-unit-embed-cohere-count
  • For Hosting, Request Limit Increase by: 1
Tip

  • In the preceding region list, models in regions that aren't marked with (dedicated AI cluster only) have both on-demand and dedicated AI cluster options. For the on-demand option, you don't need clusters and you can reach the model in the Console playground or through the API.

  • If you don't have enough cluster limits in your tenancy for hosting an Embed model on a dedicated AI cluster, request the dedicated-unit-embed-cohere-count limit to increase by 1.

  • Review the Cohere Embed Multilingual Image 3 cluster performance benchmarks for different use cases.

Release and Retirement Dates

  • Model: cohere.embed-multilingual-image-v3.0
  • Release Date: 2025-05-14
  • On-Demand Retirement Date: 2026-01-22
  • Dedicated Mode Retirement Date: cohere.embed-v4.0
Important

For a list of all model timelines and retirement details, see Retiring the Models.

Embedding Model Parameter

When using the embedding models, you can get a different output by changing the following parameter.

Truncate

Whether to truncate the start or end tokens in a sentence when that sentence exceeds the maximum number of allowed tokens. For example, if a sentence has 516 tokens but the maximum token size is 512 and you select to truncate the end, the last 4 tokens of that sentence are cut off.
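The truncation behavior can be illustrated with a toy token list. The function below is a hypothetical stand-in for what the service does internally, not an actual API:

```python
def truncate_tokens(tokens: list[str], max_tokens: int,
                    side: str = "END") -> list[str]:
    """Simulate the Truncate parameter: drop tokens from the START or
    END of an input that exceeds the maximum allowed length.
    (Illustrative only; the service tokenizes text itself.)"""
    if len(tokens) <= max_tokens:
        return tokens
    if side == "END":
        return tokens[:max_tokens]   # cut off the trailing tokens
    return tokens[-max_tokens:]      # side == "START": cut leading tokens

# A 516-token input against a 512-token limit, as in the example above:
tokens = [f"t{i}" for i in range(516)]
kept_end = truncate_tokens(tokens, 512, side="END")
print(len(kept_end))   # -> 512
print(kept_end[-1])    # -> t511 (the last 4 tokens, t512..t515, were cut)
```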