Service Limits for Generative AI
Learn the limits for dedicated AI cluster resources in OCI Generative AI.
By default, the number of dedicated AI clusters that you get per tenancy is 0. To find the name of the resource that you're requesting an increase for, see the following table and the examples that follow it.
| Base Model Related to the Dedicated AI Cluster Resource | Resource | Limit Name | Default Count per Tenancy |
|---|---|---|---|
| `meta.llama-3-70b-instruct` | Dedicated AI cluster unit count for hosting or fine-tuning the meta.llama-3-70b-instruct models | `dedicated-unit-llama2-70-count` | 0 |
| `meta.llama-3.1-70b-instruct` | Dedicated AI cluster unit count for hosting or fine-tuning the meta.llama-3.1-70b-instruct models | `dedicated-unit-llama2-70-count` | 0 |
| `meta.llama-3.1-405b-instruct` | Dedicated AI cluster unit count for hosting the meta.llama-3.1-405b-instruct models | `dedicated-unit-llama2-70-count` | 0 |
| `meta.llama-3.2-11b-vision-instruct` | Dedicated AI cluster unit count for hosting the meta.llama-3.2-11b-vision-instruct models | `dedicated-unit-llama2-70-count` | 0 |
| `meta.llama-3.2-90b-vision-instruct` | Dedicated AI cluster unit count for hosting the meta.llama-3.2-90b-vision-instruct models | `dedicated-unit-llama2-70-count` | 0 |
| `cohere.command-r-16k` | Dedicated AI cluster unit count for hosting or fine-tuning the cohere.command-r-16k models | `dedicated-unit-small-cohere-count` | 0 |
| `cohere.command-r-08-2024` | Dedicated AI cluster unit count for hosting or fine-tuning the cohere.command-r-08-2024 models | `dedicated-unit-small-cohere-count` | 0 |
| `cohere.command-r-plus` | Dedicated AI cluster unit count for hosting the cohere.command-r-plus models | `dedicated-unit-large-cohere-count` | 0 |
| `cohere.command-r-plus-08-2024` | Dedicated AI cluster unit count for hosting the cohere.command-r-plus-08-2024 models | `dedicated-unit-large-cohere-count` | 0 |
| `cohere.command-light` | Dedicated AI cluster unit count for hosting or fine-tuning the cohere.command-light models | `dedicated-unit-small-cohere-count` | 0 |
| `cohere.command` | Dedicated AI cluster unit count for hosting or fine-tuning the cohere.command models | `dedicated-unit-large-cohere-count` | 0 |
| `cohere.embed-english-v3.0` | Dedicated AI cluster unit count for hosting the cohere.embed-english-v3.0 models | `dedicated-unit-embed-cohere-count` | 0 |
| `cohere.embed-multilingual-v3.0` | Dedicated AI cluster unit count for hosting the cohere.embed-multilingual-v3.0 models | `dedicated-unit-embed-cohere-count` | 0 |
| `cohere.embed-english-light-v3.0` | Dedicated AI cluster unit count for hosting the cohere.embed-english-light-v3.0 models | `dedicated-unit-embed-cohere-count` | 0 |
| `cohere.embed-multilingual-light-v3.0` | Dedicated AI cluster unit count for hosting the cohere.embed-multilingual-light-v3.0 models | `dedicated-unit-embed-cohere-count` | 0 |
| `meta.llama-2-70b-chat` | Dedicated AI cluster unit count for hosting the meta.llama-2-70b-chat models | `dedicated-unit-llama2-70-count` | 0 |
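You can look up the current values of these limits programmatically before filing a request. The following is a minimal sketch that uses the OCI Python SDK Limits service to list the Generative AI limit values in a tenancy; the service name `generative-ai` and the use of the default `~/.oci/config` profile are assumptions, so adjust them for your environment.

```python
# Minimal sketch: list current Generative AI limit values with the OCI Python SDK.
# Assumptions: a default ~/.oci/config profile and the limits service name
# "generative-ai"; verify both for your tenancy before relying on the output.
import oci

config = oci.config.from_file()              # default profile in ~/.oci/config
limits_client = oci.limits.LimitsClient(config)
tenancy_id = config["tenancy"]               # dedicated AI cluster limits apply per tenancy

limit_values = oci.pagination.list_call_get_all_results(
    limits_client.list_limit_values,
    tenancy_id,
    "generative-ai",                         # assumed service name for OCI Generative AI
).data

for value in limit_values:
    # Each entry pairs a limit name such as dedicated-unit-llama2-70-count
    # with its current value (0 by default).
    print(f"{value.name}: {value.value}")
```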
You must request a limit increase to use the following resources:
Meta Llama Family

- To host a Meta Llama 3.2 11B Vision model, you must request that `dedicated-unit-llama2-70-count` be increased by 1.
- To host a Meta Llama 3.2 90B Vision model, you must request that `dedicated-unit-llama2-70-count` be increased by 2.
- To host a Meta Llama 3.1 (70B) model, you must request that `dedicated-unit-llama2-70-count` be increased by 2.
- To fine-tune a Meta Llama 3.1 (70B) model, you must request that `dedicated-unit-llama2-70-count` be increased by 4.
- To host a Meta Llama 3.1 (405B) model, you must request that `dedicated-unit-llama2-70-count` be increased by 8.
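Before requesting one of these increases, you can check how much of the relevant limit is already used and available. The sketch below, again using the OCI Python SDK and assuming the `generative-ai` service name and a default config profile, checks `dedicated-unit-llama2-70-count` against the 2 units needed to host a Meta Llama 3.1 (70B) model.

```python
# Minimal sketch: check whether dedicated-unit-llama2-70-count has enough headroom
# to host a Meta Llama 3.1 (70B) model (2 units, per the list above).
# Assumptions: default ~/.oci/config profile and the "generative-ai" service name.
import oci

config = oci.config.from_file()
limits_client = oci.limits.LimitsClient(config)
tenancy_id = config["tenancy"]

availability = limits_client.get_resource_availability(
    "generative-ai",                     # assumed limits service name
    "dedicated-unit-llama2-70-count",    # limit name from the table above
    tenancy_id,
).data

needed = 2  # units to host meta.llama-3.1-70b-instruct
available = availability.available or 0
print(f"used={availability.used}, available={available}")
if available < needed:
    print(f"Request a limit increase of at least {needed - available} units.")
```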
Cohere Command R Family

- To host a Cohere Command R (deprecated) model, you must request that `dedicated-unit-small-cohere-count` be increased by 1.
- To fine-tune a Cohere Command R (deprecated) model, you must request that `dedicated-unit-small-cohere-count` be increased by 8.
- To host a Cohere Command R 08-2024 model, you must request that `dedicated-unit-small-cohere-count` be increased by 1.
- To fine-tune a Cohere Command R 08-2024 model, you must request that `dedicated-unit-small-cohere-count` be increased by 8.
- To host a Cohere Command R+ (deprecated) model, you must request that `dedicated-unit-large-cohere-count` be increased by 2.
- To host a Cohere Command R+ 08-2024 model, you must request that `dedicated-unit-large-cohere-count` be increased by 2.
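Because a single increase request must cover every cluster that you plan to create, it can help to tally the per-limit totals first. The following sketch sums the increases listed above for a hypothetical plan; the unit counts come from this page, and the helper itself is illustrative rather than part of any OCI SDK.

```python
# Illustrative helper (not an OCI API): tally the limit increases listed above
# for the clusters you plan to create. Unit counts are taken from this page.
UNIT_INCREASES = {
    ("meta.llama-3.1-70b-instruct", "hosting"): ("dedicated-unit-llama2-70-count", 2),
    ("meta.llama-3.1-70b-instruct", "fine-tuning"): ("dedicated-unit-llama2-70-count", 4),
    ("cohere.command-r-08-2024", "hosting"): ("dedicated-unit-small-cohere-count", 1),
    ("cohere.command-r-08-2024", "fine-tuning"): ("dedicated-unit-small-cohere-count", 8),
    ("cohere.command-r-plus-08-2024", "hosting"): ("dedicated-unit-large-cohere-count", 2),
}

def total_increases(plan):
    """Sum the requested increase per limit name for a list of (model, workload) pairs."""
    totals = {}
    for key in plan:
        limit_name, units = UNIT_INCREASES[key]
        totals[limit_name] = totals.get(limit_name, 0) + units
    return totals

# Example plan: host and fine-tune Cohere Command R 08-2024, and host Meta Llama 3.1 (70B).
print(total_increases([
    ("cohere.command-r-08-2024", "hosting"),
    ("cohere.command-r-08-2024", "fine-tuning"),
    ("meta.llama-3.1-70b-instruct", "hosting"),
]))
# {'dedicated-unit-small-cohere-count': 9, 'dedicated-unit-llama2-70-count': 2}
```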
For a complete list of models and their matching dedicated AI cluster limits, see Matching Base Models to Clusters.
To request dedicated AI clusters for your tenancy, see Requesting a Service Limit Increase.