generative-ai-inference
===========================

Description
-----------
OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases for text generation, summarization, and text embeddings.

Use the Generative AI service inference CLI to access your custom model endpoints, or to try the out-of-the-box models to :doc:`chat </cmdref/generative-ai-inference/chat-result/chat-cohere-chat-request>`, :doc:`generate text </cmdref/generative-ai-inference/generate-text-result>`, :doc:`summarize </cmdref/generative-ai-inference/summarize-text-result/summarize-text>`, and :doc:`create text embeddings </cmdref/generative-ai-inference/embed-text-result/embed-text>`.
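
A minimal sketch of an on-demand chat call follows. The model name and OCID are placeholders, and the flag names are assumptions based on the CLI's usual generated pattern; run ``oci generative-ai-inference chat-result chat-cohere-chat-request -h`` for the exact parameters.

```shell
# Build the serving-mode payload for an out-of-the-box (on-demand) model.
# "cohere.command-r-16k" is a placeholder model name, not a guaranteed value.
cat > serving-mode.json <<'EOF'
{
  "servingType": "ON_DEMAND",
  "modelId": "cohere.command-r-16k"
}
EOF

# The call itself needs a configured OCI CLI and a real compartment OCID;
# flag names below are assumptions -- verify them with the -h output.
# oci generative-ai-inference chat-result chat-cohere-chat-request \
#     --compartment-id "ocid1.compartment.oc1..example" \
#     --serving-mode file://serving-mode.json \
#     --message "Summarize the benefits of managed LLM services."
echo "payload written: serving-mode.json"
```

Complex parameters such as the serving mode are passed as JSON, either inline or via ``file://`` references as shown.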

To use a Generative AI custom model for inference, you must first create an endpoint for that model. Use the Generative AI service management CLI to `create a custom model <https://docs.cloud.oracle.com/api/#/en/generative-ai/latest/Model/>`__ by fine-tuning an out-of-the-box model, or a previous version of a custom model, with your own data. Fine-tune the custom model on a `fine-tuning dedicated AI cluster <https://docs.cloud.oracle.com/api/#/en/generative-ai/latest/DedicatedAiCluster/>`__. Then, create a `hosting dedicated AI cluster <https://docs.cloud.oracle.com/api/#/en/generative-ai/latest/DedicatedAiCluster/>`__ with an `endpoint <https://docs.cloud.oracle.com/api/#/en/generative-ai/latest/Endpoint/>`__ to host your custom model. For resource management in the Generative AI service, use the Generative AI service management CLI.
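
The workflow above can be outlined as a dry-run script. The command and flag names follow the management CLI's generated pattern but are assumptions here, and the OCID and display names are placeholders; consult ``oci generative-ai -h`` for the real parameters before removing the dry-run wrapper.

```shell
set -eu
COMPARTMENT="ocid1.compartment.oc1..example"   # placeholder OCID

# Dry-run helper: prints each command instead of calling the service.
run() { echo "would run: $*"; }

# 1. Create a fine-tuning dedicated AI cluster to train on.
run oci generative-ai dedicated-ai-cluster create \
    --compartment-id "$COMPARTMENT" --type FINE_TUNING

# 2. Create the custom model by fine-tuning a base model with your data.
run oci generative-ai model create \
    --compartment-id "$COMPARTMENT" --display-name my-custom-model

# 3. Create a hosting dedicated AI cluster to serve from.
run oci generative-ai dedicated-ai-cluster create \
    --compartment-id "$COMPARTMENT" --type HOSTING

# 4. Create an endpoint on the hosting cluster; its OCID is what the
#    inference CLI's dedicated serving mode targets.
run oci generative-ai endpoint create \
    --compartment-id "$COMPARTMENT" --display-name my-custom-endpoint
```

Once the endpoint is active, point the inference CLI's dedicated serving mode at the endpoint OCID.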

To learn more about the service, see the `Generative AI documentation <https://docs.cloud.oracle.com/iaas/Content/generative-ai/home.htm>`__.

Available Commands
------------------
* :doc:`chat-result </cmdref/generative-ai-inference/chat-result>`

  * :doc:`chat </cmdref/generative-ai-inference/chat-result/chat>`
  * :doc:`chat-cohere-chat-request </cmdref/generative-ai-inference/chat-result/chat-cohere-chat-request>`
  * :doc:`chat-dedicated-serving-mode </cmdref/generative-ai-inference/chat-result/chat-dedicated-serving-mode>`
  * :doc:`chat-generic-chat-request </cmdref/generative-ai-inference/chat-result/chat-generic-chat-request>`
  * :doc:`chat-on-demand-serving-mode </cmdref/generative-ai-inference/chat-result/chat-on-demand-serving-mode>`

* :doc:`embed-text-result </cmdref/generative-ai-inference/embed-text-result>`

  * :doc:`embed-text </cmdref/generative-ai-inference/embed-text-result/embed-text>`

* :doc:`generate-text-result </cmdref/generative-ai-inference/generate-text-result>`

  * :doc:`generate-text-cohere-llm-inference-request </cmdref/generative-ai-inference/generate-text-result/generate-text-cohere-llm-inference-request>`
  * :doc:`generate-text-llama-llm-inference-request </cmdref/generative-ai-inference/generate-text-result/generate-text-llama-llm-inference-request>`

* :doc:`rerank-text-result </cmdref/generative-ai-inference/rerank-text-result>`

  * :doc:`rerank-text </cmdref/generative-ai-inference/rerank-text-result/rerank-text>`
  * :doc:`rerank-text-dedicated-serving-mode </cmdref/generative-ai-inference/rerank-text-result/rerank-text-dedicated-serving-mode>`
  * :doc:`rerank-text-on-demand-serving-mode </cmdref/generative-ai-inference/rerank-text-result/rerank-text-on-demand-serving-mode>`

* :doc:`summarize-text-result </cmdref/generative-ai-inference/summarize-text-result>`

  * :doc:`summarize-text </cmdref/generative-ai-inference/summarize-text-result/summarize-text>`


.. toctree::
    :hidden:

    /cmdref/generative-ai-inference/chat-result
    /cmdref/generative-ai-inference/embed-text-result
    /cmdref/generative-ai-inference/generate-text-result
    /cmdref/generative-ai-inference/rerank-text-result
    /cmdref/generative-ai-inference/summarize-text-result