Package | Description |
---|---|
com.oracle.bmc.generativeaiinference.model | |
Modifier and Type | Method | Description |
---|---|---|
static CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.builder() | Create a new builder. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.copy(CohereLlmInferenceRequest model) | |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.frequencyPenalty(Double frequencyPenalty) | Penalizes new tokens based on their frequency in the generated text so far, to reduce repetitiveness. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.isEcho(Boolean isEcho) | Whether to return the user prompt in the response. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.isStream(Boolean isStream) | Whether to stream back partial progress. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.maxTokens(Integer maxTokens) | The maximum number of tokens to predict for each response. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.numGenerations(Integer numGenerations) | The number of generated texts to return. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.presencePenalty(Double presencePenalty) | Penalizes new tokens based on whether they have already appeared in the generated text so far, to reduce repetitiveness. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.prompt(String prompt) | Represents the prompt to be completed. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.returnLikelihoods(CohereLlmInferenceRequest.ReturnLikelihoods returnLikelihoods) | Specifies whether, and how, token likelihoods are returned with the response. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.stopSequences(List<String> stopSequences) | The generated text is cut at the end of the earliest occurrence of any of these stop sequences. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.temperature(Double temperature) | A number that controls the randomness of the generated output. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.toBuilder() | |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.topK(Integer topK) | An integer that restricts the model to the k most likely tokens at each generation step. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.topP(Double topP) | If set to a probability 0.0 < p < 1.0, ensures that only the most likely tokens, with total probability mass p, are considered at each generation step. |
CohereLlmInferenceRequest.Builder | CohereLlmInferenceRequest.Builder.truncate(CohereLlmInferenceRequest.Truncate truncate) | For an input longer than the maximum token length, specifies which part of the input text is truncated. |
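A minimal sketch of how these builder methods compose, assuming the standard OCI SDK builder pattern in which `build()` (not listed above) produces the immutable model object; the parameter values here are illustrative, not recommendations:

```java
import com.oracle.bmc.generativeaiinference.model.CohereLlmInferenceRequest;

import java.util.Arrays;

public class CohereRequestExample {
    public static void main(String[] args) {
        // Compose an inference request from the builder methods listed above.
        CohereLlmInferenceRequest request = CohereLlmInferenceRequest.builder()
                .prompt("Write a haiku about the ocean.")
                .maxTokens(100)                       // cap the length of each response
                .temperature(0.7)                     // moderate randomness
                .topK(40)                             // sample only from the 40 most likely tokens
                .topP(0.9)                            // nucleus sampling over 90% of probability mass
                .frequencyPenalty(0.2)                // discourage frequently repeated tokens
                .stopSequences(Arrays.asList("\n\n")) // cut generation at the first blank line
                .isStream(false)                      // return the full response at once
                .build();                             // assumed standard OCI SDK builder method

        System.out.println(request);
    }
}
```

In typical use the built request is supplied as the inference request of a GenerateTextDetails payload and sent through a GenerativeAiInference client; see the service documentation for the surrounding call.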