Features
Here are the features that you can configure in the Oracle Android SDK.
Absolute and Relative Timestamps
Feature flag: timestampType: TimestampMode.RELATIVE
You can enable absolute or relative timestamps for chat messages. Absolute timestamps display the exact time for each message. Relative timestamps display only on the latest message and express the time relative to the previous message in terms of seconds, minutes, hours, days, months, or years ago. The precision afforded by absolute timestamps makes them ideal for archival tasks, but within the limited context of a chat session, this precision detracts from the user experience because users must compare timestamps to work out the time that has passed between messages. Relative timestamps allow users to track the conversation easily through terms like Just Now and A few moments ago that can be immediately understood. Relative timestamps improve the user experience in another way while also simplifying your development tasks: because relative timestamps mark the messages in terms of seconds, minutes, hours, days, months, or years ago, you don't need to convert them for timezones.
Configure Relative Timestamps
To configure relative timestamps:
- Enable timestamps – enableTimestamp: true
- Enable relative timestamps – timestampType: TimestampMode.RELATIVE
When you enable relative timestamps (timestampType: TimestampMode.RELATIVE), an absolute timestamp displays before the first message of the day as a header. This header displays when the conversation has not been cleared and older messages are still available in the history.
The relative timestamp on the latest message then updates at the following intervals:
- For the first 10 seconds
- Between 10 seconds and 60 seconds
- Every minute between 1 minute and 60 minutes
- Every hour between 1 hour and 24 hours
- Every day between 1 day and 30 days
- Every month between 1 month and 12 months
- Every year after the first year
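For example, here's a minimal sketch of enabling relative timestamps through the builder, assuming the BotsConfigurationBuilder exposes enableTimestamp and timestampType setters that correspond to these feature flags (as with the other builder snippets in this topic):
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.enableTimestamp(true) // Turn timestamps on
.timestampType(TimestampMode.RELATIVE) // Use relative timestamps
.build();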
Action Buttons Layout
Feature flag: actionsLayout
actionsLayout sets the layout direction for the local, global, card, and form actions. When you set this to LayoutOrientation.HORIZONTAL, these buttons are laid out horizontally and wrap if the content overflows. In the following snippet, actionsLayout holds the layout configuration object that's passed to the builder:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<Server_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.actionsLayout(actionsLayout)
.build();
Attachment Filtering
You can filter and customize the items that display in the chat widget's share menu.
Feature flag: shareMenuItems
Before you can configure shareMenuItems, you must set enableAttachment to true.
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
ArrayList<Object> customItems = new ArrayList<>(Arrays.asList(shareMenuCustomItem1, shareMenuCustomItem2, shareMenuCustomItem3, ShareMenuItem.CAMERA));
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(sharedPreferences.getString(getString(R.string.pref_name_chat_server_host), Settings.CHAT_SERVER_URL), false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.shareMenuItems(customItems)
.enableAttachment(true)
.build();
If a ShareMenuCustomItem object has no value, or a null value, for the label, as does shareMenuCustomItem3 = new ShareMenuCustomItem("csv") in the preceding snippet, then the type string suffixed to share_ becomes the label. For shareMenuCustomItem3, the label is share_csv.
You can allow users to upload all file types by setting the type of a ShareMenuCustomItem object as *.
public static void shareMenuItems(ArrayList<Object> shareMenuItems)
You can set the share menu items by calling the Bots.shareMenuItems(customItems) API, where customItems is an ArrayList of Objects. Each object can be either one of the ShareMenuItem enum values or an object of ShareMenuCustomItem.
ArrayList<Object> customItems = new ArrayList<>();
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
customItems.add(shareMenuCustomItem1);
customItems.add(ShareMenuItem.CAMERA);
customItems.add(ShareMenuItem.FILE);
customItems.add(shareMenuCustomItem2);
customItems.add(shareMenuCustomItem3);
Bots.shareMenuItems(customItems);
Auto-Submitting a Field
When a field has the autoSubmit property set to true, the client sends a FormSubmissionMessagePayload with the submittedField map containing the valid field values that have been entered so far. Any fields that are not yet set (regardless of whether they are required), or fields that violate a client-side validation, are not included in the submittedField map. If the auto-submitted field itself contains a value that's not valid, then the submission message is not sent and the client error message displays for that particular field. When an auto-submit succeeds, the partialSubmitField in the form submission message is set to the id of the autoSubmit field.
Replacing a Previous Input Form
When the end user submits the form, either because a field has autoSubmit set to true or because the user has clicked its submit button, the skill can send a new EditFormMessagePayload. That message should replace the previous input form message. By setting the replaceMessage channel extension property to true, you enable the SDK to replace the previous input form message with the current input form message.
Connect and Disconnect Methods
The SDK provides public void disconnect() and public void connect() methods. The WebSocket is closed after calling the disconnect method:
Bots.disconnect();
Calling the following method re-establishes the WebSocket connection if the skill has been in a disconnected state:
Bots.connect();
When public void connect(BotsConfiguration botsConfiguration) is called with a new botsConfiguration object, the existing WebSocket connection is closed and a new connection is established using the new botsConfiguration object.
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext()) // Configuration to initialize the SDK
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.build();
Bots.connect(botsConfiguration);
Default Client Responses
Feature flag: enableDefaultClientResponse: true (default: false)
Use enableDefaultClientResponse: true to provide default client-side responses, accompanied by a typing indicator, when the skill response has been delayed or when there's no skill response at all. If the user sends the first message or query, but the skill does not respond within the number of seconds set by the odaas_default_greeting_timeout flag, the skill can display a greeting message that's configured using the odaas_default_greeting_message translation string. Next, the client checks again for the skill's response. The client displays the skill's response if it has been received; if it hasn't, then the client displays a wait message (configured with the odaas_default_wait_message translation string) at intervals set by the odaas_default_wait_message_interval flag. When the wait for the skill response exceeds the threshold set by the typingIndicatorTimeout flag, the client displays a sorry response to the user and stops the typing indicator. You can configure the sorry response using the odaas_default_sorry_message translation string.
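For example, here's a minimal sketch of turning this feature on, assuming enableDefaultClientResponse and typingIndicatorTimeout are exposed as builder setters like the other flags in this topic (the odaas_* translation strings themselves would be customized through your app's string resources):
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.enableDefaultClientResponse(true) // Show default client-side responses while the skill is delayed
.typingIndicatorTimeout(30) // Seconds to wait for a skill response; the value here is illustrative
.build();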
Delegation
Feature configuration: messageModifierDelegate
To use delegation, implement the interface MessageModifierDelegate and pass its instance to the messageModifierDelegate property.
private class MessageDelegate implements MessageModifierDelegate {
@Override
public Message beforeSend(Message message) {
// Handle the before-send delegate here
return message;
}
@Override
public Message beforeDisplay(Message message) {
if (message != null && message.getPayload() != null && message.getPayload().getType() == MessagePayload.MessageType.CARD) {
((CardMessagePayload) message.getPayload()).setLayout(CardLayout.VERTICAL);
}
return message;
}
@Override
public Message beforeNotification(Message message) {
// Handle the before-notification delegate here
return message;
}
@Override
public void beforeEndConversation(CompletionHandler completionHandler) {
// Handle the before-end-conversation delegate here
// Trigger the completionHandler.onSuccess() callback after successful execution of the task.
// Trigger the completionHandler.onFailure() callback when the task is unsuccessful.
}
}
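Here's a minimal sketch of registering the delegate, assuming the BotsConfigurationBuilder exposes a messageModifierDelegate setter corresponding to the property named above:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.messageModifierDelegate(new MessageDelegate()) // Register the delegate implemented above
.build();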
public Message beforeDisplay(Message message)
The public Message beforeDisplay(Message message) delegate allows a skill's message to be modified before it is displayed in the conversation. The modified message that's returned by the delegate displays in the conversation. If the method returns null, then the message is not displayed.
Display the Conversation History
You can either enable or disable the display of a user's local conversation history after the SDK has been re-initialized by setting displayPreviousMessages to true or false in the bots configuration. When set to false, previous messages are not displayed for the user after re-initialization of the SDK.
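For example, here's a sketch that hides the local history on re-initialization, assuming displayPreviousMessages is exposed as a builder setter like the other configuration properties:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.displayPreviousMessages(false) // Hide the user's local conversation history
.build();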
End the Chat Session
Feature flag: enableEndConversation: true
Setting enableEndConversation: true adds a close button to the header view that enables users to explicitly end the current chat session. A confirmation prompt dialog opens when users click this close button, and when they confirm the close action, the SDK sends an event message to the skill that marks the end of the chat session. The SDK then disconnects the skill from the instance, collapses the chat widget, and erases the current user's conversation history. The SDK triggers the beforeEndConversation(CompletionHandler completionHandler) delegate, which can be used to perform a task before sending the close session request to the server. It also raises an OnChatEnd() event that you can register for. Opening the chat widget afterward starts a new chat session.
public static void endChat()
You can also end the chat session programmatically by calling the Bots.endChat() API:
Bots.endChat();
CompletionHandler
CompletionHandler is an event listener implemented on the SDK that listens for the completion of the task being performed on the beforeEndConversation(CompletionHandler completionHandler) delegate in the host application. Refer to the Javadoc included with the SDK, available from the ODA and OMC download page.
Headless SDK
The SDK can be used without its UI. To use it in this mode, import only the com.oracle.bots.client.sdk.android.core-24.10.aar package into the project as described in Add the Oracle Android Client SDK to the Project.
The SDK maintains the connection to server and provides APIs to send messages, receive messages, and get updates for the network status and for other services. You can use the APIs to interact with the SDK and update the UI.
You can send a message using any of the send*() APIs available in the Bots class. For example, public static void sendMessage(String text) sends a text message to the skill or digital assistant.
public static void sendMessage(String text)
Its text parameter is the text message.
Bots.sendMessage("I want to order a Pizza");
EventListener
To get updates, implement the EventListener interface, which provides the functionality for:
- void onStatusChange(ConnectionStatus connectionStatus) – This method is called when the WebSocket connection status changes. Its connectionStatus parameter is the current status of the connection. Refer to the Javadocs included in the SDK (available from the ODA and OMC download page) for more details about the ConnectionStatus enum.
- void onMessageReceived(Message message) – This method is called when a new message is received from the skill. Its message parameter is the message received from the skill. Refer to the Javadocs included in the SDK (available from the ODA and OMC download page) for more details about the Message class.
- void onMessageSent(Message message) – This method is called when a message is sent to the skill. Its message parameter is the message sent to the skill. Refer to the Javadocs included in the SDK (available from the ODA and OMC download page) for more details about the Message class.
- void onAttachmentComplete() – This method is called when an attachment upload has completed.
public class BotsEventListener implements EventListener {
@Override
public void onStatusChange(ConnectionStatus connectionStatus) {
// Handle the connection status change
}
@Override
public void onMessageReceived(Message message) {
// Handle the messages received from skill/DA
}
@Override
public void onMessageSent(Message message) {
// Handle the message sent to skill or Digital Assistant
}
@Override
public void onAttachmentComplete() {
// Handle the post attachment upload actions
// Close the attachment upload progress popup if any etc.
}
}
The instance of type EventListener should then be passed to setEventListener(EventListener eventListener).
public static void setEventListener(EventListener eventListener)
Its eventListener parameter is an instance of type EventListener to receive updates.
Bots.setEventListener(new BotsEventListener());
In-Widget Webview
Feature flag: linkHandler
You can configure the link behavior in chat messages to allow users to access web pages from within the chat widget. Instead of having to switch from the conversation to view a page in a tab or separate browser window, a user can remain in the chat because the chat widget opens the link within a Webview.
Configure the In-Widget Webview
Feature flag: webViewConfig
To open links in a webview within the chat widget, set the linkHandler function to WebviewLinkHandlerType.WEBVIEW. You can set the size and display of the webview itself using a webViewConfig class object:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext()) // Configuration to initialize the SDK
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.linkHandler(WebviewLinkHandlerType.WEBVIEW)
.webViewConfig(new WebViewConfig()
.webViewSize(WebviewSizeWindow.FULL)
.webViewTitleColor(<COLOR_VALUE>)
.webviewHeaderColor(<COLOR_VALUE>)
.clearButtonLabel(<BUTTON_TEXT>)
.clearButtonLabelColor(<COLOR_VALUE>)
.clearButtonIcon(<IMAGE_ID>))
.build();
As illustrated in this code snippet, you can set the following attributes for the webview.

| Attribute | Settings |
|---|---|
| webViewSize | Sets the screen size of the in-widget webview window with the WebviewSizeWindow enum, which has two values: PARTIAL (WebviewSizeWindow.PARTIAL) and FULL (WebviewSizeWindow.FULL). |
| clearButtonLabel | Sets the text used for the clear/close button in the top-right corner of the webview. The default text is DONE. |
| clearButtonIcon | Sets an icon for the clear button, which appears left-aligned inside the button. |
| clearButtonLabelColor | Sets the color of the text of the clear button label. |
| clearButtonColor | Sets the background color for the clear button. |
| webviewHeaderColor | Sets the background color for the webview header. |
| webviewTitleColor | Sets the color of the title in the header. The title is the URL of the web link that has been opened. |
Multi-Lingual Chat
Feature flag: multiLangChat
The Android SDK's native language support enables the chat widget to both detect a user's language and allow the user to select the conversation language from a dropdown menu in the header. Users can switch between languages, but only in between conversations, not during a conversation because the conversation gets reset whenever a user selects a new language.
Enable the Language Menu
To enable the language menu, define the multiLangChat property with an object containing the supportedLanguage ArrayList, which is comprised of language tags (lang) and optional display labels (label). Outside of this array, you can optionally set the default language with the primary property, as illustrated by primary("en") in the following snippet.
ArrayList<SupportedLanguage> supportedLanguages = new ArrayList<>();
supportedLanguages.add(new SupportedLanguage("en"));
supportedLanguages.add(new SupportedLanguage("fr", "French"));
supportedLanguages.add(new SupportedLanguage("de", "German"));
MultiLangChat multiLangChat = new MultiLangChat().supportedLanguage(supportedLanguages).primary("en");
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext()) // Configuration to initialize the SDK
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.multiLangChat(multiLangChat)
.build();
The chat widget displays the passed-in supported languages in a dropdown menu that's located in the header. In addition to the available languages, the menu also includes a Detect Language option. When a user selects a language from this menu, the current conversation is reset, and a new conversation is started with the selected language. The language selected by the user persists across sessions, so the user's previous language is automatically selected when the user revisits the skill.
- You need to define a minimum of two languages to enable the dropdown menu to display.
- If you omit the primary key, the widget automatically detects the language in the user profile and selects the Detect Language option in the menu.
Disable the Language Menu
Starting with Version 21.12, you can also configure and update the chat language without configuring the language selection dropdown menu by passing primary in the initial configuration without the supportedLanguage ArrayList. The value passed in the primary variable is set as the chat language for the conversation.
Language Detection
If you omit the primary property, the widget automatically detects the language in the user profile and activates the Detect Language option in the menu.
You can dynamically update the selected language by calling the setPrimaryChatLanguage(lang) API. If the passed lang matches one of the supported languages, then that language is selected. When no match can be found, Detect Language is activated. You can also activate the Detect Language option by calling the Bots.setPrimaryChatLanguage("und") API, where "und" indicates an undetermined language.
You can update the chat language dynamically using the setPrimaryChatLanguage(lang) API even when the dropdown menu has not been configured.
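For example, using the API named above (the language tags here are illustrative):
Bots.setPrimaryChatLanguage("fr"); // Switch the conversation to French; the conversation resets in the new language
Bots.setPrimaryChatLanguage("und"); // Re-activate automatic language detection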
Multi-Lingual Chat Quick Reference
| To do this... | ...Do this |
|---|---|
| Display the language selection dropdown to end users. | Define the multiLangChat property with an object containing the supportedLanguage ArrayList. |
| Set the chat language without displaying the language selection dropdown menu to end users. | Define primary only. |
| Set a default language. | Pass primary with the supportedLanguage ArrayList. The primary value must be one of the supported languages included in the array. |
| Enable language detection. | Pass primary as und. |
| Dynamically update the chat language. | Call the setPrimaryChatLanguage(lang) API. |
Share Menu Options
By default, the share menu displays options for the following item types:
- visual media files (images and videos)
- audio files
- general files like documents, PDFs, and spreadsheets
- location
By passing an ArrayList of Objects to shareMenuItems(ArrayList<Object>), you can restrict, or filter, the type of items that are available in the menu, customize the menu's icons and labels, and limit the upload file size (such as 1024 in the following snippet). These objects can either be an object of ShareMenuCustomItem, or ShareMenuItem enum values that are mapped to the share menu items: ShareMenuItem.CAMERA for the camera menu item (if supported by the device), ShareMenuItem.VISUAL for sharing an image or video item, ShareMenuItem.AUDIO for sharing an audio item, and ShareMenuItem.FILE for sharing a file item. Passing either an empty value or a null value displays all of the menu items that can be passed as ShareMenuItem enum values.
If a ShareMenuCustomItem object has no value, or a null value, for the label, as does shareMenuCustomItem3 = new ShareMenuCustomItem("csv") in the following snippet, then the type string suffixed to share_ becomes the label. For shareMenuCustomItem3, the label is share_csv. You can allow users to upload all file types by setting the type of a ShareMenuCustomItem object as *.
This configuration only applies when enableAttachment is set to true.
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
ArrayList<Object> customItems = new ArrayList<>(Arrays.asList(shareMenuCustomItem1, shareMenuCustomItem2, shareMenuCustomItem3, ShareMenuItem.CAMERA));
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(sharedPreferences.getString(getString(R.string.pref_name_chat_server_host), Settings.CHAT_SERVER_URL), false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.shareMenuItems(customItems)
.enableAttachment(true)
.build();
public static void shareMenuItems()
You can display all of the share menu items by calling the Bots.shareMenuItems() API:
Bots.shareMenuItems();
public static void shareMenuItems(ArrayList<Object> shareMenuItems)
You can set the share menu items by calling the Bots.shareMenuItems(customItems) API, where customItems is an ArrayList of Objects. Each object can be either one of the ShareMenuItem enum values or an object of ShareMenuCustomItem.
ArrayList<Object> customItems = new ArrayList<>();
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
customItems.add(shareMenuCustomItem1);
customItems.add(ShareMenuItem.CAMERA);
customItems.add(ShareMenuItem.FILE);
customItems.add(shareMenuCustomItem2);
customItems.add(shareMenuCustomItem3);
Bots.shareMenuItems(customItems);
Speech Recognition
Feature flag: enableSpeechRecognition
Setting the enableSpeechRecognition feature flag to true enables the microphone button to display along with the send button whenever the user input field is empty. Setting this property to true also supports the functionality enabled by the enableSpeechRecognitionAutoSend property, which, when also set to true, enables the user's speech response to be sent to the chat server automatically while displaying the response as a sent message in the chat window. You can allow users to first edit (or delete) their dictated messages before they send them manually by setting enableSpeechRecognitionAutoSend to false.
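For example, here's a sketch of a configuration that turns on speech recognition but lets users review the dictated text before sending. The builder setters are assumed to match the flag names, as in the method-call syntax used in the Voice Visualizer section:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.enableSpeechRecognition(true) // Show the microphone button
.enableSpeechRecognitionAutoSend(false) // Let users edit dictated text before sending
.build();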
public static void startRecording(IBotsSpeechListener listener)
Starts recording the user's voice message. The listener parameter is an instance of IBotsSpeechListener to receive the response returned from the server.
public static boolean isRecording()
Checks whether the voice recording has started or not. Returns true if the recording has started; otherwise, it returns false.
IBotsSpeechListener
Implement the IBotsSpeechListener interface, which provides the functionality for the following methods:
void onError(String error)
This method is called when errors occur while establishing the connection to the server, or when there is either no input given or too much input given. Its error parameter is the error message.
void onSuccess(String utterance)
This method is called when a final result is received from the server. Its utterance parameter is the final utterance received from the server.
This method was deprecated in Release 20.8.1.
void onSuccess(BotsSpeechResult botsSpeechResult)
This method is called when a final result is received from the server. Its parameter, botsSpeechResult, is the final response received from the server.
void onPartialResult(String utterance)
This method is called when a partial result is received from the server. Its utterance parameter is the partial utterance received from the server.
void onClose(int code, String message)
This method is called when the connection to the server closes. Its parameters:
- code – The status code
- message – The reason for closing the connection
void onActiveSpeechUpdate(byte[] speechData)
This method is called with updates of the user's speech as it is being recorded. Its speechData parameter is the stream of bytes recorded while the user is speaking, which can be used, for example, to render a voice visualizer in headless mode (see Voice Visualizer).
public class BotsSpeechListener implements IBotsSpeechListener {
@Override
public void onError(String error) {
// Handle errors
}
@Override
public void onSuccess(String utterance) {
// This method was deprecated in release 20.8.1.
// Handle final result
}
@Override
public void onSuccess(BotsSpeechResult botsSpeechResult) {
// Handle final result
}
@Override
public void onPartialResult(String utterance) {
// Handle partial result
}
@Override
public void onClose(int code, String message) {
// Handle the close event of connection to server
}
@Override
public void onOpen() {
// Handle the open event of connection to server
}
@Override
public void onActiveSpeechUpdate(byte[] speechData) {
// Handle the speech update event
}
}
Bots.startRecording(new BotsSpeechListener()); // Start voice recording
if (Bots.isRecording()) {
Bots.stopRecording(); // Stop voice recording
}
Speech Synthesis
- Feature flag: enableSpeechSynthesis
- Functionality configuration: speechSynthesisVoicePreferences
- Users can mute or unmute the skill's audio response using a button that's located in the header of the chat view. You enable this feature by setting the enableSpeechSynthesis feature flag to true.
- You can set the preferred language that reads the skill's messages aloud with the speechSynthesisVoicePreferences property. This parameter, which sets the language and voice, is a list of SpeechSynthesisSetting instances (described in the SDK's Javadoc that you download from the ODA and OMC download page). This property enables a fallback when the device doesn't support the preferred language or voice. If the device does not support the preferred voice, then the default voice for the preferred language is used instead. When neither the preferred voice nor language is supported, the default voice and language are used.
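As a minimal sketch of enabling this feature, assuming the builder exposes a setter matching the enableSpeechSynthesis flag:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.enableSpeechSynthesis(true) // Adds the mute/unmute button and enables audio responses
.build();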
public static void initSpeechSynthesisService()
Call this method in the onCreate() method of an Activity to initialize the speech synthesis service. The speech synthesis service is initialized when the SDK library initializes, but only if the enableSpeechSynthesis feature flag is set to true.
public class ConversationActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Bots.initSpeechSynthesisService();
}
}
public static void startBotAudioResponse(String text)
Starts reading the skill's response aloud. Its text parameter is the text for the skill's message that's read aloud.
Bots.startBotAudioResponse("What kind of crust do you want?");
This method was deprecated in Release 21.08.
public static void stopBotAudioResponse()
Stops reading the skill's response aloud.
Bots.stopBotAudioResponse();
public static boolean isSpeaking()
Checks if the skill's response is currently being read aloud or not. Returns true if the skill's response is currently being read aloud; otherwise, it returns false.
if (Bots.isSpeaking()) {
Bots.stopBotAudioResponse();
}
public static void shutdownBotAudioResponse()
Releases the resources used by the SDK. Call this method in the onDestroy() method of ConversationActivity.
public class ConversationActivity extends AppCompatActivity {
@Override
protected void onDestroy() {
super.onDestroy();
Bots.shutdownBotAudioResponse();
}
}
Speech Service Injection
Feature flag: ttsService
The speechSynthesisService feature flag allows you to inject any text-to-speech (TTS) service -- your own, or one provided by a third-party vendor -- into the SDK. To inject a TTS service, you must first set the enableSpeechSynthesis feature flag to true and then pass an instance of the SpeechSynthesisService interface to the speechSynthesisService flag.
The SpeechSynthesisService Interface
Implement the SpeechSynthesisService interface, which defines these methods:
- initTextToSpeechService(@NonNull Application application, @NonNull BotsConfiguration botsConfiguration): Initializes a new TTS service. Its application parameter is the application (it cannot be null), and its botsConfiguration parameter is the BotsConfiguration object used to control the features of the library (it cannot be null).
- speak(String phrase): Adds a phrase that's to be spoken to the utterance queue. Its phrase parameter is the text to be spoken.
- isSpeaking(): Checks whether or not the audio response is being spoken. It returns false if no ongoing audio response is being spoken. Note: This method was deprecated in Release 21.08.
- stopTextToSpeech(): Stops any ongoing speech synthesis. Note: This method was deprecated in Release 21.08.
- shutdownTextToSpeech(): Releases the resources used by the TextToSpeech engine.
- getSpeechSynthesisVoicePreferences(): Returns the voice preferences array that is used to choose the best match for the available voice that's used for speech synthesis.
- setSpeechSynthesisVoicePreferences(ArrayList<SpeechSynthesisSetting> speechSynthesisVoicePreferences): Sets the voice preferences array that is used to choose the best available voice match for speech synthesis. The speechSynthesisVoicePreferences parameter is the voice preference array for speech synthesis.
- onSpeechSynthesisVoicePreferencesChange(ArrayList<SpeechSynthesisSetting> speechSynthesisVoicePreferences): Sets the speech synthesis voice to the best available voice match. We recommend that you call this method inside the setSpeechSynthesisVoicePreferences method after setting the voice preferences ArrayList. The speechSynthesisVoicePreferences parameter is the voice preference array for speech synthesis. Note: This method was deprecated in Release 21.08.
- onSpeechRecognitionLocaleChange(Locale speechLocale): This method gets invoked when the speech recognition language has changed. By overriding this method, you can set the speech synthesis language to the same language as the speech recognition language. The speechLocale parameter is the locale set for speech recognition.
private class TextToSpeechServiceInjection implements SpeechSynthesisService {
@Override
public void initTextToSpeechService(@NonNull Application application, @NonNull BotsConfiguration botsConfiguration) {
// Initialisation of Text to Speech Service.
}
@Override
public void speak(String phrase) {
// Adds a phrase to the utterance queue to be spoken
}
@Override
public boolean isSpeaking() {
// Checks whether the bot audio response is being spoken or not.
return false;
}
@Override
public void stopTextToSpeech() {
// Stops any ongoing speech synthesis
}
@Override
public void shutdownTextToSpeech() {
// Releases the resources used by the TextToSpeech engine.
}
@Override
public ArrayList<SpeechSynthesisSetting> getSpeechSynthesisVoicePreferences() {
// The voice preferences array which is used to choose the best match available voice for speech synthesis.
return null;
}
@Override
public void setSpeechSynthesisVoicePreferences(ArrayList<SpeechSynthesisSetting> speechSynthesisVoicePreferences) {
// Sets the voice preferences array which can be used to choose the best match available voice for speech synthesis.
}
@Override
public SpeechSynthesisSetting onSpeechSynthesisVoicePreferencesChange(ArrayList<SpeechSynthesisSetting> speechSynthesisVoicePreferences) {
// Sets the speech synthesis voice to the best voice match available.
return null;
}
@Override
public void onSpeechRecognitionLocaleChange(Locale speechLocale) {
// If the speech recognition language is changed, the speech synthesis language can also be changed to the same language.
}
}
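Here's a minimal sketch of wiring the injected service into the configuration, assuming the builder exposes setters matching the flags named above (enableSpeechSynthesis and speechSynthesisService):
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.enableSpeechSynthesis(true) // Required before injecting a TTS service
.speechSynthesisService(new TextToSpeechServiceInjection()) // The SpeechSynthesisService implemented above
.build();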
SpeechSynthesisService#setSpeechSynthesisVoicePreferences(ArrayList<SpeechSynthesisSetting>) and SpeechSynthesisService#onSpeechSynthesisVoicePreferencesChange(ArrayList<SpeechSynthesisSetting>) have been deprecated in this release and have been replaced by SpeechSynthesisService#setTTSVoice(ArrayList<SpeechSynthesisSetting>) and SpeechSynthesisService#getTTSVoice(). Previously, SpeechSynthesisService#setSpeechSynthesisVoicePreferences set the speech synthesis voice preference array, and SpeechSynthesisService#onSpeechSynthesisVoicePreferencesChange set the best voice available for speech synthesis and returned the selected voice. Now, the same functionality is attained through the new methods: SpeechSynthesisService#setTTSVoice(ArrayList<SpeechSynthesisSetting> TTSVoices), which sets both the speech synthesis voice preference array and the best available voice for speech synthesis, and SpeechSynthesisService#getTTSVoice(), which returns the selected voice for speech synthesis.
Typing Indicator for User-Agent Conversations
Feature flag: enableSendTypingStatus
When enabled, the SDK sends a RESPONDING typing event along with the text that's currently being typed by the user to Oracle B2C Service or Oracle Fusion Service. This shows a typing indicator on the agent console. When the user has finished typing, the SDK sends a LISTENING event to Oracle B2C Service or Oracle Fusion Service. This hides the typing indicator on the agent console.
Similarly, when the agent is typing, the SDK receives a RESPONDING event from the service. On receiving this event, the SDK shows a typing indicator to the user. When the agent is idle, the SDK receives a LISTENING event from the service. On receiving this event, the SDK hides the typing indicator that's shown to the user.
The sendUserTypingStatus API enables the same behavior for headless mode.
public void sendUserTypingStatus(TypingStatus status, String text)
- To show the typing indicator on the agent console:
Bots.sendUserTypingStatus(TypingStatus.RESPONDING, "<Message_Being_Typed>");
- To hide the typing indicator on the agent console:
Bots.sendUserTypingStatus(TypingStatus.LISTENING, "");
- To control the user-side typing indicator, use the onMessageReceived(Message message) event. For example:
public void onMessageReceived(Message message) {
if (message != null) {
MessagePayload messagePayload = message.getPayload();
if (messagePayload instanceof StatusMessagePayload) {
StatusMessagePayload statusMessagePayload = (StatusMessagePayload) messagePayload;
String status = statusMessagePayload.getStatus();
if (status.equalsIgnoreCase(String.valueOf(TypingStatus.RESPONDING))) {
// Show the typing indicator
} else if (status.equalsIgnoreCase(String.valueOf(TypingStatus.LISTENING))) {
// Hide the typing indicator
}
}
}
}
- typingStatusInterval – By default, the SDK sends the RESPONDING typing event every three seconds to the service. Use this flag to throttle this event. The minimum value that can be set is three seconds.
- enableAgentSneakPreview – Oracle B2C Service supports showing the user text as it's being entered. If this flag is set to true (the default is false), then the SDK sends the actual text. To protect user privacy, the SDK sends … instead of the actual text to Oracle B2C Service when the flag is set to false.
Expose Agent Details
Use these APIs to modify the agent name, agent name initials, text color, avatar, and avatar background.
public AgentDetails getAgentDetails()
Gets the details of the agent:
AgentDetails agentDetails = Bots.getAgentDetails();
Refer to the Javadocs for more details about the AgentDetails class.
public void setAgentDetails(AgentDetails agentDetails)
Sets the details of the agent:
Bots.setAgentDetails(agentDetails);
Voice Visualizer
When voice support is enabled (enableSpeechRecognition(true)), the footer of the chat widget displays a voice visualizer, a dynamic visualizer graph that indicates the frequency level of the voice input. The visualizer responds to the modulation of the user's voice by indicating whether the user is speaking too softly or too loudly. This visualizer is created using the stream of bytes that are recorded while the user is speaking, which is also exposed in the IBotsSpeechListener#onActiveSpeechUpdate(byte[]) method for use in headless mode. Voice mode is indicated when the keyboard icon appears.
When enableSpeechRecognitionAutoSend(true), the recognized text is automatically sent to the skill after the user has finished dictating the message. The mode then reverts to text input. When enableSpeechRecognitionAutoSend(false), the mode also reverts to text input, but in this case, users can modify the recognized text before sending the message to the skill.