Features
Here are the features that you can configure in the Oracle iOS SDK.
Absolute and Relative Timestamps
- Feature flag: enableTimestamp
- Feature flag: timestampMode
You can enable absolute or relative timestamps for chat messages. Absolute timestamps display the exact time for each message. Relative timestamps display only on the latest message and express the time in terms of the seconds, minutes, hours, days, months, or years ago relative to the previous message.
The precision afforded by absolute timestamps makes them ideal for archival tasks, but within the limited context of a chat session, this precision detracts from the user experience because users must compare timestamps to work out the passage of time between messages. Relative timestamps allow users to track the conversation easily through terms like Just Now and A few moments ago that can be immediately understood. Relative timestamps improve the user experience in another way while also simplifying your development tasks: because relative timestamps mark the messages in terms of time elapsed, you don't need to convert them for time zones.
Configure Relative Timestamps
To add a relative timestamp, enableTimestamp must be enabled (true) and timestampMode, which controls the style of timestamp, must be set to timestampMode.relative. When you set timestampMode.relative, an absolute timestamp displays before the first message of the day as a header. This header displays when the conversation has not been cleared and older messages are still available in the history.
The relative timestamp then updates at the following intervals:
- For the first 10 seconds
- Between 10 and 60 seconds
- Every minute between 1 and 60 minutes
- Every hour between 1 and 24 hours
- Every day between 1 and 30 days
- Every month between 1 and 12 months
- Every year after the first year
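Putting the two flags together, a minimal relative-timestamp setup might look like the following sketch. It assumes both flags are set on the BotsConfiguration instance used to initialize the SDK, like the SDK's other feature flags:

```swift
// Sketch: turn on timestamps and use the relative style.
// Assumes botsConfiguration is the BotsConfiguration instance passed to the SDK.
botsConfiguration.enableTimestamp = true
botsConfiguration.timestampMode = .relative
```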
Actions Layout
Use the BotsProperties.actionsLayout configuration settings to display the action buttons in horizontal or vertical layouts. The layout can be set for local actions, global actions, card actions, and form actions. The default value is horizontal for all action types.
BotsProperties.actionsLayout = ActionsLayout(local: .horizontal,
                                             global: .vertical,
                                             card: .horizontal,
                                             form: .horizontal)
Agent Avatars
For skills integrated with live agent support, the agentAvatar
setting enables the display of an avatar icon for the messages sent by the agents. You
configure this with the URL of the icon that displays alongside the agent messages.
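For example, assuming agentAvatar is set on the BotsConfiguration instance like the SDK's other feature settings and accepts a URL string (the property placement and URL here are illustrative):

```swift
// Sketch: show an avatar icon next to messages sent by live agents.
botsConfiguration.agentAvatar = "https://picsum.photos/200/300"
```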
Dynamically Update Avatars and Agent Details
You can enable the user and agent avatars to be dynamically updated at runtime using the setPersonAvatar(avatarAsset : String), setAgentDetails(agentDetails: AgentDetails), and getAgentDetails() methods.
Set the User Avatar
setPersonAvatar(avatarAsset : String) enables the dynamic updating of the user avatar at runtime. This method sets the user avatar for all the messages, including previous messages. The avatarAsset can be:
- The name of the asset from the project Assets folder.
- An external link to the image source, as shown in the following example.
BotsUIManager.shared().setPersonAvatar(avatarAsset: "https://picsum.photos/200/300")
BotsUIManager.shared().setPersonAvatar(avatarAsset: "userAvatarInAssetsFolder")
Set the Agent Details
You can customize the agent details using the setAgentDetails(agentDetails: AgentDetails) API. In addition to the agent name, you can use this API to customize the name's text color and the avatar. If no agent avatar has been configured, the avatar can be rendered dynamically from the agent name initials. You can also customize the color of these initials and the background color. The getAgentDetails() API retrieves the current agent details.
Although these APIs can be called at any time, we recommend using them with either the onReceiveMessage() or beforeDisplay() events.
setAgentDetails(agentDetails: AgentDetails)
All of the parameters of the
AgentDetails
object are
optional.
// To override the avatar, name, and name text color
let agentDetails = AgentDetails(name: "Bob", avatarImage: "https://picsum.photos/200/300", nameTextColor: .red)
// To override the avatar and name
let agentDetails = AgentDetails(name: "Bob", avatarImage: "https://picsum.photos/200/300")
// To override the name, name text color, avatar initials color, and avatar background color
let agentDetails = AgentDetails(name: "Bob", nameTextColor: .red, avatarTextColor: .blue, avatarBackgroundColor: .green)
BotsUIManager.shared().setAgentDetails(agentDetails: agentDetails)
The AgentDetails object can be modified. For example:
let agentDetails = AgentDetails()
agentDetails.name = "Bob"
agentDetails.avatarImage = "agentAvatar"
agentDetails.nameTextColor = .red
agentDetails.avatarBackgroundColor = .green
agentDetails.avatarTextColor = .brown
BotsUIManager.shared().setAgentDetails(agentDetails: agentDetails)
Attachment Filtering
Feature flag: shareMenuConfiguration
Use shareMenuConfiguration to restrict, or filter, the item types that are available in the share menu popup, set the file size limit in KB for uploads (such as 1024 in the following snippet), and customize the menu's icons and labels. The default and maximum limit is 25 MB.
Before you can configure shareMenuConfiguration, you must set enableAttachment to true.
botsConfiguration.shareMenuConfiguration = (
    [ShareMenuItem.files, ShareMenuItem.camera, ShareMenuItem.location],
    [ShareMenuCustomItem(types: [String(kUTTypePDF)], label: "PDF Files", maxSize: 1024),
     ShareMenuCustomItem(types: [String(kUTTypeText)], label: "Text Files")]
)
For
the types
, you have to use the CFString for the corresponding file type
and convert it to String
. Any other string will not be valid. You can
allow users to upload all file types by setting the types
as
String(kUTTypeItem)
.
public func shareMenuItems(shareMenuItems: ([ShareMenuItem], [ShareMenuCustomItem]))
You can dynamically update the share menu items by calling the BotsManager.shared().shareMenuItems(shareMenuItems: ([ShareMenuItem], [ShareMenuCustomItem])) API.
BotsManager.shared().shareMenuItems(shareMenuItems: (
    [ShareMenuItem.files, ShareMenuItem.camera, ShareMenuItem.location],
    [ShareMenuCustomItem(types: [String(kUTTypePDF)], label: "PDF Files", maxSize: 1024),
     ShareMenuCustomItem(types: [String(kUTTypeText)], label: "Text Files")]
))
Auto-Submitting a Field
When a field has the autoSubmit
property set to
true
, the client sends a
FormSubmissionMessagePayload
with the
submittedField map containing the valid field values that have been entered so far. Any fields that are not set yet (regardless of whether they
are required), or fields that violate a client-side validation are not included in the
submittedField
map. If the auto-submitted field itself contains a
value that's not valid, then the submission message is not sent and the client error
message displays for that particular field. When an auto-submit succeeds, the
partialSubmitField
in the form submission message will be set to
the id
of the autoSubmit
field.
Connect, Disconnect, and Destroy Methods
The skill can be connected or disconnected, or the SDK can be destroyed, using the
public func destroy()
, public func
disconnect()
, and the public func
connect()
methods.
public func destroy()
Destroys the SDK by closing any active connection, stopping voice recognition, speech synthesis, and file uploads, and removing the SDK view controller. Once called, none of the public API methods can be called. They will only be active again after the
initialize(botsConfiguration: BotsConfiguration, completionHandler:
@escaping (ConnectionStatus, Error?) -> ())
method is called again to
initialize the SDK.
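Following the BotsManager.shared() pattern used by the other lifecycle methods:

```swift
BotsManager.shared().destroy()
```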
public func disconnect()
Closes the WebSocket connection.
BotsManager.shared().disconnect()
public func connect()
Re-establishes the WebSocket connection using the existing configuration.
BotsManager.shared().connect()
public func connect(botsConfiguration: BotsConfiguration)
When this method is called with a new BotsConfiguration, the existing WebSocket connection is closed, and a new connection is established using the new channel properties. Other properties set in BotsConfiguration remain as is.
var botsConfiguration = BotsConfiguration(url: url, userId: userId, channelId: channelId)
BotsManager.shared().connect(botsConfiguration: botsConfiguration)
Default Client Responses
Feature flag: enableDefaultClientResponse
Use enableDefaultClientResponse: true
to provide default
client-side responses accompanied by a typing indicator when the skill response has been
delayed, or when there's no skill response at all. If the user sends out the first
message/query, but the skill does not respond within the number of seconds set by
defaultGreetingTimeout
, the skill can display a greeting message
that's configured using the odais_default_greeting_message
translation
string. Next, the client checks again for the skill's response. The client displays the
skill's response if it has been received, but if it hasn't, then the client displays a
wait message (configured with the odais_default_wait_message
translation string) at intervals set by the defaultWaitMessageInterval
flag. When the wait for the skill response exceeds the threshold set by the
typingStatusTimeout
flag, the client displays a sorry response to
the user and stops the typing indicator. You can configure the sorry response using the
odais_default_sorry_message
translation string.
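Putting these flags together, a configuration sketch might look like the following. The timing values are illustrative rather than defaults, and the assumption that all four flags are set on the BotsConfiguration instance should be checked against the SDK's API docs:

```swift
// Sketch: default client responses with illustrative timings (in seconds).
botsConfiguration.enableDefaultClientResponse = true
botsConfiguration.defaultGreetingTimeout = 5       // Delay before the greeting message displays
botsConfiguration.defaultWaitMessageInterval = 5   // Interval between wait messages
botsConfiguration.typingStatusTimeout = 30         // Threshold before the sorry message displays
```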
Delegation
To intercept messages, conform to the BotsMessageServiceDelegate protocol and implement the following methods:
public func beforeDisplay(message: [String: Any]?) -> [String: Any]?
This method allows a skill’s message payload to be modified before it is
displayed in the conversation. The message payload returned by the method is
used to display the message. If it returns nil
, then the
message is not displayed.
public func beforeSend(message: [String: Any]?) -> [String: Any]?
This method allows a user message payload to be modified before it is sent to the
chat server. The message payload returned by the method is sent to the skill. If it
returns nil
, then the message is not sent.
public func beforeSendPostback(action: [String: Any]?) -> [String: Any]?
This method allows a postback action payload to be modified before it is sent to
the chat server. The action payload returned by the method is sent to the skill. If it
returns nil
, then the postback action selected by the user is not sent
to the chat server.
public class ViewController: UIViewController, BotsMessageServiceDelegate {
func beforeSend(message: [String : Any]?) -> [String : Any]? {
// Handle before send delegate here
}
func beforeDisplay(message: [String : Any]?) -> [String : Any]? {
// Handle before display delegate here
}
func beforeSendPostback(action: [String : Any]?) -> [String : Any]? {
// Handle before send postback action delegate here
}
}
The instance which conforms to the BotsMessageServiceDelegate protocol should be assigned to the BotsManager.shared().delegate property, as shown in the following code snippet for initializing the SDK:
BotsManager.shared().delegate = self
End the Chat Session
Feature flag: enableEndConversation
enableEndConversation
, when set to true
, adds a
close button to the header view that enables users to explicitly end the current chat
session. A confirmation prompt dialog opens when users click this close button and when
they confirm the close action, the SDK sends an event message to the skill that marks
the end of the chat session. The SDK then disconnects the skill from the instance,
collapses the chat widget, and erases the current user's conversation history. The SDK
also raises a chatend
event in the BotsEventListener
protocol that you can implement.
Tip:
The conversation can also be ended by calling the BotsManager.shared().endChat() method, which you can use when the SDK is initialized in headless mode.
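For example, assuming enableEndConversation is set on the BotsConfiguration instance like the SDK's other feature flags:

```swift
// Sketch: let users end the session from the close button in the header view.
botsConfiguration.enableEndConversation = true

// In headless mode, end the conversation programmatically instead:
BotsManager.shared().endChat()
```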
Headless SDK
The SDK can be used without its UI. The SDK maintains the connection to the server and provides APIs to send messages, receive messages, and get updates for the network status and for other services. You can use the APIs to interact with the SDK and update the UI.
You can send a message using any of the send()
APIs
available in the BotsManager class. For example, public func send(message: UserMessage) sends a text message to the skill or digital assistant.
public func send(message: UserMessage)
This function sends a message to the skill. Its message
parameter is an instance of a class which conforms to the UserMessage
class. In this case, it is
UserTextMessage.
BotsManager.shared().send(message: UserTextMessage(text: "I want to order a pizza", type: .text))
BotsEventListener
To handle SDK events, create a class (or extension) that conforms to the BotsEventListener protocol and implements the following methods:
- onStatusChange(connectionStatus: ConnectionStatus) – This method is called when the WebSocket connection status changes. Its connectionStatus parameter is the current status of the connection. Refer to the API docs included in the SDK for more details about the ConnectionStatus enum.
- onReceiveMessage(message: BotsMessage) – This method is called when a new message is received from the skill. Its message parameter is the message received from the skill. Refer to the API docs included in the SDK for more details about the BotsMessage class.
- onUploadAttachment(message: BotsAttachmentMessage) – This method is called when an attachment upload has completed. Its message parameter is the BotsAttachmentMessage object for the uploaded attachment.
- onDestroy() – This method is called when the destroy() method is called.
- onInitialize() – This method is called when the initialize(botsConfiguration: BotsConfiguration, completionHandler: @escaping (ConnectionStatus, Error?) -> ()) method is called.
- onChatLanguageChange(newLanguage: SupportedLanguage) – This method is called when the chat language changes. Its newLanguage parameter is the SupportedLanguage object for the newly set chat language.
- beforeEndConversation() – This method is called when the end conversation session is initiated.
- chatEnd() – A callback method triggered after the conversation has ended successfully.
extension ViewController: BotsEventListener {
func onReceiveMessage(message: BotsMessage) {
// Handle the messages received from skill or Digital Assistant
}
func onUploadAttachment(message: BotsAttachmentMessage) {
// Handle the post attachment upload actions
}
func onStatusChange(connectionStatus: ConnectionStatus) {
// Handle the connection status change
}
func onInitialize() {
//Handle initialization
}
func onDestroy() {
//Handle destroy
}
func onChatLanguageChange(newLanguage: SupportedLanguage) {
//Handle the language change.
}
func beforeEndConversation(completionHandler: @escaping (EndConversationStatus) -> Void) {
    //Do the desired cleanup before the session is closed.
    completionHandler(.success) // Call with .success if cleanup was successful.
    // Call the completion handler with the appropriate failure status instead if there was an error cleaning up.
}
func chatEnd() {
//Handle successful session end from the server before the SDK is destroyed.
}
}
The
instance which conforms to the BotsEventListener
protocol should be
assigned to the BotsManager.shared().botsEventListener
property as
illustrated in the following code snippet for initializing the
SDK:
BotsManager.shared().botsEventListener = self
In-Widget Webview
UI Property: LinkHandler
You can configure the link behavior in chat messages to allow users to access web pages from within the chat widget. Instead of having to switch from the conversation to view a page in a tab or separate browser window, a user can remain in the chat because the chat widget opens the link within a webview.
Configure the In-Widget Webview
UI Property: WebViewConfig
To configure the in-widget webview, set the LinkHandler property to LinkHandlerType.webview. WebViewConfig can be set to a WebViewConfiguration struct instance.
BotsProperties.LinkHandler = LinkHandlerType.webview
//Set the properties which you want changed from the default values.
BotsProperties.WebViewConfig.webViewSize = WebViewSize.full
BotsProperties.WebViewConfig.clearButtonLabelColor = UIColor.black
As illustrated in this code snippet, you can set the following attributes for the webview.

Attribute | Settings
---|---
webViewSize | Sets the screen size of the in-widget webview window with the WebViewSize attribute, which has two values: partial (WebViewSize.partial) and full (WebViewSize.full).
clearButtonLabel | Sets the text used for the clear/close button in the top right corner of the webview. The default text is taken from the string set to odais_done in the Localizable.strings file.
clearButtonIcon | Sets an icon for the clear button, which appears left-aligned inside the button. By default, there's no icon set for the clear button; it's an empty string.
clearButtonLabelColor | Sets the color of the text of the clear button label. The default color is UIColor.white.
clearButtonColor | Sets the background color for the clear button. The default color is UIColor.clear.
webviewHeaderColor | Sets the background color for the webview header.
webviewTitleColor | Sets the color of the title in the header. The title is the URL of the web link that has been opened.
Message Timestamp Formatting
The timestampFormat
flag formats timestamps that display in the
messages. It can accept a string consisting of format tokens like
"hh:mm:ss"
and other formats supported by the Swift DateFormatter.
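For example, assuming timestampFormat is set on the BotsConfiguration instance alongside the other timestamp flags:

```swift
// Sketch: display message timestamps as hours, minutes, and seconds.
// "hh:mm:ss" is a DateFormatter-style format string.
botsConfiguration.timestampFormat = "hh:mm:ss"
```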
Multi-Lingual Chat
Feature flag: multiLangChat
The iOS SDK's native language support enables the chat widget to detect a user's language or to allow users to select the conversation language. Users can switch between languages, but only between conversations, not during a conversation, because the conversation gets reset whenever a user selects a new language.
Enable the Language Menu
To enable the language menu, define the multiLangChat property with an object containing the supportedLanguages array, which is comprised of language tags (lang) and optional display labels (label).
Outside of this array, you can optionally set the default language with the
primaryLanguage
variable (MultiLangChat(primaryLanguage:
String)
in the following
snippet).
botsConfiguration.multiLangChat = MultiLangChat(
supportedLanguages:[
SupportedLanguage.init(lang: "en", label: "English"),
SupportedLanguage.init(lang: "fr"),
SupportedLanguage.init(lang: "fr-CA", label: "French (Canada)")
],
primaryLanguage: "fr-CA"
)
To properly format language and region codes in localizable .lproj (localization project) files, use a dash (
-
) as the separator, not an underscore
(_
). For example, use
fr-CA
, not fr_CA
. This aligns with how the
.lproj
files are created in the app. When the SDK searches for
an .lproj
file, it first tries to locate one with the exact
languageCode-Region.lproj
format. If it can't find such a file,
the SDK searches for a languageCode.lproj
file. If that is also not
found, the SDK searches for a base.lproj
file. When none of these
can be located, the SDK defaults to using English (en
).
The chat widget displays the passed-in supported languages in a dropdown menu that's located in the header. In addition to the available languages, the menu also includes a Detect Language option. When a user selects a language from this menu, the current conversation is reset, and a new conversation is started with the selected language. The language selected by the user persists across sessions in the same browser, so the user's previous language is automatically selected when the user revisits the skill through the page containing the chat widget.
You can add an event listener for the onChatLanguageChange
event, which
is triggered when a chat language has been selected from the dropdown menu or has been
changed.
- You need to define a minimum of two languages to enable the dropdown menu to display.
- If you omit the primaryLanguage attribute, the widget automatically detects the language in the user profile and selects the Detect Language option in the menu.
- The label key is optional for the natively supported languages: fr displays as French in the menu, es displays as Spanish, and so on.
- While label is optional, if you've added a language that's not one of the natively supported languages, then you should add a label to identify the tag. For example, if you don't define label: 'हिंदी' for the lang: "hi", then the dropdown menu displays hi instead, contributing to a suboptimal user experience.
Disable Language Menu
Starting with Version 21.12, you can also configure and update the chat language
without also having to configure the language selection dropdown menu by passing
MultiLangChat(primaryLanguage: String)
.
Language Detection
In addition to the passed-in languages, the chat widget displays a Detect Language option in the dropdown menu. Selecting this option tells the skill to automatically detect the conversation language from the user's message and, when possible, to respond in the same language.
You can dynamically update the selected language by calling the
BotsManager.shared().setPrimaryLanguage(primaryLanguage:
String)
API. If the passed lang
matches
one of the supported languages, then that language is selected. When no
match can be found, Detect Language is activated. You
can also activate the Detect Language option by
calling BotsManager.shared().setPrimaryLanguage(primaryLanguage:
"und")
API, where "und"
indicates
undetermined or by passing primaryLanguage:nil
.
You can update the chat language dynamically using the
setPrimaryLanguage(primaryLanguage: String)
API
even when the dropdown menu has not been configured.
Multi-Lingual Chat Quick Reference
To do this... | ...Do this
---|---
Display the language selection dropdown menu to end users. | Pass MultiLangChat(supportedLanguages: [SupportedLanguage]).
Set the chat language without displaying the language selection dropdown menu to end users. | Pass MultiLangChat(primaryLanguage: String).
Set a default language. | Pass MultiLangChat(supportedLanguages: [SupportedLanguage], primaryLanguage: String).
Enable language detection. | Pass primaryLanguage:nil or primaryLanguage:"und".
Dynamically update the chat language. | Call the setPrimaryLanguage(primaryLanguage: String) API.
Replacing a Previous Input Form
When the end user submits the form, for example because a field has autoSubmit set to true, the skill can send a new
EditFormMessagePayload
. That message should replace the previous
input form message. By setting the replaceMessage
channel extension
property to true
, you enable the SDK to replace the previous input form
message with the current input form message.
Share Menu Options
By default, users can share the following item types from the share menu:
- visual media files (images and videos)
- audio files
- general files like documents, PDFs, and spreadsheets
- location
The shareMenuConfiguration setting allows you to restrict the items that display in the share menu. By passing a tuple of arrays -- shareMenuConfiguration = ([ShareMenuItem], [ShareMenuCustomItem]) -- you can restrict, or filter, the type of items that are available in the menu, customize the menu's icons and labels, and limit the upload file size. The tuple has an array of share menu options of type ShareMenuItem and an array of share menu options of type ShareMenuCustomItem. Pass either as an empty array to allow all file types.
Speech Recognition
- Feature flag: enableSpeechRecognition
- Functionality configuration: enableAutoSendSpeechResponse
Setting the enableSpeechRecognition
feature flag to
true
enables the microphone button to display in place of the send
button whenever the user input field is empty. The speech is converted to text and sent
to the skill or digital assistant. If the speech is partially recognized, then the
partial result is displayed in a popup that's opened by clicking the microphone
button.
Setting this property to true
also supports the
functionality enabled by the enableAutoSendSpeechResponse
property,
which when also set to true
, enables the user's speech response to be
sent to the chat server automatically while displaying the response as a sent message in
the chat window. You can allow users to first edit (or delete) their dictated messages
before they send them manually by setting enableAutoSendSpeechResponse to false.
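A configuration sketch combining the two flags, assuming both are set on the BotsConfiguration instance like the SDK's other feature flags:

```swift
// Sketch: show the microphone button and auto-send recognized speech.
botsConfiguration.enableSpeechRecognition = true
botsConfiguration.enableAutoSendSpeechResponse = true
```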
public func isRecording() -> Bool
Checks whether the voice recording has started or not. Returns true
if the recording has started. Otherwise, it returns false
.
The onSpeechResponseReceived(data: String, final: Bool) function from the BotsEventListener protocol can be used to handle all the responses from the speech server.
BotsManager.shared().startRecording() // Start voice recording
if (BotsManager.shared().isRecording()) {
BotsManager.shared().stopRecording() // Stop voice recording
}
Speech Synthesis
- Feature flag: enableSpeechSynthesis
- Functionality configuration: speechSynthesisVoicePreferences
- You enable this feature by setting the enableSpeechSynthesis feature flag to true.
- You can set the preferred language that reads the skill's messages aloud with the speechSynthesisVoicePreferences property. This property enables a fallback when the device doesn't support the preferred language or voice. If the device does not support the preferred voice, then the default voice for the preferred language is used instead. When neither the preferred voice nor the preferred language is supported, then the default voice and language are used.
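A configuration sketch; the exact type of the speechSynthesisVoicePreferences value isn't shown in this section, so consult the SDK's API docs before using it:

```swift
// Sketch: read skill messages aloud.
botsConfiguration.enableSpeechSynthesis = true
// Set the preferred voices/languages here; the property's element type
// is defined by the SDK and is not shown in this example.
// botsConfiguration.speechSynthesisVoicePreferences = ...
```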
Speech Service Injection
Feature flag: ttsService
The ttsService
feature flag allows you to inject any text-to-speech
(TTS) service -- your own, or one provided by a third-party vendor -- into the SDK. To
inject a TTS service, you must first set the enableSpeechSynthesis
feature flag to true
and then pass an instance of the
TTSService
protocol to the ttsService
flag.
The TTSService Protocol
To supply your own text-to-speech service, create a class that conforms to the TTSService protocol, which defines the following methods:
- speak(text: String) – This method adds the text that's to be spoken to the utterance queue. Its text parameter is the text to be spoken.
- isSpeaking() – This method checks whether or not the audio response is being spoken. It returns false if no audio response is being spoken.
- stopSpeech() – This method stops any ongoing speech synthesis.
class CustomTTSService: TTSService {
func speak(text: String) {
// Adds text to the utterance queue to be spoken
}
func stopSpeech() {
// Stops any ongoing speech synthesis
}
func isSpeaking() -> Bool {
    // Checks whether the bot audio response is being spoken or not.
    return false // Sketch: return the actual speaking state of your TTS engine.
}
}
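With the protocol implemented, the custom service can be injected as described above. The placement of the ttsService flag on the BotsConfiguration instance is assumed here, following the pattern of the SDK's other feature flags:

```swift
// Sketch: inject the custom TTS service into the SDK.
botsConfiguration.enableSpeechSynthesis = true
botsConfiguration.ttsService = CustomTTSService()
```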
Typing Indicator for User-Agent Conversations
Feature flag: enableSendTypingStatus
When this flag is enabled, the SDK sends a RESPONDING
typing
event along with the text that's currently being typed by the user to Oracle B2C
Service or Oracle Fusion
Service. This shows a typing indicator on the agent console. When the user has finished
typing, the SDK sends a LISTENING
event to the service. This hides the
typing indicator on the agent console.
Similarly, when the agent is typing, the SDK receives a
RESPONDING
event from the service. On receiving this event, the SDK
shows a typing indicator to the user. When the agent is idle, the SDK receives
LISTENING
event from the service. On receiving this event, the SDK
hides the typing indicator that's shown to the user.
The sendUserTypingStatus API enables the same behavior for headless mode.
public func sendUserTypingStatus(status: TypingStatus, text: String? = nil)
- To show the typing indicator on the agent
console:
BotsManager.shared().sendUserTypingStatus(status: .RESPONDING, text: textToSend)
- To hide the typing indicator on the agent
console:
BotsManager.shared().sendUserTypingStatus(status: .LISTENING)
- To control the user-side typing indicator, use the onReceiveMessage() event. For example:
public func onReceiveMessage(message: BotsMessage) {
    if message is AgentStatusMessage {
        if let status = message.payload["status"] as? String {
            switch status {
            case TypingStatus.LISTENING.rawValue:
                hideTypingIndicator()
            case TypingStatus.RESPONDING.rawValue:
                showTypingIndicator()
            default:
                break
            }
        }
    }
}
There are also flags in BotsConfiguration that provide additional control:
- typingStatusInterval – By default, the SDK sends the RESPONDING typing event every three seconds to Oracle B2C Service. Use this flag to throttle this event. The minimum value that can be set is three seconds.
- enableAgentSneakPreview – Oracle B2C Service supports showing the user text as it's being entered. If this flag is set to true (the default is false), then the SDK sends the actual text. To protect user privacy, the SDK sends … instead of the text to Oracle B2C Service when the flag is set to false.
Voice Visualizer
When voice support is enabled (botsConfiguration.enableSpeechRecognition =
true
), the footer of the chat widget displays a voice visualizer, a dynamic
visualizer graph that indicates the frequency level of the voice input. The visualizer
responds to the modulation of the user's voice by indicating whether the user is
speaking too softly or too loudly. This visualizer is created using Swift's AVAudioEngine which is exposed in the
onAudioReceived
method in the SpeechEventListener
protocol for use in headless mode.
Voice mode is indicated when the keyboard icon appears.
When botsConfiguration.enableAutoSendSpeechResponse = true, the recognized text is automatically sent to the skill after the user has finished dictating the message. The mode then reverts to text input. When botsConfiguration.enableAutoSendSpeechResponse = false, the mode also reverts to text input, but in this case, users can modify the recognized text before sending the message to the skill.