AI Analyst Lecture 11
Experiences
Introduction
Topics
Gain an understanding of the concepts covered in this section by going through the reading material and slides included in the sections below:
1. About Watson Assistant Experiences
2. New Watson Assistant Experience
3. Classic Watson Assistant Experience
About Watson Assistant Experiences
Use IBM Watson Assistant to build your own branded live chatbot and deploy it to any device, application, or channel.
No matter what the specific purpose of your chatbot is, some core capabilities are always needed: the chatbot must understand what the user is asking and reply with the proper answer. Chatbots that are meant to answer straightforward questions can be built with actions in the new experience. A dialog-based conversation is the best choice when you want greater control over the flow logic, although it is less approachable for content designers and customer-care experts.
New Watson Assistant Experience

Gain an understanding of the new Watson Assistant experience by going through the reading material and slides included in the sections below:
1. The new Watson Assistant
2. Building your Assistant with Actions and Steps
3. Actions Benefits
4. Creating your First Assistant
5. Creating your First Action
6. Creating your First Step – Step 1
7. Trying Out your Action
8. Adding More Steps to the Conversation – Step 2
9. Adding an Agent Handoff Step – Step 3
10. Inserting the Context to Pass to the Human Agent – Step 3
11. Handoff to Agent Step Wrap-Up – Step 3
12. Creating a Final Response – Step 4
13. Try it Out
14. Get a Real-Life Preview of your Assistant: Preview
The New Watson Assistant
The new Watson Assistant experience is focused on using actions to build customer conversations. Building, previewing, publishing, and analyzing your assistant can all now be done in one place:
● Each assistant has a home page with a task list to help you get started.
● Build conversations with actions, which represent the tasks that you
want your assistant to help your customers with.
○ Each action contains a series of steps that represent individual
exchanges with a customer.
● When you publish, you can review and debug your work in a draft
environment before you go live to your customers.
● Use a new suite of analytics to improve your assistant.
○ Review which actions are being completed to see what your
customers want help with, determine whether your assistant
understands and addresses customer needs, and decide how you can
make your assistant better.
You can easily switch back and forth between the new experience and the classic experience.

Building your Assistant with Actions and Steps

An action represents a single task that the assistant performs for a customer. It begins with the user input that starts the action (for example, “I want to pay my bill”) and ends when the task is resolved for the customer.
Steps represent the clarification questions, final answers, or human agent handoffs that make up the conversation within an action. Everything else that the step needs to function, like the flow logic, is configured inside the step. In the example used in this section, the assistant hands the customer off to a human agent (with the account number as context) for a cable bill. For internet or phone bills, the assistant guides the user to the online billing portal.
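As a rough illustration (not Watson Assistant code), the routing logic of this example can be sketched in plain Python; the function name and return shape are invented for the sketch:

```python
def route_bill_payment(bill_type, account_number=None):
    """Toy sketch of the example action's routing logic (illustrative only)."""
    if bill_type.lower() == "cable":
        # Cable bills are handed off to a human agent, with the
        # account number passed along as context.
        return {
            "and_then": "connect_to_agent",
            "context": {"bill_type": bill_type, "account_number": account_number},
        }
    # Internet and phone bills are self-service: guide the customer
    # to the online billing portal instead.
    return {
        "and_then": "end_action",
        "response": "You can pay your %s bill at our online billing portal." % bill_type,
    }
```

The key design point mirrored here is that the handoff step carries the previously collected context with it, so the human agent does not have to ask for the account number again.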
Steps are the conversation turns that follow the initial customer input that starts the action. A simple action might need only a single step; for example, if a customer asks about business hours, the assistant might reply with “We are open Monday through Friday from 9 AM to 5 PM”.
More commonly, an action requires multiple steps to fully resolve the customer's request. For more information, see:
https://www.ibm.com/blogs/watson/2021/12/getting-started-with-the-new-watson-assistant-part-i-the-build-guide/
Actions Benefits
Using actions is the best choice when you want to approach the conversation design from the perspective of business users and subject matter experts, rather than from the perspective of a developer.

Creating your First Assistant
1. Create a Watson Assistant service. You can use an existing Watson
Assistant service if you already have one.
2. After you create the service instance, the dashboard for the instance
opens. Click Launch Watson Assistant to open the tool and create your
assistant.
3. Give your assistant a name that describes the group of topics it covers.
In this example, Billing Assistant. Choose the language it uses to
respond to your customers and click Create assistant.
4. The Home page of your assistant is displayed.
Creating your First Action

When you create a new action, Watson Assistant prompts you for an example of what your customer says to start the interaction.
1. From the navigation panel on the left click Actions to create a new
action and then click Create a new action.
2. In the “New action” window, at the prompt "What does your customer
say to start this interaction?", type an example of your customer's
interactions for this action. In this example, “I want to pay my cable bill
please”.
Creating your First Step – Step 1

1. Add a step to ask a clarification question; for example, which type of account has the bill that needs to be paid. Step 1 is created by default.
2. Add the clarification question in the “Assistant says” text box.
3. Select the type of customer response that the assistant should wait for. In this case, Options is
the best choice.
4. Add the three options for Cable, Internet, and Phone, and apply your changes.
Finally, the first step looks as shown in the figure on the right.
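For illustration only, step 1 can be modeled as a question plus a fixed option list; the dictionary layout and helper below are hypothetical, not the Watson Assistant data model:

```python
# Hypothetical representation of step 1: a clarification question
# with a fixed set of options.
STEP_1 = {
    "assistant_says": "Which type of account has the bill that needs to be paid?",
    "customer_response_type": "options",
    "options": ["Cable", "Internet", "Phone"],
}

def handle_reply(step, reply):
    """Accept the reply only if it matches one of the step's options."""
    for option in step["options"]:
        if option.lower() == reply.strip().lower():
            return option        # store the selected option
    return None                  # no match: re-prompt the customer
```

Constraining the response type to a set of options is what lets the assistant skip re-asking the question later if the customer already named the bill type up front.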
Trying Out your Action

Click Try it to open the preview pane and confirm that the assistant properly recognizes what you ask it. Try typing something other than the exact example phrase that you entered when you created the action.

Adding More Steps to the Conversation – Step 2

A one-step action can handle a simple request; for example, if the customer asks “What are your business hours?”, a one-step action might reply with “We are open Monday through Friday from 9 AM to 5 PM”. More commonly, an action requires multiple steps to fully resolve the customer's request.

For example, if the customer wants to pay a bill, the assistant needs to collect the account number. In this example:

1. Add a new step to the action.
2. In the “Assistant says” field, type the clarification question to ask for the account number.
3. Select Number from the response drop-down list as the type of customer response.
Next, you need to add some flow logic to the flow. In this example, given the way that this flow works, an account number should be gathered only for cable bills. To handle this scenario, you need to add a condition to your step, which determines when the step is triggered. In this case, you want to condition on the answer to step 1 being “Cable” but not Internet or Phone. To set it up, make sure that the step is taken only when the answer to step 1 is “Cable”.
Adding an Agent Handoff Step – Step 3

1. Add a new step to the action.
2. Enter some text related to getting the user to an agent to pay their bill. For example, “Let me get you to an agent who can help you pay your cable bill!”
3. Condition this step on the answer to step 1 being “Cable”, as in the previous step.
4. For this step, you don’t need to gather any information from the user, so you can leave the customer response type empty.
5. Now, set up the assistant to route this conversation to a human agent. Change the “And then” setting to Connect to agent (connect to a human agent), which also ends the action.
Figure 11-20. Inserting the context to pass to the human agent – Step 3
● Insert the context that you gathered for the human agent to review.
● The customer wants to pay their Cable bill.
● The account number. Select step 2 (the account number) as the field that you want to insert.
● Apply your changes.
Hint: To insert the content of the variables that you collected into
text, start with the “$” sign and a quick select box appears as shown
in the figure.
Creating a Final Response – Step 4

● Step 4 tells the customer, “To pay your <type of bill> bill, you can head to our online portal <link to portal>”.
● To insert a variable like the type of bill to be paid, click the Insert a variable button
that is located over the text box.
● To add a link to the text, highlight the text that you want to use and then click the Link button.
● Enter the URL in the “Insert link” window.
● The settings for the link look like the figure.
● Your steps are now complete and look as shown in the figure.
● Click Try it to test a few scenarios.
Try it Out

If the customer states the type of bill up front (for example, “I want to pay my cable bill”), the assistant skips the question about the type of bill and moves on to the next step.

Get a Real-Life Preview of your Assistant: Preview

The Preview page shows your work in progress and has an inline preview for you to test. You can also share your work with others on your team quickly with a shareable URL.
Classic Watson Assistant Experience

Gain an understanding of the classic Watson Assistant experience by going through the reading material and slides included in the sections below:
When to Use Classic Watson Assistant?
Reasons for using classic Watson Assistant include:
● The assistant was built before the new Watson Assistant became available
in October 2021.
● Your developers are familiar and comfortable with the classic experience.
● You want greater control over the logic of the flow.
○ The dialog editor exposes more of the underlying artifacts (such as
intents and entities) used to build the AI models.
○ The dialog flow uses an if-then-else style structure that might be
familiar to developers, but not to content designers or customer-care
experts.
● The following features are available in dialog-based conversations but not in
Actions.
○ Contextual entities
○ Detection of other system entities
○ Image response type
○ Digression support
○ Webhook (log all messages) support
Want to get started with actions, but need features that are available only from a dialog? Use both. Dialog is your primary conversation with users, but you can call an action from your dialog to perform a discrete task. For more information, see:
https://cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-dialog-call-action
Components
The Watson Assistant main components are:
● Assistants
● Dialog skills
● Intents
● Entities
● Dialog
Assistant
An assistant is a cognitive bot that you can customize for your business needs and deploy across multiple channels to help your customers where and when they need it.

You customize the assistant by adding to it the skills that it needs to satisfy your customers' goals, such as a dialog skill that understands and addresses the questions or requests with which your customers typically need help. You provide information about the subjects or tasks that your users ask about and how they ask about them. Then, the service dynamically builds a machine learning model that is tailored to understand the same and similar user requests.
Dialog Skill

A skill (sometimes referred to as a dialog skill) is a container for the training data and logic that enable an assistant to help your customers:

● Intents
● Entities
● Dialog

As you add information, the data is used to build a machine learning model that can recognize these and similar user inputs. Each time that you add or change the training data, the training process is triggered to ensure that the underlying model stays up-to-date as your customer needs and the topics they ask about change.
Intent
An intent represents the purpose of a user's input, such as a question about business locations or a bill payment. Consider what your customers might want to do and what you want your application to be able to handle on their behalf. For example, you might want your application to help your customers pay a bill.

You define an intent for each type of user request that you want your application to support.
Teach Watson Assistant about your intents: After you decide the business requests that you want your application to handle for your customers, you must teach Watson Assistant about them. For each business goal (such as paying a bill), provide examples of the utterances that your customers typically use to indicate their goal. For example, “I want to pay my bill.”

Ideally, find real-world user utterance examples that you can extract from existing business artifacts, and tailor them to your specific business. For example, if you are an insurance company, your user examples might look more like this: “I want to buy a new XYZ insurance plan.”
To train the dialog skill to recognize your intents, supply many examples of
user input and indicate to which intents they map. The examples that you
provide are used by the service to build a machine learning model that can
recognize the same and similar types of utterances and map them to the
appropriate intent.
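The idea of mapping utterances to intents can be sketched with a toy word-overlap classifier; the real service trains a proper machine learning model from your examples, so the intents, examples, and scoring below are purely illustrative:

```python
# Toy training data: intent names (with the "#" prefix used in the
# tool) mapped to example utterances. Invented for this sketch.
TRAINING_DATA = {
    "#pay_bill": ["I want to pay my cable bill", "pay my bill please"],
    "#business_hours": ["what are your business hours", "when are you open"],
}

def classify(utterance):
    """Return the intent whose examples share the most words with the input."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, examples in TRAINING_DATA.items():
        score = max(len(words & set(e.lower().split())) for e in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent
```

A real model generalizes far beyond word overlap, but the input/output contract is the same: an utterance goes in, the best-matching intent comes out.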
In the tool, the name of an intent is always prefixed with the # character.
Start with a few intents and test them as you iteratively expand the scope of
the application.
Intent examples describe what the user wants to do: the action, or verbs.
Entity
Entities represent information in the user input that is relevant to the
user's purpose.
If intents represent verbs (the action a user wants to do), entities represent
nouns (the object of, or the context for, that action). For example, when the
intent is to get a weather forecast, the relevant location and date entities
help to satisfy that purpose. By recognizing the entities that are mentioned in the user's input,
the service can reply with a more targeted response or perform a required
action.
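A synonym entity can be illustrated with a plain dictionary lookup; the entity values, synonyms, and helper function are invented for this sketch and are not the service's actual matching logic:

```python
# Toy synonym entity: each entity value maps to the synonyms that
# should be recognized in the user input.
ENTITY_LOCATION = {
    "New York": ["nyc", "new york", "big apple"],
    "Paris": ["paris"],
}

def extract_entities(utterance, entity):
    """Return every entity value whose synonym appears in the utterance."""
    found = []
    text = utterance.lower()
    for value, synonyms in entity.items():
        if any(s in text for s in synonyms):
            found.append(value)
    return found
```

Recognizing "big apple" and normalizing it to the canonical value "New York" is what lets the dialog respond with a targeted answer regardless of how the user phrased the location.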
Entity Evaluation

The Assistant service looks for terms in the user input that match the values or synonyms that you define for an entity. Entities can also be defined by patterns (regular expressions); for example, an email pattern:

\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b

This email regex example can capture all mentions of email addresses in the user input, regardless of the specific address.
● System entity: A synonym entity that is prebuilt for you by IBM. They cover
commonly used categories, such as numbers, dates, and times. You simply
enable a system entity to start using it.
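The email pattern shown above can be exercised directly with Python's re module; the sample text is invented:

```python
import re

# The email pattern from the text, compiled once and reused.
EMAIL_RE = re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b")

def find_emails(text):
    """Return every email-like mention in the input text."""
    return EMAIL_RE.findall(text)
```

This is the same behavior a pattern entity provides: every match in the user input is captured, not just the first one.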
Dialog

A dialog is made up of nodes. Each node includes a condition for the node to be active, and also an output (response):

● Condition: Specifies the information that must be present in the user input for the node to be triggered (usually an intent or an entity).
● Response: The utterance that the service uses to respond to the user.

A single node with one condition and one response can handle simple requests.
To handle more complex tasks, you can add child nodes to ask the user for
additional information.
A child node is processed after its parent node. In the figure, NODE 1 is the parent node.
Conditions usually evaluate the intents and entities that are identified in
the user responses, but they can also evaluate information that is stored in
the context.
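A condition that combines an intent, an entity, and a context value can be sketched like this; the node layout is hypothetical and is not the actual Watson Assistant condition syntax:

```python
# Toy node condition check: a node may constrain the recognized
# intent, a required entity, and a stored context value.
def node_is_triggered(node, intent, entities, context):
    """Return True only if every condition on the node is satisfied."""
    if "intent" in node and node["intent"] != intent:
        return False
    if "entity" in node and node["entity"] not in entities:
        return False
    if "context" in node:
        key, expected = node["context"]
        if context.get(key) != expected:
            return False
    return True
```

The point mirrored from the text is that conditions are not limited to intents and entities from the current turn; values saved in the context on earlier turns can gate a node too.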
Responses
Responses are messages, based on the identified intents and entities, that are communicated to the user when the dialog node is activated. You can add variations of the response for a more natural experience, or add conditions to pick one response out of many in the same node.

The figure shows an example of adding different variations for greetings that are returned when the node is triggered.
Rich Responses
In addition to the default response type of text, you can return responses that include multimedia or interactive elements such as images or clickable options.
In this example, the service uses information that it collected earlier about
the user's location to tailor its response and provide information about the
store nearest the user. The conditional responses are based on context
values.
This single node now provides the equivalent function of four separate
nodes.
Slots

Add slots to a dialog node to gather multiple pieces of information from a user within that node. Slots collect information at the users' pace. Details that the user provides are saved, and the service asks only for the details that it does not already have.
Example:
The user wants to reserve a table in a restaurant. The needed information is
the number of guests, the date, the time, and the name of the restaurant.
When asked, the user might provide values for multiple slots at once. For example, the user might say “There will be 6 of us dining at 7 PM”. This one input contains two of the missing required values: the number of guests and the time of the reservation. The service recognizes and stores both of them, each one in its corresponding slot. It then displays the prompt that is associated with the next empty slot, which in this case asks the user, “Where would you like to eat?” to determine the restaurant slot, and stores the response. After the user replies, the service asks, “What day will this take place?” to determine the date slot, stores it, and continues until all of the required details are collected.
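The slot-filling behavior described above can be sketched as a small loop over the required slots; the slot names and prompts follow the restaurant example, and everything else is invented for the sketch:

```python
# Required slots, in the order the example conversation fills them.
SLOTS = ["guests", "time", "restaurant", "date"]
PROMPTS = {
    "guests": "How many people will be dining?",
    "time": "What time would you like?",
    "restaurant": "Where would you like to eat?",
    "date": "What day will this take place?",
}

def next_prompt(filled):
    """Return the prompt for the first slot that is still empty."""
    for slot in SLOTS:
        if slot not in filled:
            return PROMPTS[slot]
    return None  # all slots filled: ready to make the reservation

def absorb_reply(filled, extracted):
    """Store every slot value found in the reply, at the user's pace."""
    filled.update(extracted)
    return filled
```

Because a single reply can fill several slots at once, the node never re-asks for details it already has; it simply prompts for the next empty slot.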
Dialog Flow
The dialog is processed by the service from the first node in the tree to the last. As it travels down the tree, if the service finds a condition that is met, it triggers that node. It then moves from left to right on the triggered node to check the user input against any child node conditions. As it checks the child nodes, it continues down the tree until it reaches the end of that branch.

If none of the conditions evaluates to true, then the service returns the response from the last node in the tree, which typically has a special anything_else condition that always evaluates to true.
When you start to build the dialog, you must determine the branches to include and where to place them. The order of the branches is important because nodes are evaluated from first to last. The first node whose condition matches the input is used; any nodes that come later in the tree are not triggered.
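First-match evaluation with an anything_else fallback can be sketched as a linear scan over root nodes; the tree contents and condition format are invented for this sketch:

```python
# Toy first-match dialog evaluation: the first node whose condition
# matches wins; anything_else at the end always matches.
def evaluate(tree, intent):
    for node in tree:
        cond = node["condition"]
        if cond == "anything_else" or cond == intent:
            return node["response"]
    return None

TREE = [
    {"condition": "#business_hours", "response": "We are open 9 AM to 5 PM."},
    {"condition": "#pay_bill", "response": "Which bill would you like to pay?"},
    {"condition": "anything_else", "response": "I didn't understand. Can you rephrase?"},
]
```

This is why branch order matters: moving a broad condition above a narrow one would shadow the narrow node, and the anything_else node must stay last.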
This diagram shows a mockup of a dialog tree that is built with the GUI dialog builder tool. It contains two root dialog nodes. A typical dialog tree would likely have many more nodes, but this depiction provides a glimpse of what a subset of nodes might look like.

The first root node has a condition on an intent value. It has two child nodes, and each has a condition on an entity value. The second child node defines two responses. The first response is returned to the user if the value of the context variable matches the value that is specified in the condition; otherwise, the second response is returned. This branch addresses the initial topic and then, in the root response, asks a follow-up question that the child nodes handle. For example, the root node might address a question about discounts and ask a follow-up question about whether the user is a member of any associations with which the company has special discount arrangements.
The second root node is a node with slots. It also has conditions on an
intent value. It defines a set of slots, one for each piece of information that
you want to collect from the user. Each slot asks a question to elicit the
answer from the user. It looks for a specific entity value in the user's reply to fill the slot. This type of node is useful for collecting details that you might need to perform a transaction on the user's behalf. For example, if the user's intent is to book a flight, the slots can collect the origin and destination locations and the travel dates.
The new experience provides a simplified user interface, an improved deployment process, and access to the latest features. To switch between the new and classic experiences, follow these steps:
1. From the Watson Assistant interface, click the Manage icon to open your account menu.
2. Select Switch to classic experience or Switch to new experience from the account menu.
You don’t lose any work if you switch to a different experience, and other users of the same
instance are not affected. However, keep in mind that any work you do in one experience is not
available in the other experience. You can switch back and forth at any time.