
Ai Analyst Lecture 11

The document provides an overview of IBM Watson Assistant, detailing its two experiences: the New Watson Assistant experience, which simplifies chatbot creation and enhances user interaction, and the Classic Watson Assistant experience, which includes core components like skills and dialog. The New experience focuses on actions and steps to facilitate customer conversations, allowing users to build, test, and analyze their assistants through an intuitive interface. Users can switch between experiences and migrate from Classic to New while leveraging improved features and a streamlined process.

Uploaded by

didewas302

Section 1: Watson Assistant

Experiences
Introduction
Topics
Gain an understanding of the concepts covered in this section by going through the reading material and slides included in the sections below:

1.​ About Watson Assistant Experiences
2.​ New Watson Assistant Experience
3.​ Classic Watson Assistant Experience
About Watson Assistant Experiences
Use IBM Watson Assistant to build your own branded live chatbot, known as an assistant, into any device, application, or channel.

Your assistant connects to the customer engagement resources you already use to deliver an engaging, unified problem-solving experience to your customers.


Watson Assistant provides two experiences to build assistants:

●​ New Watson Assistant experience


●​ Classic Watson Assistant experience

New Watson Assistant Experience


●​ In October 2021, the new Watson Assistant became available.
●​ Watson Assistant was revamped to make it easier and faster to build, publish, and improve a chatbot.
●​ The new Watson Assistant has a build experience that is tailored to the people who directly interact with customers daily, such as customer service representatives and customer-care experts.
●​ The new experience is focused on using actions to build customer conversations.
●​ Steps within an action represent back-and-forth interactions between your assistant and your customer.
●​ The new Watson Assistant automatically handles things that might go wrong during a conversation, such as topic changes, vague requests, misunderstandings, and requests for a human agent.
●​ The new Watson Assistant navigation is more intuitive and follows the order of the steps that are recommended to get your assistant live.

Classic Watson Assistant Experience


The classic experience refers to the Watson Assistant build experience that was available before the new experience was announced in October 2021.

No matter what the specific purpose of your chatbot is, some core fundamentals are always involved:


●​ Skills
●​ Intents
●​ Entities
●​ Dialog

Each of these functions and their application to the Watson Assistant service are explained in detail in the Classic Watson Assistant topic.

The dialog component is optional because some chatbots are developed to answer user questions in a question-and-answer manner, similar to the approach used to answer frequently asked questions (FAQs). These chatbots need to understand only what the user is asking and reply with the proper answer. Chatbots that are meant to answer FAQs do not have to engage in a conversation with the user, and therefore they do not need a dialog component.

Customers that built their assistants by using the classic experience can migrate to the new experience or remain in the classic experience.

A dialog-based conversation is the best choice when you want greater control over the logic of the flow. The dialog flow uses an if-then-else style structure that might be familiar to developers, but not to content designers or customer-care experts.

Section 2: New Watson Assistant


Introduction
Topics
Gain an understanding of the concepts covered in this section by going through the reading material and slides included in the sections below:

1.​ The new Watson Assistant
2.​ Building your Assistant with Action and Step
3.​ Actions Benefits
4.​ Creating your First Assistant
5.​ Creating your First Action
6.​ Creating your First Step – Step 1
7.​ Trying Out your Action
8.​ Adding More Steps to the Conversation – Step 2
9.​ Adding an Agent Handoff Step – Step 3
10.​ Inserting the Context to Pass to the Human Agent – Step 3
11.​ Handoff to Agent Step Wrap-Up – Step 3
12.​ Creating a Final Response – Step 4
13.​ Try it Out
14.​ Get a Real-Life Preview of your Assistant: Preview
The New Watson Assistant
The new Watson Assistant experience is focused on using actions to build customer conversations. It is designed to be simple enough for anyone to build a virtual assistant. Building, testing, publishing, and analyzing your assistant can all now be done in one simple and intuitive interface.

New navigation provides a workflow for building, previewing, publishing, and analyzing your assistant.

●​ Each assistant has a home page with a task list to help you get started.
●​ Build conversations with actions, which represent the tasks that you
want your assistant to help your customers with.
○​ Each action contains a series of steps that represent individual
exchanges with a customer.
●​ When you publish, you can review and debug your work in a draft
environment before you go live to your customers.
●​ Use a new suite of analytics to improve your assistant.
○​ Review which actions are being completed to see what your customers want help with, determine whether your assistant understands and addresses customer needs, and decide how you can make your assistant better.

Switching the experience

You can easily switch back and forth between the new experience and the classic experience. However, the new experience provides a simplified user interface, an improved deployment process, and access to the latest features.

Building your Assistant with Action and Step


Figure 11-9. Building your assistant with Action and Step
Like a human personal assistant, the assistant you build helps your customers perform tasks and answer questions. To accomplish this, you define actions and steps for the assistant.

An action is a problem or a task that your customer wants to resolve. For example, paying a bill, getting an invoice, getting the balance in a bank account, withdrawing money, saying hello, or asking about the weather might all be actions in your assistant.

An action represents a discrete outcome that you want your assistant to be able to accomplish in response to a user's request. An action comprises the interaction between a customer and the assistant about a particular question or request. This interaction begins with the user input that starts the action (for example, "I want to withdraw money"). It might then include additional exchanges as the assistant gathers more information, and it ends when the assistant carries out the request or answers the customer's question.

A step is a back-and-forth interaction between the assistant and the customer. Steps represent the clarification questions, final answers, or human agent handoff points in the action. Everything else that the step needs to function, such as the flow logic, response options, or storage of the user's response, is contained within the step.

In the example that is shown in the slide, for a cable bill the assistant asks clarification questions before it hands the conversation over to an agent (with the account number as context). For internet or phone bills, the assistant guides the user to the online billing portal.

An action consists of one or more steps. The steps in an action define the conversation turns that follow the initial customer input that triggered the action. In a simple case, a step might consist of a direct answer to a question from the customer; for example, if the customer asks "What are your business hours?", a one-step action might reply with "We are open Monday through Friday from 9 AM to 5 PM".

More commonly, an action requires multiple steps to fully understand the customer's request. For example, if the customer says "I want to pay my cable bill", the assistant needs more information, such as "What is your account number?" Each of these follow-up questions represents a step in the action.

For step-by-step instructions on the Billing Assistant scenario that is described in this presentation, see the tutorial "Getting Started with the New Watson Assistant: The Build Guide Part I" at https://www.ibm.com/blogs/watson/2021/12/getting-started-with-the-new-watson-assistant-part-i-the-build-guide/
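The action and step structure described above can be sketched as a small data model. This is an illustrative Python sketch; the `Action` and `Step` classes are hypothetical names for explanation only, not Watson Assistant internals:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    """One conversation turn: the assistant speaks, then optionally waits for a reply."""
    assistant_says: str
    response_type: Optional[str] = None  # e.g. "options", "number"; None for a final answer

@dataclass
class Action:
    """A discrete task, triggered by an example customer utterance."""
    trigger_example: str
    steps: List[Step] = field(default_factory=list)

# The "pay my cable bill" action from the text, expressed as data:
pay_bill = Action(
    trigger_example="I want to pay my cable bill please",
    steps=[
        Step("Which bill would you like to pay?", "options"),
        Step("What's your account number?", "number"),
        Step("Let me get you to an agent who can help you pay your cable bill!"),
    ],
)
```

Each `Step` carries its own prompt and expected response type, mirroring the idea that everything a step needs is contained within the step itself.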

Actions Benefits
Using actions is the best choice when you want to approach the assistant with a focus on content. Actions offer the following benefits:

●​ The process of creating a conversational flow is easier. People who have expertise with customer care can write the words that your assistant says. With a simplified process, anyone can build a conversation. You don't need knowledge about machine learning or programming.
●​ Actions provide better visibility into the customer's interaction and
satisfaction with the assistant. Because each task is discrete and has a
clear beginning and ending, you can track user progress through a task
and identify snags.
●​ The conversation designer doesn't have to manage data collected
during the conversation. By default, your assistant collects and stores
information for the duration of the current action. You don't need to
take extra steps to delete saved data or reset the conversation. But if
you want, you can store certain types of information, such as the
customer's name, for the duration of a conversation.
●​ Many people can work at the same time in separate, self-contained
actions. The order of actions within a conversation doesn't matter. Only
the order of steps within an action matters. And the action author can
use drag and drop to reorganize the steps in the action for optimal flow.

Creating your First Assistant


Figure 11-11. Creating your first assistant


Figure 11-12. Creating your first assistant (cont.)
To create your first assistant, complete these steps:

1.​ Create a Watson Assistant service. You can use an existing Watson
Assistant service if you already have one.
2.​ After you create the service instance, the dashboard for the instance
opens. Click Launch Watson Assistant to open the tool and create your
assistant.
3.​ Give your assistant a name that describes the group of topics it covers.
In this example, Billing Assistant. Choose the language it uses to
respond to your customers and click Create assistant.
4.​ The Home page of your assistant is displayed.

Creating your First Action

Figure 11-13. Creating your first Action


Actions represent the tasks or questions that your assistant can help

customers with. Each action has a beginning and an end, making up

a conversation between the assistant and a customer.

When you create a new action, Watson Assistant prompts you for an

example of the customer’s input that starts the action.

1.​ From the navigation panel on the left click Actions to create a new
action and then click Create a new action.
2.​ In the "New action" window, at the prompt "What does your customer say to start this interaction?", type an example of your customers' interactions for this action. In this example, "I want to pay my cable bill please".

Creating your First Step – Step 1


Now it’s time to create the first step in the bill pay interaction.

1.​ Add a step to ask a clarification question; for example, the type of account that has a bill to be paid. Step 1 is created by default.
2.​ Add the clarification question in the “Assistant says” text box.
3.​ Select the type of customer response that the assistant should wait for. In this case, Options is
the best choice.

Figure 11-14. Creating your first Step – Step 1

4. Add the three options for Cable, Internet, and Phone, and apply your changes.

Finally, the first step looks as shown in the figure on the right.

Figure 11-15. Creating your first Step – Step 1 (cont.)

Trying Out your Action

Figure 11-16. Trying out your action

Now preview your action to make sure it works.


Click Try it in the lower right of your screen. Try out a few interactions and see

that it properly recognizes what you ask it. Try typing something other than

what you used as your training sentence.

Adding More Steps to the Conversation – Step 2


The steps in an action define the conversation turns that follow the initial customer input that triggered the action. In the simplest case, a step might consist of a direct answer to a question from the customer; for example, if the customer asks "What are your business hours?", a one-step action might reply with "We are open Monday through Friday from 9 AM to 5 PM".

Usually, an action requires multiple steps to fully understand the

customer's request.

For example, if the customer wants to pay a bill, the assistant needs

to know the account number.

If the customer asks, “I want to withdraw money”, the assistant

needs more information:

●​ Which account should the money come from?


●​ What is the amount to withdraw?

Each of these follow-up questions represents a step in the action.

In this example:

1. Add Step 2 below Step 1 by clicking New step.

2. In the “Assistant says” field, type the clarification question to ask for the account

number, “What’s your account number?”.

3. Select Number from the response drop-down list as the type of customer response.

Figure 11-17. Adding more steps to the conversation – Step 2

Adding Logic (Conditions) to the Flow – Step 2

Figure 11-18. Adding logic (conditions) to the flow – Step 2

Next, you need to add some logic to the flow. In this example, given the way that this flow works, an account number should be gathered only for cable bills. To handle this scenario, you need to add a condition to your step. To do that, change the step to be taken with conditions instead of without.

Conditions are basically requirements that must be met for the step to be triggered. In this case, you want to condition on the answer to step 1 being "Cable" but not Internet or Phone. To set it up, make sure that you have the condition set to step 1 = Cable.
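A condition like "step 1 = Cable" amounts to an equality check against the answers collected so far. A minimal sketch, assuming collected answers are kept in a dictionary keyed by step (the names are illustrative, not the service's implementation):

```python
def step_applies(conditions, answers):
    """A step is taken only when every condition holds, e.g. {"step_1": "Cable"}."""
    return all(answers.get(slot) == value for slot, value in conditions.items())

# The customer answered "Cable" to step 1:
answers = {"step_1": "Cable"}
print(step_applies({"step_1": "Cable"}, answers))     # True: gather the account number
print(step_applies({"step_1": "Internet"}, answers))  # False: skip this step
```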

Adding an Agent Handoff Step – Step 3


Now, you need to add the last steps to provide the outcome to the user.

1. Add step 3 under step 2.

2. Enter some text related to getting the user to an agent to pay their bill. For example, "Let me get you to an agent who can help you pay your cable bill!"

3. Condition step 3 on “step 1 = Cable” as you did in step 2.

4. For this step, you don’t need to gather any information from the user, so you can leave

the “Define customer response” section empty.

5. Now, set up the assistant to route this conversation to a human agent. Change the And

then setting to Connect to agent (connect to a human agent), which also ends the action

when it hands off.

Figure 11-19. Adding an agent handoff step – Step 3


Inserting the Context to Pass to the Human Agent – Step 3

Figure 11-20. Inserting the context to pass to the human agent – Step 3

●​ Insert the context that you gathered for the human agent to review:
○​ The customer wants to pay their Cable bill.
○​ The account number: select step 2 (the account number) as the field that you want to insert.
●​ Apply your changes.

Hint: To insert the content of the variables that you collected into text, start with the "$" sign and a quick select box appears, as shown in the figure.

Handoff to Agent Step Wrap-Up – Step 3


The image shows the completed step 3, which directs the customer to a human agent and passes the context that is collected from the user.

Figure 11-21. Handoff to agent step wrap-up – Step 3

Creating a Final Response – Step 4

Figure 11-22. Creating a final response – Step 4 (1 of 5)

●​ Step 4 tells the customer, "To pay your <type of bill> bill, you can head to our online portal <link to portal>."
●​ To insert a variable like the type of bill to be paid, click the Insert a variable button
that is located over the text box.
●​ To add a link to the text, highlight the text that you want to use and then click the Link button.
●​ Enter the URL in the “Insert link” window.
●​ The settings for the link look like the figure.

Figure 11-23. Creating a final response – Step 4 (2 of 5)

Figure 11-24. Creating a final response – Step 4 (3 of 5)

●​ This step must run only for Internet or Phone bills.


○​ Create a condition on “step 1 = Internet.”
○​ Add another condition for “step 1 = Phone.”
○​ Ensure that the step runs when Any of these conditions are met.
●​ The action must end after this step is reached.
●​ Change the “And then” setting to End the action.

Figure 11-25. Creating a final response – Step 4 (4 of 5)

Figure 11-26. Creating a final response – Step 4 (5 of 5)

●​ Your steps are now complete and look as shown in the figure.
●​ Click Try it to test a few scenarios.
Try it Out

Figure 11-27. Try it out


Try a few scenarios. Notice that when you state the type of bill up

front, the assistant skips the question about the type of bill and

moves immediately to the next step.

Get a Real-Life Preview of your Assistant: Preview

Figure 11-28. Get a real-life preview of your assistant: Preview


To see how your assistant would really work on one of your channels, open the preview page.

The preview page is a representation of your "draft" work in progress and has an inline preview for you to test. You can also quickly share your work with others on your team with a shareable URL.

Section 3: Classic Watson Assistant


Introduction
Topics
Gain an understanding of the concepts covered in this section by going through the reading material and slides included in the sections below:


When to Use Classic Watson Assistant?
Reasons for using classic Watson Assistant include:

●​ The assistant was built before the new Watson Assistant became available
in October 2021.
●​ Your developers are familiar and comfortable with the classic experience.
●​ You want greater control over the logic of the flow.
○​ The dialog editor exposes more of the underlying artifacts (such as
intents and entities) used to build the AI models.
○​ The dialog flow uses an if-then-else style structure that might be
familiar to developers, but not to content designers or customer-care
experts.
●​ The following features are available in dialog-based conversations but not in
Actions.
○​ Contextual entities
○​ Detection of other system entities
○​ Image response type
○​ Digression support
○​ Webhook (log all messages) support

Want to get started with actions, but need features that are available from a dialog? Use both. Dialog is your primary conversation with users, but you can call an action from your dialog to perform a discrete task. For more information, see Calling actions from a dialog at https://cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-dialog-call-action

Components
The Watson Assistant main components are:

●​ Assistants
●​ Dialog skills
●​ Intents
●​ Entities
●​ Dialog

Assistant
An assistant is a cognitive bot that you can customize for your business needs and deploy across multiple channels to help your customers where and when they need it.

You customize the assistant by adding to it the skills that it needs to satisfy your customers' goals.

Activate a dialog skill that can understand and address questions or requests with which your customers typically need help. You provide information about the subjects or tasks that your users ask about and how they ask about them. Then, the service dynamically builds a machine learning model that is tailored to understand the same and similar user requests.

You can deploy the assistant through multiple interfaces, including:

●​ Existing messaging channels, such as Slack and Facebook Messenger.
●​ A simple chat widget that you publish to a website or add to an existing company web page.
●​ A custom application that incorporates the assistant by making direct calls to the underlying APIs.

Dialog Skill
A dialog skill (sometimes referred to simply as a skill) acts as a container for the training data and logic that enable an assistant to help your customers.

It contains the following types of artifacts:

●​ Intents
●​ Entities
●​ Dialog

As you add information, the data is used to build a machine learning model

that can recognize these and similar user inputs. Each time that you add or

change the training data, the training process is triggered to ensure that

the underlying model stays up-to-date as your customer needs and the

topics they want to discuss change.

Intent
An intent represents the purpose of a user's input, such as a question about

business locations or a bill payment.


Plan the intents for an application: Consider what your customers might

want to do and what you want your application to be able to handle on their

behalf. For example, you might want your application to help your

customers make a purchase. If so, you can add a #buy_something intent.

You define an intent for each type of user request that you want your

application to support.

Teach Watson Assistant about your intents: After you decide the business

requests that you want your application to handle for your customers, you

must teach Watson Assistant about them. For each business goal (such as

#buy_something), you must provide at least 10 examples of utterances

that your customers typically use to indicate their goal. For example, “I

want to make a purchase.”

Ideally, find real-world user utterance examples that you can extract from

existing business processes. The user examples should be tailored to your

specific business. For example, if you are an insurance company, your user

examples might look more like this, “I want to buy a new XYZ insurance

plan.”
To train the dialog skill to recognize your intents, supply many examples of

user input and indicate to which intents they map. The examples that you

provide are used by the service to build a machine learning model that can

recognize the same and similar types of utterances and map them to the

appropriate intent.

In the tool, the name of an intent is always prefixed with the # character.

Start with a few intents and test them as you iteratively expand the scope of

the application.

Intent examples:

●​ “Good morning” -> #greeting


●​ “Where can I find the nearest restaurant?” -> #location_info
●​ “Where can I pay my electric bill?” -> #location_info

Remember: Intents represent what the user wants to achieve: a goal, an

action, or verbs.
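Conceptually, the training data is a mapping from example utterances to intents, like the examples above. The toy lookup below matches only exact examples; the real service trains a machine learning model that also recognizes similar utterances, so this table is an illustration of the data shape, not the classifier:

```python
# Intent training data: each intent has example utterances (from the text above).
training_data = {
    "#greeting": ["Good morning", "Hello"],
    "#location_info": [
        "Where can I find the nearest restaurant?",
        "Where can I pay my electric bill?",
    ],
}

def classify_exact(utterance):
    """Return the intent whose examples contain the utterance, else None."""
    for intent, examples in training_data.items():
        if utterance in examples:
            return intent
    return None

print(classify_exact("Good morning"))  # → #greeting
```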

Entity
Entities represent information in the user input that is relevant to the

user's purpose.
If intents represent verbs (the action a user wants to do), entities represent

nouns (the object of, or the context for, that action). For example, when the

intent is to get a weather forecast, the relevant location and date entities

are required before the application can return an accurate forecast.

Entities represent a class of object or a data type that is relevant to a user’s

purpose. By recognizing the entities that are mentioned in the user's input,

the service can reply with a more targeted response or perform a required

action.

Entity Evaluation
The Assistant service looks for terms in the user input that match the

values, synonyms, or patterns that you define for the entity:

●​ Synonym entity: Synonyms are words or phrases that mean exactly or


nearly the same as the corresponding entity. You define a category of terms
as an entity (color), and then one or more values in that category (blue). For
each value, you specify several synonyms (aqua, navy, and cyan). At run
time, the service recognizes terms in the user input that exactly match the
values or synonyms that you defined for the entity as mentions of that entity.
●​ Pattern entity: You define a category of terms as an entity (contact_info), and then define one or more values in that category (email). For each value, you specify a regular expression that defines the textual pattern of mentions of that value type. For an email entity value, you might specify a regular expression that defines an email address pattern:

\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b

This email regex example can capture all mentions of emails in their proper format. At run time, the service looks for patterns matching your regular expression in the user input and identifies any matches as mentions of that entity.

●​ System entity: A synonym entity that is prebuilt for you by IBM. They cover
commonly used categories, such as numbers, dates, and times. You simply
enable a system entity to start using it.

Figure 11-38. Entity evaluation
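The synonym and pattern lookups described above can be illustrated with plain Python. This is a sketch of the matching idea only, not the service's implementation; the synonym table follows the color example above, the regular expression is the email pattern shown above, and the sample email address is made up:

```python
import re

# Synonym entity: a value ("blue") and its synonyms.
color_entity = {"blue": ["aqua", "navy", "cyan"]}

# Pattern entity: the email regular expression from the text.
email_pattern = re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b")

def find_color(text):
    """Return the canonical value if the input mentions it or one of its synonyms."""
    words = text.lower().split()
    for value, synonyms in color_entity.items():
        if value in words or any(s in words for s in synonyms):
            return value
    return None

print(find_color("I'd like it in navy"))                      # → blue
print(email_pattern.findall("Reach me at jane@example.com"))  # → ['jane@example.com']
```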


Dialog
A dialog is a branching conversation flow that defines how your application responds when it recognizes the defined intents and entities.

The dialog is made up of nodes that define steps in the conversation. Dialog nodes are chained together in a tree structure (graphically). Each node includes conditions for the node to be active, and also an output object that defines the response that is provided.

Think of the node as an if-then construction: if this condition is true, then return this response.

Condition: Specifies the information that must be present in the user input for this node in the dialog to be triggered.

Response: The utterance that the service uses to respond to the user.

Figure 11-39. Dialog
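The if-then construction can be sketched as a condition paired with a response (an illustrative structure, not the service's actual node schema):

```python
# A dialog node: if the condition is true, then return the response.
node = {
    "condition": lambda user: "#greeting" in user["intents"],
    "response": "Good day to you!",
}

user_input = {"intents": ["#greeting"]}  # intents recognized in the user's input
if node["condition"](user_input):
    print(node["response"])  # → Good day to you!
```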


A single node with one condition and response can handle simple user

requests.

To handle more complex tasks, you can add child nodes to ask the user for

additional information.

A child node is processed after its parent node. NODE 1 is the parent node

for CHILD NODE 1 and CHILD NODE 2.

Some useful definitions:


●​ A root node is a node that does not depend on other nodes. In the example,
NODE 1 is a root node.
●​ A child node is a node that depends on another node, its parent node. The
parent node is processed first, and based on that processing, the child node
is either processed or not.

Figure 11-40. Dialog (cont.)


Conditions
Conditions are logical expressions that are evaluated to true or false. A node condition determines whether that node is used in the conversation; conditions can also be used to choose among the possible responses to the user.

Conditions usually evaluate the intents and entities that are identified in the user responses, but they can also evaluate information that is stored in the context.

Responses
Responses are messages, based on the identified intents and entities, that are communicated to the user when the dialog node is activated. You can add variations of the response for a more natural experience, or add conditions to pick one response out of many in the same dialog node.

The figure shows an example of adding different variations for greetings if the node is triggered by a greeting intent from the user.


Figure 11-42. Responses

Rich Responses
In addition to the default response type of text, you can return responses

with multimedia or interactive elements, such as images or clickable

buttons to simplify the interaction model of your application and enhance

the user experience.

The following response types are supported:

●​ Image: Embeds an image into the response.


●​ Option: Adds a list of one or more options. When a user clicks one of the
options, an associated user input value is sent to the service. How options
are rendered can differ depending on where you deploy the dialog. For
example, in one integration channel, the options might be displayed as
clickable buttons, but in another they might be displayed as a drop-down list.
●​ Pause: Forces the application to wait for a specified number of milliseconds
before continuing with processing. You can choose to show an indicator that
the dialog is working on typing a response. Use this response type if you
need to perform an action that might take some time.

Multiple (Conditional) Responses

Figure 11-44. Multiple (conditional) responses


In a single dialog node, you can provide different responses, each one

triggered by a different condition. Use this approach to address multiple

scenarios in a single node.

In this example, the service uses information that it collected earlier about

the user's location to tailor its response and provide information about the

store nearest the user. The conditional responses are based on context

values.

This single node now provides the equivalent function of four separate

nodes.
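The single node with conditional responses can be sketched as an ordered list of condition/response pairs, where the first matching condition wins. The city names and store addresses below are hypothetical:

```python
# Conditional responses in one node: each response is gated by a context condition.
responses = [
    ({"city": "Boston"}, "Our Boston store is at 123 Main St."),
    ({"city": "Chicago"}, "Our Chicago store is at 456 Lake Ave."),
    ({}, "We have stores in Boston and Chicago."),  # empty condition: always matches
]

def respond(context):
    """Return the first response whose condition matches the stored context."""
    for condition, text in responses:
        if all(context.get(key) == value for key, value in condition.items()):
            return text

print(respond({"city": "Boston"}))  # → Our Boston store is at 123 Main St.
```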

Watson Assistant Dialog Features

Figure 11-45. Watson Assistant dialog features


Add slots to a dialog node to gather multiple pieces of information from a user within that node. Slots collect information at the users' pace. Details that the user provides are saved, and the service asks only for the details the user did not provide.

Example:
The user wants to reserve a table in a restaurant. The needed information is the number of guests, the date, the time, and the name of the restaurant.

When asked, the user might provide values for multiple slots at once. For example, the input might include the information, "There will be 6 of us dining at 7 PM". This one input contains two of the missing required values: the number of guests and the time of the reservation. The service recognizes and stores both of them, each one in its corresponding slot. It then displays the prompt that is associated with the next empty slot, which in this case asks the user, "Where would you like to eat?" to determine the restaurant slot, and stores the response. After the user replies, the service asks, "What day will this take place?" to determine the date slot, stores it, and gives a reply.
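The reservation flow above can be sketched as follows, assuming the entity values have already been extracted from the user's input (the slot names and prompts follow the restaurant example; this is an illustration, not the service's slot engine):

```python
# Slots for the restaurant reservation, filled at the user's pace.
slots = {"guests": None, "time": None, "restaurant": None, "date": None}
prompts = {
    "guests": "How many people are dining?",
    "time": "What time would you like the reservation?",
    "restaurant": "Where would you like to eat?",
    "date": "What day will this take place?",
}

def fill(found):
    """Save whatever values the user provided, then prompt for the next empty slot."""
    slots.update(found)
    for name, value in slots.items():
        if value is None:
            return prompts[name]
    return "Your reservation is complete!"

# "There will be 6 of us dining at 7 PM" fills two slots at once:
print(fill({"guests": 6, "time": "7 PM"}))  # → Where would you like to eat?
```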

Dialog Flow

Figure 11-46. Dialog flow


The dialog is processed by the service from top to bottom.

As it travels down the tree, if the service finds a condition that is met, it triggers that node. It then moves from left to right on the triggered node to check the user input against any child node conditions. As it checks the child nodes, it moves again from top to bottom.

The service continues to work its way through the dialog tree until it reaches the last node in the branch that it is following.

If none of the conditions evaluates to true, then the response from the last node in the tree, which typically has a special anything_else condition that always evaluates to true, is returned.

When you start to build the dialog, you must determine the branches to include and where to place them. The order of the branches is important because nodes are evaluated from first to last. The first node whose condition matches the input is used; any nodes that come later in the tree are not triggered.
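The top-to-bottom evaluation with an anything_else fallback can be sketched with a flat list of leaf nodes (real dialog trees also descend into child nodes; this sketch keeps every node a leaf for brevity, and the intents are taken from earlier examples):

```python
# Nodes are evaluated first to last; the first matching condition wins.
tree = [
    {"condition": "#greeting", "response": "Hello!"},
    {"condition": "#location_info", "response": "Which city are you in?"},
    {"condition": "anything_else", "response": "I didn't understand. Can you rephrase?"},
]

def evaluate(intent, nodes):
    """Return the response of the first node whose condition matches the intent."""
    for node in nodes:
        if node["condition"] in (intent, "anything_else"):
            return node["response"]

print(evaluate("#greeting", tree))  # → Hello!
print(evaluate("#refund", tree))    # falls through to the anything_else node
```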

Watson Assistant Components

Figure 11-47. Watson Assistant components


Dialog depiction

This diagram shows a mockup of a dialog tree that is built with the GUI

dialog builder tool. It contains two root dialog nodes. A typical dialog tree

would likely have many more nodes, but this depiction provides a glimpse

of what a subset of nodes might look like.


The first root node has conditions for an intent value. It has two child nodes, and each has a condition on an entity value. The second child node defines two responses. The first response is returned to the user if the value of the context variable matches the value that is specified in the condition. Otherwise, the second response is returned.

This standard type of node is useful to capture questions about a certain

topic and then in the root response ask a follow-up question that is

addressed by the child nodes. For example, it might recognize a user

question about discounts and ask a follow-up question about whether the

user is a member of any associations with which the company has special

discount arrangements. The child nodes provide different responses based

on the user's answer to the question about association membership.

The second root node is a node with slots. It also has conditions on an

intent value. It defines a set of slots, one for each piece of information that

you want to collect from the user. Each slot asks a question to elicit the

answer from the user. It looks for a specific entity value in the user's reply

to the prompt, which it then saves in a slot context variable.

This type of node is useful for collecting details that you might need to

perform a transaction on the user's behalf. For example, if the user's intent
is to book a flight, the slots can collect the origin and destination location

information, travel dates, and so on.

Switching the Experience


You can easily switch back and forth between the new experience and the classic experience.

However, the new experience provides a simplified user interface, an improved deployment

process, and access to the latest features.

To switch between the new and classic experiences, follow these steps:

1.​ From the Watson Assistant interface, click the Manage icon to open your account menu.
2.​ Select Switch to classic experience or Switch to new experience from the account menu.

You don’t lose any work if you switch to a different experience, and other users of the same

instance are not affected. However, keep in mind that any work you do in one experience is not

available in the other experience. You can switch back and forth at any time.

Figure 11-48. Switching the experience
