Bot Framework Composer Documentation


Contents

Bot Framework Composer Documentation


Overview
Introduction to Bot Framework Composer
What's new?
Composer releases
Installation
Install Composer
Quickstart
Tour of Composer
Create your first bot
Tutorials
0. Tutorial introduction
1. Create a bot
2. Add a dialog
3. Get weather report
4. Add help and cancel command
5. Add Language Generation
6. Use cards
7. Add LUIS
Concepts
Dialog
Events and triggers
Conversation flow and memory
Introduction to Natural Language Processing in Composer
Language generation
Language understanding
Composer plugins
Composer best practices
Samples
Use samples
Develop
Send messages
Send cards
Ask for user input
Manage conversation flow
Add LUIS to your bot
Author QnA Maker knowledge base
Add QnA to your bot
Define triggers and events
Define advanced intent and entities
Use OAuth
Send an HTTP request
Connect to a skill
Add custom actions
Extend Composer with plugins
Host Composer in the cloud
Author bots in multiple languages
Capture bot's telemetry data
Test
Test your bot in the Emulator
Debug
Linting and validation
Publish
Publish your bot to Azure
Glossary
Concepts and terms
Resources
Adaptive dialogs
Language generation
Adaptive expressions
.lu file format
.lg file format
Introduction to Bot Framework Composer
9/21/2020 • 3 minutes to read

Bot Framework Composer is an open-source visual authoring canvas for developers and multidisciplinary teams to
build bots. Composer integrates language understanding services such as LUIS and QnA Maker and allows
sophisticated composition of bot replies using Language Generation. Composer is available as a desktop
application as well as a web-based component.
Built with the latest features of the Bot Framework SDK, Composer provides everything you need to build a
sophisticated conversational experience:
A visual editing canvas for conversation flows
Tools to author and manage language understanding (NLU) and QnA components
Powerful language generation and templating system
A ready-to-use bot runtime executable

What you can do with Composer


Composer is a visual editing canvas for building bots. You can use it to do the following:
Build bots without the need to write code
Author and publish NLP data such as LUIS models and QnA Maker knowledge bases
Author and validate language generation templates
Author bots in multiple languages
Publish bots to Azure App Service and Azure Functions
Integrate external services such as QnA Maker knowledge base
Beyond a visual editing canvas, you can use Composer to do the following:
Import and export dialog assets to share with other developers
Build, export, and call a skill
Export and customize the runtime (C# | JavaScript preview)
Create your own custom actions
Host Composer in the cloud
Extend Composer with plugins
Under the hood, Composer harnesses the power of many of the components from the Bot Framework SDK. When
building bots in Composer, developers will have access to:
Adaptive dialogs
Dialogs provide a way for bots to manage conversations with users. The new Adaptive dialog and the event model
simplify sophisticated conversation modelling and help you focus on the model of the conversation rather than the
mechanics of dialog management. Read more in the dialog concept article.
Language Understanding (LU)
LU is a core component of Composer that allows developers and conversation designers to train language
understanding directly in the context of editing a dialog. As dialogs are edited in Composer, developers can
continuously add to their bots' natural language capabilities using the .lu file format, a simple Markdown-like
format that makes it easy to define new intents and provide sample utterances. In Composer, you can use both the
Regular Expression recognizer and the LUIS service. Composer detects changes and updates the bot's cloud-based
natural-language understanding (NLU) model automatically so it is always up to date. Read more in the language
understanding concept article.
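As a sketch, a minimal .lu fragment might look like the following (the intent name and utterances here are hypothetical examples, not taken from this document):

```
# GetWeather
- what's the weather
- will it rain tomorrow
- weather forecast for 98052
```

Each # heading defines an intent, and each - line beneath it is a sample utterance used to train the recognizer.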

Language Generation (LG)


Creating grammatically correct, data-driven responses that have a consistent tone and convey a clear brand voice
has always been a challenge for bot developers. Composer's integrated Language Generation (LG) system allows
developers to create bot replies with a great deal of flexibility. Read more in the language generation concept
article.
With Language Generation, you can easily achieve previously complex tasks, such as:
Including dynamic elements in messages
Generating grammatically correct lists, pronouns, and articles
Providing context-sensitive variation in messages
Creating Adaptive Card attachments
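As a sketch of the .lg template format (the template name and property references are hypothetical; the ${...} placeholders are adaptive expressions):

```
# WeatherReport
- The weather is ${dialog.weather.weather} and the temp is ${dialog.weather.temp}°
- It's ${dialog.weather.weather} right now, with a temperature of ${dialog.weather.temp}°
```

When the template is evaluated, one variation is chosen and its expressions are resolved against memory, which is how context-sensitive variation in messages is achieved.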
QnA Maker
QnA Maker is a cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational
layer over your data. It can be used to find the most appropriate answer for any given natural language input, from
your custom knowledge base (KB) of information.
Bot Framework Emulator
Emulator is a desktop application that allows bot developers to test and debug bots built using Composer.

Advantages of developing bots with Composer


Developers familiar with the Bot Framework SDK will notice differences between bots developed with it and Bot
Framework Composer. Some of the advantages of developing bots in Composer include:
Use of Adaptive Dialogs allows for Language Generation (LG), which can simplify interruption handling and give
bots character.
The visual design surface in Composer eliminates the need for boilerplate code and makes bot
development more accessible. You no longer need to navigate between experiences to maintain your LU model, as it
is editable within the app.
Time saved with fewer steps to set up your environment.
A major difference between the current version of the Bot Framework SDK and Composer is that apps created
using Composer use the Adaptive dialog format, a JSON specification shared by many tools provided by the Bot
Framework. More information about Adaptive dialogs is available here.
Composer bot projects contain reusable assets (dialogs, language understanding (LU) training data, and message
templates) in the form of JSON and Markdown files that can be bundled and packaged with a bot's source code.
These can be checked into source control systems and deployed along with code updates.
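For illustration, a pared-down .dialog file with a single trigger and action might look roughly like this (a sketch only; the exact schema and $kind values depend on the SDK version):

```json
{
  "$kind": "Microsoft.AdaptiveDialog",
  "triggers": [
    {
      "$kind": "Microsoft.OnConversationUpdateActivity",
      "actions": [
        {
          "$kind": "Microsoft.SendActivity",
          "activity": "Hi! I'm a friendly bot."
        }
      ]
    }
  ]
}
```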

Additional resources
Bot Framework SDK
Adaptive dialog
Language generation
Adaptive expressions

Next steps
Read best practices for building bots using Composer.
Learn how to create an echo bot using Composer.
What's new September 2020
9/21/2020 • 2 minutes to read

Bot Framework Composer, a visual authoring tool for building Conversational AI applications, has seen strong
uptake from customers and positive feedback since entering general availability at Microsoft BUILD 2020. We
continue to invest in ensuring Composer provides the best possible experience for our customers.
Welcome to the September 2020 release of Bot Framework Composer. This article summarizes key new features
and improvements in Bot Framework Composer 1.1.1 stable release. There are a number of updates in this version
that we hope you will like. Some of the key highlights include:
QnA Maker knowledge base creation
Integrated QnA Maker knowledge base creation and management in addition to the existing LUIS
integration for language understanding. This reduces the need for a customer to leave the context of the
Composer environment.
Multilingual authoring capabilities
Internationalization of the product broadens its accessibility, and new multilingual capabilities
let bots built with Composer do the same for their users.
JavaScript runtime in preview
A continued focus on the fundamentals of the application, with improved performance, enhancements to the
overall authoring experience, and broader inclusion for our user base with a preview of the Composer
runtime in JavaScript, in addition to the existing C# runtime. This enables customers to export the runtime
and use it for other purposes such as adding custom actions.
Skills manifest generation
An improved experience for generating a Bot Framework skill manifest by adding trigger and dialog selections
to the forms. This enables our customers to select the triggers and dialogs they want to include in the manifest
and add the corresponding activity types to the manifest's activities property.
Deeper integration with Azure platform
Deeper integration with the Azure platform for publishing applications built with Composer, along with
management of related services.
Additional integration with Power Virtual Agents
Additional integration with Power Virtual Agents, part of the Power Platform, including improved capabilities
to extend PVA solutions by building Bot Framework skills.
Other improvements
Improved language generation editing performance
Support for UI schema fly-out menu and form
IntelliSense server for Composer text editor
Recoil refactor of state management
Insiders: Want to try new features as soon as possible? You can download the nightly Insiders build and try the
latest updates as soon as they are available!
Additional information
Read more in Composer 1.1.1 release notes here.
Install Bot Framework Composer
9/21/2020 • 2 minutes to read

You can choose to download and use Bot Framework Composer as an installable desktop application: Windows |
macOS | Linux. Make sure you install the Bot Framework Emulator and .NET Core SDK 3.1 or above. Alternatively,
you can build Composer from source.

Install Composer as a desktop application


Prerequisites
The Bot Framework Emulator.
The .NET Core SDK 3.1 or above.
Download and use Composer: Windows | macOS | Linux.

Build Composer from source


This section will walk you through how to run Composer as a hosted web app locally using Yarn.
Prerequisites
Git
Node.js. Use version 12.13.0 or later.
The latest stable release of Yarn.
The Bot Framework Emulator.
The .NET Core SDK 3.1 or above.
Installation instructions
1. To start, open a terminal and clone the Composer GitHub repository. You will use this terminal for the rest
of the steps in this section.

git clone https://github.com/microsoft/BotFramework-Composer.git

2. After cloning the repository, navigate to the Bot Framework Composer folder. For example:

cd C:\Users\UserName\Documents\GitHub\BotFramework-Composer

3. Then run the following commands to navigate to the Composer folder and get all required packages:

cd Composer
yarn

4. Next, run the following command to build the Composer application. This command can take several
minutes to finish:

yarn build
NOTE
If you are having trouble installing or building Composer, run yarn tableflip. This will remove all of the Composer
application's dependencies (node_modules) so they can be reinstalled and rebuilt. Once
completed, run yarn install and yarn build again. This process generally takes 5-10 minutes.

5. Again using Yarn, start the Composer authoring application and the bot runtime:

yarn startall

6. Once you see Composer now running at: appear in your terminal, you can run Composer in your
browser using the address http://localhost:3000.

Keep the terminal open as long as you plan to work with Composer. If you close it, Composer will stop running.
The next time you need to run Composer, simply run yarn startall from the Composer directory.

Next steps
Create an echo bot using Composer.
Tour of Composer
9/21/2020 • 2 minutes to read

Bot Framework Composer provides Onboarding functionality to help you get familiar with the bot creation process.
This functionality consists of a product tour that includes five sections with each section containing one or more
tips.

Prerequisites
Install Composer
Create an Echo bot

To run the Composer product tour:


1. Select Settings in the menu on the left side of the Composer screen.
2. Once in the Settings screen, select Application Settings.
3. Once in the Application Settings screen, toggle the Onboarding switch to On.
The Onboarding feature is now enabled. You can now start the product tour.
4. To start the product tour, select Design on the menu.
5. You will see the Onboarding Welcome! screen appear at the bottom right corner of the screen. The
Onboarding tour consists of five sections. Each section consists of one or more tips. Select Learn the basics
and you will start your Onboarding tour.

6. You can navigate backwards or forward through the tips of a section by using the Previous or Next
buttons.
You can exit the tour at any time by selecting anywhere outside the tour overlay. If you do, you will
see a popup window asking if you would like to Leave Product Tour. If you select Yes, your onboarding
process will end. If you select Cancel, your onboarding process continues.

7. Once you complete a section, select Done and you will return to the Onboarding Welcome! screen
where you can continue to the next section of the tour.

Sections that contain only a single tip will not have the Previous, Next, or Done buttons; instead, you can
select the Got it! button to move to the next section. Once you complete a section, you cannot go back to it
without restarting the onboarding tour.
8. Once you complete the tour, select Done!. The Onboarding switch in your settings will automatically be set
to Disabled.

You can restart the onboarding tour anytime by repeating these steps.
Next steps
Learn how to build a weather bot.
Create your first bot
9/21/2020 • 2 minutes to read

In this quickstart you will learn how to create a bot using the Echo Bot template in Composer and test it in the
Bot Framework Emulator.

Prerequisites
Download and use Bot Framework Composer as an installable desktop application: Windows | macOS | Linux.
Make sure you install the Bot Framework Emulator and .NET Core SDK 3.1 or above.
Or build Composer with source.

Create an echo bot


1. After starting Composer in a browser, click the Echo Bot button at the top of the Examples list on the
homepage.

2. Enter a Name and Description for your bot. Choose where you want to save the bot or keep the default
location and click Next.
3. You will now see your bot's main dialog.

4. Test your bot by clicking Start Bot in the top right. You will then see the Test in Emulator button show
up. Click Test in Emulator.

5. Type anything in the Emulator to have the bot echo back your response.

Congratulations! You've successfully created an echo bot!


Next Steps
Create a weather bot using Composer.
The Bot Framework Composer tutorials
3/25/2020 • 2 minutes to read

Welcome to the Bot Framework Composer tutorials. These start with the creation of a simple bot, with each
successive tutorial building on the previous one and adding capabilities designed to teach some of the
basic concepts required to build bots with the Bot Framework Composer.
In these tutorials, you will build a weather bot using Composer, starting with a simple bot and gradually
introducing more sophistication. You'll learn how to:
Create a simple bot and test it in the Emulator
Add multiple dialogs to help your bot fulfill more than one scenario
Use prompts to ask questions and get responses from an HTTP request
Handle interruptions in the conversation flow in order to add global help and the ability to cancel at any time
Use Language Generation to power your bot's responses
Send responses with cards
Use LUIS in your bot

Prerequisites
A good understanding of the material covered in the Introduction to Bot Framework Composer, including the
naming conventions used for elements in Composer.

Next step
Tutorial: Create a new bot and test it in the Emulator
Tutorial: Create a new bot and test it in the Emulator
9/21/2020 • 3 minutes to read

This tutorial walks you through creating a basic bot with the Bot Framework Composer and testing it in the
Emulator.
In this tutorial, you learn how to:
Create a basic bot using Bot Framework Composer
Run your bot locally and test it using the Bot Framework Emulator

Prerequisites
Bot Framework Composer
Bot Framework Emulator

Create a new bot


The first step in creating a bot with the Bot Framework Composer is to create a new bot project from the home
screen in Composer. This will create a new folder locally on your computer with all the files necessary to build, test
and run the bot.
1. From the home screen, select New .

2. In the Create from scratch? screen, you'll be presented with different options to create your bot. For this
tutorial, select Create from Scratch, then Next.
3. In the Define conversation objective form:
a. Enter the name WeatherBot in the Name field.
b. Enter A friendly bot who can talk about the weather in the Description field.
c. Select the location to save your bot.
d. Save your changes and create your new bot by selecting Next.

TIP
Spaces and special characters are not allowed in the bot's name.

After creating your bot, Composer will load the new bot's main dialog in the editor. It should look like this:
NOTE
Each dialog contains one or more triggers that define the actions available to the bot while the dialog is active.
When you create a new bot, an Activities trigger of type Greeting (ConversationUpdate activity) is
automatically provisioned. Triggers help your dialog capture events of interest and respond to them using actions.

TIP
To help keep bots created in Composer organized, you can rename any trigger to something that better describes
what it does.

4. Click the Greeting trigger in the navigation pane.

TIP
Steps 4-8 are demonstrated in the image immediately following step 8.

5. In the Properties panel on the right side of the screen, select the trigger name and type
WelcomeTheUser.
6. Next you will start adding functionality to your bot by adding Actions to the WelcomeTheUser trigger.
You do this by selecting the plus (+) icon in the Authoring canvas, then selecting Send a response from the
list of actions.

TIP
Selecting the plus (+) icon in the Authoring canvas adds Actions to the conversation flow. You can use
this to add actions to the end of a flow, or to insert new actions between existing actions.

Now, it's time to make the bot do something.


You will see that the flow in the Authoring canvas starts with the trigger name, with a line below it that
includes a + button.
For now, instruct the bot to send a simple greeting.
7. Select the new Send a response action in the Authoring canvas and its properties will appear on the
right-hand side of the screen in the Properties panel. This action has only one property, the text of the
activity to send.
8. Type a welcome message into this field. It is always a good idea to have your bot introduce itself and explain
its main features, something like:
Hi! I'm a friendly bot that can help with the weather. Try saying WEATHER or FORECAST.

Start your bot and test it


Now that your new bot has its first simple feature, you can launch it in the emulator and verify that it works.
1. Click the Start Bot button in the upper right-hand corner of the screen. This tells Composer to launch the
bot's runtime, which is powered by the Bot Framework SDK.
2. After a second the Start Bot button will change to Restart Bot, which indicates that the bot's runtime has
started. Simultaneously, a new link labeled Test in Emulator will appear next to the button. Selecting this
link will open your bot in the Emulator.

Soon the Emulator will appear, and the bot should immediately greet you with the message you just
configured:
You now have a working bot, and you're ready to add some more substantial functionality!

Next steps
Tutorial: Adding dialogs to your bot
Tutorial: Adding dialogs to your bot
9/21/2020 • 3 minutes to read

In the previous tutorial you learned how to create a new bot using Bot Framework Composer. In this tutorial you
will learn how to add additional dialogs to your bot and test them using Bot Framework Emulator.
It can be useful to group functionality into different dialogs when building the features of your bot with
Composer. This helps keep the dialogs organized and allows sub-dialogs to be combined into larger and more
complex dialogs.
A dialog contains one or more triggers. Each trigger consists of one or more actions which are the set of
instructions that the bot will execute. Dialogs can also call other dialogs and can pass values back and forth
between them.
In this tutorial, you learn how to:
Build on the basic bot created in the previous tutorial by adding an additional dialog.
Run your bot locally and test it using the Bot Framework Emulator.

Prerequisites
Completion of the tutorial Create a new bot and test it in the Emulator
Knowledge about dialogs in Composer

What are you building?


The main function of this bot is to report current weather conditions.
To do this, you will create a dialog that:
Prompts the user to enter a zip code to use as location for weather lookup
Calls an external API to retrieve the weather data for a specific zip code

TIP
Create all of your bot components and make sure they work together before creating detailed functionality.

Create a new dialog


1. Select + Add and then select Add new dialog in the toolbar. A dialog box will appear and ask for a Name and
Description.

2. Fill in the Name field with getWeather and the Description field with Get the current weather
conditions, then select Next.
3. Composer will create the new dialog with a pre-configured BeginDialog trigger.
For now, we'll just add a simple message to get things hooked up, then come back to flesh out the feature in
a later tutorial.
4. In the BeginDialog trigger, select the plus (+) icon in the Authoring canvas, then select the Send a
response action.
5. Once the new action is created, enter the following text into the Properties panel:
Let's check the weather
You'll have a flow that looks like this:

Connect your new dialog


You can break pieces of your conversation flow into different dialogs and can chain them together. Next you need
to get the newly created getWeather dialog connected to the main dialog.
1. Select WeatherBot in the Navigation pane.
2. Find the Language Understanding section in the Properties panel.

Each dialog can have its own recognizer, a component that lets the bot examine an incoming message
and decide what it means by choosing between a set of predefined intents. Different types of
recognizers use different techniques to determine which intent, if any, to choose.

3. Select Regular Expression from the Recognizer Type drop-down list.


4. In the toolbar select + Add and then select Add new trigger to create a new trigger in the WeatherBot
dialog.

5. Select Intent recognized from the What is the type of this trigger? drop-down list. Enter weather for
both the What is the name of this trigger (RegEx) and the Please input regex pattern fields.

NOTE
This tells the bot to look for the word "weather" anywhere in an incoming message. Regular expression patterns are
generally much more complicated, but this is adequate for the purposes of this example.
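The matching behavior the note describes can be sketched in Python. This is an illustration of substring-style regex matching, not Composer's actual recognizer code, and the case-insensitivity shown here is an assumption for the example:

```python
import re

# A trigger pattern like "weather" fires when the pattern appears
# anywhere in the incoming message (illustrative sketch only).
pattern = re.compile(r"weather", re.IGNORECASE)

def intent_recognized(message: str) -> bool:
    # True when the regex pattern is found anywhere in the message.
    return pattern.search(message) is not None

print(intent_recognized("What's the weather like?"))  # True
print(intent_recognized("Tell me a joke"))            # False
```

A richer pattern such as `weather|forecast` would let the same trigger fire on either word.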

6. Next, create a new action for the Intent recognized trigger you just created. You can do this by selecting
the + sign under the trigger node in the Authoring canvas, then selecting Begin a new dialog from the
Dialog management menu.
7. In the Properties panel for the new Begin a new dialog action, select getWeather from the dialog
name drop-down list.

Now when a user enters weather, your bot will respond by activating the getWeather dialog.
In the next tutorial you will learn how to prompt the user for additional information, then query a weather service
and return the results to the user. But first you need to validate that the functionality developed so far works
correctly; you will do this using the Emulator.

Test bot in the Emulator


1. Select the Restart Bot button in the upper right-hand corner of the Composer window. This will update the
bot runtime app with all the new content and settings. Then select Test in Emulator. When the Emulator
connects to your bot, it'll send the greeting you configured in the last section.

2. Send the bot a message that says weather. The bot should respond with your test message, confirming
that your intent was recognized as expected, and the fulfillment action was triggered.
Next steps
Tutorial: Creating the weather bot - Adding actions to your dialog
Tutorial: Adding actions to your dialog
9/21/2020 • 6 minutes to read

In this tutorial you will learn how to add actions to your dialog in Composer. You will prompt the user for their zip
code, and then the bot will respond with the weather forecast for the specified location based on a query to an
external service.
In this tutorial, you learn how to:
Add actions in your trigger to prompt the user for information
Create properties with default values
Save data into properties for later use
Retrieve data from properties and use it to accomplish tasks
Make calls to external services

Prerequisites
Completion of the tutorial Adding dialogs to your bot.
Knowledge about dialogs in Composer, specifically actions.
Knowledge about conversation flow and memory.

Get weather report


Before you can get the weather forecast you need to know the desired location. You can create a Text Input action
to prompt the user for a zip code to pass to the weather service.
1. Select getWeather in the Navigation pane to show the getWeather dialog, and then select the
BeginDialog trigger.
2. To create the Text Input action, select the topmost + in the Authoring canvas, then select Text from the
Ask a question menu. See the Asking for user input article for more information about requesting and
validating different data types.
After selecting Text from the Ask a question menu, you will notice that two new nodes appear in the flow.
Each node corresponds to a tab in the Properties panel as shown in the following image:
Bot Asks refers to the bot's prompt to the user for information.
User Input enables you to assign the user input to a property that is saved in memory and can be used
by the bot for further processing.
Other enables you to validate the user input and respond with a message if invalid input is entered.
3. Select the Bot Asks tab in the Properties panel and enter What is your zip code? into the Prompt field.
This is what the bot will display to the user to request their input.

4. Select the User Input tab in the Properties panel. This part of the prompt represents the user's response,
including where to store the value and how to pre-process it. Enter user.zipcode in the Property field.
5. Next, in the User Input tab, select expression and then enter =trim(this.value) in the Output Format
field. trim() is a prebuilt function in adaptive expressions. This function trims all leading and trailing spaces
from the user's input before the value is validated and assigned to the property defined in the Property field,
user.zipcode.
6. Select the Other tab in the Properties panel. This is where you can specify your validation rules for the
prompt, as well as any error messages that will be displayed to the user if they enter an invalid value based
on the Validation Rules you create.
7. In the Unrecognized Prompt field, enter:
Sorry, I do not understand '${this.value}'. Please specify a zip code in the form 12345
8. In the Validation Rules field, enter:
length(this.value) == 5
This validation rule states that the user input must be 5 characters long. If the user input is shorter or longer
than 5 characters, your bot will send an error message.

IMPORTANT
Make sure to press the enter key after entering the validation rule. If you don't press enter, the rule will not be added.
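Taken together, the =trim(this.value) output format and the length(this.value) == 5 validation rule behave roughly like the following Python sketch (the helper names are hypothetical; this mirrors the expressions, not Composer internals):

```python
def output_format(value: str) -> str:
    # Mirrors =trim(this.value): strip leading/trailing whitespace.
    return value.strip()

def is_valid(value: str) -> bool:
    # Mirrors the validation rule length(this.value) == 5.
    return len(value) == 5

raw = "  98052 "                # hypothetical user input with stray spaces
trimmed = output_format(raw)    # "98052"
print(is_valid(trimmed))        # True
print(is_valid("1234"))         # False
```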

9. In the Invalid Prompt field, enter:


Sorry, '${this.value}' is not valid. I'm looking for a 5 digit number as zip code. Please specify a
zip code in the form 12345
10. Set the Default value property (next to Max turn count) to 98052.

NOTE
By default, prompts are configured to ask the user for information up to Max turn count times (defaults to 3).
When the max turn count is reached, the prompt will stop and the property will be set to the value defined in the
Default value field before moving forward with the conversation.

You have created an action in your BeginDialog trigger that will prompt the user for their zip code and placed it
into the user.zipcode property. Next you will pass the value of that property in an HTTP request to a weather
service, validate the response, and if it passes your validation display the weather report to the user.

Add an HTTP request


In this section, we will demonstrate the process of adding an HTTP request, capturing the results into a property,
and then determining what action to take depending on the results.
1. Select the + under the last action in the Authoring canvas, then select Send an HTTP request from the
Access external resources menu.
2. In the Properties panel, select GET from the HTTP method drop-down list. Enter the following in the Url
field:
field:
http://weatherbot-ignite-2019.azurewebsites.net/api/getWeather?zipcode=${user.zipcode}&api_token=Your_API_Token
This will enable the bot to make an HTTP request to the specified URL. The reference to ${user.zipcode}
will be replaced by the value from the bot's user.zipcode property. You can get an api_token for free from
the OpenWeather website.
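How ${user.zipcode} expands into the request URL can be sketched in Python (illustrative only; Your_API_Token remains a placeholder as above, and the zip code value is hypothetical):

```python
from urllib.parse import urlencode

# Illustrates how Composer substitutes the stored property value
# into the URL before sending the GET request.
user_zipcode = "98052"  # hypothetical value captured by the prompt
base = "http://weatherbot-ignite-2019.azurewebsites.net/api/getWeather"
url = base + "?" + urlencode({"zipcode": user_zipcode, "api_token": "Your_API_Token"})
print(url)
```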

3. Next, still in the Properties panel, enter the following in the Result property field:
dialog.api_response
Result property represents the property where the result of this action will be stored. The result can
include any of the following four properties from the HTTP response:
statusCode. This can be accessed via dialog.api_response.statusCode .
reasonPhrase. This can be accessed via dialog.api_response.reasonPhrase .
content. This can be accessed via dialog.api_response.content .
headers. This can be accessed via dialog.api_response.headers .
If the Response type is JSON, the content will be a deserialized object available via the
dialog.api_response.content property.
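Based on the four properties listed above, the value stored in dialog.api_response might look roughly like this hypothetical example (the weather and temp fields match the response template used later in this tutorial; the actual service response may differ):

```json
{
  "statusCode": 200,
  "reasonPhrase": "OK",
  "content": { "weather": "Clouds", "temp": 54 },
  "headers": { "content-type": "application/json" }
}
```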
4. After making an HTTP request, you need to test the status of the response and handle errors as they occur.
You can use an If/Else branch for this purpose. To do this, select the + that appears beneath the Send an HTTP
request action you just created, then select Branch: if/else from the Create a condition menu.
5. In the Properties panel on the right, enter the following value into the Condition field:
dialog.api_response.statusCode == 200
6. In the True branch, select the + button, then select Set a Property from the Manage properties menu.
7. In the Properties panel on the right, enter dialog.weather into the Property field.
8. Next, enter =dialog.api_response.content into the Value field.

9. While still in the True branch, select the + button that appears beneath the action created in the previous
step, then select Send a response.
10. In the Properties panel on the right, enter the following response to send:
- The weather is ${dialog.weather.weather} and the temp is ${dialog.weather.temp}°

The flow should now appear in the Authoring canvas as follows:

You will now tell the bot what to do in the event that the statusCode returned is not 200.
11. Select the + button in the False branch, then select Send a response and set the text of the message to:
I got an error: ${dialog.api_response.content.message}
12. For the purposes of this tutorial, assume that reaching this branch means the zip code was
invalid. The invalid value should be removed so that it does not persist in the
user.zipcode property. To remove it, select the + button that follows the
Send a response action you created in the previous step, then select Delete a property from the
Manage properties menu.
13. In the Properties panel on the right, enter user.zipcode into the Property field.
The flow should appear in the Authoring canvas as follows:
You have now completed adding an HTTP request to your BeginDialog trigger. The next step is to validate that
these additions to your bot work correctly. To do that you can test it in the Emulator.
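As an aside, the If/Else branch above follows a standard check-then-handle pattern. The following Python sketch mirrors that logic outside of Composer (illustrative only; the weather and temp fields mimic the simplified result shape assumed in this tutorial, not the raw OpenWeather payload):

```python
# Illustrative Python version of the If/Else branch in the BeginDialog trigger.
# This is not Composer's runtime; the "weather" and "temp" fields mimic the
# simplified result shape assumed by this tutorial.

def handle_weather_response(status_code, content):
    """Return the bot's reply for one HTTP result.

    `status_code` and `content` stand in for dialog.api_response.statusCode
    and dialog.api_response.content (deserialized JSON when Response type
    is JSON).
    """
    if status_code == 200:
        # True branch: Set a property (dialog.weather), then send the report.
        weather = content
        return f"The weather is {weather['weather']} and the temp is {weather['temp']}°"
    # False branch: report the error; the Composer flow also deletes
    # user.zipcode at this point.
    return f"I got an error: {content['message']}"

print(handle_weather_response(200, {"weather": "Clouds", "temp": 75}))
print(handle_weather_response(404, {"message": "city not found"}))
```

The same success/failure split applies to any Send HTTP Request action: test the status code first, then branch.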

Test in the Emulator


1. Select the Restart bot button in the upper right-hand corner of the Composer screen, then Test in
Emulator.

2. After the greeting, send weather to the bot. The bot will prompt you for a zip code. Give it your home zip
code, and seconds later, you should see the current weather conditions.
Next steps
Tutorial: Adding Help and Cancel functionality to your bot
Tutorial: Adding Help and Cancel functionality to
your bot
9/21/2020 • 4 minutes to read

In the last tutorial you learned how to add actions to a trigger. In this tutorial you will learn how to handle
interruptions to conversation flow. In Composer you can add help topics to your bot and let users exit out of any
process at any time.
In this tutorial, you learn how to:
Create help topics that can be accessed from anywhere in the flow at any time.
Interrupt your bot's flow to enable your users to exit out of any process before it is completed.

Prerequisites
Completion of the tutorial Adding actions to your dialog.

Add Help and Cancel


With even a simple bot, it is a good practice to provide help. You'll also want to provide a way for users to exit at
any point in the flow.
Create a new dialog
1. Select + Add and then Add new dialog in the toolbar.
2. Enter help in the Name field and global help in the Description field of the Define conversation
objective form, then click Next .

Composer will create the new help dialog with one BeginDialog trigger pre-configured.
3. Select the BeginDialog trigger in the Navigation pane.
4. Create a new action at the bottom of the flow by selecting the plus + icon in the Authoring canvas , then
select Send a response from the list of actions.
5. Enter the following text into the Properties panel on the right side of the Composer screen:
I am a weather bot! I can tell you the current weather conditions. Just say WEATHER.
Create an Intent Recognized trigger
1. Select WeatherBot (the main dialog) from the dialog navigation pane.
2. In the Properties panel on the right side, select Regular Expression from the Recognizer Type drop-down list.

3. Select + Add and then + Add new trigger on the toolbar.


4. In the pop-up Create a trigger window, enter help for both the What is the name of this trigger
(RegEx) and the Please input regex pattern fields. Select Submit .

5. Next select + in the Authoring Canvas to create a new action, then select Begin a new dialog from the
Dialog management menu.
6. Next you need to specify the dialog to call when the help intent is recognized. You do this by selecting help
from the Dialog name drop-down list in the Properties panel.

Now, in addition to giving you the current weather, your bot should also offer help. You can verify this
using the Emulator.
7. Select Restart Bot and open it in the Emulator to verify you are able to call your new help dialog.
Notice that once you start the weather dialog by saying weather your bot doesn't know how to provide help since
it is still trying to resolve the zip code. This is why you need to configure your bot to allow interruptions to the
dialog flow.
Allowing interruptions
The getWeather dialog handles getting the weather forecast, so you will need to configure its flow to enable it to
handle interruptions, which will enable the new help functionality to work. The following steps demonstrate how
to do this.
1. Select the BeginDialog trigger in the getWeather dialog.
2. Select the Bot Asks (Text) action in the Authoring canvas .

3. Select the Other tab in the Properties panel. Set the Allow interruptions field to true.

This tells Bot Framework to consult the parent dialog's recognizer, which will allow the bot to respond to
help at the prompt as well.
4. Select Restart Bot and open it in the Emulator to verify you are able to call your new help dialog.
5. Say weather to your bot. It will ask for a zip code.
6. Now say help . It will now provide the global help response, even though that intent and trigger are defined
in another dialog.
You have learned how to interrupt a flow to include help functionality to your bot. Next you will learn how to add a
global cancel command that lets users exit out of a flow without completing it.
Global cancel
1. Follow the steps described in the create a new dialog section above to create a new dialog named cancel
and add a Send a response action with the response Canceling!.
2. Add another action by selecting + at the bottom of the flow in the Authoring canvas then select Cancel
all dialogs from the Dialog management menu. When Cancel all dialogs is triggered, the bot will
cancel all active dialogs, and send the user back to the main dialog.

Next you will add a cancel intent, the same way you added the help intent in the previous section.
3. Follow steps 1 to 5 described in the create an intent recognized trigger section above to create a cancel
intent in the main dialog (WeatherBot) and add a Begin a new dialog action in the cancel trigger. You
also need to specify the dialog to call when the cancel intent is recognized. You do this by selecting cancel
from the Dialog name drop-down list in the Properties panel.
Now, your users will be able to cancel out of the weather dialog at any point in the flow. You can verify this
using the Emulator.
4. Select Restart Bot and open it in the Emulator to verify you are able to cancel.
5. Say weather to your bot. The bot will ask for a zip code.
6. Now say help . The bot will provide the global help response.
7. Now, say cancel . Notice that the bot doesn't resume the weather dialog but instead, it confirms that you
want to cancel, then waits for your next message.

Next steps
Tutorial: Adding Language Generation to your bot to power your bot's responses.
Tutorial: Adding language generation to your bot
9/21/2020 • 3 minutes to read

Now that your bot can perform its basic tasks, it's time to improve its conversational abilities. The ability to
understand what your user means conversationally and contextually, and to respond with useful information, is
often the primary challenge for a bot developer. Bot Framework Composer integrates with the Bot Framework
Language Generation library, a set of powerful templating and message formatting tools that let you include
variation, conditional messages, and dynamic content. LG gives you greater control of how your bot responds to
the user.
In this tutorial, you learn how to:
Integrate Language Generation into your bot using Composer

Prerequisites
Completion of the tutorial Adding Help and Cancel functionality to your bot
Knowledge about Language Generation

Language Generation
Let's start by adding some variation to the welcome message.
1. Go to the Navigation pane and select the WeatherBot dialog's WelcomeTheUser trigger.
2. Select the Send a response action in the Authoring Canvas .

3. Replace the response text in the Properties panel with the following:

- Hi! I'm a friendly bot that can help with the weather. Try saying WEATHER.
- Hello! I am Weather Bot! Say WEATHER to get the current conditions.
- Howdy! Weather bot is my name and weather is my game.
Your bot will randomly select any of the above phrases when responding to the user. Each phrase must
begin with the dash (-) character on a separate line. For more information see the Template and Anatomy of
a template sections of the Language Generation article.
4. To test your new phrases select the Restart Bot button in the Toolbar and open it in the Emulator. Click
Restart conversation a few times to see the results of the greetings being randomly selected.
Currently, the bot reports the weather in a very robotic manner:

The weather is Clouds and it is 75°.

You can improve the language used when delivering the weather conditions to the user by utilizing two
features of the Language Generation system: conditional messages and parameterized messages.
5. Select Bot Responses from the Composer menu.
You'll notice that every message you created in the flow editor also appears here, and these LG templates are
grouped by dialog. They're linked, and any changes you make in this view will be reflected in the flow as
well.
6. Select getWeather in the navigation pane and toggle the Edit Mode switch in the upper right-hand corner
so that it turns blue. This will enable a syntax-highlighted LG editor in the main pane. You can now edit LG
templates in the selected dialog getWeather.

7. Scroll to the bottom of the editor and paste the following text:
# DescribeWeather(weather)
- IF: ${weather.weather=="Clouds"}
- It is cloudy
- ELSEIF: ${weather.weather=="Thunderstorm"}
- There's a thunderstorm
- ELSEIF: ${weather.weather=="Drizzle"}
- It is drizzling
- ELSEIF: ${weather.weather=="Rain"}
- It is raining
- ELSEIF: ${weather.weather=="Snow"}
- There's snow
- ELSEIF: ${weather.weather=="Clear"}
- The sky is clear
- ELSEIF: ${weather.weather=="Mist"}
- There's a mist in the air
- ELSEIF: ${weather.weather=="Smoke"}
- There's smoke in the air
- ELSEIF: ${weather.weather=="Haze"}
- There's a haze
- ELSEIF: ${weather.weather=="Dust"}
- There's dust in the air
- ELSEIF: ${weather.weather=="Fog"}
- It's foggy
- ELSEIF: ${weather.weather=="Ash"}
- There's ash in the air
- ELSEIF: ${weather.weather=="Squall"}
- There's a squall
- ELSEIF: ${weather.weather=="Tornado"}
- There's a tornado happening
- ELSE:
- ${weather.weather}

This creates a new Language Generation template named DescribeWeather . The template lets the LG system
use the data returned from the weather service in the weather.weather variable to generate a friendlier
response.
8. Select Design from the Composer Menu.
9. Select the getWeather dialog, then its BeginDialog trigger in the Navigation pane.

10. Scroll down in the Authoring Canvas and select the Send a response action that starts with The weather
is....
11. Now replace the response with the following:
- ${DescribeWeather(dialog.weather)} and the temp is ${dialog.weather.temp}°

This syntax lets you nest the DescribeWeather template inside another template. LG templates can be
combined in this way to create more complex templates.
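The conditional template and its nesting can also be modeled in plain Python, which may make the control flow clearer (an illustrative sketch only; the condition map is abridged, and Composer evaluates the LG template, not Python):

```python
# Illustrative Python model of the DescribeWeather LG template and of nesting
# it inside the response template. Composer evaluates LG, not Python; the
# condition map below is abridged.

DESCRIPTIONS = {
    "Clouds": "It is cloudy",
    "Thunderstorm": "There's a thunderstorm",
    "Drizzle": "It is drizzling",
    "Rain": "It is raining",
    "Snow": "There's snow",
    "Clear": "The sky is clear",
}

def describe_weather(weather):
    # The ELSE branch falls back to the raw condition string.
    return DESCRIPTIONS.get(weather["weather"], weather["weather"])

def weather_response(dialog_weather):
    # Mirrors: - ${DescribeWeather(dialog.weather)} and the temp is ${dialog.weather.temp}°
    return f"{describe_weather(dialog_weather)} and the temp is {dialog_weather['temp']}°"

print(weather_response({"weather": "Clouds", "temp": 75}))  # It is cloudy and the temp is 75°
```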
You are now ready to test your bot in the Emulator.
12. Select the Restart Bot button in the Toolbar then open it in the Emulator.
Now, when you say weather , the bot will send you a message that sounds much more natural than it did
previously.

Next steps
Tutorial: Incorporating cards and buttons into your bot
Tutorial: Incorporating cards and buttons into your
bot
9/21/2020 • 2 minutes to read

The previous tutorial taught how to add language generation to your bot to include variation, conditional
messages, and dynamic content that give you greater control of how your bot responds to the user. This tutorial
will build on what you learned in the previous tutorial by adding richer message content to your bot using Cards
and Buttons.
In this tutorial, you learn how to:
Add cards and buttons to your bot using Composer

Prerequisites
Completion of the tutorial Adding language generation to your bot
Knowledge about Language Generation
Knowledge about Cards
Knowledge about Sending responses with cards in Composer

Adding buttons
Buttons are added as suggested actions. You can add preset buttons to your bot that the user can select to provide
input. Suggested actions improve the user experience by letting users answer questions or make selections with
the tap of a button instead of having to type responses.
First, update the prompt for the user's zip code to include suggested actions for help and cancel actions.
1. Select the BeginDialog trigger in the getWeather dialog.
2. Select the Bot Asks (Text) action which is the second action in the flow.

3. Update the Prompt to include the suggested actions as shown below:


[Activity
Text = What is your zip code?
SuggestedActions = help | cancel
]

4. Click Restart Bot and then Test in Emulator.


Now when you say weather to your bot, you will not only see that your bot asks you for a zip code but also
presents help and cancel buttons as suggested actions.
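Under the hood, the [Activity ... SuggestedActions ...] template produces a message activity carrying suggested actions. A rough Python sketch of the resulting payload follows; the field names come from the Bot Framework Activity schema, and Composer's exact serialized output may differ:

```python
# Rough sketch of the message activity the [Activity ... SuggestedActions ...]
# template produces. Field names follow the Bot Framework Activity schema;
# Composer's exact serialized output may differ.
import json

def zip_prompt_activity():
    return {
        "type": "message",
        "text": "What is your zip code?",
        "suggestedActions": {
            "actions": [
                # imBack sends the button label back as if the user typed it.
                {"type": "imBack", "title": label, "value": label}
                for label in ("help", "cancel")
            ]
        },
    }

print(json.dumps(zip_prompt_activity(), indent=2))
```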

Adding cards
Now you can change the weather report to also include a card.
1. Scroll to the bottom of the Authoring canvas and select the Send a response node in the True branch
that starts with ${DescribeWeather(dialog.weather)}...
2. Replace the response with this Thumbnail Card:

[ThumbnailCard
title = Weather for ${dialog.weather.city}
text = The weather is ${dialog.weather.weather} and ${dialog.weather.temp}°
image = ${dialog.weather.icon}
]

3. Click Restart Bot in the Composer Toolbar. Once your bot has restarted click Test in Emulator.
In the Emulator, go through the bot flow, say weather and enter a zip code. Notice that the bot now
responds back with a card that contains the results along with a card title and image.
Next steps
Tutorial: Adding LUIS functionality to your bot
Tutorial: Using LUIS for Language Understanding
9/21/2020 • 4 minutes to read

Up until this point in the tutorials we have been using the Regular Expression recognizer to detect user intent.
The other recognizer currently available in Composer is the LUIS recognizer. The LUIS recognizer incorporates
Language Understanding (LU) technology that is used by a bot to understand a user's response and determine what
to do next in a conversation flow. Once the LUIS recognizer is selected you will need to provide training data in the
dialog to capture the user's intent contained in the message, which is then passed on to the triggers
that define how the bot will respond.
In this tutorial, you learn how to:
Add the LUIS recognizer to your bot.
Determine user intent and entities and use that to generate helpful responses.

Prerequisites
Completion of the tutorial Incorporating cards and buttons into your bot
Knowledge about Language Understanding concept article
A LUIS account and a LUIS authoring key.

Update the recognizer type


1. Select the main dialog WeatherBot in the Navigation pane.
2. Select LUIS from the Default recognizer drop-down list in the Properties panel.

In the next section, you will learn to create three Intent recognized triggers using LUIS recognizer in
WeatherBot . You can ignore or delete the Intent recognized triggers you created using Regular Expression in
the Add help and cancel command tutorial.

Add language understanding data and conditions


You need to add trigger phrases for each of the three triggers (cancel, help, and weather) that LUIS will use as
training data for your bot. You also need to set the Conditions for the help and weather dialogs.
1. Select + Add then Add new trigger in the toolbar.
2. Add the following language understanding training data in the Create a trigger form.
Select Intent recognized in the What is the type of this trigger field.
Enter cancel in the What is the name of this trigger (LUIS) field.
Enter example utterances using the .lu file format in the Trigger phrases field.
- cancel
- please cancel
- stop that

3. After you select Submit , you will see the trigger node in the authoring canvas.
4. In the Properties panel on the right-hand side of the Composer screen, set the Condition property to
#Cancel.Score >= 0.8 .
This tells your bot not to fire the cancel trigger if the confidence score returned by LUIS is lower than 80%.
LUIS is a machine learning based intent classifier and can return a variety of possible matches, so you will
want to avoid low confidence results.

5. Repeat steps 1 through 3 to create the weather trigger in the WeatherBot.Main dialog. Add the following
LU training data phrases to the Trigger phrases field:
# weather
- get weather
- weather
- how is the weather

6. Repeat steps 1 through 3 to create the help trigger in the WeatherBot dialog. Set the Condition property
to #Help.Score >= 0.5 and add the following LU training data phrases to the Trigger phrases field:

# help
- help
- I need help
- please help me
- can you help

7. Click the Restart Bot button in the Composer Toolbar.


8. Composer needs to publish the LUIS model you created in the LU Editor and needs your LUIS Key to do it.
Enter your LUIS key in the LUIS Primary key field of the Publish LUIS models form. If you do not have a
LUIS account, you can sign up for one on the LUIS site. Once entered, select OK to continue.

TIP
You can find your LUIS Primary key on the LUIS home page by selecting your user account icon in the top right side
of the screen, then Settings, then copy the value of the Primary key field in the Starter_Key section of the User
Settings page.

9. Select Test in Emulator from the Composer Toolbar .


With LUIS, you no longer have to type in exact regex patterns to trigger specific scenarios for your bot. Try
phrases like:
"How is the weather"
"Weather please"
"Cancel for me"
"Can you help me?"
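The Condition expressions such as #Cancel.Score >= 0.8 amount to thresholding the recognizer's confidence. A small illustrative Python sketch (the scores and thresholds below are made up; LUIS returns its own):

```python
# Illustrative confidence thresholding, mirroring Condition expressions such
# as #Cancel.Score >= 0.8. The scores and thresholds below are made up.

def pick_intent(scores, thresholds, default="None"):
    """Return the top-scoring intent only if it clears its threshold."""
    intent, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= thresholds.get(intent, 0.0):
        return intent
    return default  # low confidence: let an Unknown intent trigger handle it

scores = {"cancel": 0.62, "help": 0.21, "weather": 0.17}
thresholds = {"cancel": 0.8, "help": 0.5}
print(pick_intent(scores, thresholds))  # cancel is below 0.8, so prints None
```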

Using LUIS for entity extraction


You can use LUIS to recognize entities. An entity is a word or phrase extracted from the user's utterance that helps
clarify their intent.
For example, the user could say "How is the weather in 98052?" Instead of prompting the user again for a zip code,
your bot could respond with the weather. This is a very simple example of very powerful capabilities. For more
information on how to use LUIS for entity extraction in Composer, read the How to define intent with entities
article.
The first step is to add a regex entity extraction rule to the LUIS app.
1. Select User Input and then select WeatherBot in the navigation pane. Toggle Edit mode and in the
Language Understanding editor add the following entity definition at the end of the LU content:

> Define a regex zipcode entity. Any time LUIS sees a five digit number, it will flag it as 'zipcode'
entity.

$ zipcode : /[0-9]{5}/

The next step is to create an action in the BeginDialog trigger to set the user.zipcode property to the value of the
zipcode entity.

2. Select the getWeather dialog in the Navigation pane, then the BeginDialog trigger.
3. Select + in the Authoring Canvas to insert an action after the Send a response action (that has the
prompt Let's check the weather). Then select Set a property from the Manage Properties menu.
4. In the Properties panel enter user.zipcode into the Property field and =@zipcode in the Value field. The
user.zipcode property will now be set to the value of the zipcode entity, and if the user's message
includes a zip code, they will no longer be prompted for it.
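The $ zipcode : /[0-9]{5}/ entity is plain regular-expression extraction, which LUIS performs for you. For intuition, a minimal Python equivalent:

```python
# Minimal Python equivalent of the regex zipcode entity defined in the LU
# content. LUIS applies this pattern itself; this is just for intuition.
import re

ZIPCODE = re.compile(r"[0-9]{5}")

def extract_zipcode(utterance):
    # Return the first five-digit run in the utterance, or None.
    match = ZIPCODE.search(utterance)
    return match.group(0) if match else None

print(extract_zipcode("How is the weather in 98052?"))  # 98052
print(extract_zipcode("How is the weather?"))           # None
```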

Finally, you can test your bot.


5. Select the Restart Bot button in the Composer Toolbar and wait for the LUIS application to finish updating
your changes. Then select Test in Emulator.
Now when you say "how is the weather in 98004" your bot will respond with the weather for that location
instead of prompting you for a zip code.
Dialogs
9/21/2020 • 5 minutes to read

Modern conversational software has many different components, including source code, custom business logic,
cloud API, training data for language processing systems, and perhaps most importantly, the actual content used
in conversations with the bot's end users. Composer integrates all of these pieces into a single interface for
constructing the building blocks of bot functionality called Dialogs .
Each dialog represents a portion of the bot's functionality and contains instructions for how the bot will react to
the input. Simple bots will have just a few dialogs. Sophisticated bots may have dozens or hundreds of individual
dialogs.
In Composer, dialogs are functional components offered in a visual interface that do not require you to write
code. The dialog system supports building an extensible model that integrates all of the building blocks of a bot's
functionality. Composer helps you focus on conversation modeling rather than the mechanics of dialog
management.

Types of dialogs
You create a dialog in Composer to manage a conversation objective. There are two types of dialogs in
Composer: main dialog and child dialog. The main dialog is initialized by default when you create a new bot. You
can create one or more child dialogs to keep the dialog system organized. Each bot has one main dialog and can
have zero or more child dialogs. Refer to the Create a bot article on how to create a bot and its main dialog in
Composer. Refer to the Add a dialog article on how to create a child dialog and wire it up in the dialog system.
Below is a screenshot of a main dialog named MyBot and two child dialogs named Weather and Greeting.

At runtime, the main dialog is called into action and becomes the active dialog, triggering event handlers with the
actions you defined during the creation of the bot. As the conversation flows, the main dialog can call a child
dialog, and a child dialog can, in turn, call the main dialog or other child dialogs.

Anatomy of a dialog
The following diagram shows the anatomy of a dialog in Composer. Note that dialogs in Composer are based on
Adaptive dialogs.
Recognizer
The recognizer interprets what the user wants based on their input. When a dialog is invoked, its recognizer will
start to process the message and try to extract the primary intent and any entity values the message includes.
After processing the message, both the intent and entity values are passed onto the dialog's triggers.
Composer currently supports two recognizers: The LUIS recognizer, which is the default, and the Regular
Expression recognizer. You can choose only one recognizer per dialog, or you can choose not to have a recognizer
at all.
Recognizers give your bot the ability to understand and extract meaningful pieces of information from user
input. All recognizers emit events when the recognizer picks up an intent (or extracts entities) from a given user
utterance. The recognizer of a dialog is not always called into play when a dialog is invoked. It depends on how
you design the dialog system.
Below is a screenshot of recognizers in Composer.

Default recognizer : enables you to use the following different recognizers:


None - do not use recognizer.
LUIS recognizer - to extract intents and entities from a user's utterance based on the defined LUIS
application.
QnA Maker recognizer - to extract intents from a user's utterance based on the defined QnAMaker
application.
Cross-trained recognizer set - to compare recognition results from more than one recognizer to decide
a winner.
Regular Expression : gives you the ability to extract intent and entity data from an utterance based on
regular expression patterns.
Custom recognizer : enables you to customize your own recognizer by editing JSON in the form.
Trigger
The functionality of a dialog is contained within triggers. Triggers are rules that tell the bot how to process
incoming messages and are also used to define a wide variety of bot behaviors, from performing the main
fulfillment of the user's request, to handling interruptions like requests for help, to handling custom,
developer-defined events originating from the app itself. Below is a screenshot of the trigger menu in Composer.

Action
Triggers contain a series of actions that the bot will undertake to fulfill a user's request. Actions are things like
sending messages, responding to user questions using a knowledge base, making calculations, and performing
computational tasks on behalf of the user. The path the bot follows through a dialog can branch and loop. The bot
can ask even answer questions, validate input, manipulate and store values in memory, and make decisions.
Below is a screenshot of the action menu in Composer. Select the + sign below the trigger you can mouse over
the action menu.

Language Generator
As the bot takes actions and sends messages, the Language Generator is used to create those messages from
variables and templates. Language generators can create reusable components, variable messages, macros, and
dynamic messages that are grammatically correct.

Dialog actions
A bot can have from one to several hundred dialogs, and it can get challenging to manage the dialog system and
the conversation with users. In the Add a dialog section, we covered how to create a child dialog and wire it up to
the dialog system using Begin a new dialog action. Composer provides more dialog actions to make it easier
to manage the dialog system. You can access the different dialog actions by clicking the + node under a trigger
and then selecting Dialog management.
Below is a list of the dialog actions available in Composer:

Begin a new dialog: An action that begins another dialog. When that dialog is completed, it will return to the caller.

End this dialog: A command that ends the current dialog, returning the resultProperty as the result of the dialog.

Cancel all dialogs: A command to cancel all of the current dialogs by emitting an event that must be caught to prevent cancellation from propagating.

End this turn: A command to end the current turn without ending the dialog.

Repeat this dialog: An action that repeats the current dialog.

Replace this dialog: An action that replaces the current dialog with the target dialog.

With these dialog actions, you can easily create an extensible dialog system without worrying about the
complexities of dialog management.

Further reading
Dialogs library
Adaptive dialogs

Next
Events and triggers
Events and triggers
9/21/2020 • 5 minutes to read

In Bot Framework Composer, each dialog includes one or more event handlers called triggers. Each trigger
contains one or more actions. Actions are the instructions that the bot will execute when the dialog receives any
event that it has a trigger defined to handle. Once a given event is handled by a trigger, no further action is taken
on that event. Some event handlers have a condition specified that must be met before it will handle the event and
if that condition is not met, the event is passed to the next event handler. If an event is not handled in a child
dialog, it gets passed up to its parent dialog to handle, and this continues until it is either handled or reaches the
bot's main dialog. If no event handler is found, it will be ignored and no action will be taken.
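The handled-or-bubbled behavior described above can be modeled as a walk from the active dialog up toward the main dialog (an illustrative sketch, not the SDK's actual dispatch code):

```python
# Illustrative sketch of event bubbling: walk from the active dialog toward
# the main dialog and run the first matching trigger. This is a conceptual
# model, not the Bot Framework SDK's dispatch implementation.

def dispatch(event, dialog_stack):
    for dialog in reversed(dialog_stack):  # active (child) dialog first
        for condition, action in dialog["triggers"]:
            if condition(event):
                return action(event)  # handled: no further action on event
    return None  # unhandled everywhere: the event is ignored

main = {"triggers": [(lambda e: e == "help", lambda e: "global help")]}
child = {"triggers": [(lambda e: e == "zip", lambda e: "got zip")]}
print(dispatch("help", [main, child]))  # bubbles from child up to main
```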
To see the complete trigger menu in Composer, select + Add in the tool bar and + Add new trigger from the
drop-down list.

Anatomy of a trigger
The basic idea behind a trigger (event handler) is "When (event) happens, do (actions)". The trigger is a conditional
test on an incoming event, while the actions are one or more programmatic steps the bot will take to fulfill the
user's request.
A trigger contains the following properties:

Name: The trigger name, which can be changed in the property panel.

Actions: The set of instructions that the bot will execute.

Condition: The condition can be created or updated in the properties panel and is ignored if left blank; otherwise it must evaluate to true for the event to fire. Conditions must follow the Adaptive expressions syntax. If the condition is ignored or evaluates to false, processing of the event continues with the next trigger.

A dialog can contain multiple triggers. You can view them under the specific dialog in the navigation pane. Each
trigger shows as the first node in the authoring canvas. A trigger contains actions defined to be executed. Actions
within a trigger occur in the context of the active dialog.
The screenshot below shows the properties of an Intent recognized trigger named Cancel that is configured to
fire whenever the Cancel intent is detected as shown in the properties panel. In this example the Condition field
is left blank, so no additional conditions are required in order to fire this trigger.

Types of triggers
There are different types of triggers that all work in a similar manner, and in some cases can be interchanged. This
section will cover the different types of triggers and when you should use them. See the define triggers article for
additional information.
Intent triggers
Intent triggers work with recognizers. After the first round of events is fired, the bot will pass the incoming
message through the recognizer. If an intent is detected, it will be passed into the trigger (event handler) with any
entities contained in the message. If no intent is detected by the recognizer, an Unknown intent trigger will fire,
which handles intents not handled by any trigger.
There are four different intent triggers in Composer:
Unknown intent
Intent recognized
QnA Intent recognized
Duplicated intents recognized
You should use intent triggers when you want to:
Trigger major features of your bot using natural language.
Recognize common interruptions like "help" or "cancel" and provide context-specific responses.
Extract and use entity values as parameters to your dialog.
For additional information see how to define this type of triggers in the how to define triggers article.
Dialog events
The base type of trigger is the dialog trigger. Almost all events start as dialog events, which are related to the
"lifecycle" of the dialog. Currently there are four different dialog event triggers in Composer:
Dialog started (Begin dialog event)
Dialog cancelled (Cancel dialog event)
Error occurred (Error event)
Re-prompt for input (Reprompt dialog event)
Most dialogs include a trigger configured to respond to the BeginDialog event, which fires when the dialog
begins. This allows the bot to respond immediately.
You should use dialog triggers to:
Take actions immediately when the dialog starts, even before the recognizer is called.
Take actions when a "cancel" signal is detected.
Take actions on messages received or sent.
Evaluate the content of the incoming activity.
For additional information, see the dialog events section of the article on how to define triggers.
Activities
Activity triggers are used to handle activities such as when a new user joins and the bot begins a new
conversation. Greeting (ConversationUpdate activity) is a trigger of this type and you can use it to send a
greeting message. When you create a new bot, the Greeting (ConversationUpdate activity) trigger is
initialized by default in the main dialog. This specialized option is provided to avoid handling an event with a
complex condition attached. Message events is a type of Activity trigger used to handle message activities.
You should use Activities triggers when you want to:
Take actions when a user begins a new conversation with the bot.
Take actions on receipt of an activity with type EndOfConversation .
Take actions on receipt of an activity with type Event .
Take actions on receipt of an activity with type HandOff .
Take actions on receipt of an activity with type Invoke .
Take actions on receipt of an activity with type Typing .
Take actions when a message is received (on receipt of an activity with type MessageReceived ).
Take actions when a message is updated (on receipt of an activity with type MessageUpdate ).
Take actions when a message is deleted (on receipt of an activity with type MessageDelete ).
Take actions when a message is reacted (on receipt of an activity with type MessageReaction ).

For additional information, see Activities trigger in the article titled How to define triggers.
Custom events
You can create and emit your own events by adding an Emit a custom event action to any trigger; you can then
handle that custom event in any dialog in your bot by defining a Custom event trigger.
Bots emit user-defined events using Emit a custom event . When an Emit a custom event action
fires, any Custom event trigger in any dialog will catch it and execute its corresponding actions.
For additional information, see Custom event in the article titled How to define triggers.

Further reading
Adaptive dialog: Recognizers, rules, steps and inputs

Next
Conversation flow and memory
How to define triggers
Conversation flow and memory
9/21/2020 • 7 minutes to read

All bots built with Bot Framework Composer have a "memory", a representation of everything that is currently in
the bot's active mind. Developers can store and retrieve values in the bot's memory, and can use those values to
create loops, branches, dynamic messages and behaviors in the bot. Properties stored in memory can be used
inside templates or as part of a calculation.
The memory system makes it possible for bots built in Composer to do things like:
Store user profiles and preferences.
Remember things between sessions such as the last search query or a list of recently mentioned locations.
Pass information between dialogs.

Anatomy of a property in memory


A piece of data in memory is referred to as a proper ty . A property is a distinct value identified by a specific
address comprised of two parts, the scope of the property and the name of the property: scope.name .
Here are a couple of examples:
user.name
dialog.index
turn.activity
user.profile.age
this.value

The scope of the property determines when the property is available, and how long the value will be retained.
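As an illustration of the addressing scheme (this sketch is not Composer's implementation, just a demonstration of the scope.name convention), the first segment of an address is the scope and the remainder is the property path within it:

```javascript
// Illustrative only: split a property address into its scope and path parts.
function parseProperty(address) {
  const [scope, ...path] = address.split('.');
  return { scope, path: path.join('.') };
}

console.log(parseProperty('user.profile.age')); // { scope: 'user', path: 'profile.age' }
console.log(parseProperty('turn.activity'));    // { scope: 'turn', path: 'activity' }
```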

TIP
It's useful to establish conventions for your state properties across conversation , user , and dialog state for
consistency and to prepare for context sharing scenarios. It is also a good practice to think about the lifetime of the
property when creating it. Read more in the composer best practices article.

Store information about users and ongoing conversations


The bot's memory has two "permanent" scopes. The first is a place to store information about individual users,
the second is a place to store information about ongoing conversations:
1. user is associated with a specific user. Properties in the user scope are retained forever and may be
accessed by multiple users within the same conversation (for example, multiple users together in a
Microsoft Teams channel).
2. conversation is associated with the conversation id. Properties in the conversation scope have a lifetime
of the conversation itself.
Store temporary values during task handling
The bot's memory also has two "ephemeral" scopes. Ephemeral scopes are a place to store temporary values that
are only relevant while a task is being handled. The two scopes are:
1. dialog is associated with the active dialog and any child or parent dialogs. Properties in the dialog scope
are retained until the last active dialog ends.
2. turn is associated with a single turn. You can also think of this as the bot handling a single message from
the user. Properties in the turn scope are discarded at the end of the turn.
Store values of the active action's property
The this scope pertains to the active action's properties. This is helpful for input actions since their life time
typically lasts beyond a single turn of the conversation.
this.value holds the current recognized value for the input.
this.turnCount holds the number of times the missing information has been prompted for this input.
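For example, an input action's prompt could use this.turnCount to vary the reprompt text. The template below is a sketch in .lg format; the template name and wording are illustrative:

```
# AskAge
IF: ${this.turnCount == 1}
- What is your age?
ELSE:
- Sorry, I didn't catch that. Please enter your age as a number.
```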

Set properties with prompts


Input is collected from users with prompt types provided in the Ask a question sub-menu.

Prompts define the questions posed to the user and are set in the Prompt box under the Bot Asks tab in the
properties panel on the left.

Under the User Input tab you'll see Proper ty to fill , where the user's response will be stored. Prompt
responses can be formatted before being stored by selecting an option for Output Format .
In the above example of a number prompt, the result of the prompt "What is your age?" will be stored as the
user.age property.

For more information about implementing text and other prompts, see the article Asking users for input.

Manipulate properties using memory actions


Bot Framework Composer provides a set of memory manipulation actions in the Manage proper ties sub-
menu. These actions can be used to create, modify, and delete properties in memory. Properties can be created in
the editor and during runtime; Composer will automatically manage the underlying data for you.

Set a property
Use Set a proper ty to set the value of a property.
The value of a property can be set to a literal value, like true , 0 , or fred , or it can be set to the result of a
computed expression. When storing simple values it is not necessary to initialize the property.
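Behind the scenes, a Set a property action is stored in the dialog's declarative .dialog file. A sketch of such an action might look like the following (the property name and value here are illustrative):

```json
{
  "$kind": "Microsoft.SetProperty",
  "property": "user.profile.age",
  "value": "=turn.activity.text"
}
```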
Set properties
Use Set proper ties to set a group of properties.

The value of each property is assigned individually in the Proper ties panel. Select Add to set the next one.
Delete a property
Use Delete a proper ty to remove a property from memory.
Delete properties
Use Delete proper ties to remove properties from memory.

Edit an Array Property


Use Edit an Array proper ty to add and remove items from an array. Items set in Value can be added or
removed from the beginning or end of the array in the Items proper ty using push, pop, take, remove, and clear
in Type of change . The result of the edited array is saved to Result Proper ty .
Note that it is possible to push the value of an existing property into an array property. For example, push
turn.choice onto dialog.choices .
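For instance, pushing turn.choice onto dialog.choices corresponds to a declarative action roughly like this sketch (the field values are illustrative):

```json
{
  "$kind": "Microsoft.EditArray",
  "changeType": "push",
  "itemsProperty": "dialog.choices",
  "value": "=turn.choice"
}
```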

Manipulate properties with dialogs


Child dialogs can return values to their parent dialogs. In this way, a child dialog can encapsulate a multi-step
interaction, collect and compute multiple values, and then return a single value to its parent dialog.
For example, a child dialog named profile may have two prompts to build a compound property representing a
user profile:
When the dialog returns the compound value to the parent dialog, the return value is specified as the Default
result proper ty within the trigger for the child dialog:

Finally, the parent dialog is configured to capture the return value inside the Begin a new dialog action:
When executed, the bot will execute the profile child dialog, collect the user's name and age in a temporary
scope, then return it to the parent dialog where it is captured into the user.profile property and stored
permanently.

Automatic properties
Some properties are automatically created and managed by the bot and are available without any setup.

| Property | Description |
| --- | --- |
| turn.activity | The full incoming Activity object. |
| turn.intents | If a recognizer is run, the intents found. |
| turn.entities | If a recognizer is run, the entities found. |
| turn.dialogEvents.&lt;event name&gt;.value | Payload of a custom event fired using the EmitEvent action. |

Refer to properties in memory


Bots can retrieve values from memory for a variety of purposes. The bot may need to use a value in order to
construct an outgoing message, or make a decision based on a value then perform actions based on that decision,
or use the value to calculate other values.
Sometimes, you will refer directly to a property by its address in memory: user.name . Other times, you will refer
to one or more properties as part of an expression: (dialog.orderTotal + dialog.orderTax) > 50 .
Expressions
Bot Framework Composer uses Adaptive expressions to calculate computed values. This syntax allows
developers to create composite values, define complex conditional tests, and transform the content and format of
values. For more information see the Adaptive expressions operators and pre-built functions.
When used in expressions, no special notation is necessary to refer to a property from memory.
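For example, the following are all valid expressions over properties in memory; length and concat are Adaptive expressions pre-built functions:

```
(dialog.orderTotal + dialog.orderTax) > 50
length(user.name) > 0
concat(user.firstName, ' ', user.lastName)
```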
Memory in branching actions
A bot can evaluate values from memory when making decisions inside a branching action like an If/Else or
Switch branch. The conditional expression that is tested in one of these branching actions is an expression that,
when evaluated, drives the decision.
In the example below, the expression user.profile.age > 13 will evaluate to either True or False , and the flow
will continue through the appropriate branch.

In this second example, the value of turn.choice is used to match against multiple Switch cases. Note that, while
it looks like a raw reference to a property, this is actually an expression and since no operation is being taken on
the property, the expression evaluates to the raw value.

Memory in loops
When using For each and For each page loops, properties also come into play. Both require an Items
proper ty that holds the array, and For each page loops also require a Page size , or number of items per page.
Memory in LG
One of the most powerful features of the Bot Framework system is Language Generation, particularly when used
alongside properties pulled from memory.
You can refer to properties in the text of any message, including prompts.
You can also refer to properties in LG templates. See Language Generation to learn more about the Language
Generation system.
To use the value of a property from memory inside a message, wrap the property reference in curly brackets
preceded by a dollar sign: ${user.profile.name}
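For instance, a confirmation message in an .lg template might look like the following sketch (the template name and wording are illustrative; this uses the ${} syntax of the current LG format):

```
# confirmProfile
- Thanks ${user.profile.name}, I have saved your age as ${user.profile.age}.
```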

The screenshot below demonstrates how a bot can prompt a user for a value, then immediately use that value in
a confirmation message.

In addition to getting property values, it is also possible to embed properties in expressions used in a message
template. Refer to the Adaptive expressions page for the full list of pre-built functions.
Properties can also be used within an LG template to provide conditional variants of a message and can be
passed as parameters to both built-in and custom functions. Learn more about LG.
Memory shorthand notations
Bot Framework Composer provides a variety of shortcuts for referring to properties in memory. Refer to the
Managing state documentation for the complete list of memory shorthand notations.

Further reading
Memory scopes in adaptive dialogs.

Next
Language Generation in Bot Framework Composer.
Natural Language Processing
9/21/2020 • 2 minutes to read

Natural Language Processing (NLP) is a technological process that enables computer applications, such as bots, to
derive meaning from a user's input. To do this, it attempts to identify valuable information contained in
conversations by interpreting the user's needs (intents), extracting key data (entities) from a sentence,
and responding in a language the user will understand.
Why do bots need Natural Language Processing?
Bots are able to provide little to no value without NLP. It is what enables your bot to understand the messages your
users send and respond appropriately. When a user sends a message with “Hello”, it is the bot's Natural Language
Processing capabilities that enable it to know that the user posted a standard greeting, which in turn allows your
bot to leverage its AI capabilities to come up with a proper response. In this case, your bot can respond with a
greeting.
Without NLP, your bot can’t meaningfully differentiate between when a user enters “Hello” or “Goodbye”. To a bot
without NLP, “Hello” and “Goodbye” will be no different than any other string of characters grouped together in
random order. NLP helps provide context and meaning to text or voice based user inputs so that your bot can come
up with the best response.
One of the most significant challenges when it comes to NLP in your bot is the fact that users have a blank slate
regarding what they can say to your bot. While you can try to predict what users will and will not say, there are
bound to be conversations that you did not anticipate. Fortunately, Bot Framework Composer makes it easy to
continually refine your bot's NLP capabilities.
The two primary components of NLP in Composer are Language Understanding (LU) that processes and
interprets user input and Language Generation (LG) that produces bot responses.

Language Understanding
Language Understanding (LU) is the subset of NLP that deals with how the bot handles user inputs and converts
them into something that it can understand and respond to intelligently.
Additional information on Language Understanding
The Language Understanding concept article.
The Advanced intent and entity definition concept article.
The Using LUIS for Language Understanding how to article.

Language Generation
Language Generation (LG), is the process of producing meaningful phrases and sentences in the form of natural
language. Simply put, it is when your bot responds to a user with human readable language.
Additional information on Language Generation
The Language Generation concept article.
The Language Generation how to article.

Summary
Natural Language Processing is at the core of what most bots do in interpreting users' written or verbal inputs and
responding to them in a meaningful way using a language they will understand.
While NLP certainly can’t work miracles and ensure a bot appropriately responds to every message, it is powerful
enough to make or break a bot’s success. Don’t underestimate this critical and often overlooked aspect of bots.
Language Generation
9/21/2020 • 5 minutes to read

Language Generation (LG) lets you define multiple variations of a phrase, execute simple expressions based on
context, and refer to conversational memory. At the core of language generation lies template expansion and
entity substitution. You can provide one-off variation for expansion as well as conditionally expand a template.
The output from language generation can be a simple text string, multi-line response, or a complex object
payload that a layer above language generation will use to construct a complete activity. Bot Framework
Composer natively supports language generation to produce output activities using the LG templating system.
You can use Language generation to:
Achieve a coherent personality, tone of voice for your bot.
Separate business logic from presentation.
Include variations and sophisticated composition for any of your bot's replies.
Construct cards, suggested actions and attachments using a structured response template.
Language generation is achieved through:
A Markdown based .lg file that contains the templates and their composition.
Full access to the current bot's memory so you can data bind language to the state of memory.
Parser and runtime libraries that help achieve runtime resolution.

TIP
You can read the composer best practices article for some suggestions using LG in Composer.

Templates
Templates are functions which return one of the variations of the text and fully resolve any other references to
templates for composition. You can define one or more text responses in a template. When multiple responses
are defined in the template, a single response will be selected at random.
You can also define one or more expressions using adaptive expressions, so when it is a conditional template,
those expressions control which particular collection of variations get picked. Templates can be parameterized,
meaning that different callers to the template can pass in different values for use in expansion resolution. For
additional information see .lg file format.
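For example, a parameterized template declares its parameters in parentheses after the template name (the template and parameter names below are illustrative):

```
# greetByName(name)
- Hello ${name}, welcome back!
- Good to see you again, ${name}.
```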
Composer currently supports three types of templates: simple response, conditional response, and structured
response. You can read define LG templates to learn how to define each of them.
You can split language generation templates into separate files and refer to them from one another. You can use
Markdown-style links to import templates defined in another file, like [description text](file/uri path) . Make
sure your template names are unique across files.
Anatomy of a template
A template usually consists of the name of the template, denoted with the # character, and one of the following:
A list of one-off variation text values defined using "-"
A collection of conditions, each with a:
conditional expression, expressed using adaptive expressions and
list of one-off variation text values per condition
A structure that contains:
structure-name
properties
Below is an example of a simple LG template with one-off variation text values.

> this is a comment


# nameTemplate
- Hello ${user.name}, how are you?
- Good morning ${user.name}. It's nice to see you again.
- Good day ${user.name}. What can I do for you today?

Define LG templates
When you want to determine how your bot should respond to user input, you can define LG templates to
generate responses. For example, you can define a welcome message to the user in the Send a response
action. To do this, select the Send a response action node. You will see the inline LG editor where you can
define LG templates.
To define LG templates in Composer, you will need to know:
the aforementioned LG concepts
.lg file format
adaptive expressions
You can define LG templates either in the inline LG editor or the Bot Responses that lists all templates. Below is
a screenshot of the LG inline editor.

Select the Bot Responses icon (or the bot icon when collapsed) in the navigation pane to see all the LG
templates defined in the bot categorized by dialog. Select All in the navigation to see the templates defined and
shared by all the dialogs. Use the [import](common.lg) to import the common templates to a specific dialog.
Select any dialog or All in the navigation pane and toggle Edit Mode on the upper right corner to edit your LG
template.
Composer currently supports definitions of the following three types of templates: simple, conditional, and
structured response.
Simple response template
A simple response template generates a simple text response. A simple response template can be a single-line
response, text with memory, or a multiline text response. Use the - character before the response text, or before
an expression containing the property value to return. Here are a few examples of simple response templates from
the RespondingWithTextSample.
Here is an example of a single line text response:

- Here is a simple text message.

This is an example of a single line response using a variable:

- ${user.message}

Variables and expressions are enclosed in curly brackets ${} .


Here is an example of a multi-line response. It includes multiple lines of text enclosed in ``` .

# multilineText
- ``` you have such alarms
alarm1: 7:am
alarm2: 9:pm
```

Conditional response template


For all conditional templates, all conditions are expressed in Adaptive expressions. Condition expressions are
enclosed in curly brackets ${} . Here are two conditional response template examples.
If-else

> time of day greeting reply template with conditions.


# timeOfDayGreeting
IF: ${timeOfDay == 'morning'}
- good morning
ELSE:
- good evening
Switch

# TestTemplate
SWITCH: ${condition}
- CASE: ${case-expression-1}
- output1
- CASE: ${case-expression-2}
- output2
- DEFAULT:
- final output

Structured response template


Structured response templates let users define a complex structure that supports all the benefits of LG
(templating, composition, substitution) while leaving the interpretation of the structured response up to the bot
developer. It provides an easier way to define an outgoing activity in a simple text format. Composer currently
supports structured LG templates to define cards and SuggestedActions.
The definition of a structured response template is as follows:

# TemplateName
> this is a comment
[Structure-name
Property1 = <plain text> .or. <plain text with template reference> .or. <expression>
Property2 = list of values are denoted via '|'. e.g. a | b
> this is a comment about this specific property
Property3 = Nested structures are achieved through composition
]

Below is an example of SuggestedActions from the Interruption Sample:

- Hello, I'm the interruption demo bot! \n [Suggestions=Get started | Reset profile]

Below is an example of a Thumbnail card from the Responding With Cards Sample:

# ThumbnailCard
[ThumbnailCard
    title = BotFramework Thumbnail Card
    subtitle = Microsoft Bot Framework
    text = Build and connect intelligent bots to interact with your users naturally wherever they are, from text/sms to Skype, Slack, Office 365 mail and other popular services.
    image = https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
    buttons = Get Started
]

References
.lg file format
Structured response template
Adaptive expressions

Next
Language understanding
Language Understanding
9/21/2020 • 6 minutes to read

Language Understanding (LU) is used by a bot to understand language naturally and contextually, to determine
what to do next in a conversation flow. In Bot Framework Composer, this is achieved by setting up
recognizers and providing training data in the dialog so that the intents and entities contained in the message
can be captured. These values will then be passed on to triggers which define how the bot responds using the
appropriate actions.
LU has the following characteristics when used in Bot Framework Composer:
LU content is the training data for the LUIS recognizer.
LU is authored in the inline editor or in User Input using the .lu file format.
Composer currently supports LU technologies such as LUIS.

Core LU concepts in Composer


Intents
Intents are categories or classifications of user intentions. An intent represents an action the user wants to
perform. It is a purpose or goal expressed in the user's input, such as booking a flight, paying a bill, or finding a
news article. You define and name intents that correspond to these actions. A travel app may define an intent
named "BookFlight."
Here's a simple .lu file that captures a Greeting intent with a list of example utterances that show different
ways users might express this intent. You can use - , + , or * to denote lists. Numbered lists are not
supported.

# Greeting
- Hi
- Hello
- How are you?

#<intent-name> describes a new intent definition section. Each line after the intent definition is an example
utterance that describes that intent. You can stitch together multiple intent definitions in the language
understanding editor in Composer. Each section is identified by the #<intent-name> notation. Blank lines are
skipped when parsing the file.
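For example, stitching two intent sections together in one .lu file might look like this (the second intent is illustrative):

```
# Greeting
- Hi
- Hello

# Help
- help
- what can you do?
```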
Utterances
Utterances are inputs from users and may have a lot of variations. Since utterances are not always well-formed,
we need to provide example utterances for specific intents to train bots to recognize intents from different
utterances. By doing so, your bots will have some "intelligence" to understand human languages.
In Composer, utterances are always captured in a markdown list and followed by an intent. For example, the
Greeting intent with some example utterances are shown in the Intents section above.

NOTE
You may have noticed that LU format is very similar to LG format but they are different. LU is for bots to understand user's
inputs (primarily capture intent and optionally entities ) and it is associated with recognizers, while LG is for bots to
respond to users as output, and it is associated with a language generator.
Entities
Entities are a collection of objects, each consisting of data extracted from an utterance such as places, time, and
people. Entities and intents are both important data extracted from utterances. An utterance may include zero or
more entities, while an utterance usually represents one intent. In Composer, all entities are defined and managed
inline. Entities in the .lu file format are denoted using {<entityName>=<labelled value>} notation. For example:

# BookFlight
- book a flight to {toCity=seattle}
- book a flight from {fromCity=new york} to {toCity=seattle}

The example above shows the definition of a BookFlight intent with two example utterances and two entity
definitions: toCity and fromCity . When triggered, if LUIS is able to identify a destination city, the city name will
be made available as @toCity within the triggered actions; similarly, an identified departure city will be available
as @fromCity . The entity values can be used directly in expressions and LG templates, or stored into a property in
memory for later use. For additional information on entities see the article advanced intents and entities.
Example
The table below shows an example of an intent with its corresponding utterances and entities. All three utterances
share the same intent BookFlight , each with different entities. There are different types of entities; you can find
more information in the .lu file format article.

| Intent | Utterances | Entity |
| --- | --- | --- |
| BookFlight | "Book me a flight to London" | "London" |
| | "Fly me to London on the 31st" | "London", "31st" |
| | "I need a plane ticket next Sunday to London" | "next Sunday", "London" |

Below is a similar definition of a BookFlight intent with entity specification {city=name} and a set of example
utterances. We use this example to show how they are manifested in Composer. Extracted entities are passed
along to any triggered actions or child dialogs using the syntax @city .

# BookFlight
- book a flight to {city=austin}
- travel to {city=new york}
- I want to go to {city=los angeles}

After publishing, LUIS will be able to identify a city as entity and the city name will be made available as @city
within the triggered actions. The entity value can be used directly in expressions and LG templates, or stored into a
property in memory for later use. Read the advanced intents and entities article for more on defining them.

Author .lu files in Composer


You author .lu files as training data for the LUIS recognizer. You need to know:
Language Understanding concepts
.lu file format
Adaptive expressions
To create .lu files in Composer, follow these steps:
Set up a Recognizer Type
Select a dialog in the navigation pane and then select Default recognizer from the Recognizer Type drop-
down list in the Proper ties pane on the right side of the Composer screen.

Create a trigger
In the same dialog where you selected the Default recognizer , select Add in the toolbar and then Add new
trigger .
In the pop-up trigger menu, select Intent recognized from the What is the type of this trigger? list. Fill in the
What is the name of this trigger (luis) field with an intent name and add example utterances in the Trigger
phrases field.
For example, you can create an Intent recognized trigger in the MyBot dialog with an intent named weather
and a few example utterances.

After you select Submit you will see an Intent recognized trigger named weather in the navigation pane and
the trigger node in the authoring canvas. You can edit the .lu file inline on the right side of the Composer screen.
Select User Input from the Composer menu to view all the LU templates created. Select a dialog from the
navigation pane then toggle Edit Mode to edit the LU templates.

Add action(s) to the Intent recognized trigger


Select + under the Intent recognized trigger and add any action(s) you want your bot to execute when the
weather trigger is fired.

Publish LU to LUIS
The last step is to publish your .lu files to LUIS.
Select Star t Bot in the upper right corner of Composer. Fill in your LUIS Primar y key and select OK .

NOTE
If you do not have a LUIS account, you can create one on the LUIS website. If you have a LUIS account but do not know how to find
your LUIS primary key, please see the Azure resources for LUIS section of the Authoring and runtime keys article.
Any time you select Star t Bot (or Restar t Bot ), Composer will evaluate whether your LU content has changed. If so,
Composer will automatically make the required updates to your LUIS applications, then train and publish them. If
you go to the LUIS website, you will find the newly published LU model in your app.

References
What is LUIS
Language Understanding
.lu file format
Adaptive expressions
Using LUIS for language understanding
Extract data from utterance text with intents and entities

Next
Learn how to send messages to users.
Bot Framework Composer Plugins
9/21/2020 • 14 minutes to read

It is possible to extend and customize the behavior of Composer by installing plugins. Plugins can hook into the
internal mechanisms of Composer and change the way they operate. Plugins can also "listen to" the activity inside
Composer and react to it.

What is a Composer plugin?


Composer plugins are JavaScript modules. When loaded into Composer, the module is given access to a set of
Composer APIs which can then be used by the plugin to provide new functionality to the application. Plugins do
not have access to the entire Composer application - in fact, they are granted limited access to specific areas of the
application, and must adhere to a set of interfaces and protocols.

Plugin endpoints
Plugins currently have access to the following functional areas:
Authentication and identity - plugins can provide a mechanism to gate access to the application, as well as
mechanisms used to provide user identity.
Storage - plugins can override the built-in filesystem storage with a new way to read, write and access bot
projects.
Web server - plugins can add additional web routes to Composer's web server instance.
Publishing - plugins can add publishing mechanisms.
Runtime templates - plugins can provide a runtime template used when "ejecting" from Composer.
Bot project templates - plugins can add items to the template list shown in the "new bot" flow.
Boilerplate content - plugins can provide content copied into all bot projects (such as a readme file or helper
scripts).
Combining these endpoints, it is possible to achieve scenarios such as:
Store content in a database
Require login via AAD or any other oauth provider
Create a custom login screen
Require login via GitHub, and use GitHub credentials to store content in a Git repo automatically
Use AAD roles to gate access to content
Publish content to external services such as remote runtimes, content repositories, and testing systems.

How to build a plugin


Plugin modules must come in one of the following forms:
Default export is a function that accepts the Composer plugin API
Default export is an object that includes an initialize function that accepts the Composer plugin API
A function called initialize is exported from the module
Currently, plugins can be loaded into Composer using one of two methods:
The plugin is placed in the /plugins/ folder, and contains a package.json file with extendsComposer set to true
The plugin is loaded directly via changes to Composer code, using pluginLoader.loadPlugin(name, plugin)
The simplest form of a plugin module is below:

export default async (composer: any): Promise<void> => {
  // call methods (see below) on the composer API
  // composer.useStorage(...);
  // composer.usePassportStrategy(...);
  // composer.addWebRoute(...);
};
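If you use the /plugins/ folder method, the plugin's package.json needs the extendsComposer flag. A minimal sketch (the name and entry point here are hypothetical):

```json
{
  "name": "my-composer-plugin",
  "version": "1.0.0",
  "main": "lib/index.js",
  "extendsComposer": true
}
```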

Authentication and identity


To provide auth and identity services, Composer has in large part adopted PassportJS instead of implementing a
custom solution. Plugins can use one of the many existing Passport strategies, or provide a custom strategy.
composer.usePassportStrategy(strategy)

Configure a Passport strategy to be used by Composer. This is the equivalent of calling app.use(passportStrategy)
on an Express app. See PassportJS docs.
In addition to configuring the strategy, plugins will also need to use composer.addWebRoute to expose login, logout
and other related routes to the browser.
Calling this method also enables a basic auth middleware that is responsible for gating access to URLs, as well as a
simple user serializer/deserializer. Developers may choose to override these components using the methods
below.
composer.useAuthMiddleware(middleware)

Provide a custom middleware for testing the authentication status of a user. This will override the built-in auth
middleware that is enabled by default when calling usePassportStrategy() .
Developers may choose to override this middleware for various reasons, such as:
Apply different access rules based on URL
Do something more than check req.isAuthenticated, such as validating or refreshing tokens, making database calls, or providing telemetry.
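For instance, a replacement middleware might allow anonymous access to certain paths while gating everything else. This is a hypothetical sketch, not the built-in middleware; the /public/ prefix and the req.isAuthenticated() check are illustrative assumptions following standard Express/Passport conventions:

```javascript
// Hypothetical custom auth middleware (Express/Passport conventions assumed).
// Requests under /public/ pass through; everything else requires a login.
function customAuthMiddleware(req, res, next) {
  if (req.url.startsWith('/public/')) {
    return next(); // no auth needed for public assets
  }
  if (req.isAuthenticated && req.isAuthenticated()) {
    return next(); // Passport reports a logged-in user
  }
  res.redirect('/login'); // gate everything else
}

// Simulate the middleware without a real server:
function simulate(url, loggedIn) {
  let outcome = null;
  const req = { url, isAuthenticated: () => loggedIn };
  const res = { redirect: (target) => { outcome = 'redirect:' + target; } };
  customAuthMiddleware(req, res, () => { outcome = 'next'; });
  return outcome;
}
```

A plugin would then register it with composer.useAuthMiddleware(customAuthMiddleware).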
composer.useUserSerializers(serialize, deserialize)

Provide custom serialize and deserialize functions for storing and retrieving the user profile and identity
information in the Composer session.
By default, the entire user profile is serialized to JSON and stored in the session. If this is not desirable, plugins
should override these methods and provide alternate methods.
For example, the code below stores only the user ID in the session during serialization, and uses a database to load the full profile from that ID during deserialization.

const serializeUser = function(user, done) {
  done(null, user.id);
};

const deserializeUser = function(id, done) {
  User.findById(id, function(err, user) {
    done(err, user);
  });
};

composer.useUserSerializers(serializeUser, deserializeUser);

composer.addAllowedUrl(url)
Allow access to url without authentication. url can be an express-style route with wildcards ( /auth/:stuff or
/auth(.*) )

This is primarily for use with authentication-related URLs. While /login is allowed by default, any other URL
involved in auth needs to be whitelisted.
For example, when using oauth, there is a secondary URL for receiving the auth callback. This has to be whitelisted,
otherwise access will be denied to the callback URL and it will fail.

// define a callback url


composer.addWebRoute('get','/oauth/callback', someFunction);

// whitelist the callback


composer.addAllowedUrl('/oauth/callback');

pluginLoader.loginUri

This value is used by the built-in authentication middleware to redirect the user to the login page. By default, it is
set to '/login' but it can be reset by changing this member value.
Note that if you specify an alternate URI for the login page, you must use addAllowedUrl to whitelist it.
PluginLoader.getUserFromRequest(req)

This is a static method on the PluginLoader class that extracts the user identity information provided by Passport. It is for use in web route implementations to get the user and provide it to other components of Composer.
For example:

const RequestHandlerX = async (req, res) => {
  const user = await PluginLoader.getUserFromRequest(req);
  // ... do some stuff
};

Storage
By default, Composer reads and writes assets to the local filesystem. Plugins may override this behavior by
providing a custom implementation of the IFileStorage interface. See interface definition here
Though this interface is modeled on filesystem interaction, implementations of these methods are not required to use the filesystem or a literal folder-and-path structure. However, they must respect that structure and respond in the expected ways -- for example, the glob method must treat path patterns the same way a filesystem glob would.
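To illustrate that expectation, the sketch below shows a minimal glob-to-regex translation of the kind a non-filesystem storage implementation might use so that its glob method treats path patterns the way a filesystem glob would. It handles only * and ** and is an illustrative assumption; a real implementation would more likely rely on a library such as minimatch.

```javascript
// Minimal glob matcher sketch: '*' matches within one path segment,
// '**' matches across segments. Illustrative only.
function globToRegExp(pattern) {
  const source = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters (but not '*')
    .replace(/\*\*/g, '\u0000')           // temporarily protect '**'
    .replace(/\*/g, '[^/]*')              // '*' stays inside one segment
    .replace(/\u0000/g, '.*');            // '**' may cross segment boundaries
  return new RegExp('^' + source + '$');
}

function globMatch(pattern, paths) {
  const re = globToRegExp(pattern);
  return paths.filter((p) => re.test(p));
}
```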
composer.useStorage(customStorageClass)

Provide an IFileStorage-compatible class to Composer.


The constructor of the class will receive 2 parameters: a StorageConnection configuration, pulled from Composer's
global configuration (currently data.json), and a user identity object, as provided by any configured authentication
plugin.
The current behavior of Composer is to instantiate a new instance of the storage accessor class each time it is used.
As a result, caution must be taken not to undertake expensive operations each time. For example, if a database
connection is required, it might be implemented as a static member of the class, or established once in the plugin's
init code and made accessible within the plugin module's scope.
The user identity provided by a configured authentication plugin can be used for purposes such as:
provide a personalized view of the content
gate access to content based on identity
create an audit log of changes
If an authentication plugin is not configured, or the user is not logged in, the user identity will be undefined .
The class is expected to be in the form:

class CustomStorage implements IFileStorage {
  constructor(conn: StorageConnection, user?: UserIdentity) {
    ...
  }

  ...
}

Web server
Plugins can add routes and middlewares to the Express instance.
These routes are responsible for providing all necessary dependent assets such as browser JavaScript, CSS, etc.
Custom routes are not rendered inside the front-end React application, and currently have no access to that
application. They are independent pages -- though nothing prevents them from making calls to the Composer
server APIs.
composer.addWebRoute(method, url, callbackOrMiddleware, callback)

This is equivalent to using app.get() or app.post() . A simple route definition receives 3 parameters - the
method, URL and handler callback.
If a route-specific middleware is necessary, it should be specified as the 3rd parameter, making the handler
callback the 4th.
Signature for callbacks is (req, res) => {}

Signature for middleware is (req, res, next) => {}

For example:

// simple route
composer.addWebRoute('get', '/hello', (req, res) => {
  res.send('HELLO WORLD!');
});

// route with custom middleware
composer.addWebRoute('get', '/logout', (req, res, next) => {
  console.warn('user is logging out!');
  next();
}, (req, res) => {
  req.logout();
  res.redirect('/login');
});

composer.addWebMiddleware(middleware)

Bind an additional custom middleware to the web server. Middleware applied this way will be applied to all routes.
Signature for middleware is (req, res, next) => {}

For middleware dealing with authentication, plugins must use useAuthMiddleware() as otherwise the built-in auth
middleware will still be in place.
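As a sketch, a hypothetical request-logging middleware has the same shape as any Express middleware; only the composer.addWebMiddleware(requestLogger) registration is Composer-specific, and the simulated call below stands in for a real server:

```javascript
// Hypothetical logging middleware: records method and URL, then hands off.
const requestLog = [];
function requestLogger(req, res, next) {
  requestLog.push(req.method + ' ' + req.url);
  next(); // always pass control to the next handler
}

// In a plugin: composer.addWebMiddleware(requestLogger);
// Simulated invocation, since no real server is running here:
let handled = false;
requestLogger({ method: 'GET', url: '/hello' }, {}, () => { handled = true; });
```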
Publishing
composer.addPublishMethod(publishMechanism, schema, instructions)

Provide a new mechanism by which a bot project is transferred from Composer to some external service. The
mechanism can use whatever method is necessary to process and transmit the bot project to the desired external
service, though it must use a standard signature for its methods.

By default, the publish method will use the name and description from the package.json file. However, you may
provide a customized name and description:

composer.addPublishMethod(publishMechanism, schema, instructions, customDisplayName, customDisplayDescription);

In most cases, the plugin itself does NOT include the configuration information required to communicate with the
external service. Configuration is provided by the Composer application at invocation time.
Once registered as an available method, users can configure specific target instances of that method on a per-bot
basis. For example, a user may install a "Publish to PVA" plugin, which implements the necessary protocols for
publishing to PVA. Then, in order to actually perform a publish, they would configure an instance of this
mechanism, "Publish to HR Bot Production Slot" that includes the necessary configuration information.
Publishing plugins support the following features:
publish - given a bot project, publish it. Required.
getStatus - get the status of the most recent publish. Optional.
getHistory - get a list of historical publish actions. Optional.
rollback - roll back to a previous publish (as provided by getHistory). Optional.
publish(config, project, metadata, user)

This method is responsible for publishing the project with the provided config, using whatever method the
plugin implements - for example, publishing to Azure. This method is required for all publishing plugins.
In order to publish a project, this method must perform any necessary actions such as:
The LUIS lubuild process
Calling the appropriate runtime buildDeploy method
Doing the actual deploy operation
Parameters:

| Parameter | Description |
| -- | -- |
| config | an object containing information from the publishing profile, as well as the bot's settings -- see below |
| project | an object representing the bot project |
| metadata | any comment passed by the user during publishing |
| user | a user object if one has been provided by an authentication plugin |
Config will include:

{
templatePath: '/path/to/runtime/code',
fullSettings: {
// all of the bot's settings from project.settings, but also including sensitive keys managed in-app.
// this should be used instead of project.settings which may be incomplete
},
profileName: 'name of publishing profile',
... // All fields from the publishing profile
}

The project will include:


{
id: 'bot id',
dataDir: '/path/to/bot/project',
files: // A map of files including the name, path and content
settings: {
// content of settings/appsettings.json
}
}

Below is a simplified outline of this process:

const publish = async (config, project, metadata, user) => {
  const { fullSettings, profileName } = config;

  // Prepare a copy of the project to build

  // Run the lubuild process

  // Run the runtime.buildDeploy process

  // Now do the final actual deploy somehow...
};

getStatus(config, project, user)

This method is used to check for the status of the most recent publish of project to a given publishing profile
defined by the config field. This method is required for all publishing plugins.
This endpoint uses a subset of HTTP status codes to report the status of the deploy:

| Status | Meaning |
| -- | -- |
| 200 | Publish completed successfully |
| 202 | Publish is underway |
| 404 | No publish found |
| 500 | Publish failed |

config will be in the form below. config.profileName can be used to identify the publishing profile being queried.

{
profileName: `name of the publishing profile`,
... // all fields from the publishing profile
}

Should return an object in the form:


{
status: [200|202|404|500],
result: {
message: 'Status message to be displayed in publishing UI',
log: 'any log output from the process so far',
comment: 'the user specified comment associated with the publish',
endpointURL: 'URL to running bot for use with Emulator as appropriate',
id: 'a unique identifier of this published version',
}
}
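A minimal getStatus might look like the sketch below. The in-memory store keyed by profile name is an illustrative assumption; a real plugin would query the external service for the state of the most recent deploy.

```javascript
// Hypothetical in-memory record of the latest publish per publishing profile.
const lastPublish = new Map();

function recordPublish(profileName, entry) {
  lastPublish.set(profileName, entry);
}

async function getStatus(config, project, user) {
  const entry = lastPublish.get(config.profileName);
  if (!entry) {
    return { status: 404, result: { message: 'No publish found' } };
  }
  return {
    status: entry.complete ? 200 : 202,
    result: {
      message: entry.complete ? 'Publish completed successfully' : 'Publish is underway',
      log: entry.log,
      id: entry.id,
    },
  };
}
```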

getHistory(config, project, user)

This method is used to request a history of publish actions from a given project to a given publishing profile
defined by the config field. This is an optional feature - publishing plugins may exclude this functionality if it is
not supported.
config will be in the form below. config.profileName can be used to identify the publishing profile being queried.

{
profileName: `name of the publishing profile`,
... // all fields from the publishing profile
}

Should return an array containing recent publish actions along with their status and log output.

[{
status: [200|202|404|500],
result: {
message: 'Status message to be displayed in publishing UI',
log: 'any log output from the process so far',
comment: 'the user specified comment associated with the publish',
id: 'a unique identifier of this published version',
}
}]

rollback(config, project, rollbackToVersion, user)

This method is used to request a rollback in the deployed environment to a previously published version. This
DOES NOT affect the local version of the project. This is an optional feature - publishing plugins may exclude this
functionality if it is not supported.
config will be in the form below. config.profileName can be used to identify the publishing profile being queried.

{
profileName: `name of the publishing profile`,
... // all fields from the publishing profile
}

rollbackToVersion will contain a version ID as found in the results from getHistory.

Rollback should respond using the same format as publish or getStatus and should result in a new publishing
task:
{
status: [200|202|404|500],
result: {
message: 'Status message to be displayed in publishing UI',
log: 'any log output from the process so far',
comment: 'the user specified comment associated with the publish',
endpointURL: 'URL to running bot for use with Emulator as appropriate',
id: 'a unique identifier of this published version',
}
}

Runtime templates
composer.addRuntimeTemplate(templateInfo)

Expose a runtime template to the Composer UI. Registered templates will become available in the "Runtime
settings" tab. When selected, the full content of the path will be copied into the project's runtime folder. Then,
when a user clicks Start Bot , the startCommand will be executed. The expected result is that a bot application
launches and is made available to communicate with the Bot Framework Emulator.

await composer.addRuntimeTemplate({
  key: 'myUniqueKey',
  name: 'My Runtime',
  path: __dirname + '/path/to/runtime/template/code',
  startCommand: 'dotnet run',
  build: async (runtimePath, project) => {
    // implement necessary actions that must happen before project can be run
  },
  buildDeploy: async (runtimePath, project, settings, publishProfileName) => {
    // implement necessary actions that must happen before project can be deployed to azure
    return pathToBuildArtifacts;
  },
});

build(runtimePath, project)

Perform any necessary steps required before the runtime can be executed from inside Composer when a user
clicks the "Start Bot" button. Note this method should not actually start the runtime directly - only perform the
build steps.
For example, this would be used to call dotnet build in the runtime folder in order to build the application.
buildDeploy(runtimePath, project, settings, publishProfileName)

| Parameter | Description |
| -- | -- |
| runtimePath | the path to the runtime that needs to be built |
| project | a bot project record |
| settings | a full set of settings to be used by the built runtime |
| publishProfileName | the name of the publishing profile that is the target of this build |

Perform any necessary steps required to prepare the runtime code to be deployed. This method should return a
path to the build artifacts with the expectation that the publisher can perform a deploy of those artifacts "as is" and
have them run successfully. To do this it should:
Perform any necessary build steps
Install dependencies
Write settings to the appropriate location and format
composer.getRuntimeByProject(project)

Returns a reference to the appropriate runtime template based on the project's settings.

// load the appropriate runtime config
const runtime = composer.getRuntimeByProject(project);

// run the build step from the runtime, passing in the project as a parameter
await runtime.build(project.dataDir, project);

composer.getRuntime(type)

Get a runtime template by its key.

const dotnetRuntime = composer.getRuntime('csharp-azurewebapp');

Bot project templates


Add a project template to the list available during the bot creation process. Plugins can bundle an arbitrary set of
content that will be copied into the bot project at creation time. The template should contain a functioning bot
project, along with any specializations and configuration defaults required to successfully run the project.
composer.addBotTemplate(template)

await composer.addBotTemplate({
  id: 'name.my.template.bot',
  name: 'Display Name',
  description: 'Long description',
  path: '/path/to/template',
});

Boilerplate content
In addition, boilerplate material will be added to every new bot project. Plugins can bundle content
that will be copied into every project, regardless of which template is used.
composer.addBaseTemplate(template)

await composer.addBaseTemplate({
  id: 'name.my.template.bot',
  name: 'Display Name',
  description: 'Long description',
  path: '/path/to/template',
});

Accessors
composer.passport

composer.name

Plugin roadmap
These features are not currently implemented, but are planned for the near future:
Eventing - plugins will be able to emit events as well as respond to events emitted by other plugins and by
Composer core.
Front-end plugins - plugins will be able to provide React components that are inserted into the React
application at various extension points.
Schema extensions - Plugins will be able to amend or update the schema.

Next
Learn how to extend Composer with plugins.
Best practices for building bots using Composer
9/21/2020 • 11 minutes to read

Bot Framework Composer is a visual authoring tool for building conversational AI software. By learning the
concepts described in this section, you'll become equipped to design and build a bot using Composer that aligns
with the best practices. Before reading this article, you should read the introduction to Bot Framework Composer
article for an overview of what you can do with Composer.
Use the basic authoring process to build your bots:
Create a bot
Create primary conversation flows by
adding triggers to dialogs
adding actions to triggers
authoring language understanding for user input
authoring language generation for bot responses
manipulating memory
Integrate with APIs
Add greater natural language complexity using entity binding and interruption support
The following list includes the best practices we recommend and things to avoid for building bots with Composer:

RECOMMENDED NOT RECOMMENDED

Plan your bot -------

Give your bot a bit of personality Make your bot too chatty

Consider when to use dialogs Nest more than two deep conditionals

Consider the non-happy path -------

Name dialogs clearly -------

Keep dialogs short -------

Create a dialog for help/cancel -------

Keep a root menu -------

Keep state properties consistent -------

Define LG templates and reuse them consistently -------

Parameterize reusable LG templates -------

Add variations for bot responses -------

Make your prompt text clear -------


Prepare for ambiguity in the responses -------

Add prompt properties -------

Use LUIS prebuilt entities ------

Design bots
Plan your bot
Before building your bot application, make a plan of the bot you want to build. Consider the following questions:
What is your bot used for? Be clear about the kind of bot you plan to build. This will determine the
functionalities you want to implement in the bot.
What problems does your bot intend to solve? Be clear about the problems your bot intends to solve.
Solving problems for customers is the top factor you should consider when building bots. You should also
consider how to solve those problems easily and, of course, with the best user experience you can
provide.
Who will use your bot? Different customers will expect different user experiences. This will also determine
the complexity of your bot design. Consider what language to implement the bot in.
Where will your bot run? You should decide the platforms your bot will run on. For example, a bot designed
to run on a mobile device will have additional features to implement, such as sending SMS.
Give your bot a bit of personality
If your bot responses are too robotic, users will find chatting with your bot boring and confusing. Here are some
tips to give your bot a bit of personality:
Use language generation to create multiple variations of messages. However, a little bit of personality goes a
long way. Don't overuse it; otherwise you will end up creating a bot with too much personality.
Consider the context where your bot will be used. Bots used in private scenarios can be more conversational
than bots in public. A bot will talk more to a new user than to an experienced user.
Define language generation templates and reuse them across the bot consistently. This will make your bot's
personality consistent.
Use cards to give your bot a bit of personality, if your platform supports them.
Don't make your bot too chatty
People don't like chatty bots who send lots of messages and do not solve their problems.
Being concise and clear in messages is highly recommended in your bot design. Make the messages your bot
sends relevant and information-dense. Don't say less or more than the conversation requires.

Design dialogs
Consider when to use dialogs
Think of dialogs as modular pieces with specific functionalities. Each dialog contains instructions for how the bot
will react to the input. Dialogs allow you more granular control of where you start and restart, and they allow you
to hide details of the "building blocks" that people do not need to know.
Consider using dialogs when you want to:
Reuse things.
Have interruptions that are local to that flow (for example, contextual help inside a date collection flow).
Have a place in your conversation that you need to jump to easily from other places.
Nest more than two deep conditionals within a dialog.
The following example shows a bot which nests two switch statements. This is inefficient and hard to read.

Instead of using nested switch statement, you can use dialogs to encapsulate the functionalities.

Consider the non-happy path


When designing dialogs, you should consider follow-up questions, clarifications, and whether you want to
use local interruptions, which only apply within the context of a child dialog.
Prepare for unexpected responses. When your bot asks questions such as "What's your name?", get
your bot prepared for answers such as "Why?" and "No". You can make use of prompt capabilities such as
using Unrecognized prompt and Invalid prompt properties. Read how to make use of them in the add
prompt properties section.
Use interruptions . Consider using the Allow Interruptions property to either handle a global interruption
or a local interruption within the context of the dialog.

TIP
The Allow Interruptions property is located in the Properties panel of the Other tab of any prompt action. You can set
the value to true or false.

Name dialogs clearly


You should name your dialogs clearly when you create them as you cannot change the name of your dialog once
created. You should use a naming scheme early as you might end up with a lot of dialogs.
Some commonly used naming schemes include but are not limited to the following:
Camel case: myDialog
Pascal case: MyDialog
Snake case: my_dialog
Kebab case: my-dialog

NOTE
Don't use spaces and special characters in dialog names.

Keep dialogs short


It's easy for you to view shorter dialogs in the authoring canvas and see all the "hidden details" in a dialog when
you want to. You should also try to limit the complexity of your dialog system. See some examples in the
AskingQuestionsSample. You can click through and see the dialogs.

Create a dialog for help/cancel


It is a good practice to create a dialog for help/cancel, either locally or globally. By creating a global help/cancel
dialog, you can let users exit out of any process at any time and get back to the main dialog flow. A local help/cancel
dialog is very helpful for handling interruptions within the context of a dialog.
You can read more in the weather bot tutorial - adding help and cancel functionality to your bot article to see how
to create a dialog for help/cancel functionality.

Design conversation flows


Keep a root menu
It is a good practice to present a root menu with some common choices at the beginning of the interaction.
That menu should come back into play when the user completes a task. Think of this root menu as an app's
home screen: you begin your interaction with the app from the home screen, end with the home screen, and then
take your next action.
Keep state properties consistent
It's useful to establish conventions for your state properties across conversation , user , and dialog state for
consistency and to prepare for context sharing scenarios.
When creating a property, think about the lifetime of that property. Do not hesitate to use turn memory scope to
capture volatile information. There is no point using dialog.confirmation.outcome when you are asking a
confirmation prompt because the lifetime of that confirmation is only for that particular turn of the conversation.
You should not feel bad about using that turn memory scope. The memory concept article contains basic
concepts of memory scopes used in Composer.
Don't nest more than two deep conditionals
When you nest more than two deep conditionals (loops or switch statements) in a single dialog, you should
consider building child dialogs to encapsulate them. Making use of child dialogs to encapsulate bots functionalities
is highly recommended for maintaining conversation flows.

Design Language Generation


Define LG templates and reuse them consistently
Reusing LG templates helps to keep consistency across the bot. You can define LG templates in advance in the
common.lg file and use them in different dialogs when necessary. In Composer, you can select Bot Responses and
then select All in the navigation to see the templates defined and shared by all the dialogs.
To import the common templates into another LG file, add the line [import](common.lg) at the top.
It is a good practice to build templates for reusable components and use them consistently. For example, things
like acknowledgement phrases and apology phrases will be used by a bot in several different places, like every
time you have an invalid prompt or unrecognized intent.
Here is an example of an acknowledgePhrase template with three variations:

# acknowledgePhrase
- I'm sorry you are having this problem. Let's see if there is anything we can do.
- I know it is frustrating – let's see how we can help…
- I completely understand your situation. Let me try my best to help.

Parameterize reusable LG templates


For things like prompts and text in send activities, composition is one of the key features of LG: it enables
reusability as well as parameterization, letting you build responses from small reusable units.
When defining reusable templates, parameterize them so they can be used in different scenarios by passing in the
appropriate options for expansion or evaluation.
You can set up templates that take properties as parameters, so you can build things like date
formatters or card builders.
When you set up a template, refer to a parameter name rather than directly to a specific property such as
Dialog.foo. For example, in the code below, name is specified as a parameter instead of a reference to a property.
Set up cards in parts and compose them into bigger cards.
Example:

# welcomeUser(name)
- ${greeting()}, ${ personalize(name)}

# greeting
- Hello
- Howdy
- Hi

# personalize(name)
- IF: ${name != ''}
- ${ name }
- ELSE:
- HUMAN

Having set up these templates, you can now use them in a variety of situations. For example:

> Greet a user whose name is stored in `user.name`


- ${ welcomeUser(user.name) }

> Greet a user whose name you don't know:


- ${ welcomeUser() }

> Use personalization in another message:


- That's ok, ${ personalize(user.name) } , we can try again!

TIP
Read more in .lg file format and structured response template.

Add variations for bot responses


Language generation lets you define multiple variations of a phrase to make bots replies less robotic. For example,
in the weather bot tutorial - adding language generation article, you can define multiple variations of a greeting
message to make the conversation more natural. However, this does not mean you should add as many variations
as possible; it depends on how you want your bot to respond, and in what tone and voice.

Design inputs
Make your prompt text clear
Make sure your prompt texts are clear and unambiguous. Ambiguity is a problem in languages and it is something
we should avoid when we phrase the text of a prompt.
Consider giving your user input hints, including suggested responses. This will help make your prompt clear
and avoid ambiguity. For example, instead of saying "What is your birthday?", you can say "What is your birthday?
Please include the day, month, and year in the form DD/MM/YYYY".
Prepare for ambiguity in the responses
While ambiguity is something you try to avoid in outgoing messages, you should also be prepared for ambiguity
in the incoming responses from users. This helps to make your bot perform better but also prepares you for
platforms like voice where users more commonly add words.
When people are talking out loud, they tend to add more words to their responses than when they are typing into a
text box. For example, in a text box they will just say "my birthday is 1/25/78", while the spoken input can be
something like "my birthday is in January, it's the 25th".
Sometimes when people make their bot's personality rich, they introduce language ambiguity. For example, be
cautious when you use greeting messages such as "What's up?", which is a question that users will try to answer. If
you don't prepare your bot for responses like "Nothing", it will end in confusion.
Add prompt properties
Make use of the prompt features such as Unrecognized prompt and Invalid prompt. These are powerful properties
that give you a lot of control over how your bot responds to unrecognized and invalid answers. Access these
properties under the Other tab of any type of input (Ask a question ) action.

Add guidance to the prompts used for re-prompting; otherwise the bot will keep asking the same question without
telling users why it is asking again.
Use validations when possible. The Invalid prompt fires when the input does not pass the defined validation rules.
Here are two examples of how to phrase in the Unrecognized prompt and Invalid prompt fields.
Unrecognized prompt
Sorry, I do not understand '${this.value}'. Please enter a zip code in the form of 12345.
Invalid prompt
Sorry, '${this.value}' is not valid. I'm looking for a 5 digit number as zip code. Please specify a zip code in
the form 12345.
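The rule implied by these messages ("a 5 digit number as zip code") is a simple pattern check. In Composer the validation would be written as an expression in the prompt's Validation rules; the JavaScript below is only an illustration of the same logic:

```javascript
// Illustrative validation logic matching the zip-code prompts above:
// accept exactly five digits, reject anything else.
function isValidZip(value) {
  return /^\d{5}$/.test(String(value).trim());
}
```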

Design recognizers
Use LUIS prebuilt entities
LUIS provides a list of prebuilt entities which are very handy to use. When you think of defining entities, check the
list of LUIS prebuilt entities first instead of reinventing your own wheels. Some commonly-used prebuilt entities
include: time , date and Number .
For the best practices of building LUIS models, you should read the best practices for building a language
understanding (LUIS) app article.
Additional information
Best practices for building a language understanding (LUIS) app
Best practices for a QnA Maker knowledge base
How to use samples in Composer
9/21/2020 • 3 minutes to read

Bot Framework Composer provides example bots designed to illustrate the scenarios you are most likely to
encounter when developing your own bots. This article is designed to help you make the best use of these
examples. You will learn how to create a new bot based off any of the examples, which you can use to learn from
or as a starting point when creating your own bots in Composer.

Prerequisites
Install Bot Framework Composer.
(Optional) LUIS account and a LUIS authoring key.

Open a sample
The Examples can be found on the right side of the Composer home page.
To open a bot sample from Composer follow the steps:
1. Select the sample you want to open from the Examples list.

2. In the Define conversation objective form:


a. Name : You can use the default name provided or enter a new name here.
b. Description (optional): Descriptive text to describe the bot you are creating.
c. Location : The location your bot source code files will be saved to.
3. After you select Next in the Define conversation objective form, the new bot based off the example bot
you selected will open in Composer.

NOTE
When you select a bot from the Examples list, a copy of the original sample is created in the Location you specify
in the Define conversation objective form. Any changes you make to that bot will be saved without affecting
the original example. You can create as many bots based off the examples as you want, without impacting the
original examples and you are free to modify and use them as a starting point when creating your own bots.

4. Select the Start Bot button located on the Composer toolbar, then select Test in Emulator to test your
new bot in the Bot Framework Emulator.
Learn from Samples
Composer currently provides twelve bot samples with different specialties. These samples are a good resource for
learning how to build your own bot using Composer. You can use the samples to learn how to send text messages,
how to ask questions, how to control conversation flow, and so on.
Below is a table of the twelve bot samples in Composer and their respective descriptions.

SAMPLE DESCRIPTION

Echo Bot A bot that echoes whatever message the user enters.

Empty Bot A basic bot that is ready for your creativity.

Simple Todo A sample bot that shows how to use Regex recognizer to
define intents and allows you to add, list and remove items.

Todo with LUIS A sample bot that shows how to use LUIS recognizer to
define intents and allows you to add, list and remove items. A
LUIS authoring key is required to run this sample.

Asking Questions A sample bot that shows how to prompt user for different
types of input.

Controlling Conversation Flow A sample bot that shows how to use branching actions to
control a conversation flow.

Dialog Actions A sample bot that shows how to use actions in Composer
(does not include Ask a question actions already covered in
the Asking Questions example).

Interruptions A sample bot that shows how to handle interruptions in a
conversation flow. A LUIS authoring key is required to run
this sample.

QnA Maker and LUIS A sample bot that shows how to use both QnA Maker and
LUIS. A LUIS authoring key and a QnA knowledge base are
required to run this sample.

QnA Sample A sample bot that is provisioned to enable users to create a
QnA Maker knowledge base in Composer.

Responding with Cards A sample bot that shows how to send different cards using
language generation.

Responding with Text A sample bot that shows how to send different text messages
to users using language generation.

Next
Learn how to send text messages.
Send text messages to users
9/21/2020 • 7 minutes to read

The primary way a bot communicates with users is through message activities. Some messages may simply
consist of plain text, while others may contain richer content such as cards. In this article, you will learn the
different types of text messages you can use in Bot Framework Composer and how to use them.

Text message types


In Composer, all messages that are sent to the user are defined in the Language Generation (LG) editor and follow
the .lg file format. For additional information about language generation in Composer, refer to the language
generation article.
The table below lists the different types of text messages you can use in Composer.

MESSAGE TYPE DESCRIPTION

Simple text A simple LG template defined to generate a simple text response.

Text with memory An LG template that relies on a property to generate a text response.

LG with parameter An LG template that accepts a property as a parameter and uses that to generate a text response.

LG composition An LG template composed with pre-defined templates to generate a text response.

Structured LG An LG template defined using a structured response template to generate a text response.

Multiline text An LG template defined with multiline response text.

If/Else An If/Else conditional template defined to generate text responses based on the user's input.

Switch A Switch conditional template defined to generate text responses based on the user's input.

The user scenario


When your bot receives messages from the user, all intent and entity values in the message are extracted and
passed on to the dialog's event handler (trigger). In the trigger you can define actions the bot should take to
respond to the user. Sending messages back to the user is one type of action you can define in the trigger.
To add a Send a response action in Composer:
1. Select + in the Authoring canvas .
2. Select Send a response from the action menu.
The Responding with Text example
This section is an introduction to the Responding with Text example (sample bot) that is used in this article to
explain how to send text messages in Composer.
Do the following to get the Responding with Text example running in Composer:
1. Select Home from the Composer Menu .
2. In the Examples section of the home screen, scroll down the list of examples and select Responding with
Text .

3. After the sample loads in Composer, select Design from the left side menu and then select the Dialog
started trigger in the main dialog to get an idea of how this sample works.
4. Select Bot Responses from the Composer Menu to see the templates that are called when the user selects
one of the items from the choices they are presented with when the Multiple choice action executes. You
will be referring to these templates throughout this article as each potential text message type is discussed
in detail.
![bot responses](./media/send-messages/responding-with-text-sample-bot-responses.png)

Text messages defined


Each of the sections below will detail how each type of text message is defined using the simple response template
format in the .lg file that is exposed in Composer in the Bot Responses page. Each text message can also be
defined in the LG editor in the Properties panel when a Send a response action is selected.
Simple text
To define a simple text message, use a hyphen (-) before the text that you want your bot to send to users, for
example:

- Here is a simple text message.

You can also define a simple text message with multiple variations. When you do this, the bot will respond
randomly with any of the simple text messages, for example:

# SimpleText
- Hi, this is simple text
- Hey, this is simple text
- Hello, this is simple text

Text with memory


This is how you would display a message to the user that is contained in a property that is stored in memory. This
property can be defined programmatically as it is in the Responding with Text example, or can be set at run-time
based on user input.
How to send a text with memory message:
1. Create a new action to send the response by selecting the "+" icon in the Authoring canvas and selecting
Send a response from the list of actions.
2. In the Properties panel, enter the desired property in the LG editor. Note that all entries are preceded by a
hyphen (-). In the example the following property is used: - ${user.message} .

TIP
You reference a property using the syntax ${user.message} .
You reference a template using the syntax ${templateName()} .

To learn more about setting properties in Composer, refer to the Conversation flow and memory article. To learn
more about using expressions in your responses, refer to the Adaptive expressions article.
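Putting the steps above together, the underlying template in the .lg file might look like the following minimal sketch (it assumes a property named user.message was set earlier in the conversation, as in the Responding with Text example):

```lg
> Text with memory: echo back a value previously stored in memory
# TextWithMemory
- ${user.message}
```

When the Send a response action runs, the template expands to whatever string is currently stored in user.message .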
LG with parameter
You can think of LG with parameter like a function with parameters, for example the template in the .lg file
(entered in the LG editor in the properties panel or in the Bot Responses page) looks like the following:

# LGWithParam(user)
- Hello ${user.name}, nice to talk to you!

In this LG template:

ELEMENT DESCRIPTION

LGWithParam() The template name.

user The object passed to the template as its parameter.

${user.name} This is replaced with the value contained in the property user.name .

LG composition
An LG composition message is a template composed of one or more existing LG templates. To define an LG
Composition template you need to first define the component template(s) then call them from your LG
composition template. For example:

# Greeting
- nice to talk to you!

# LGComposition(user)
- ${user.name} ${Greeting()}

In this template # LGComposition(user) , the # Greeting template is used to compose a portion of the new
template. The syntax to include a pre-defined template is ${templateName()} .

To see the LG composition message in action, see the Dialog started action of the LGComposition dialog in the
Responding with Text example.
Structured LG
A Structured LG message uses the structured response template format. Structured response templates enable
you to define complex structures such as cards.
For bot applications, the structured response template format natively supports
Activity definition. This is used by the Structured LG message.
Card definition. See the Sending responses with cards article for more information.
Any chatdown style constructs. For information on chatdown see the chatdown readme.
The Responding with Text example demonstrates using the Activity definition, for example:

# StructuredText
[Activity
Text = text from structured
]

This simple structured LG template outputs the response text from structured . The general definition of a structured
template is as follows:

# TemplateName
> this is a comment
[Structure-name
Property1 = <plain text> .or. <plain text with template reference> .or. <expression>
Property2 = list of values are denoted via '|'. e.g. a | b
> this is a comment about this specific property
Property3 = Nested structures are achieved through composition
]

To learn more about structured response templates, you can refer to the structured response template article.
To see how the activity definition is used in messages using cards, see the AdaptiveCard and AllCards
sections of the Sending responses with cards article.
For a detailed explanation of the activity definition see the Bot Framework -- Activity readme on GitHub.
Multiline text
If you need your response to contain multiple lines, you can include multi-line text enclosed in three accent
characters: ```, for example:

# multilineText
- ``` you have such alarms
alarm1: 7:am
alarm2: 9:pm
```

TIP
A multi-line variation can request template expansion and entity substitution by enclosing the requested operation in ${} .
With multi-line support, you can have the language generation sub-system fully resolve complex JSON or XML (e.g. SSML-wrapped
text to control the bot's spoken reply).

If/Else condition
Instead of using conditional branching actions, you can define a conditional template to generate text responses based on
the user's input. For example:

# timeOfDayGreeting(timeOfDay)
- IF: ${timeOfDay == 'morning'}
- good morning
- ELSEIF: ${timeOfDay == 'afternoon'}
- good afternoon
- ELSE:
- good evening

In this If/Else conditional template, the bot will respond with good morning , good afternoon or good evening ,
depending on which condition the timeOfDay value passed to the template matches.
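A conditional template is invoked like any other template. For example, a hypothetical caller (the template name greetUser and the property user.timeOfDay are illustrative, not part of the sample) could pass the condition value as a parameter:

```lg
# greetUser(name)
- ${timeOfDayGreeting(user.timeOfDay)} ${name}!
```

If user.timeOfDay holds 'morning' and name is 'Sam', this expands to "good morning Sam!".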
Switch condition
The Switch condition template is similar to the If/Else condition template: you can define a Switch condition
template to generate text messages in response to the user's input, or based on a prebuilt function that requires no
user interaction. For example, the Responding with Text example includes a Switch condition flow that calls the
template #greetInAWeek , which uses the dayOfWeek and utcNow functions:

# greetInAWeek
- SWITCH: ${dayOfWeek(utcNow())}
- CASE: ${0}
    - Happy Sunday!
- CASE: ${6}
    - Happy Saturday!
- DEFAULT:
    - Working day!

In this Switch condition template, the bot will respond with one of the following: Happy Sunday! , Happy Saturday!
or Working day! based on the value returned by the ${dayOfWeek(utcNow())} expression. utcNow() is a prebuilt
function that returns the current timestamp as a string. dayOfWeek() is a prebuilt function that returns the day of the
week from a given timestamp. Read more about prebuilt functions in Adaptive expressions.
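Switch templates are not limited to prebuilt functions; the same pattern works against any property. A hypothetical sketch (the template name sizeReply and the size parameter are illustrative, not part of the sample):

```lg
# sizeReply(size)
- SWITCH: ${size}
- CASE: ${'small'}
    - You picked the small option.
- CASE: ${'large'}
    - You picked the large option.
- DEFAULT:
    - You picked ${size}.
```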
Further reading
Language generation
.lg file format
Structured response template
Adaptive expressions

Next
Learn how to ask for user input.
Send responses with cards
9/21/2020 • 10 minutes to read

Cards enable you to create bots that can communicate with users in a variety of ways, as opposed to simply using
plain text messages. You can think of a card as an object with a standard set of rich user controls that you can
choose from to communicate with and gather input from users. There are times when you need messages that
simply consist of plain text, and there are times when you need richer message content such as images, animated
GIFs, video clips, audio clips and buttons. If you are looking for examples of sending text messages to users,
read the send text messages to users article. If you need rich message content, cards offer several options,
which are detailed in this article. If you are new to the concept of cards, it might be helpful to read the Cards
section of the design the user experience article.

The structured response template


All of the Bot Responses are defined in an .lg file that is exposed in Composer in the Bot Responses page.
Templates are the core element of a .lg file and there are three types of templates that can be used in .lg files:
Simple response, conditional response and structured response templates. Cards are defined using the structured
response template format.
A structured response template for cards consists of the following elements:

# TemplateName
[Card-name
title = title of the card
subtitle = subtitle of the card
text = description of the card
image = url of your image
buttons = name of the button you want to show in the card
]

TEMPLATE COMPONENT DESCRIPTION

# TemplateName The template name. Always starts with "#". This is used when invoking the card.

[] The entire template is contained within the square brackets.

Card-name The name of the type of card being referenced.

title The title that will appear in the card when displayed to the user.

subtitle The subtitle that will appear in the card when displayed to the user.

text The text that will appear in the card when displayed to the user.

image The URL pointing to the image that will appear in the card when displayed to the user.

buttons The name of the button you want to show in the card.

Additional resources for structured response templates


The structured response template readme.
The .lg file format.
For more information on Language Generation in general you can refer to the language generation concept
article.

Card types
Composer currently supports the following Card types:

CARD TYPE DESCRIPTION

Hero Card A card that typically contains a single large image, one or
more buttons, and simple text.

Thumbnail Card A card that typically contains a single thumbnail image, one or
more buttons, and simple text.

Signin Card A card that enables a bot to request that a user sign-in. It
typically contains text and one or more buttons that the user
can click to initiate the sign-in process.

Animation Card A card that can play animated GIFs or short videos.

Video Card A card that can play a video file.

Audio Card A card that can play an audio file.

Adaptive Card A customizable card that can contain any combination of text,
speech, images, buttons, and input fields.

All Cards To display all of the cards.


The Responding with Cards example
This section is an introduction to the Responding with Cards example (sample bot) that is used in this article to
explain how to incorporate cards into your bot using Composer.
Do the following to get the Responding with Cards example running in Composer:
1. Select Home from the Composer Menu
2. In the Examples section of the Home page, scroll down the list of examples and select Responding with
Cards .

Now that you have it loaded in Composer, take a look to see how it works.
3. Select Design from the Composer Menu .
4. Select the Unknown intent trigger in the main dialog to get an idea of how this sample works.
NOTE
In this sample, the Unknown intent trigger contains a Multiple choice action (from the Ask a question menu)
where the User Input list style is set to List and the user's selection is stored in the user.choice property.
The user.choice property is passed to the next action, which is a Branch: switch (multiple options) action
(from the Create a condition menu). The item that the user selects from the list will determine which flow is taken.
For example, if HeroCardWithMemory is selected, HeroCardWithMemory() is called, which calls the
HeroCardWithMemory template in the .lg file that can be found by selecting Bot Responses from the Composer
Menu .

5. Select Bot Responses from the Composer Menu to see the templates that are called when the user selects
one of the items from the choices they are presented with when the Multiple choice action executes. You
will be referring to these templates throughout this article as each card is discussed in detail.

Define rich cards


Each of the sections below will detail how each card is defined as a structured response template in the .lg file that
is exposed in Composer in the Bot Responses page.

NOTE
LG provides some variability in card definition, which will eventually be converted to be aligned with the SDK card definition.
For example, both image and images fields are supported in all the card definitions in LG even though only images are
supported in the SDK card definition. For HeroCard and Thumbnail cards in LG, the values defined in either image or
images field will be converted to an images list. For the other types of cards, the last defined value will be assigned to the
image field. The values you assign to the image/images field can be in one of the following formats: string, adaptive
expression, or array in the format using | . Read more here.
HeroCard
A Hero card is a basic card type that allows you to combine images, text and interactive elements such as buttons
in one object and present a mixture of them to the user. A HeroCard is defined using structured template as
follows:

# HeroCard
[HeroCard
title = BotFramework Hero Card
subtitle = Microsoft Bot Framework
text = Build and connect intelligent bots to interact with your users naturally wherever they are, from
text/sms to Skype, Slack, Office 365 mail and other popular services.
image = https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
buttons = ${cardActionTemplate('imBack', 'Show more cards', 'Show more cards')}
]

This example of a hero card enables your bot to send an image from a designated URL to users when an
event to send a hero card is triggered. The hero card includes a button that shows more cards when pressed.
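The buttons line above calls a helper template named cardActionTemplate to build the card's button. A sketch of what such a helper can look like (the three parameters mirror the call above; the exact body in the sample may differ):

```lg
# cardActionTemplate(type, title, value)
[CardAction
    Type = ${type}
    Title = ${title}
    Value = ${value}
]
```

Defining buttons this way keeps the card templates short and lets every card share one definition of how a button is built.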
HeroCardWithMemory
A HeroCardWithMemory is a HeroCard that demonstrates how to call Simple response templates in the .lg file just
as you would call a function.

# HeroCardWithMemory(name)
[Herocard
title=${TitleText(name)}
subtitle=${SubText()}
text=${DescriptionText()}
images=${CardImages()}
buttons=${cardActionTemplate('imBack', 'Show more cards', 'Show more cards')}
]

If you look in the Bot Responses page you will see where the values that populate the HeroCard come from.
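For illustration, those component templates could look like the following sketch (the template names match the calls above, but the bodies here are hypothetical placeholders rather than the sample's exact text):

```lg
# TitleText(name)
- Hello ${name}

# SubText()
- Microsoft Bot Framework

# DescriptionText()
- Build and connect intelligent bots to interact with your users.

# CardImages()
- https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
```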
ThumbnailCard
A Thumbnail card is another basic card type that combines a mixture of images, text and buttons. Unlike
Hero cards, which present designated images in a large banner, Thumbnail cards present images as thumbnails. It is
a card that typically contains a single thumbnail image, one or more buttons, and simple text. A ThumbnailCard is
defined using a structured template as follows:

# ThumbnailCard
[ThumbnailCard
title = BotFramework Thumbnail Card
subtitle = Microsoft Bot Framework
text = Build and connect intelligent bots to interact with your users naturally wherever they are, from
text/sms to Skype, Slack, Office 365 mail and other popular services.
image = https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
buttons = Get Started
]
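Note that buttons here is a plain text value ( Get Started ), which LG renders as a simple button. To show more than one button, values can be listed with the | separator, as in this hypothetical variation (the template name and button labels are illustrative):

```lg
# ThumbnailCardTwoButtons
[ThumbnailCard
    title = BotFramework Thumbnail Card
    buttons = Get Started | Learn More
]
```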

SigninCard
A Signin card is a card that enables a bot to request that a user sign in. A SigninCard is defined using a structured
template as follows:
# SigninCard
[SigninCard
text = BotFramework Sign-in Card
buttons = ${cardActionTemplate('signin', 'Sign-in', 'https://login.microsoftonline.com/')}
]

AnimationCard
Animation cards contain animated image content (such as .gif ). Typically this content does not contain sound,
and is presented with minimal transport controls (e.g. pause/play) or no transport controls at all.
Animation cards follow all shared rules defined for Media cards. An AnimationCard is defined using a structured
template as follows:

# AnimationCard
[AnimationCard
title = Microsoft Bot Framework
subtitle = Animation Card
image = https://docs.microsoft.com/en-us/bot-framework/media/how-it-works/architecture-resize.png
media = http://i.giphy.com/Ki55RUbOV5njy.gif
]

VideoCard
Video cards contain video content in a video format such as .mp4 . Typically this content is presented to the user
with advanced transport controls (e.g. rewind/restart/pause/play). Video cards follow all shared rules defined for
Media cards. A VideoCard is defined using a structured template as follows:

# VideoCard
[VideoCard
title = Big Buck Bunny
subtitle = by the Blender Institute
text = Big Buck Bunny (code-named Peach) is a short computer-animated comedy film by the Blender Institute
image = https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Big_buck_bunny_poster_big.jpg/220px-
Big_buck_bunny_poster_big.jpg
media = http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
buttons = Learn More
]
AudioCard
Audio cards contain audio content in an audio format such as .mp3 and .wav . Audio cards follow all shared rules
defined for Media cards. An AudioCard is defined using a structured template as follows:

# AudioCard
[AudioCard
title = I am your father
subtitle = Star Wars: Episode V - The Empire Strikes Back
text = The Empire Strikes Back (also known as Star Wars: Episode V – The Empire Strikes Back)
image = https://upload.wikimedia.org/wikipedia/en/3/3c/SW_-_Empire_Strikes_Back.jpg
media = http://www.wavlist.com/movies/004/father.wav
buttons = Read More
]

AdaptiveCard
Adaptive cards are a new open card exchange format adopted by Composer that enable developers define their
cards content in a common and consistent way using JSON. Once defined, the adaptive card can be used in any
supported channel, automatically adapting to the look and feel of the host.
Adaptive cards not only support custom text formatting, they also support the use of containers, speech, images,
buttons, customizable backgrounds, user input controls for dates, numbers, text, and even customizable drop-
down lists.
An AdaptiveCard is defined as follows:

# AdaptiveCard
[Activity
Attachments = ${json(adaptivecardjson())}
]

This tells Composer that it is referencing a template named adaptivecardjson that is in the JSON format. If you
look in the Bot Responses page you will see that template; it is the template used to generate the AdaptiveCard.
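The JSON for the adaptive card also references a template named PassengerName in its TextBlock elements. That template is a plain simple-response template in the same .lg file; a minimal stand-in (the actual passenger name used by the sample may differ) would be:

```lg
# PassengerName
- Sarah Hurrey
```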

{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.0",
"type": "AdaptiveCard",
"speak": "Your flight is confirmed for you and 3 other passengers from San Francisco to Amsterdam on Friday,
October 10 8:30 AM",
"body": [
{
"type": "TextBlock",
"text": "Passengers",
"weight": "bolder",
"isSubtle": false
},
{
"type": "TextBlock",
"text": "${PassengerName()}",
"separator": true
},
{
"type": "TextBlock",
"text": "${PassengerName()}",
"spacing": "none"
},
{
"type": "TextBlock",
"text": "${PassengerName()}",
"spacing": "none"
},
{
"type": "TextBlock",
"text": "2 Stops",
"weight": "bolder",
"spacing": "medium"
},
{
"type": "TextBlock",
"text": "Fri, October 10 8:30 AM",
"weight": "bolder",
"spacing": "none"
},
{
"type": "ColumnSet",
"separator": true,
"columns": [
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"text": "San Francisco",
"isSubtle": true
},
{
"type": "TextBlock",
"size": "extraLarge",
"color": "accent",
"text": "SFO",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": "auto",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "http://adaptivecards.io/content/airplane.png",
"size": "small",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "right",
"text": "Amsterdam",
"isSubtle": true
},
{
"type": "TextBlock",
"horizontalAlignment": "right",
"size": "extraLarge",
"color": "accent",
"text": "AMS",
"spacing": "none"
}
]
}
]
},
{
"type": "TextBlock",
"text": "Non-Stop",
"weight": "bolder",
"spacing": "medium"
},
{
"type": "TextBlock",
"text": "Fri, October 18 9:50 PM",
"weight": "bolder",
"spacing": "none"
},
{
"type": "ColumnSet",
"separator": true,
"columns": [
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"text": "Amsterdam",
"text": "Amsterdam",
"isSubtle": true
},
{
"type": "TextBlock",
"size": "extraLarge",
"color": "accent",
"text": "AMS",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": "auto",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "http://adaptivecards.io/content/airplane.png",
"size": "small",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "right",
"text": "San Francisco",
"isSubtle": true
},
{
"type": "TextBlock",
"horizontalAlignment": "right",
"size": "extraLarge",
"color": "accent",
"text": "SFO",
"spacing": "none"
}
]
}
]
},
{
"type": "ColumnSet",
"spacing": "medium",
"columns": [
{
"type": "Column",
"width": "1",
"items": [
{
"type": "TextBlock",
"text": "Total",
"size": "medium",
"isSubtle": true
}
]
},
{
"type": "Column",
"width": 1,
"items": [
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "right",
"text": "$4,032.54",
"size": "medium",
"weight": "bolder"
}
]
}
]
}
]
}

AdaptiveCard References
Adaptive Cards overview
Adaptive Cards Sample
Adaptive Cards for bot developers
AllCards
The "#AllCards" template displays all of the cards as Attachments of the Activity object.

# AllCards
[Activity
Attachments = ${HeroCard()} | ${ThumbnailCard()} | ${SigninCard()} | ${AnimationCard()} |
${VideoCard()} | ${AudioCard()} | ${AdaptiveCard()}
AttachmentLayout = ${AttachmentLayoutType()}
]

Further reading
Bot Framework - Cards
Add media to messages
Language generation
Structured response template

Next
Learn how to define triggers and events.
Asking for user input
9/21/2020 • 11 minutes to read

Bot Framework Composer makes it easier to collect and validate a variety of data types, and handle instances
when users input invalid or unrecognized data.

The Asking Questions example


This section is an introduction to the Asking Questions example (sample bot) that is used in this article to
explain how to incorporate prompts for user input into your bot using Composer.
Do the following to get the Asking Questions example running in Composer:
1. Select Home from the Composer Menu .
2. In the Examples section of the Home page, scroll down the list of examples and select Asking Questions .

Now that you have it loaded in Composer, take a look to see how it works.
3. Select Design from the Composer Menu .
4. Select the Greeting trigger in the main dialog to get an idea of how this sample works.
5. In this sample, the Greeting trigger is always the first thing that runs when the bot starts. This trigger
executes the Send a response action. The Send a response action calls the WelcomeUser template:
${WelcomeUser()} . To see what the WelcomeUser template does, select Bot Responses from the Composer
Menu and search for #WelcomeUser in the Name column.
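A template like WelcomeUser typically lists the available options in a single message. A hypothetical sketch (the sample's actual wording and option list will differ):

```lg
# WelcomeUser
- Welcome! Type the name or number of the input type you want to try, e.g. 01 TextInput or 02 NumberInput.
```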

IMPORTANT
When the bot first starts, it executes the greeting trigger. The Send a response action associated with the
greeting trigger starts to execute and calls the ${WelcomeUser()} template where different options are defined
and presented to the user. When the user responds by typing in the name or number of the item they wish to select,
the bot searches the user input for known patterns. When a pattern is found the bot sends an event with the
corresponding intent. That intent is captured by the Intent recognized trigger that was created to handle that
intent. For example, if the user enters '01' or 'TextInput' the TextInput trigger handles that event and calls the
TextInput dialog. The remaining steps in this section will walk you through this process.

6. Select the main dialog and look at the Properties pane; note that the Recognizer Type is set to Regular
Expression and RegEx patterns to intents has a list of all of the intents that includes the Intent name
and its corresponding Pattern . The following image shows the correlation between the list of RegEx
patterns to intents in the main dialog and the message displayed to the user when the bot first starts.
In each of the following sections you will learn how to create each of these user input types, using the
corresponding dialog as an example.

Text input
The Text input prompts users for their name then responds with a greeting using the name provided. This is
demonstrated in the Asking Questions example in the TextInput dialog. You create a text input prompt by
selecting the + icon in the Authoring canvas then selecting Text input from the Ask a question menu.
Optionally, the next section details how to create an entire text input dialog or you can go directly to the Number
Input section.
Create a text input action
To create a text input action:
1. Select the + icon then select Text input from the Ask a Question menu.

2. Enter Hello, I'm Zoidberg. What is your name? (This can't be interrupted) into the Prompt field in
the Properties panel.
3. Select the User Input tab, then enter user.name into the Property to fill field.

4. Create a new action by selecting the + icon in the Authoring canvas then select Send a response from
the list of actions.
5. Enter Hello ${user.name}, nice to talk to you! into the LG editor in the Properties panel.

NumberInput
The NumberInput example prompts the user for their age and other numerical values using the Number input
action.
As seen in the NumberInput dialog, the user is prompted for two numbers: their age stored as user.age and the
result of 2*2.2 stored as user.result . When using number prompts you can set the Output Format to either
float or integer .
Create a number input action
To create a number input action:
1. Select the + icon in the Authoring canvas . When the list of actions appear, select Number input from the
Ask a question menu.

In the Bot Asks tab of the Properties panel, enter - What is your age?
Select the User Input tab then enter user.age into the Property to fill field.
NOTE
You can set the Output Format field in the User Input tab to either float or integer . float is the
default.

Select the Other tab then enter - Please input a number. into the Invalid Prompt field.
2. Create another action by selecting the + icon in the Authoring panel and selecting Send a response , and
enter - Hello, your age is ${user.age}! into the prompt field. This will cause the bot to respond back to the
user with their age.
3. Next, follow step 1 in this section to create another Number Input action.
In the Bot Asks tab of the Properties panel, enter - 2 * 2.2 equals?
Select the User Input tab then enter user.result into the Property to fill field.
Select the Other tab then enter - Please input a number. into the Invalid Prompt field.
4. Create a Branch: If/Else action by selecting the + icon in the Authoring panel and selecting Branch:
If/Else from the Create a condition menu.
5. Select the + icon in the true branch and select Send a response .

6. Enter - 2 * 2.2 equals ${user.result}, that's right! into the prompt field. This will cause the bot to
respond back to the user with "2 * 2.2 equals 4.4, that's right!".
7. Create another action by selecting the + icon in the false branch. This will execute when the user
enters an incorrect answer.

8. Create another action by selecting the + icon in the Authoring panel and selecting Send a response .
9. Enter - 2 * 2.2 equals ${user.result}, that's wrong! into the prompt field.

Confirmation
Confirmation prompts are useful after you've asked the user a question and want to confirm their answer.
Unlike the Multiple choice action that enables your bot to present the user with a list to choose from,
confirmation prompts ask the user to make a binary (yes/no) decision.
Create a confirmation action
To create a confirmation action:
1. Select the + icon then select Confirmation from the Ask a Question menu.

2. Enter -Would you like ice cream? in the Prompt field of the Properties panel.
3. Switch to the User Input tab.

TIP
You can also switch to the User Input tab by selecting the User answers action in the Authoring canvas .

4. Enter user.confirmed in the Property to fill field.


5. Select the + icon in the Authoring panel and select Send a response .
6. Enter -confirmation: ${user.confirmed} into the prompt field.

Multiple choice
Multiple choice enables you to present your users with a list of options to choose from.
Create a multiple choice action
To create a prompt with a list of options that the user can choose from:
1. Select the + icon then select Multiple choice from the Ask a Question menu.
2. Select Bot Asks tab and enter - Please select a value from below: in the Prompt with multi-choice
field.
3. Switch to the User Input tab.
Enter user.style in the Property field.
Scroll down to the Array of choices section and select one of the three options (simple choices ,
structured choices , expression ) to add your choices. For example, if you choose simple choices ,
you can add the choices one at a time in the field. Every time you add a choice option, make sure you
press Enter.
a. Test1
b. Test2
c. Test3
Additional information: The User Input tab
The Output Format field is set to value by default. This means the value, not the index, will be returned.
For this example, that means any one of these three values will be returned: 'test1', 'test2', 'test3'.
By default the locale is set to en-us . The locale sets the language the recognizer should expect from the
user (US English in this sample).
By default the List style is set to Auto . The List style sets the style for how the choice options are
displayed. The table below shows the differences in appearance for each List style:

| List style | Description |
| --- | --- |
| None | No options will be displayed. |
| Auto | Composer decides the formatting, usually Suggested Action buttons. |
| Inline | Separates options using the value in the Inline separator field. |
| List | Displays options as a list. A numbered list if Include numbers is selected. |
| SuggestedAction | Displays options as SuggestedAction buttons. |
| HeroCard | Displays a Hero Card with options as buttons within the card. |
There are three boxes related to inline separation, or how your bot separates the text of your choices:
Inline separator - character used to separate individual choices when there are more than two choices,
usually , .
Inline or - separator used when there are only two choices, usually or .
Inline or more - separator between last two choices when there are more than two options, usually
, or .
The Include numbers option allows you to use plain or numbered lists when the List Style is set to List .

Attachment
The Attachment Input example demonstrates how to enable users to upload images, videos, and other media.
When running this example bot in the Emulator, once this option is selected from the main menu you will be
prompted with "Please send an image." Select the paperclip icon next to the text input area and select an image file.
Create an attachment input action
To implement an Attachment Input action:
1. Select the + icon then select File or attachment from the Ask a Question menu.
2. Enter - Please send an image. in the Prompt field of the Properties panel.
3. Switch to the User Input tab.
Enter dialog.attachments in the Property to fill field.
Enter all in the Output Format field.

TIP
You can set the Output Format to first (only the first attachment will be output even if multiple were selected)
or all (all attachments will be output when multiple were selected).

5. Select the + icon in the Authoring panel and select Send a response .
6. Enter -${ShowImage(dialog.attachments[0].contentUrl, dialog.attachments[0].contentType)} into the prompt
field.

DateTimeInput
The DateTimeInput sample demonstrates how to get date and time information from your users using the Date or
time prompt.
Create a date time input action
To prompt a user for a date:
1. Select the + icon then select Date or time from the Ask a Question menu.
2. Enter -Please enter a date. in the Prompt field of the Properties panel.
3. Switch to the User Input tab and enter user.date in the Property to fill field.
4. Switch to the Other tab and enter - Please enter a date. in the Invalid Prompt field.
5. Select the + icon in the Authoring panel and select Send a response .
6. Enter -You entered: ${user.date[0].value}

Prompt settings and validation


In the Other tab of the Properties panel, you can set user input validation rules using adaptive expressions, as
well as define separate responses for unrecognized input, invalid input, and the default value.

IMPORTANT
The value to be validated is present in the this.value property. this is a memory scope that pertains to the active
action's properties. Read more in the memory concept article.

Unrecognized Prompt : This is the message that is sent to a user if the response entered was not
recognized. It is a good practice to add some guidance along with the prompt. For example when a user
input is the name of a city but a five-digit zip code is expected, the Unrecognized Prompt can be the
following:

Sorry, I do not understand '${this.value}'. Please enter a zip code in the form of 12345.

Validation Rules : This is the rule defined in adaptive expressions to validate the user's response. The input
is considered valid only if the expression evaluates to true . An example validation rule specifying that the
user input be 5 characters long can look like the following:

length(this.value) == 5

Invalid Prompt : This is the message that is sent to a user if the response entered is invalid according to
the Validation Rules. It is a good practice to specify in the message that it is not valid and what is expected.
For example:
Sorry, '${this.value}' is not valid. I'm looking for a 5 digit number as zip code. Please specify a
zip code in the form 12345.

Default Value Response : The value that is returned after the max turn count has been hit. This will be sent
to the user after the last failed attempt. If this is not specified, the prompt will simply end and move on
without telling the user a default value has been selected. In order for the default value response to be used,
you must specify both the default value and the default value response.
Max turn count : The maximum number of re-prompt attempts before the default value is selected. When the
Max turn count limit is reached, the property will be set to null unless a default value is
specified. Please note that if your dialog is not designed to handle a null value, it may crash the bot.
Default value : The value returned when no value is supplied. When a default value is specified, you should
also specify the default value response.
Allow interruptions (true/false): This determines whether the parent dialog should be able to interrupt the
child dialog. Consider using the Allow Interruptions property to either handle a global interruption or a local
interruption within the context of the dialog.
Always prompt (true/false): Collect information even if the specified property isn't empty.
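As a rough illustration of how these settings fit together, here is a hypothetical sketch of a Number input action in the declarative .dialog (adaptive dialogs) format. The property names follow the declarative schema, but the exact file Composer generates may differ, and the zip code values are purely illustrative:

```json
{
  "$kind": "Microsoft.NumberInput",
  "property": "user.zipcode",
  "prompt": "Please enter a zip code in the form of 12345.",
  "unrecognizedPrompt": "Sorry, I do not understand '${this.value}'. Please enter a zip code in the form of 12345.",
  "validations": [ "length(this.value) == 5" ],
  "invalidPrompt": "Sorry, '${this.value}' is not valid. Please specify a zip code in the form 12345.",
  "maxTurnCount": 3,
  "defaultValue": 98052,
  "defaultValueResponse": "Sorry, I couldn't understand your input. Using the default zip code 98052.",
  "allowInterruptions": false,
  "alwaysPrompt": false
}
```

Note that, as described above, both defaultValue and defaultValueResponse are set together so the user is told when the default is used.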

Next
Learn how to manage conversation flow using conditionals and dialogs.
Best practices for building bots using Composer.
Controlling conversation flow
9/21/2020 • 11 minutes to read

The conversations a bot has with its users are controlled by the content of its dialog. Dialogs contain templates for
messages the bot will send, along with instructions for the bot to carry out tasks. While some dialogs are linear -
just one message after the other - more complex interactions will require dialogs that branch and loop based on
what the user says and the choices they make. This article explains how to add both simple and complex
conversation flow using examples from the sample bot provided in the Composer examples.

The Controlling Conversation Flow example


This section is an introduction to the Controlling Conversation Flow example that is used to explain how to
control conversation flow in your bot using Composer.
Do the following to get the Controlling Conversation Flow example running in Composer:
1. Select Home from the Composer Menu .
2. In the Examples section of the Home page, scroll down the list of examples and select Controlling
Conversation Flow .

Now that you have it loaded in Composer, take a look to see how it works.
3. Select Design from the Composer Menu .
4. Select the Greeting trigger in the main dialog to get an idea of how this sample works.
5. In this sample, the Greeting trigger is always the first thing that runs when the bot starts. This trigger
executes the Send a response action. The Send a response action calls the WelcomeUser template:
${WelcomeUser()} . To see what the WelcomeUser template does, select Bot Responses from the Composer Menu
and search for #WelcomeUser in the Name column.

IMPORTANT
When the bot first starts, it executes the greeting trigger. The greeting trigger presents the user with different
options using SuggestedActions . When the user selects one of them, the bot sends an event with the
corresponding intent. That intent is captured by the Intent recognized trigger that was created to handle that
intent. For example, if the user enters 'IfCondition' the IfCondition trigger handles that event and calls the
IfCondition dialog. The remaining steps in this section will walk you through this process.

6. Select the main dialog and look at the Properties pane, note that the Recognizer Type is set to Regular
Expression .
IMPORTANT
In each of the following sections you will learn how to create each of these different ways to control the conversation
flow, using the corresponding dialog as an example.

7. To see the intents for each trigger, select the trigger and look in the Properties panel. The following image
shows a Regular Expression such that if the user enters either IfCondition or 01, the IfCondition trigger will
execute. The (?i) starts case-insensitive mode in a regular expression.
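For reference, a regular expression recognizer like the one in this sample is stored declaratively along these lines (a hypothetical sketch; the actual intent names and patterns in the example may differ):

```json
{
  "$kind": "Microsoft.RegexRecognizer",
  "intents": [
    { "intent": "IfCondition", "pattern": "(?i)IfCondition|01" },
    { "intent": "SwitchCondition", "pattern": "(?i)SwitchCondition|02" }
  ]
}
```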

Conditional branching
Composer offers several mechanisms for controlling the flow of the conversation. These building blocks instruct
the bot to make a decision based on a property in memory or the result of an expression. Below is a screenshot of
the Create a Condition menu:
Branch: If/Else instructs the bot to choose between one of two paths based on a yes / no or true /
false type value.

Branch: Switch (multiple options) branch instructs the bot to choose the path associated with a specific
value - for example, a switch can be used to build a multiple-choice menu.
Branch: If/Else
The Branch: If/Else action creates a decision point for the bot, after which it will follow one of two possible
branches. To create a Branch: If/Else action, select the + icon in the Authoring canvas then select Branch:
If/Else in the Create a Condition menu.
The decision is controlled by the Condition field in the Properties panel, and must contain an expression that
evaluates to true or false. For example, in the screenshot below the bot is evaluating whether user.age is greater
than or equal to 18.

Once the condition has been set, the corresponding branches can be built. The editor will now display two parallel
paths in the flow - one that will be used if the condition evaluates to true , and one if the condition evaluates to
false . Below the bot will Send a response based on whether user.age >= 18 evaluates to true or false .
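Declaratively, the branch just described can be sketched as follows (a hypothetical example in the adaptive dialogs format; the response text is illustrative):

```json
{
  "$kind": "Microsoft.IfCondition",
  "condition": "user.age >= 18",
  "actions": [
    { "$kind": "Microsoft.SendActivity", "activity": "You are an adult." }
  ],
  "elseActions": [
    { "$kind": "Microsoft.SendActivity", "activity": "You are a minor." }
  ]
}
```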
Branch: Switch
In a Branch: Switch , the value of the parameter defined in the Condition field of the Properties panel is
compared to each of the values defined in the Cases section that immediately follows the Condition field. When
a match is found, the flow continues down that path, executing the actions it contains. To create a Branch: Switch
action, select the + icon in the Authoring canvas then select Branch: Switch from the Create a Condition
menu.
Like Branch: If/Else , you set the Condition to be evaluated in the Properties panel. Underneath you can create
branches in your switch condition by entering each value and pressing Enter . As each case is added, a new branch
will appear in the flow which can then be customized with actions. See below how the Nick and Tom branches are
added both in the property panel on the right and in the authoring canvas. In addition, there will always be a
"default" branch that executes if no match is found.

Loops
Below is a screenshot of the Looping menu:

Loop: for each item instructs the bot to loop through a set of values stored in an array and carry out the
same set of actions with each one. For very large arrays there is
Loop: for each page (multiple items) , which can be used to step through the array one page at a time.
Continue loop instructs the bot to stop executing this template and continue with the next iteration of the
loop.
Break out of loop instructs the bot to stop executing this loop.
Loop: for each item
The Loop: for each item action instructs the bot to loop through a set of values stored in an array and carry out
the same set of actions with each element of the array.
For the sample in this section you will first create and populate an array, then create the for each item loop.
Create and populate an array
To create and populate an array:
1. Select Edit an Array property from the Manage properties menu.
2. In the Properties panel, edit the following fields:
Type of change : Push
Items property : dialog.ids
Value : 10000+1000+100+10+1
3. Repeat the previous two steps to add two more elements to the array, setting the Value to:
200*200
888888/4
4. (optional) Send a response to the user. You do that by selecting the + in the Authoring canvas then Send
a response . Enter -Pushed dialog.id into a list into the Properties panel.

Now that you have an array to loop through, you can create the loop.
Loop through the array
To create the for each loop:
1. Select the + icon in the Authoring Canvas then Loop: for each item from the Looping menu.
2. Enter the name of the array you created, dialog.ids , into the Items property field.
3. To show the results when the bot is running, enter a new action to occur with each iteration of the loop to
display the results. You do that by selecting the + in the Authoring canvas then Send a response . Enter
- ${dialog.foreach.index}: ${dialog.foreach.value} into the Properties panel.

Once the loop begins, it will repeat once for each item in the array. To end the loop before all items have been
processed, use the Break out of loop action described above. If the bot needs to process only a subset of the items, use Branch:
If/Else and Branch: Switch branches within the loop to create nested conditional paths.
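In the declarative format, the loop above corresponds roughly to a Foreach action (a hypothetical sketch; dialog.foreach.index and dialog.foreach.value are the built-in loop variables mentioned above):

```json
{
  "$kind": "Microsoft.Foreach",
  "itemsProperty": "dialog.ids",
  "actions": [
    { "$kind": "Microsoft.SendActivity", "activity": "${dialog.foreach.index}: ${dialog.foreach.value}" }
  ]
}
```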
Loop: for each page
Loop: for each page (multiple items) loops are useful for situations in which you want to loop through a large
array one page at a time. Like Loop: for each item , the bot iterates over an array; the difference is that Loop:
for each page executes its actions once per page of items instead of once per item in the array.
For the sample in this section you will first create and populate an array, then create the for each page loop.
Create and populate an array
To create and populate an array:
1. Select Edit an Array property from the Manage properties menu.
2. Add properties to the array you just created by selecting + in the Authoring canvas , then Edit an Array
property from the Manage properties menu.
3. In the Properties panel, edit the following fields:
Type of change : Push
Items property : dialog.ids
Value : 1
4. Repeat step 3, incrementing the Value by 1 each time until you have 6 properties in your array.

IMPORTANT
You will notice that this differs from the Controlling Conversation Flow example, the reason is to show another
example of the Page size field in the loop that you will create next.

Loop through the array


To create the Loop: for each page action:
1. Select the + icon in the Authoring canvas , then Loop: for each page (multiple items) from the
Looping menu.
2. Enter the name of the array you created, dialog.ids , into the Items property field.
3. To show the results when the bot is running, enter a new action to occur with each iteration of the loop to
display the results. You do that by selecting the + in the Authoring canvas then Send a response . Enter
- ${dialog.foreach.index}: ${dialog.foreach.value} into the Properties panel.
After setting the aforementioned properties your Loop: for each page (multiple items) loop is ready. As seen
in the sample below, you can nest a Loop: for each item within your Loop: for each page (multiple items)
loop, causing your bot to loop through all the items in one page and take an action before handling the next page.
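A sketch of the paged loop with a nested item loop might look like the following (hypothetical; the pageSize value and the nested itemsProperty binding to dialog.foreach.page are illustrative assumptions):

```json
{
  "$kind": "Microsoft.ForeachPage",
  "itemsProperty": "dialog.ids",
  "pageSize": 3,
  "actions": [
    {
      "$kind": "Microsoft.Foreach",
      "itemsProperty": "dialog.foreach.page",
      "actions": [
        { "$kind": "Microsoft.SendActivity", "activity": "${dialog.foreach.value}" }
      ]
    }
  ]
}
```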

Using dialogs to control conversation


In addition to conditional branching and looping, it is also possible to compose multiple dialogs into a larger more
complex interaction. Below are the available Dialog management options:

Begin a new Dialog


Child dialogs are called from the parent dialog using the Begin a new dialog action from within any trigger. You
do this by selecting the + icon in the Authoring canvas then select Begin a new dialog from the Dialog
management menu.
Once the child dialog is called, the parent dialog pauses execution until the child completes and returns control
back to its parent which then resumes where it left off.
It is possible to pass parameters into the child dialog. Parameters can be added to the Options field in the Begin a
new dialog action's Properties panel. The value of each parameter is saved as a property in memory.
If you choose to use expression :

In the screenshot above, ChildDialog will be started and passed two options:
the first will contain the value of the key foo and be available inside the child dialog as dialog.<field> , in
this case, dialog.foo .
the second will contain the value of the key value and will be available inside the child dialog as dialog.<field>
, in this case, dialog.value .

Note that it is not necessary to map memory properties that would otherwise be available automatically - that is,
the user and conversation scopes will automatically be available for all dialogs. However, values stored in the
turn and dialog scope do need to be explicitly passed.
In addition to passing these key/value pairs into a child dialog, it is also possible to receive a return value from the
child dialog. This return value is specified as part of the End this dialog action, as described below.
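The options mapping described above can be sketched declaratively like this (hypothetical; the dialog name, option keys, expression binding, and result property are illustrative):

```json
{
  "$kind": "Microsoft.BeginDialog",
  "dialog": "ChildDialog",
  "options": {
    "foo": "=user.favoriteColor",
    "value": "42"
  },
  "resultProperty": "dialog.childResult"
}
```

Here resultProperty names the memory location in the parent that will receive whatever the child passes to End this dialog.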
In addition to Begin a new dialog , there are a few other ways to launch a child dialog.
Replace this Dialog
Replace this Dialog works just like Begin a new dialog , with one major difference: the parent dialog does not
resume when the child finishes. To replace a dialog select the + icon in the Authoring canvas then select
Replace this Dialog from the Dialog management menu.
Repeat this Dialog
Repeat this Dialog causes the current dialog to repeat from the beginning. Note that this does not reset any
properties that may have been set during the course of the dialog's first run. To repeat a dialog select the + icon in
the Authoring canvas then select Repeat this Dialog from the Dialog management menu.
Ending Dialogs
Any dialog called will naturally end and return control to its parent dialog when it reaches the last action in its flow.
While it is not necessary to explicitly call End this dialog , it is sometimes desirable to end a dialog before it
reaches the end of the flow - for example, you may want to end a dialog if a certain condition is met.
Another reason to call the End this dialog action is to pass a return value back to the parent dialog. The return
value of a dialog can be a property in memory or an expression, allowing developers to return complex values
when necessary. To do this, select the + icon in the Authoring canvas then select End this Dialog from the
Dialog management menu.
Imagine a child dialog used to collect a display name for a user profile. It asks the user a series of questions about
their preferences, finally helping them enter a valid user name. Rather than returning all of the information
collected by the dialog, it can be configured to return only the user name value, as seen in the example below. The
dialog's End this dialog action is configured to return the value of dialog.new_user_name to the parent dialog.
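In the declarative format, the child's final action in this user-name example can be sketched as follows (a hypothetical fragment, not the exact file Composer produces):

```json
{
  "$kind": "Microsoft.EndDialog",
  "value": "=dialog.new_user_name"
}
```

On the parent side, the Begin a new dialog action's result property (for example dialog.user_name) receives this value when the child completes.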

Conditional versions of a message in LG


In addition to creating explicit branches and loops in the flow, it is also possible to create conditional versions of
messages using the Language Generation syntax. The LG syntax supports the same Adaptive expressions as are
used in the action blocks.
For example, you can create a welcome message that is different depending on whether the user.name property is
set or not. The message template could look something like this:

- IF: ${user.name != null}
    - Hello, ${user.name}
- ELSE:
    - Hello, human!
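In addition to IF/ELSE, the LG syntax also supports SWITCH/CASE/DEFAULT blocks for multi-way conditional messages. A hypothetical template (the property name and languages are illustrative):

```lg
# Greeting
- SWITCH: ${user.language}
- CASE: ${'fr'}
    - Bonjour!
- CASE: ${'es'}
    - Hola!
- DEFAULT:
    - Hello!
```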

Learn more about using memory and expressions in LG.

Further Reading
Adaptive dialogs
Adaptive expressions

Next
Language Generation
Adding LUIS for language understanding
9/21/2020 • 2 minutes to read

This article shows how to integrate language understanding in your bot using the cloud-based service LUIS.
LUIS lets your bots identify valuable information from user input by interpreting user needs (intents) and
extracting key information (entities). Understanding user intent makes it possible for your bot to know how to
respond with helpful information using language generation.

Prerequisites
Knowledge of language understanding and events and triggers
A LUIS account and a LUIS authoring key.

Set Recognizer Type to LUIS


Composer uses recognizers to interpret user input. A dialog can use only one type of recognizer, which is
set independently from the other dialogs in your bot.
Follow the steps to set LUIS as the recognizer in your dialog:
1. Select the dialog that needs language understanding capabilities in the Navigation pane.
2. In the Properties panel, select Default recognizer from the Recognizer Type drop-down list.

NOTE
The Default recognizer can be one of the following recognizers:
None - do not use recognizer.
LUIS recognizer - to extract intents and entities from a user's utterance based on the defined LUIS application.
QnA Maker recognizer - to extract intents from a user's utterance based on the defined QnAMaker application.
Cross-trained recognizer set - to compare recognition results from more than one recognizer to decide a winner.

Create an Intent recognized trigger


You need to create a trigger to handle each intent you create. Follow the steps to create an Intent recognized
trigger to handle your LUIS intent.
1. Select the desired dialog in the Navigation pane. Select + Add then + Add new trigger in the tool bar.
2. On the Create a trigger screen.
a. Enter a name for the trigger such as Greeting .
b. Enter example phrases in the Trigger phrases field using the .lu file format.

- Hi!
- Hello.
- Hey there.

After you define your trigger and configure it to specific intent, you can add actions to be executed after the
trigger is fired. One option is sending a response message.
3. Create a new action by selecting the plus (+) icon in the Authoring canvas , then Send a response from
the drop-down list.

4. Enter This is a greeting intent! in the LG editor in the Properties panel.
TIP
The response message in the LG editor is governed by the rules defined in the .lg file format.

Publish
After you are done with all previous steps, you are ready to publish your language understanding data to LUIS.
1. Select the Start Bot button located in the Composer tool bar.
2. Every time you add or edit anything in your LU model, your data will be saved in LUIS. The first time, you
will be prompted for your LUIS Primary Key ; enter it when prompted by the Publish LUIS models
form, then select OK .
TIP
If you go to your LUIS account, you will find the newly published application.

Test in Emulator
It's always a good idea to verify that your bot works correctly when you add new functionality. You can test your
bot's new language understanding capabilities using the Emulator.
1. Select Test in Emulator in the Composer tool bar.

2. When the Emulator is running send it messages using the various utterances you created to see if it
executes your Intent recognized triggers.

Next
Try the ToDoBotWithLuisSample in Composer to see how LUIS is used in a bot.
Learn how to add a QnA Maker knowledge base to your bot.
Creating QnA Maker knowledge base in Composer
9/21/2020 • 3 minutes to read

In Bot Framework Composer, you can create your own QnA Maker knowledge base (KB) and publish it to
https://www.qnamaker.ai. This article shows how to start from QnA Maker knowledge base before creating a bot,
add QnA Maker knowledge base when developing bots, and publish your QnA Maker knowledge base.

Prerequisites
A basic bot built using Composer.
A subscription to Microsoft Azure.
A basic understanding of QnA Maker service and how to create a QnA Maker resource in the Azure portal.
A QnA Maker Subscription key when you create your QnA Maker resource.

IMPORTANT
If you built Composer from source, you need to run a command before you can create a QnA Maker knowledge base in
Composer. Before running yarn startall to start your Composer:
On Windows, set QNA_SUBSCRIPTION_KEY=<Your_QnA_Subscription_Key>
On macOS or Linux, export QNA_SUBSCRIPTION_KEY=<Your_QnA_Subscription_Key>

If you are using the desktop application version of Composer, this step is not necessary.

About QnA Maker


Like LUIS, QnA Maker is a cloud-based Natural Language Processing (NLP) service that creates a natural
conversational layer over your data. QnA Maker is especially useful when you have static information to manage in
your bot. The static question and answer pairs are referred to as the QnA Maker knowledge base, based on which your
QnA Maker service processes the question and responds with the best answer. Since the Composer 1.1.0 release, you
can create and manage your QnA Maker knowledge base in Composer, in addition to the existing LUIS integration
for language understanding.

Start from QnA Maker knowledge base


Follow steps 1 - 4 to create your QnA Maker knowledge base before creating your bot. You can choose to import a
QnA Maker knowledge base from a URL or create your own QnA question and answer sets. The steps are
shown in the image following step 4.
1. On your Composer home screen, select + New .
2. On the Create bot from template or scratch? screen select Create from knowledge base (QnA
Maker) . Select Next .
3. Enter a name for your bot, add a description (optional), and specify a location to store the bot. Select Next .
4. On the Populate your KB screen, enter a URL in the URL field. For example:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/troubleshooting . Select Create
knowledge base . If you want to create your own QnA question and answer set, select Create knowledge
base from scratch .
Add QnA Maker knowledge base when developing bots
Follow steps 1 - 5 to add your QnA Maker knowledge base during the process of developing your bots. You can
choose to import your QnA Maker knowledge base from URL or create your own QnA question and answer set.
The steps are shown in the image following step 5.
1. Select the dialog you want to author the QnA Maker knowledge base then select Default recognizer from
the Recognizer Type list.
2. Select + Add from the tool bar then select Add new trigger .
3. On the Create a trigger screen, select QnA Intent recognized from the What is the type of this
trigger? drop down list. Select Submit . This step creates a QnA Intent recognized trigger with
provisioned actions.
4. On the properties panel on the right, select Go to QnA all-up view page . This directs you to the page to
author your QnA Maker knowledge base.
5. You can choose to import QnA Maker knowledge base from URL or create your own QnA question and
answer set.
a. To import a QnA Maker knowledge base from a URL: select + Add on the tool bar then select Import QnA
From URL . On the Populate your KB screen, enter a URL to import your knowledge base.
Select Create knowledge base from scratch .
b. To create your own QnA question and answer set: select + Add QnA Pair and add your own question
and answer set.
TIP
For more information about the QnA Intent recognized trigger and how it works with recognizers, read the
recognizer section of the dialog conceptual article and the QnA Intent recognized
section of the defining triggers article.

Publish your QnA Maker knowledge base


After you finish creating your QnA Maker knowledge base, you can proceed to publish it.
1. Select Start Bot on the tool bar.
2. On the Publish models screen, enter your QnA subscription key. Select OK .

Additional information
Manage QnA Maker resources.
Add a QnA Maker knowledge base to your bot in Composer.
How to add a QnA Maker knowledge base to your bot
9/21/2020 • 4 minutes to read

This article will teach you how to add a QnA Maker knowledge base to your bot created using Bot Framework Composer.
You will find this helpful when you want to send a user question to your bot and have the QnA Maker knowledge base
provide the answer.

Prerequisites
A basic bot built using Composer
A QnA Maker knowledge base

Add QnA Maker integration


To access the Connect to QnA Knowledgebase action, select + under the node where you want to add the QnA
knowledge base and then select Connect to QnAKnowledgeBase from the Access external resources action menu.

Review settings
Review the QnA Maker settings panel when selecting the QnA Maker dialog. While you can edit settings in the panel, a
security best practice is to edit security-related settings (such as the endpoint key, knowledge base ID and hostname) from
the Settings menu. This menu writes the values to the appsettings.json file and persists the values in the browser
session. If you edit the settings from the QnA Maker settings panel, these settings are less secure because they are written
to the dialog file.
The values for KnowledgeBase id , Endpoint Key , and Hostname as shown in the preceding screenshot are locations for the
values in the appsettings.json file. Do not change these values in this panel. Changes made to this panel are saved to a
file on disk. If you manage the Composer files with source control, the security settings saved in the panel will also be
checked into source control.
Editing from the Settings menu of Composer saves the changes to the appsettings.json file which should be ignored by
your source control software.

Required and optional settings


The following settings configure the bot's integration with QnA Maker.

| Required | Setting | Information |
| --- | --- | --- |
| Required | Knowledge base ID - provided by appsettings.json as settings.qna.knowledgebaseid | You shouldn't need to provide this value. It comes from the QnA Maker portal's Settings for the knowledge base, after the knowledge base is published. For example, 12345678-MMMM-ZZZZ-AAAA-123456789012 . |
| Required | Endpoint key - provided by appsettings.json as settings.qna.endpointkey | You shouldn't need to provide this value. It comes from the QnA Maker portal's Settings for the knowledge base, after the knowledge base is published. For example, 12345678-AAAA-BBBB-CCCC-123456789012 . |
| Required | Hostname - provided by appsettings.json as settings.qna.hostname | You shouldn't need to provide this value. It comes from the QnA Maker portal's Settings for the knowledge base, after the knowledge base is published. For example, https://{qnamakername}.azurewebsites.net/qnamaker . |
| Optional | Fallback answer | This answer is specific to this bot and is not pulled from the QnA Maker service's match for no answer. For example, Answer not found in kb. |
| Required | Threshold | A floating point number such as 0.3 indicating 30% or better. |
| Optional | Active learning card title | Text to display to the user before providing follow-up prompts, for example: Did you mean: . |
| Optional | Card no match text | Text to display as a card to the user at the end of the list of follow-up prompts to indicate none of the prompts match the user's need. For example: None of the above. |
| Optional | Card no match response | Text to display as a card to the user as a response to the user selecting the card indicating none of the follow-up prompts matched the user's need. For example: Thanks for the feedback. |

Edit settings
To edit the QnA Maker settings securely, use the Settings menu. These values are held in the browser
session only.

1. Select the cog in the side menu. This provides the ability to edit the Dialog settings .
2. Edit the values for the knowledge base ID, the endpoint key, and the host name. The endpoint key and host name
are available from the QnA Maker portal's Publish page.

Knowledge base limits


You can use the Connect to QnAKnowledgeBase action to connect to only one knowledge base per dialog.
If your knowledge bases are domain agnostic and your scenario does not require you to keep them as separate knowledge
bases, you can merge them into one knowledge base and use the Connect to QnAKnowledgeBase action to build
your dialog.
If your knowledge bases have content from different domains and your scenario requires you to connect multiple
knowledge bases and show the end user the single answer with the higher confidence score, use
the Send an HTTP request action to make two HTTP calls to the two published knowledge bases, then compare the
confidence scores in the response payloads to decide which answer to show to the end user.
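As a sketch of that comparison step, the following Python function (the function name is our own; the payload shape follows the QnA Maker generateAnswer response, where scores range from 0 to 100) picks the single highest-confidence answer across multiple knowledge base responses:

```python
def best_answer(responses, threshold=30.0):
    """Pick the highest-confidence answer across several QnA Maker
    generateAnswer response payloads, or fall back when none qualifies."""
    top = None
    for resp in responses:
        for ans in resp.get("answers", []):
            score = ans.get("score", 0)
            # Keep only answers above the threshold, tracking the best one.
            if score >= threshold and (top is None or score > top["score"]):
                top = ans
    return top["answer"] if top else "Answer not found in kb."
```

In a bot, the two response payloads would come from the two Send an HTTP request actions, and the returned answer would feed a Send a response action.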

Bots with Language Understanding (LUIS) and QnA Maker


Composer allows you to build bots that contain both QnA Maker and LUIS dialogs. A best practice is to set the confidence
threshold for LUIS intent prediction and trigger QnA Maker through Intent event. The QnA Maker and LUIS Sample
demonstrates the best practice to build a bot using QnA Maker and LUIS intents. See how to open a sample in Composer
in the How to use samples article.
Defining triggers
9/21/2020 • 7 minutes to read

Each dialog in Bot Framework Composer includes a set of triggers (event handlers) that contain actions
(instructions) for how the bot will respond to inputs received when the dialog is active. There are several different
types of triggers in Composer. They all work in a similar manner and can even be interchanged in some cases. This
article explains how to define each type of trigger. Before you walk through this article, please read the events and
triggers concept article.
The table below lists the types of triggers in Composer and their descriptions.

| Trigger type | Description |
| --- | --- |
| Unknown intent | Fires when an intent is defined and recognized but there is no Intent recognized trigger defined for that intent. |
| Intent recognized | Fires when an intent (LUIS or Regex) is recognized. |
| QnA Intent recognized | Fires when an intent (QnA Maker) is recognized. |
| Duplicated intents recognized | Fires when multiple intents are recognized. It compares recognition results from more than one recognizer to decide a winner. |
| Dialog events | Fires when a dialog event such as BeginDialog occurs. |
| Activities | Fires when an activity event occurs, such as when a new conversation starts. |
| Custom event | Fires when an Emit a custom event action occurs. |

Unknown intent
This is a trigger used to define actions to take when there is no Intent recognized trigger to handle an existing
intent.
Follow the steps to define an Unknown intent trigger:
1. Select the desired dialog. Select + Add and then Add new trigger from the tool bar. In the Create a
trigger window, select Unknown intent , then select Submit . You
will then see an empty Unknown intent trigger in the authoring canvas.
2. Select the + sign under the trigger node to add any action node(s) you want to include. For example, you
can select Send a response to send a message "This is an unknown intent trigger!". When this trigger is
fired, the response message will be sent to the user.
Intent recognized
This is a trigger type used to define actions to take when an intent is recognized. This trigger works in conjunction
with LUIS recognizer and Regular Expression recognizer.

NOTE
Please note that the Default recognizer can work as a LUIS recognizer when you define LUIS models. Read more in the
recognizers section of the dialogs concept article.

Follow the steps to define an Intent recognized trigger with Regular Expression recognizer:
1. Select the desired dialog in the Navigation pane of Composer's Design page.
2. In the Properties panel of your selected dialog, choose Regular Expression as the recognizer type for your
dialog.

3. Create an Intent recognized trigger. Select New Trigger in the Navigation pane then Intent
recognized from the drop-down list.
Enter a name in the What is the name of this trigger field. This is also the name of the intent.
Enter a Regular Expression pattern in the Please input regex pattern field.
For example, you can define an Intent recognized trigger named BookFlight . User input
that matches the Regex pattern will fire this trigger.
A regular expression is a special text string for describing a search pattern that can be used to match simple or
sophisticated patterns in a string. Composer exposes the ability to define intents using regular expressions and
also allows regular expressions to extract simple entity values. While LUIS offers the flexibility of a more fully
featured language understanding technology, the Regular Expression recognizer works well when you need to
match a narrow set of highly structured commands or keywords.
In the example above, a BookFlight intent is defined. However, this will only match the very narrow pattern "book
flight to [somewhere]", whereas the LUIS recognizer will be able to match a much wider variety of messages.
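As an illustration, a regular-expression recognizer for the narrow pattern above might behave like the following sketch (the function and pattern are assumptions for illustration, not Composer internals):

```python
import re

# Match the narrow "book flight to [somewhere]" pattern and capture a
# simple "city" entity via a named group.
BOOK_FLIGHT = re.compile(r"book flight to (?P<city>.+)", re.IGNORECASE)

def recognize(utterance):
    """Return an intent plus any extracted entities for an utterance."""
    match = BOOK_FLIGHT.search(utterance)
    if match:
        return {"intent": "BookFlight", "entities": {"city": match.group("city")}}
    # No pattern matched: fall through to an unknown intent.
    return {"intent": "Unknown", "entities": {}}
```

Any phrasing that deviates from the pattern ("I want to fly to Paris") falls through to the unknown intent, which is why LUIS is the better choice for open-ended language.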
Learn how to define an Intent recognized trigger with LUIS recognizer in the how to use LUIS article.

QnA Intent recognized


This is a trigger type used to define actions to take when an intent is recognized. This trigger works in conjunction
with QnA Maker recognizer.

NOTE
Please note that the Default recognizer can work as a QnA recognizer when you define a QnA Maker knowledge base. Read
more in the recognizers section of the dialogs concept article.

Follow the steps to define a QnA Intent recognized trigger:


1. Select the desired dialog in the Navigation pane of Composer's Design page.
2. In the Properties panel of your selected dialog, choose Default recognizer as the recognizer type for your
dialog.

3. Select + Add and then + New Trigger in the tool bar. Select QnA Intent recognized trigger from the
drop-down list.

Duplicated intents recognized


This is a trigger used to define actions to take when multiple intents are recognized. This trigger works in
conjunction with CrossTrained recognizer.

NOTE
Please note that the Default recognizer can work as a CrossTrained recognizer when you have both LUIS and QnA intents
defined. Read more in the recognizers section of the dialogs concept article.

Follow the steps to define a Duplicated intents recognized trigger:


1. Select the desired dialog in the Navigation pane of Composer's Design page.
2. In the Properties panel of your selected dialog, choose Default recognizer as the recognizer type for your
dialog.

3. Select + Add and then + New Trigger in the tool bar. Select Duplicated intents recognized trigger
from the drop-down list.

Dialog events
This is a trigger type used to define actions to take when a dialog event such as BeginDialog is fired. Most dialogs
will include a trigger configured to respond to the BeginDialog event, which fires when the dialog begins and
allows the bot to respond immediately. Follow the steps below to define a Dialog started trigger:
1. Select the desired dialog. Select + Add and then Add new trigger from the toolbar.
2. In the Create a trigger window, select Dialog events from the drop-down list.
3. Select Dialog started (Begin dialog event) from the Which event? drop-down list then select Submit .

4. Select the + sign under the Dialog started node and then select Begin a new dialog from the Dialog
management menu.

5. Before you can use this trigger you must associate a dialog with it. You do this by selecting a dialog from the
Dialog name drop-down list in the Properties panel on the right side of the Composer window. You can
select an existing dialog or create a new one; for example, select an existing dialog
named weather.

Activities
This type of trigger is used to handle activity events, such as your bot receiving a ConversationUpdate activity, which
indicates a new conversation has begun. Use a Greeting (ConversationUpdate activity) trigger to handle
it.
The following steps demonstrate how to create a Greeting (ConversationUpdate activity) trigger to send a
welcome message:
1. Select the desired dialog. Select + Add and then Add new trigger from the tool bar.
2. In the Create a trigger window, select Activities from the drop-down list.
3. Select Greeting (ConversationUpdate activity) from the Which activity type? drop-down list then
select Submit .
4. After you select Submit , you will see the trigger node in the authoring canvas.
5. Select the + sign under the ConversationUpdate Activity node and add any desired action such as Send
a response .

Custom event
The Custom event trigger will only fire when a matching Emit a custom event action occurs. It is a trigger that any
dialog in your bot can consume. To define and consume a Custom event trigger, you need to create an Emit a
custom event action first. Follow the steps below to create an Emit a custom event :
1. Select the trigger you want to associate your Custom event with. Select the + sign and then select Emit a
custom event from the Access external resources drop-down list.

2. In the Properties panel on the right side of the Composer window, enter a name ("weather") into the
Event name field, then set Bubble event to true .
TIP
When Bubble event is set to true , any event that is not handled in the current dialog will bubble up to that
dialog's parent dialog, where the bot will continue to look for handlers for the custom event.

Now that your Emit a custom event has been created, you can create a Custom event trigger to handle this
event. When the Emit a custom event occurs, any matching Custom event trigger at any dialog level will fire.
Follow the steps to create a Custom event trigger to be associated with the previously defined Emit a custom
event .
3. Select + Add and then + Add new trigger from the tool bar.
4. In the pop-up window, select Custom events from the drop-down list and enter a name ("weather") into
the What is the name of the custom event field. Select Submit .

5. Now you can add an action to your custom event trigger; this defines what will happen when it is triggered.
Do this by selecting the + sign and then Send a response from the actions menu. Enter the desired
response for this action in the Language Generation editor; for this example enter "This is a custom
trigger!".
Now you have completed both of the required steps needed to create and execute a custom event. When Emit a
custom event fires, your custom event trigger will fire and handle this event, sending the response you defined.
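The bubbling behavior described above can be sketched as a walk up the dialog tree (a minimal illustration only; the dialog shape and function name are hypothetical, not the SDK's actual implementation):

```python
def find_handler(dialog, event_name, bubble_event=True):
    """Look for a custom event trigger in the current dialog; if none is
    found and Bubble event is true, keep walking up the parent dialogs."""
    while dialog is not None:
        handler = dialog.get("triggers", {}).get(event_name)
        if handler is not None:
            return handler
        # Only climb to the parent when the event is allowed to bubble.
        dialog = dialog.get("parent") if bubble_event else None
    return None
```

With Bubble event set to false, an event unhandled in the current dialog simply goes unanswered instead of reaching an ancestor's Custom event trigger.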

Next
Learn how to control conversation flow.
Define intents with entities
9/21/2020 • 6 minutes to read

Conversations do not always progress in a linear fashion. Users will want to over-specify information, present
information out of order, or make corrections. Bot Framework Composer supports language understanding in
these advanced scenarios, with the advanced dialog capabilities offered by adaptive dialogs and a LUIS application.
In this article, we will cover some details of how the LUIS recognizer extracts the intents and entities you may define in
Composer. The code snippets come from the Todo with LUIS example. Read the How to use samples article to
learn how to open the example bot in Composer.

Prerequisites
A basic understanding of the intent and entity concepts.
A basic understanding of how to define an Intent Recognized trigger.
A basic understanding of how to use LUIS in Composer.
A LUIS account and a LUIS authoring key.

The Todo with LUIS example


This section is an introduction to the Todo with LUIS example (sample bot) that is used in this article to explain
how to define intent with entities using Composer.
Do the following to get the Todo with LUIS example running in Composer:
1. Select Home from the Composer Menu
2. In the Examples section of the Home page, scroll down the list of examples and select Todo with LUIS .

Now that you have it loaded in Composer, take a look to see how it works.

LUIS for entity extraction


In addition to specifying intents and utterances as instructed in the how to use LUIS in Composer article, it is also
possible to train LUIS to recognize named entities. Extracted entities are passed along to any triggered actions or
child dialogs using the syntax @{Entity Name} . For example, given an intent definition like below:
# BookFlight
- book me a flight to {city=shanghai}
- travel to {city=new york}
- i want to go to {city=paris}

When triggered, if LUIS is able to identify a city, the city name will be made available as @city within the
triggered actions. The entity value can be used directly in expressions and LG templates, or stored into a memory
property for later use. The JSON view of the query "book me a flight to London" in LUIS app looks like this:

{
"query": "book me a flight to london",
"prediction": {
"normalizedQuery": "book me a flight to london",
"topIntent": "BookFlight",
"intents": {
"BookFlight": {
"score": 0.9345866
}
},
"entities": {
"city": [
"london"
],
"$instance": {
"city": [
{
"type": "city",
"text": "london",
"startIndex": 20,
"length": 6,
"score": 0.834206,
"modelTypeId": 1,
"modelType": "Entity Extractor",
"recognitionSources": [
"model"
]
}
]
}
}
}
}
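As a minimal sketch, here is how the top intent and the city entity could be pulled out of a prediction payload shaped like the one above (trimmed for brevity; variable names are our own):

```python
import json

# A trimmed version of the LUIS prediction payload shown above.
luis_json = """
{
  "prediction": {
    "topIntent": "BookFlight",
    "intents": {"BookFlight": {"score": 0.9345866}},
    "entities": {"city": ["london"]}
  }
}
"""

prediction = json.loads(luis_json)["prediction"]
top_intent = prediction["topIntent"]        # the recognized intent
city = prediction["entities"]["city"][0]    # the extracted @city entity
```

In Composer you would not parse this payload yourself; the LUIS recognizer does it and exposes the entity as @city, but the structure above shows where that value comes from.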

Flexible entity extraction


In Composer, you can achieve flexible entity extraction by setting the Recognizer Type of your desired dialog to
LUIS and subsequently using language understanding notation to define user response to the specific input. You
can take a look at the AskForName input and all its configured properties under the BeginDialog trigger of the
UserProfile dialog in the ToDoWithLuis example.
The input has the following configuration:

"property": "user.name"
"value": "=coalesce(@userName, @personName)"
"allowInterruptions": "!@userName && !@personName"

And the following Expected response LU configuration:

- my name is {@userName = vishwac}


- I'm {@userName = tom}
- you can call me {@userName = chris}
- I'm {@userName = scott} and I'm {@userAge = 36} years old
> add few patterns
- my name is {@userName}

> add entities


@ prebuilt personName hasRoles userName

There are two key properties in the example above: value and allowInterruptions .
The expression specified in the value property is evaluated every time the user responds to the specific
input. In this case, the expression =coalesce(@userName, @personName) attempts to take the first non-null entity
value of userName or personName and assigns it to user.name . The input will issue a prompt if the property
user.name is still null after the value assignment, unless always prompt evaluates to true .

The next property of interest is allowInterruptions . This is set to the expression
!@userName && !@personName , which means exactly what it reads: allow an interruption if we did not
find a value for the userName entity or the personName entity.
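The two expressions can be sketched in Python as follows (illustrative only; in the bot, the adaptive expressions library performs this evaluation):

```python
def coalesce(*values):
    """Sketch of the adaptive expressions coalesce() used in the value
    property: return the first non-null argument."""
    for value in values:
        if value is not None:
            return value
    return None

def allow_interruptions(user_name, person_name):
    """Sketch of the expression !@userName && !@personName: allow an
    interruption only when neither entity was recognized."""
    return user_name is None and person_name is None
```

So when the recognizer extracts either entity, the value expression fills user.name and the interruption path is skipped; when neither is present, the utterance is allowed to interrupt.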
Notice that you can just focus on things the user can say to respond to this specific input in the
Expected responses . With these capabilities, you get to provide labelled examples of the entity and use it no
matter where or how it was expressed in the user input.
If a specific user input does not work, simply try adding that utterance to the Expected response .

Out of order entity extraction


To see how the out of order entity extraction is wired up, you can see the AskForTitle and AskForListType inputs,
which are under the BeginDialog trigger of the Add dialog in the ToDoWithLuis example.

Take a look at this example below:

user: add an item to the list


bot: sure, what is the title?
user: buy milk
bot: ok. pick the list type - todo | shopping
user: shopping list
bot: ok. i've added that.

The user could have answered multiple questions in the same response. Here is an example

user: add an item to the list


bot: sure, what is the title?
user: add buy milk to the shopping list
bot: ok. I've added that.

By including the value property on each of these inputs, we can pick up any entities recognized by the recognizer
even if it was specified out of order.

Interruption
Interruptions can be handled at two levels: locally within a dialog, and globally via re-routing. By
default, an adaptive dialog does the following for any input:
1. On every user response to an input action's prompt, run the recognizer configured on the parent adaptive
dialog that holds the input action.
2. Evaluate the allowInterruptions expression.
   a. If it evaluates to true , evaluate the triggers that are tied to the parent adaptive dialog that holds the
   input action. If any triggers match, execute the actions associated with that trigger and then issue a
   re-prompt when the input action resumes.
   b. If it evaluates to false , evaluate the value property and assign it as a value to the property . If null,
   run the internal entity recognizer for that input action (e.g. the number recognizer for number input) to
   resolve a value for that input action.
The allowInterruptions property is located in the Properties panel of the Other tab of an input action. You can
set the value to true or false .
Handling interruptions locally
With this, you can add contextual responses to inputs via OnIntent triggers within a dialog. Consider this
example:

user: hi
bot: hello, what is your name?
user: why do you need my name?
bot: I need your name to address you correctly.
bot: what is your name?
user: I will not give you my name
bot: Ok. You can say "My name is <your name>" to re-introduce yourself to me.
bot: I have your name as "Human"
bot: what is your age?

You can see the Why , NoValue or Cancel triggers, which are under the userprofile dialog in the ToDoWithLuis
example.

Handling interruptions globally


Adaptive dialogs have a consultation mechanism which propagates a user message up through the parent dialogs until a
dialog has a trigger that fires. If no dialog's triggers fire during consultation, then the active input action gets the
user utterance back for its own processing. Consider this example:
user: hi
bot: hello, what is your name?
user: what can you do?
bot: I'm a demo bot. I can manage todo or shopping lists.
bot: what is your name?

Notice that the bot understood the interruption and presented the help response. You can see the UserProfile and
Help dialogs in the ToDoWithLuis example.

Further reading
Entities and their purpose in LUIS
.lu file format
Use OAuth
9/21/2020 • 3 minutes to read

In Bot Framework Composer, you can use the OAuth login action to enable your bot to access external resources
using permissions granted by the end user. This article explains how to use basic OAuth to authenticate your bot
with an external service such as GitHub.

NOTE
It is not necessary to deploy your bot to Azure for the authentication to work.

Prerequisites
Microsoft Azure subscription.
A basic bot built using Composer.
Install ngrok.
A service provider your bot is authenticating with such as GitHub.
Basic knowledge of user authentication within a conversation.

Create the Azure Bot Service registration


If you've already got an Azure Bot Service channel registration, you can skip to the configure the oauth connection
settings in azure step.
If you don't have an Azure Bot Service channel registration, follow these instructions to create a registration in the
Azure portal.
Make sure you note the app ID and app password that is generated during this process. You'll need these values
in this configure the oauth connection settings in Composer step.

Configure the OAuth Connection Settings in Azure


1. From the bot channel registration inside Azure, select the Settings tab on the left. At the bottom of the
resulting pane, you'll see a section titled OAuth Connection Settings . Select Add Setting .
This will open a New Connection Setting pane, where you can configure the OAuth connection. The options
will differ depending on the service you are authenticating with. For example, the settings pane for
configuring a login to GitHub includes the following values.

Note the Name of your connection - you will need to enter this value in Composer exactly as it is displayed
in this setting.
2. Enter the values of Client ID , Client Secret , and optionally Scopes depending on the service you are
authenticating with. In this example of GitHub, follow the steps to get these values:
a. Go to the GitHub developer settings webpage and select New OAuth App in the upper right corner. This
will redirect you to the GitHub OAuth App registration website, where you fill in the values as
follows:
Application name : a name you would like to give to your OAuth application, e.g. Composer
Homepage URL : the full URL to your application homepage, e.g. http://microsoft.com

Authorization callback URL : the callback URL of your application, e.g.


https://token.botframework.com/.auth/web/redirect . Read more here.

b. Select Register application . You will then see the generated Client ID and Client Secret values on the
application webpage.

c. Copy the Client ID and Client Secret values and paste them to your Azure's Service Provider
Connection Setting. These values configure the connection between your Azure resource and GitHub.
Optionally, enter user, repo, admin in Scopes . This field specifies the permission you want to grant
to the caller. Save this setting.
Now, with the Name , Client ID , Client Secret , and Scopes of your new OAuth connection setting in
Azure, you are ready to configure your bot.

Configure the OAuth Connection Settings in Composer


1. Select Settings in the Composer menu, then Settings in the Navigation pane. Enter the MicrosoftAppId
and MicrosoftAppPassword values with the app ID and app password values from your Azure Bot
Service registration.

2. Add the OAuth Login action to your dialog.


a. Select OAuth login from the Access external resources menu.
b. The Connection Name in the Properties panel must be set to the same value you used in Azure
for the Name of the connection setting.
c. You will also need to configure at least the Text and Title values, which configure the message that
will be displayed alongside the login button, as well as the property field, which will bind the results
of the OAuth action to a variable in your bot's memory.

Your bot is now configured to use this OAuth connection!

Use the OAuth results in your bot


When you launch the bot in the Emulator and trigger the appropriate dialog, the bot will present a login card.
Clicking the button in the card will launch the OAuth process in a new window.

You'll be asked to login to whatever external resource you've specified. Once complete, the window will close
automatically, and your bot will continue with the dialog.
The results of the OAuth action will now be stored into the property you specified. To reference the user's OAuth
token, use <scope.name>.token -- so for example, if the OAuth prompt is bound to dialog.oauth , the token will be
dialog.oauth.token .

To use this to access the protected resources, pass the token into any API calls you make with the HTTP Request
action. You can refer to the token value in URL, body or headers of the HTTP request using the normal LG syntax,
for example: ${dialog.oauth.token} .
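As a sketch of how the token is typically consumed outside of Composer's LG syntax, a bearer token travels in the Authorization header of the outgoing request (Python standard library; the URL and token value are placeholders):

```python
import urllib.request

def authorized_request(url, token):
    """Build an HTTP request carrying the OAuth token (for example, the
    value stored at dialog.oauth.token) as a bearer token, the way the
    HTTP Request action's headers would."""
    return urllib.request.Request(url, headers={
        "Authorization": f"bearer {token}",
    })

# Placeholder values; in a bot the token comes from the OAuth login action.
request = authorized_request("https://api.github.com/user/orgs", "TOKEN")
```

This mirrors the `bearer ${dialog.oauth.token}` header value the next article configures on the Send an HTTP request action.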

Next
Learn how to send an HTTP request and use OAuth.
Send an HTTP request and use OAuth
9/21/2020 • 2 minutes to read

This article will teach you how to send an HTTP request using OAuth for authorization. It is not necessary to deploy
your bot to Azure for this to work.

Prerequisites
A basic bot you build using Composer
A target API for your bot to call
Basic knowledge of How to send an HTTP request without OAuth
Basic knowledge of How to use OAuth in Composer

Set up OAuth in Composer


Follow the steps to set up OAuth in your bot. Note that the Token property you set will store the OAuth
token result, and you can reference it using ${dialog.token.token} . Also make sure your Composer settings have
the appropriate app ID and app password of the Azure Bot Service registration, as instructed in the configure the OAuth
connection settings in Composer step of the how to use OAuth article.
Optionally, you can add a Send a response action to test if your bot can get the OAuth token.
1. Select the + icon then Send a response from the list of actions. Enter The token is:
${dialog.token.token} in the language generation editor. When this action fires, the bot will output the
value of the authentication token.

2. Select the Restart Bot button in the Composer toolbar, then Test in Emulator . You should be able to see
the authentication token in the Emulator.
Now, with the OAuth setup ready and token successfully obtained, you are ready to add the HTTP request in your
bot.

Add a Send an HTTP request action


1. The http request action is found under the Access external resources menu in the flow + button.

2. In the Properties panel, set the method to GET and set the URL to your target API. For example, a typical
GitHub API URL such as https://api.github.com/users/your-username/orgs .
3. Add headers to include more info in the request. For example we can add two headers to pass in the
authentication values in this request.
a. In the first line of header, add Authorization in the Key field and bearer ${dialog.token.token} in the
Value field. Press Enter.
b. In the second line of header, add User-Agent in the Key field and Vary in the Value field. Press Enter.
4. Finally, set the Result property to dialog.api_response and the Response type to JSON .

NOTE
The HTTP action sets the following information in the Result property : statusCode, reasonPhrase, content, and headers.
Setting the Result property to dialog.api_response means you can access those values via
dialog.api_response.statusCode , dialog.api_response.reasonPhrase , dialog.api_response.content , and
dialog.api_response.headers . If the response is JSON, it will be a deserialized object available via
dialog.api_response.content .

Test
You can add an IF/ELSE branch to test the response of this HTTP request.
1. Set Condition to dialog.api_response.statusCode == 200 in the properties panel.
2. Add two Send a response actions to be fired based on the result (true/false) of the condition. This
means if dialog.api_response.statusCode == 200 evaluates to true , send the response
called with success! ${dialog.api_response} ; otherwise send the response api failed .
3. Restart your bot and test it in the Emulator. After login successfully, you should be able to see the response
content of the HTTP request.
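The IF/ELSE branch above can be sketched as follows (the result object is hypothetical but shaped like the dialog.api_response values described earlier):

```python
# A stand-in for what the HTTP action might store in dialog.api_response.
api_response = {
    "statusCode": 200,
    "reasonPhrase": "OK",
    "content": {"login": "octocat"},
}

# Mirror the IF/ELSE condition dialog.api_response.statusCode == 200.
if api_response["statusCode"] == 200:
    message = f"called with success! {api_response['content']}"
else:
    message = "api failed"
```

In Composer the same branch is built visually with the IF/ELSE action and two Send a response actions rather than in code.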
Connecting to a skill
9/21/2020 • 7 minutes to read

Since Bot Framework SDK version 4.7, you can extend your bot using another bot called a skill bot. A skill is a bot
that can perform a set of tasks for another bot. A skill consumer is a bot that can invoke one or more skills. In Bot
Framework Composer, you can export a bot built with Composer as a skill, you can also use the Connect to a
skill action to enable a bot to connect to a skill. This article explains how to do both tasks.

IMPORTANT
Connecting to a skill in Composer is a technical process that involves many steps such as setting up Composer and
configuring Azure resources. A high level of technical proficiency will be necessary to execute this process.

Prerequisites
Microsoft Azure subscription.
A basic bot built with Composer.
Install the Bot Framework Emulator version 4.7.0 or later.
A good understanding of skills in the Bot Framework SDK.

Create Azure Registration resources


You will need to register both the skill bot and the consumer bot to Azure Bot Service so that the bot-to-bot
communication can be established.
Create a Bot Channels Registration
1. Navigate to your Azure portal and select + Create a resource on top of the menu.
2. Enter Bot Channels Registration in the search box and press Enter .
3. Select Create from the pop-up window and then fill in the creation form to create your Bot Channels
Registration .
Get the App ID and password
You will need to use the App ID and password generated during the previous process for application
configuration. If you already have these values, you can skip to the next section to configure these values to your
bot application. If you do not have these values, follow the steps:
1. Select the Bot Channels Registration you just created and then select Settings from the left panel. The
App ID is displayed in the Microsoft App ID (Manage) section.
2. Select Manage from Microsoft App ID (Manage) on the Settings page to generate a new password .
3. Select +New client secret from the Cer tificate & secrets page. Enter some description in the
Description field (this is optional) and select an expiration time from the Expires list. Select Add .

4. Copy the value from the Value field of the displayed table. This is the generated password of your Bot
Channels Registration .

For more information about creating a Bot Channels Registration , refer to the Register a bot with Azure Bot
Service article.

Prepare a skill bot


This section introduces how to export a bot created with Composer as a skill. If you already have a skill bot, you
can skip to the prepare a consumer bot section.
1. Open your bot in Composer or follow the steps to create a basic bot in Composer.
2. Select Settings from the Composer menu and update the settings with the MicrosoftAppId and
MicrosoftAppPassword of the Bot Channels Registration you created for your skill bot in the create azure
registration resources section.
3. Select Start in the toolbar and then mouse over Test in Emulator to get the port number the bot is
running on. Record the port number.

4. Select Export and then Export as a skill in the toolbar.

5. In this step you will need to enter some values in the different forms to generate your skill's manifest.

TIP
When you select a trigger you want to include in the manifest, the editor adds the corresponding activity type that
the trigger handles to the manifest's activities property. Also, if the trigger is an on intent handler, the intent is added
to the intents array in the dispatch models property. When you select a dialog you want to include, an event activity
gets added to the activities property with the dialog's Dialog Interface.

An example skill manifest may look like the following.


{
    "$schema": "https://schemas.botframework.com/schemas/skills/skill-manifest-2.0.0.json",
    "$id": "TodoSimple",
    "name": "Todo skill",
    "version": "1.0",
    "description": "This skill echoes whatever the user says",
    "publisherName": "Microsoft",
    "privacyUrl": "https://myskill.contoso.com/privacy.html",
    "copyright": "Copyright (c) Microsoft Corporation. All rights reserved.",
    "license": "",
    "iconUrl": "https://myskill.contoso.com/icon.png",
    "tags": [
        "sample",
        "echo"
    ],
    "endpoints": [
        {
            "name": "default",
            "protocol": "BotFrameworkV3",
            "description": "Production endpoint for SkillBot.",
            "endpointUrl": "http://localhost:3983/api/messages",
            "msAppId": "0000000-0000-0000-0000-0000000000000"
        }
    ],
    "activities": {
        "message": {
            "type": "message",
            "description": "A message activity containing the utterance that the skill will echo back to the user"
        }
    }
}

The following properties appear in the manifest:

$id: The skill ID.
name: The skill name.
endpointUrl: The skill's endpoint URL, such as "http://localhost:port/api/messages", where "port" is the port number the skill bot is running on.
msAppId: The app ID of the Bot Channels Registration you created for this skill bot.

6. Select Save , and then select Restart Bot in the toolbar. You can find the manifest folder in your bot's
project folder, such as C:\Users\UserName\Documents\Composer\SkillBotName\manifests . You can also test whether the
skill manifest works by entering http://localhost:<port>/manifests/<your-skill-manifest-file-name>.json in
your browser. Now you have created a local skill bot! Record your skill manifest URL; you will need it
in the add a Connect to a skill action section.
7. Publish your skill (optional).
If you want to publish your skill bot, you can follow the instructions in the publish a bot article. An example
remote skill manifest URL may look like this:
https://SkillBot-dev.scm.azurewebsites.net/manifests/SkillBot-manifest.json .
NOTE
If you publish a local skill to a remote host such as Azure web app, you may need to update the endpointUrl and
msAppId values in your skill manifest to make the skill callable, because endpointUrl should no longer point to
localhost and msAppId should be updated.
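For instance, a published skill's manifest endpoints entry might look like the following sketch, where the host name is a placeholder for your own deployed web app and the app ID comes from your skill bot's registration:

```json
"endpoints": [
    {
        "name": "default",
        "protocol": "BotFrameworkV3",
        "description": "Production endpoint for SkillBot.",
        "endpointUrl": "https://skillbot-dev.azurewebsites.net/api/messages",
        "msAppId": "<app-id-of-your-skill-bot-registration>"
    }
]
```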

Prepare a consumer bot


1. Follow the steps to create another basic bot in Composer. This bot will be your consumer bot to consume
the skill you prepared in the previous section.
2. Select Start in the toolbar and then mouse over Test in Emulator to get the port number of your
consumer bot. Record the port number.

IMPORTANT
Note that your skill bot and your consumer bot run on different port numbers. You need to use the correct port
numbers in the settings to avoid errors.

NOTE
If your skill is remote, you need to follow the next steps to install and run ngrok . If your skill is local, you can skip to
the configure settings in Composer section directly.

3. Open a terminal and run ngrok with the following command to create a new tunnel (you may need to
navigate to where the ngrok executable is in your filesystem). The port specified is the same port your
consumer bot is running on:
OSX

./ngrok http 3984 --host-header=localhost

Windows

"ngrok.exe" http 3984 --host-header=localhost

4. Save the http entry generated by ngrok .


Configure settings in Composer
1. Select Settings from the Composer menu and then select Settings in the navigation pane.
2. Add the app ID and password values generated for your consumer bot, following the steps in the create
Azure registration resources section.
3. Select Skills from the Composer menu. In the Skills page, if your skill is remote, enter
<ngrok address>/api/skills in the Skill Host Endpoint field. If your skill is local, enter
http://localhost:<port>/api/skills in the Skill Host Endpoint field, where port is your consumer bot's port number.
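These values are stored in your consumer bot's settings JSON. The following is a rough sketch of the relevant fields, not an exact file; the key names may differ between Composer versions, and the ngrok host shown is a placeholder:

```json
{
    "MicrosoftAppId": "<consumer-bot-app-id>",
    "MicrosoftAppPassword": "<consumer-bot-app-password>",
    "skillHostEndpoint": "https://a1b2c3d4.ngrok.io/api/skills"
}
```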

Add a Connect to a skill action


1. Navigate to the Design page. Select + under the node where you want to include the action and then select
Connect to a skill from Access external resources in the action menu.

2. In the Connect to a skill properties panel, select Add a new Skill Dialog from the Skill Dialog Name
field.

3. Enter the skill manifest URL in the Manifest url field. If your skill is local, the URL will be like this:
http://localhost:<port>/manifests/<your-skill-manifest-file-name>.json , where port is the port number
your skill bot is running on, and your-skill-manifest-file-name is the name of your skill bot's manifest file.
4. Select Default from the Skill Endpoint drop down list.

5. In the Activity field, configure the activity you want to send to the skill. Depending on the skill manifest
definition, it can be a message, an event, or an invoke type.

6. (Optional) Enter dialog.result in the Property field at the bottom. When the skill dialog ends, its return
value (if any) is stored in this property.
Your consumer bot is now connected to a skill!

Test in the Emulator


You can now test your bot in Composer.
1. Select Start Bot on the top right of the Composer screen.

2. Enter text in the Emulator and see the response.

Additional information
Call a sample skill bot from Composer
If you use the sample skill bot in the Bot Framework Samples repository, you should consider the following:
You should update the MicrosoftAppId and MicrosoftAppPassword in your bot's appsettings.json file (
80.skills-simple-bot-to-bot\EchoSkillBot ) with the values you created for the skill bot in the create Azure
Registration resources section.
You should update the manifest file that lives in this directory:
80.skills-simple-bot-to-bot\EchoSkillBot\wwwroot\manifest .
The endpointUrl can be http://localhost:<port-of-skill-bot-running>/api/messages .
msAppId can be the Microsoft App ID you created for the skill bot in the create Azure Registration
resources section.
The manifest URL of your sample skill bot can be:
http://localhost:<port>/manifest/echoskillbot-manifest-1.0.json .
Make sure your sample skill bot is running when you test in the Emulator.

Further reading
About skills
A skills-simple-bot-to-bot sample.
Adding custom actions
9/21/2020 • 7 minutes to read

In Bot Framework Composer, actions are the main contents of a trigger. Actions help to maintain the conversation
flow and instruct bots to fulfill users' requests. Composer provides different types of actions such as Send a
response , Ask a question , and Create a condition . Besides these built-in actions, you can create and customize
your own actions in Composer.
This article will walk you through how to include a sample custom action named MultiplyDialog that multiplies
two numbers passed as inputs. The sample custom action lives inside the runtime/customaction subfolder of the
bot, and can be viewed here on GitHub.

Prerequisites
A basic understanding of actions in Composer.
A basic bot built using Composer.
A sample custom action called MultiplyDialog in the customaction folder.
Bot Framework CLI 4.10 or later.

Set up the bf-dialog tool


The bf-dialog tool is part of the suite of the Bot Framework CLI tools. The bf-dialog tool will create a "schema file"
that describes the built-in and custom capabilities of your bot project. It does this by merging partial schema files
included with each component with the root schema provided by Bot Framework.

TIP
For more information about Bot Framework SDK schemas, read here. For more information about how to create schema
files, read here.

Open a command line and follow the steps to set up the bf-dialog tool:
To point npm to nightly builds

npm config set registry https://botbuilder.myget.org/F/botframework-cli/npm/

To install the Bot Framework CLI tools

npm i -g @microsoft/botframework-cli

To install the bf-dialog tool

bf plugins:install @microsoft/bf-dialog

About the example custom action


The sample custom action lives inside the runtime/customaction subfolder of the bot, and can be viewed here on
GitHub. After you export the runtime, you will have the example custom action inside your bot's exported
runtime/customaction folder.
The example custom action component consists of the following:
A CustomAction.sln solution file.
A Microsoft.BotFramework.Composer.CustomAction.csproj project file.
A CustomActionComponentRegistration.cs code file for component registration.
A Schemas folder that contains the MultiplyDialog.schema file. This schema file describes the properties of
the example dialog component, arg1 , arg2 , and resultProperty .

{
    "$schema": "https://raw.githubusercontent.com/microsoft/botframework-sdk/master/schemas/component/component.schema",
    "$role": "implements(Microsoft.IDialog)",
    "title": "Multiply",
    "description": "This will return the result of arg1*arg2",
    "type": "object",
    "additionalProperties": false,
    "properties": {
        "arg1": {
            "$ref": "schema:#/definitions/integerExpression",
            "title": "Arg1",
            "description": "Value from callers memory to use as arg 1"
        },
        "arg2": {
            "$ref": "schema:#/definitions/integerExpression",
            "title": "Arg2",
            "description": "Value from callers memory to use as arg 2"
        },
        "resultProperty": {
            "$ref": "schema:#/definitions/stringExpression",
            "title": "Result",
            "description": "Value from callers memory to store the result"
        }
    }
}

Bot Framework Schemas are specifications for JSON data. They define the shape of the data and can be
used to validate JSON. All of Bot Framework's Adaptive Dialogs are defined using this JSON schema. The
schema files tell Composer what capabilities the bot runtime supports. Composer uses the schema to help
it render the user interface when using the action in a dialog.

IMPORTANT
You can follow instructions here to create schema files.

An Action folder that contains the MultiplyDialog.cs class, which defines the business logic of the custom
action; in this example, it multiplies two numbers passed as inputs and outputs the result.

using System;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
using AdaptiveExpressions.Properties;
using Microsoft.Bot.Builder.Dialogs;
using Newtonsoft.Json;

namespace Microsoft.BotFramework.Composer.CustomAction
{
    /// <summary>
    /// Custom command which takes 2 data bound arguments (arg1 and arg2) and multiplies them, returning that as a data bound result.
    /// </summary>
    public class MultiplyDialog : Dialog
    {
        [JsonConstructor]
        public MultiplyDialog([CallerFilePath] string sourceFilePath = "", [CallerLineNumber] int sourceLineNumber = 0)
            : base()
        {
            // enable instances of this command as debug break point
            this.RegisterSourceLocation(sourceFilePath, sourceLineNumber);
        }

        [JsonProperty("$kind")]
        public const string Kind = "MultiplyDialog";

        /// <summary>
        /// Gets or sets memory path to bind to arg1 (ex: conversation.width).
        /// </summary>
        /// <value>
        /// Memory path to bind to arg1 (ex: conversation.width).
        /// </value>
        [JsonProperty("arg1")]
        public NumberExpression Arg1 { get; set; }

        /// <summary>
        /// Gets or sets memory path to bind to arg2 (ex: conversation.height).
        /// </summary>
        /// <value>
        /// Memory path to bind to arg2 (ex: conversation.height).
        /// </value>
        [JsonProperty("arg2")]
        public NumberExpression Arg2 { get; set; }

        /// <summary>
        /// Gets or sets caller's memory path to store the result of this step in (ex: conversation.area).
        /// </summary>
        /// <value>
        /// Caller's memory path to store the result of this step in (ex: conversation.area).
        /// </value>
        [JsonProperty("resultProperty")]
        public StringExpression ResultProperty { get; set; }

        public override Task<DialogTurnResult> BeginDialogAsync(DialogContext dc, object options = null, CancellationToken cancellationToken = default(CancellationToken))
        {
            var arg1 = Arg1.GetValue(dc.State);
            var arg2 = Arg2.GetValue(dc.State);

            var result = Convert.ToInt32(arg1) * Convert.ToInt32(arg2);

            if (this.ResultProperty != null)
            {
                dc.State.SetValue(this.ResultProperty.GetValue(dc.State), result);
            }

            return dc.EndDialogAsync(result: result, cancellationToken: cancellationToken);
        }
    }
}

To create a class such as MultiplyDialog.cs shown above:

Create a class which inherits from the Dialog class.
Define the properties for input and output. These will appear in Composer's property editor, and need to
be described in the schema file.
Implement the required BeginDialogAsync() method, which will contain the logic of the custom action.
You can use Property.GetValue(dc.State) to get a value, and dc.State.SetValue(Property, value) to set a
value.
Register the custom action component where it is called.
(Optional) If the action spans more than one turn, you might need to override the ContinueDialogAsync() method. Read
more in the Actions sample code in the Bot Framework SDK.
In the following sections, we will walk you through the steps to add the custom action in Composer and test it.

Export runtime
The first step to add a custom action is to export the bot runtime through the Runtime Config in Composer. This
process will generate a copy of your bot's runtime so that you can modify the code and add your custom action.

NOTE
Currently Composer supports the C# runtime and JavaScript (preview) runtime.

Once you have the exported bot runtime, you can make changes to the schema. The exported runtime folder will
broadly have the following structure.

bot
/bot.dialog
/language-generation
/language-understanding
/dialogs
/runtime
/azurewebapp
/azurefunctions
/schemas
sdk.schema

To export your bot runtime:


1. Navigate to the Settings page of your Composer and select Runtime Config from the Configuration
navigation pane.
2. Enable Use custom runtime under Bot runtime settings and click Get a new copy of the runtime
code .
3. In the pop-up window select C# and select Okay . Then you will see the exported runtime directory in the
Runtime code location field.
Customize your exported runtime
After you get a copy of your bot's runtime, you can start to modify the code to include the custom action.

NOTE
The following steps assume you are using azurewebapp as your deployment solution. If you use azurefunctions , the steps are
similar.

Follow these steps to customize the runtime:


1. Navigate to the runtime location (for example, C:\Users\UserName\Documents\Composer\bot\runtime )
generated from the export bot runtime section.
2. Navigate to the csproj file inside the runtime folder (for example,
bot\runtime\azurewebapp\Microsoft.BotFramework.Composer.WebApp.csproj ). Include a project reference to the
custom action project like:

<ProjectReference Include="..\customaction\Microsoft.BotFramework.Composer.CustomAction.csproj" />

3. Then still in the azurewebapp folder, open the Startup.cs file. Uncomment the following two lines to register
this action.

using Microsoft.BotFramework.Composer.CustomAction;

// This is for custom action component registration.
ComponentRegistration.Add(new CustomActionComponentRegistration());

4. Run the command dotnet build on the azurewebapp project to verify that it builds after adding the custom
action. You should see the "Build succeeded" message after this command.

Update the schema file


Now that you have customized your runtime, the next step is to update the sdk.schema file to include the
MultiplyDialog.schema file.

Navigate to the C:\Users\UserName\Composer\Bot\schemas folder. This folder contains a PowerShell script and a bash
script. Run either one of the following commands:

./update-schema.ps1 -runtime azurewebapp

sh ./update-schema.sh -runtime azurewebapp

NOTE
Please note that the runtime azurewebapp is chosen by default if no argument is passed.

You can validate that the partial schema ( MultiplyDialog.schema inside the customaction/Schema folder) has been
appended to the default sdk.schema file to generate one single consolidated sdk.schema file.
The above steps should have generated a new sdk.schema file inside the schemas folder for Composer to use.
Reload the bot and you should be able to include your custom action!

Test
Reopen the bot project in Composer and you should be able to test your added custom action!
1. Open your bot in Composer. Select a trigger you want to associate this custom action with.
2. Select + under the trigger node to see the actions menu. You will see Custom Actions added to the menu.
Select Multiply from the menu.

3. On the Properties panel on the right side, enter two numbers in the argument fields: Arg1 and Arg2 .
Enter dialog.result in the Result property field.
4. Add a Send a response action. Enter 99*99=${dialog.result} in the Language Generation editor.

5. Select Restart Bot and you can see the testing result in the Emulator.
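For reference, adding the custom action this way serializes a step into your dialog's .dialog file using the $kind value declared in the schema. A trimmed, illustrative fragment might look like the following; the real generated JSON also contains designer metadata:

```json
{
    "$kind": "MultiplyDialog",
    "arg1": 99,
    "arg2": 99,
    "resultProperty": "dialog.result"
}
```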
Additional information
Bot Framework SDK Schemas
Create schema files
Extending Composer with plugins
9/21/2020 • 4 minutes to read

Composer plugins are JavaScript modules. When loaded into Composer, the module is given access to a set of
Composer APIs which can then be used by the plugin to provide new functionality to the application. You can
extend and customize the behavior of Composer by installing plugins which can hook into the internal
mechanisms of Composer and change the way they operate. Plugins can also "listen to" the activity inside
Composer and react to it. In this article you will learn how to:
Set up multi-user authentication via plugins.
Set up customized storage and make storage user-aware.
Change the samples and templates in Composer and the "new bot" flow.
Provide an alternate version of the runtime template.
Change the boilerplate content added to each project.

Prerequisites
A basic understanding of Composer plugins.
A fork of the Bot Framework Composer GitHub repository.

Set up multi-user authentication via plugins


By default, there is no authentication required to access the Composer application and anyone with the URL will be
able to use it. Some mechanism must be used to secure access to the application and the resources available to it!
This can be achieved using Composer's authentication and identity plugin endpoint. Composer has adopted
Passport.js as its primary auth mechanism. As a result, it is possible to use any of the 500+ compatible
authentication systems with Composer through only a few lines of code.
In your fork of Composer, add the new plugin under the Composer/plugins folder. Make sure the package.json file
is properly configured according to the instructions at the link above. Reload the Composer app, and the new
plugin should take effect automatically, requiring a login.
Please read the full API docs for auth and identity plugins here.
There is a sample plugin that implements login with GitHub in the Composer code repo.
Fill in details in config.json and make sure to set extendsComposer to true in package.json .
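As an illustration, a minimal package.json for such a plugin might look like the sketch below; the name and entry point are hypothetical, and the important part is the extendsComposer flag mentioned above:

```json
{
    "name": "composer-github-auth-plugin",
    "version": "1.0.0",
    "main": "lib/index.js",
    "extendsComposer": true
}
```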

Set up customized storage and make storage user-aware


By default, Composer will read and write bot projects from the local filesystem. All users have the same access to
the filesystem.
To change the storage mechanism, or modify it to consider a user identity provided by an authentication plugin, it
is necessary to create a new plugin that uses the storage plugin endpoint.
By providing a custom storage implementation, it is possible to restrict users access to content using properties in
their user id. It is also possible to completely replace the storage mechanism used by Composer, for example, using
a database instead of the local filesystem.
Read the full API docs for storage plugins here.
There is a sample MongoDB implementation of the plugin in the Composer code repo.
Change the samples and templates in the Composer and "new bot"
flow
When users create new bots inside Composer, they can choose from a list of templates. These templates are simply
pre-packaged bot projects that are used as the starting point for new projects. It is possible for a sample to include
bot assets, runtime code, settings and more.
You may want to add your own templates to this list, or completely replace it with a list of your own.
The templates and samples listed in Composer's home screen are controlled by a plugin located in the
Composer/plugins/samples/ folder.

To add new samples or templates, a new plugin can be added in the Composer/plugins folder that calls the
composer.addBotTemplate() .

To remove or modify the templates that ship with Composer, modify or remove the code inside the
Composer/plugins/samples/ folder.

Read full docs of using this endpoint here.

Provide an alternate version of the runtime template


Sometimes it is necessary to modify the code of the runtime used to operate the bots, for example, to install
different packages or bundle new features.
This is possible by modifying the runtime template that is bundled with Composer. Composer supports exporting
a C# .NET project and a JavaScript (preview) project; the C# runtime template is located in the /runtime/dotnet folder.
When a user clicks "Start bot" or uses Composer’s publishing system, Composer will use a copy of this runtime
project. Changes made to the template in this folder will automatically be used.
It is also possible to specify one or more alternate runtimes that can be made available as part of a bot project
template, or as an option for users to choose in each bot's settings.
Read the full docs on providing new runtime templates here.

Change the boilerplate content added to each project


When a bot project is created, certain boilerplate material is copied into it, in addition to any material from the
selected template. For example, by default this includes a README file as well a folder full of utility scripts.
This material is controlled by the plugin in Composer/plugins/samples using a call to composer.addBaseTemplate() .
To customize this content, you can:
Edit the content present in the Composer/plugins/samples/assets/shared folder OR
Disable the Composer/plugins/samples plugin and create your own that calls composer.addBaseTemplate

Full docs for using this API are here.


Self-hosted Composer
9/21/2020 • 6 minutes to read

Bot Framework Composer is a visual authoring tool for building conversational AI software. Composer is available
as an open-source project. While the primary way it is distributed is as a bundled desktop application, it is possible
to use Composer in a variety of ways including as a shared, hosted service.
This article covers an approach to hosting Composer in the cloud as a service. It also covers topics related to
customizing and extending the behaviors of Composer in this environment.

Who is this for?


Hosting Composer in the cloud is a technical process that involves setting up and configuring cloud resources and
writing code. A high level of technical proficiency will be necessary to execute this process.
However, this document should provide enough background information for users interested in evaluating this
approach without having to implement it themselves.

Prerequisites
A subscription to Microsoft Azure.
Knowledge of Linux and familiarity with package management.
Familiarity with nginx, configuring an nginx web server, and operating in a command-line environment.

Composer application architecture


Composer is a fairly traditional web application and it consists of several major components:
The "core" Composer application, a React and Node web app, made up of:
A "backend" webserver that serves the website and provides features like access to bot projects. This
component is a Node.js application.
A "frontend" client that provides the authoring tools and user interface. This component is a React JS
application.
A "bot runtime" web application (dotnet or JavaScript) that takes the content authored in Composer and
interprets it to provide a running bot that can be interacted with via the Emulator or other methods.
In most configurations, there will be one copy of the core Composer application running which will serve all users.
It is possible to restrict access to this service using Azure Active Directory (AAD) or a similar system.
In addition, there will be one or more bot runtime applications running for testing purposes – these are designed to
be managed through Composer. In the default configuration, these processes run "in the background" on the same
computer used to host Composer. It is possible to change this behavior using plugins.
In the next sections we will walk you through the steps to host Composer in an Azure VM.

Create a Virtual Machine


In its default configuration, Composer uses the local file system to read and write content, and also starts and stops
processes "in the background" to enable real-time testing of bots. As a result, our recommended hosting
environment for a bot is an Azure VM.
This type of host provides several key capabilities:
Ability to read and write files to the local filesystem.
Ability to execute processes.
Ability to have custom networking configuration and/or proxy settings.
When you create an Azure VM, we recommend using Ubuntu. In the VM networking configuration, allow inbound
connections to port 3000 (this allows connections to Composer and the bot apps). You will also need a port range like
3979-3999 to allow bots to run locally.

NOTE
You can choose the type of VM to host Composer, but this article is specific to hosting Composer in an Ubuntu VM.

Get Composer up and running


1. In your VM instance, install the following prerequisites with correct versions.

TIP
Node.js (12.18.3 or later)
NVM (v0.35.1)
npm (6.14.6 or later)
Yarn (1.22.4 or later)
.NET Core (3.1 or later)

a. Update the packages available.

sudo apt update

b. Install node.

sudo apt install nodejs

c. Install npm the node package manager.

sudo apt install npm

d. Use npm to install yarn.

sudo npm install -g yarn

e. Install the nvm node version manager

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.35.1/install.sh | bash

f. Use nvm to install the long term support version of node (currently 12.18.x)

nvm install --lts


g. Get the updated ubuntu packages for dotnet

wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

h. Install the dotnet SDK

sudo apt-get update && \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-3.1

2. Create a fork of the Composer repo. With a fork of Composer you will be able to make small modifications to
the codebase and still pull upstream changes.
3. Follow the instructions below to build Composer and run it.

cd Composer
yarn
yarn build
yarn start

4. In your VM instance, load Composer in a browser at localhost:3000 and verify that you can use Composer
in it. Outside your VM, load Composer in a browser at http://<IP ADDRESS OF VM>:3000 and verify that you
can use Composer at this URL.

Set up nginx
Now you have deployed Composer into your VM and it runs at this URL: http://<IP ADDRESS OF VM>:3000 . Let's
make Composer run on port 80 instead of :3000 (difference between mycomposer.com:3000 and mycomposer.com )
using nginx. Nginx is a web server and proxy service. It can sit in front of the Composer service and pass requests
into Composer. It can also be used to enable SSL on the domain without binding with Composer, and to proxy the
individual bot processes instead of exposing their ports to the Internet.

TIP
HAProxy is also an option you may consider, but this documentation is specific to nginx.

1. Install nginx.

# install nginx web server
sudo apt install nginx
# edit the main config file
sudo vi /etc/nginx/sites-enabled/default

2. Edit the default nginx config to proxy all requests to the composer app running at :3000 .
a. Find the section that says:

location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}

b. Replace the above with the following:


location / {
    # proxy all requests to the Composer app running on port 3000
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
}

3. Now you can load http://<ip address of VM>/ and you should see Composer. No port number is required.
You should be able to create and edit bots in Composer. You should also be able to start the bot – but the
URL for the bot will be "localhost" (mouse over Test in Emulator ). In the next step we will show you how to
fix this by patching the code of Composer in two small places.

Update Start Bot Emulator links


The default URL for the Emulator link is localhost . Now that Composer is hosted in your VM, we need to update
this URL so that the Emulator can reach the bot.
1. In your fork, navigate to the Composer\plugins\localPublish\src\localPublish\src\index.js file and update
localhost to your IP or hostname. There are two places you need to update this.

2. Run yarn build:plugins to rebuild this plugin file.


3. Run yarn startall to restart your Composer app.
4. Now if you create a bot, click Start Bot , then click Test in Emulator . The Emulator should open and connect to
your dev bot.

5. If you want to allow bots to run and be connected on this instance of Composer, you should open network
ports. In your Azure portal, go to the VM's networking tab and add an inbound security rule.
Set up Composer to run after you log out
You can set up Composer to run even after you log out. Follow the steps:
1. Install pm2 process manager.

sudo npm install -g pm2

2. Start Composer using pm2. This will allow the app to continue running even once you log out.

pm2 start npm --name composer -- start

3. Test to see if Composer is already running by using

pm2 list

You now have a working copy of Composer in a shared location.

IMPORTANT
Without additional steps, anyone can access this instance of Composer. Before you leave it running, take measures to secure
access either by installing an auth plugin covered in this article or by turning on service-level access controls via
the Azure portal.

Next steps
Extend Composer with plugins.
Multilingual support
9/21/2020 • 3 minutes to read

Bot Framework Composer provides multilingual support for bot development in different languages, with English
as the default. With simple changes to the settings in Composer, you can author .lg and .lu files in your preferred
language, and give your bot the ability to talk to users in different languages.
This article shows how to build a basic bot in English ( en-us ) and walks through the process of authoring the
bot in Chinese ( zh-cn ).

NOTE
If your bot has LUIS or QnA integrations, you'll also need to consider additional constraints of LUIS supported languages
and QnA supported languages.

Prerequisites
Install Composer

How does multilingual support work?


Composer creates copies of your source language files so that you can add manual translation. If you build a bot
with just a single dialog, you can access your bot's source code (for example, in the directory:
C:\Users\UserName\Documents\Composer\CoolBot ) and see the following file structure:

/coolbot
coolbot.dialog
/language-generation
/en-us
common.en-us.lg
coolbot.en-us.lg
/language-understanding
/en-us
coolbot.en-us.lu

When adding languages, Composer creates copies of the language files. For example, if you add Chinese ( zh-cn ),
your bot's file structure will look like the following:

/coolbot
coolbot.dialog
/language-generation
/en-us
common.en-us.lg
coolbot.en-us.lg
/zh-cn
common.zh-cn.lg
coolbot.zh-cn.lg
/language-understanding
/en-us
coolbot.en-us.lu
/zh-cn
coolbot.zh-cn.lu
NOTE
Both en-us and zh-cn are locales . A locale is a set of parameters that defines the user's language, region and any
special variant preferences that the user wants to see in their user interface.

After adding the languages, you can add manual translations with your source language files as reference. When
you are done with the translation process, you must set the locale in the Default language field. This tells your
bot which language it must use to talk to users. However, this locale setting will be overridden by the client's (for
example, Bot Framework Emulator) locale setting.
In the next sections, we will use a basic bot in English and walk through the steps to author bots in multiple
languages.

Build a basic bot


To show how multilingual support works in Composer, we build a simple bot in English for demo purposes. If you
already have a bot, you can skip to the update language settings section.
This bot, named Demo , consists of a main dialog called Demo , a prebuilt Greeting trigger, and an Intent
recognized (LUIS) trigger named Joke with the following trigger phrases:

- Hey bot, tell me a joke.
- Tell me something funny.
- I want to hear something interesting.
When you test the bot in the Emulator, you get the following responses:
Update language settings
The first step to author bots in other languages is to add languages. You can add as many languages as you need in
the Settings page.
1. In the Settings page, select Edit on the top toolbar. Then select Add language from the drop-down menu.

2. In the pop-up window, there are three settings that need to be updated:
a. The first setting is the language to copy resources from. You can leave this as English .
b. The second setting is the preferred bot authoring languages. You can select multiple languages. Let's
select Chinese (Simplified, China) . Hover your mouse over the selection and you'll see the locale .
c. The final setting is a check box. When checked, your selected language will be the active authoring
language. If you selected multiple languages in the previous setting, the first selected language becomes
the active authoring language. Let's check the box and select Done .
You'll see the language being added to the following language drop-down lists:
Bot language is the language you choose to author your bot.
Default language is the locale you set as your bot's runtime language. This language setting will be
overwritten by the client's locale setting.

You'll also see the locale changed from en-us to zh-cn in the Composer title bar.
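These choices are reflected in your bot's settings JSON. As a sketch (the field names below assume the default Composer settings schema; verify against your own settings file), the language configuration looks like:

```json
{
  "defaultLanguage": "zh-cn",
  "languages": ["en-us", "zh-cn"]
}
```

Here defaultLanguage corresponds to the Default language field and languages lists every authoring language you have added.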

Author your bot in the selected language


When you are done updating the language settings, you can start authoring your bot in your selected authoring
language.
1. Go to the Bot Responses page. Toggle Edit mode in the Bot Responses all-up view. Here you can add
manual translations of responses in your selected authoring language.
NOTE
Make sure you select all the dialogs and manually translate all responses.

2. Go to the User Input page. Select the dialog whose language you want to edit and toggle Edit mode to
add manual translations for user inputs in your selected authoring language.

NOTE
Make sure you select all the dialogs and add manual translations for all user input.

Test
After you finish translation, you need to go back to the Settings page and select your preferred language as your
bot's runtime language.
NOTE
Make sure your Emulator locale setting is consistent with your Default language setting in Composer. Alternatively, you can
leave your Emulator locale setting empty.

Testing in Emulator:
Capture your bot's telemetry
9/21/2020 • 3 minutes to read

Bot Framework Composer enables your bot applications to send event data to a telemetry service such as
Application Insights. Telemetry offers insights into your bot by showing which features are used the most, detecting
unwanted behavior, and offering visibility into availability, performance, and usage. In this article you will learn how to
implement telemetry in your bot using Application Insights.

Prerequisites
A subscription to Microsoft Azure.
A basic bot built using Composer.
Basic knowledge of Kusto queries.
How to use Log Analytics in the Azure portal to write Azure Monitor log queries.
The basic concepts of Log queries in Azure Monitor.

Create an Application Insights resource


Azure Application Insights displays data of your application in a Microsoft Azure resource. Creating a new resource
is part of setting up Application Insights to monitor an application. After creating your new resource, you can get its
instrumentation key and use it to configure settings in Composer. The instrumentation key links your telemetry to
the resource.

TIP
You can learn more about how to create an Application Insight resource and get the Instrumentation key by reading this
article.

Update settings in Composer


To connect to your Application Insights resource in Azure, you need to add the instrumentation key to the
applicationInsights section of the Bot Settings page. To do this:

1. Go to the Settings page.


2. Select Bot Settings .
3. Find the applicationInsights section, then add your Application Insights instrumentation key to the
instrumentationKey setting.
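For reference, the relevant section of the bot settings is a small JSON fragment like the following (the key value is a placeholder):

```json
{
  "applicationInsights": {
    "InstrumentationKey": "<your Application Insights instrumentation key>"
  }
}
```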
Analyze bot's behavior
After making these changes to include the instrumentation key, you can run and interact with your bot to generate
telemetry data. To see this telemetry data, navigate to the Logs section of your Application Insights resource in
Azure.
For example, you can run customEvents as a simple query, which shows all custom events. You can narrow
down the events or fields you want to see by providing different queries. See Analyze your bot's telemetry
data for additional information on creating custom queries.

By default, you can track a number of events, including bot messages sent or received, LUIS results, dialog events
(started / completed / cancelled), and QnA Maker events. Specifically for QnA Maker, you can filter down to events
named QnAMakerRecognizerResult, which include the original query, the top answers from the QnA Maker
knowledge base, the score, and more.
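For example, a Kusto query along the following lines narrows the customEvents table down to QnA Maker results (a sketch; the exact contents of customDimensions depend on what your bot logs):

```kusto
customEvents
| where name == "QnAMakerRecognizerResult"
| order by timestamp desc
| project timestamp, name, customDimensions
```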
Once you are gathering telemetry from your bot, you can also try using the Power BI template, which contains some
QnA tabs, to view your data. The template was built for use with the Virtual Assistant template; you can find
details of it here.

Additional information
In Composer, there are two additional settings in the app settings that you need to be aware of: logActivities and
logPersonalInformation. logActivities, which is set to true by default, determines whether your incoming and outgoing
activities are logged. logPersonalInformation, which is set to false by default, determines whether more sensitive
information is logged; some fields may be blank if you do not enable it.

Since the Composer 1.1.1 release, Composer features a new action for sending additional events to Application
Insights, alongside those that are captured automatically as described above. Wherever you want to track a
custom event, you can add the Emit a telemetry track event action, which can be found under the Debugging
Options menu. Once added to your authoring canvas, you specify a custom name for the event, which is the name
of the event that will appear in the customEvents table referenced above, along with optionally specifying one or
more additional properties to attach to the event.
Further reading
Analyze your bot's telemetry data.
Validation
9/21/2020 • 4 minutes to read

This article introduces the validation functionality provided in Bot Framework Composer. The validation
functionality helps you identify syntax errors and provides suggested fixes when you author .lg templates, .lu
templates, and expressions while developing a bot with Composer. With the help of the validation
functionality, your bot-authoring experience is improved and you can easily build a functional bot that can
"run".

NOTE
This article only covers the validation functionality implemented in Composer so far. More user scenarios will be added with
the progress of the project.

Prerequisites
Install Bot Framework Composer using Yarn.
A basic understanding of the Language Generation concepts and how to define LG templates.
A basic understanding of Language Understanding concepts.
A basic understanding of Adaptive expressions.

Error notifications
In Composer, there are a couple of error indicators when your bot has errors. Usually when you run a bot in
Composer, you should be able to select the Start Bot button (if starting the bot for the first time) or the Restart Bot button
on the upper right corner of the toolbar. However, sometimes you will see the Start Bot button (or the Restart Bot
button) grayed out and not clickable. This indicates the bot application has errors that must be fixed before the bot
can run.

The number with an error icon on the left side of the Start Bot (or Restart Bot ) button indicates the number of
errors. Selecting the error icon will navigate to the Notifications page, which lists all the errors and warnings this bot
application has.
NOTE
You can access the Notifications page by selecting Notifications on the Composer menu.

Errors in .lg templates and .lu templates show in both the Language Generation and Language
Understanding inline editors and in the Bot Responses and User Input pages.

.lg files
When you author an .lg template that has syntax errors, a red squiggly line will show under the error in the
Language Generation inline editor.

In the example .lg template above, abc is invalid. There are two things you can do to diagnose and fix the error:
1. Read the error message beneath the editor and select here to refer to the syntax documentation.
2. Hover your mouse over the erroneous part and read the detailed error message with suggested fixes.
NOTE
If you find the error message not helpful, you should read the .lg file format and use the correct syntax to compose
the language generation template.

Select Bot Responses on the Composer menu on the left side and toggle Edit Mode ; you will find the error is
also saved and updated in Bot Responses .

The tiny red rectangle on the right end of the editor helps you identify where the error is. This is especially
helpful when you have a long list of templates.
The error message at the bottom of the editor indicates the line numbers of the error. In this example, line3:0 -
line3:3 means the error is located in the third line of the editor, from the first character (indexed 0 ) to the fourth
character (indexed 3 ).
Hover your mouse over the erroneous part and you will see the detailed error message with suggested fixes.

In this example, the error message indicates a - is missing in the template. After you add the - sign in the .lg
template, you will see the error message disappear.

If you go back to the Language Generation inline editor, you will see the change is updated and the error disappears
as well.
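For reference, a minimal valid .lg template prefixes every response variation with a hyphen, which is exactly the - the error message in this example refers to (the template name and text below are illustrative):

```lg
# GreetUser
- Hello! Welcome.
- Hi there, good to see you.
```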

.lu files
When you create an Intent recognized trigger and your .lu file has syntax errors, a red squiggly line will show
under the error in the Language Understanding inline editor.
Similar to the Language Generation editor, there are two things you can do to diagnose and fix the error:
1. Expand the error message at the bottom of the Language Understanding inline editor to read more about
the error, including the error's line numbers and possible fixes. You can select here in the error message and refer to
the .lu file format syntax documentation.
2. Hover your mouse over the erroneous part and read the detailed error message with suggested fixes.

NOTE
If you find the error message not helpful, you should read the .lu file format and use the correct syntax to compose
the language understanding template.
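For reference, a minimal valid .lu intent definition gives the intent name after # and each example utterance after a leading hyphen (the intent name and utterances below are illustrative):

```lu
# Joke
- Tell me a joke.
- Tell me something funny.
```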

Expressions
When you fill in property fields with invalid expressions, the entire form in the Properties panel will be outlined in
red, with error messages under it.
Selecting the double arrow icon on the upper right corner of the message will expand the error message.

To diagnose and fix the error, read the error message and select here to refer to the syntax documentation. In this
example, the error message indicates that there is a mismatch of the operator = . The correct operator is !=
if it indicates not equal and == if it indicates equal. Read more about the Adaptive expressions syntax here.
After you fix the error, the form in the Properties panel will turn from red to blue. This indicates that the
expression entered in this field is syntactically correct.
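To summarize the operator distinction from this example ( user.age is a hypothetical property used only for illustration):

```
user.age = 18    // invalid in a condition: = is not a comparison operator
user.age == 18   // valid: true when user.age equals 18
user.age != 18   // valid: true when user.age does not equal 18
```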
Publish a bot
9/21/2020 • 5 minutes to read

In this article we will show you how to publish your bot to an Azure Web App or Azure Functions. Bot Framework
Composer includes instructions and scripts to help make this process easier. Follow the steps in this article to
complete the publish process, or refer to the README file in your bot's project folder, for example, under this
directory: C:\Users\UserName\Documents\Composer\BotName . Note that the processes to publish your bot to
an Azure Web App and to Azure Functions are slightly different.

Prerequisites
A subscription to Microsoft Azure.
Node.js. Use version 12.13.0 or later.
A basic bot built using Composer.
Azure CLI.

Create Azure resources


The first step to publish your bot is creating all the necessary Azure resources. If you already have your Azure
resources provisioned, you can skip to the deploy bot to Azure section.
1. Open a new Command Prompt. Navigate to the scripts folder of your bot's project folder. For example:

cd C:\Users\UserName\Documents\Composer\BotName\scripts

2. Run the following command:

npm install

3. Then run the following command to provision your Azure resources.


If you publish your bot to Azure Web App, run the following command:

node provisionComposer.js --subscriptionId=<YOUR AZURE SUBSCRIPTION ID> --name=<NAME OF YOUR RESOURCE GROUP> --appPassword=<APP PASSWORD> --environment=<NAME FOR ENVIRONMENT DEFAULT to dev>

If you publish your bot to Azure Functions, run the following command:

node provisionComposer.js --subscriptionId=<YOUR AZURE SUBSCRIPTION ID> --name=<NAME OF YOUR RESOURCE GROUP> --appPassword=<APP PASSWORD> --environment=<NAME FOR ENVIRONMENT DEFAULT to dev> --customArmTemplate=DeploymentTemplates/function-template-with-preexisting-rg.json

Your Azure Subscription ID: Find it in your Azure resource in the Subscription ID field.
Name of your resource group: The name you give to the resource group you are creating.
App password: At least 16 characters with at least one number, one letter, and one special character.
Name for environment: The name you give to the publish environment.

4. You will be asked to login to the Azure portal in your browser.

Note that if you see an error message "InsufficientQuota", you need to add the parameter
--createLuisAuthoringResource false and run the script again. For example:
For Azure Web App:

node provisionComposer.js --subscriptionId=<YOUR AZURE SUBSCRIPTION ID> --name=<NAME OF YOUR RESOURCE GROUP> --appPassword=<APP PASSWORD> --environment=<NAME FOR ENVIRONMENT DEFAULT to dev> --createLuisAuthoringResource false

For Azure Functions:

node provisionComposer.js --subscriptionId=<YOUR AZURE SUBSCRIPTION ID> --name=<NAME OF YOUR RESOURCE GROUP> --appPassword=<APP PASSWORD> --environment=<NAME FOR ENVIRONMENT DEFAULT to dev> --createLuisAuthoringResource false --customArmTemplate=DeploymentTemplates/function-template-with-preexisting-rg.json

NOTE
If you use --createLuisAuthoringResource false in this step, you will need to manually add the LUIS authoring
key to the publish configuration in the deploy to new Azure resources section, otherwise, the bot will not work. The
default region is westus . If you want to provision to other regions, you can add --location region .

After running the last command, you will see the following. The process will take a few minutes.

5. After the previous step is completed, you will see generated JSON output in the command line.
{
  "accessToken": "<SOME VALUE>",
  "name": "<NAME OF YOUR RESOURCE GROUP>",
  "environment": "<ENVIRONMENT>",
  "hostname": "<NAME OF THE HOST>",
  "luisResource": "<NAME OF YOUR LUIS RESOURCE>",
  "settings": {
    "applicationInsights": {
      "InstrumentationKey": "<SOME VALUE>"
    },
    "cosmosDb": {
      "cosmosDBEndpoint": "<SOME VALUE>",
      "authKey": "<SOME VALUE>",
      "databaseId": "botstate-db",
      "collectionId": "botstate-collection",
      "containerId": "botstate-container"
    },
    "blobStorage": {
      "connectionString": "<SOME VALUE>",
      "container": "transcripts"
    },
    "luis": {
      "endpointKey": "<SOME VALUE>",
      "authoringKey": "<SOME VALUE>",
      "region": "westus"
    },
    "qna": {
      "endpoint": "<SOME VALUE>",
      "subscriptionKey": "<SOME VALUE>"
    },
    "MicrosoftAppId": "<SOME VALUE>",
    "MicrosoftAppPassword": "<SOME VALUE>"
  }
}

Deploy bot to Azure


Now that you have completed provisioning the Azure resources, let's deploy your bot to Azure.
Select your publish destination
1. In Composer, navigate to Publish page from the menu and select Add new profile .
2. Enter a name for your publish profile and select where you would like to publish your bot: Publish bot to
Azure Web App (Preview) , or Publish bot to Azure Functions (Preview) .
In the next step, choose the option that applies based on your Azure resources:
Deploy to new Azure resources.
If you use the provisioning scripts to create new Azure resources as instructed in the create Azure resources
section, you should select this option.
Deploy to existing Azure resources.
If you are NOT using the provisioning script and you are manually creating your own resources in Azure
portal, you should select this option.
Deploy to new Azure resources
This option applies to if you use the provisioning scripts to create new Azure resources as instructed in the create
Azure resources section.
You only need to add the generated JSON from the create Azure resources step to the Publish Configuration
field and select Save .

NOTE
If you use --createLuisAuthoringResource false in the 4th step of the create Azure resources section, you will need to
manually add the LUIS authoring key to the publish configuration, otherwise, the bot will not work. Also, the default region is
westus . If you want to provision to other regions, you can add --location region .

After this step, you can move on to the publish section.


Deploy to existing Azure resources
This option applies if you are NOT using the provisioning script and you are manually creating your own
resources in the Azure portal.
You will need to edit the JSON file. There are two optional settings in the publish profile:
hostname - the hostname of your Azure Web App or Azure Function, if it does not match the form
<name>-<env> ;

luisResource - the hostname of your LUIS endpoint resource, if not in the form <name>-<env>-<luis> .
You can EXCLUDE cosmosDb, applicationInsights, blobStorage, or luis if you don't want those features
enabled or used. This is the primary way you can opt in to or out of those features in the runtime.
Examples :
Deploy your bot without configuring any other services:

{
"accessToken": "<your access token>",
"hostname": "<your web app name>",
"settings": {
"MicrosoftAppId": "<the appid of your bot channel registration>",
"MicrosoftAppPassword": "<the app password of your bot channel registration>"
}
}

If you have LUIS configured in your Composer bot, you should use this:
{
"accessToken": "<your access token>",
"hostname": "<your web app name>",
"luisResource": "<your luis service name>",
"settings": {
"luis": {
"endpointKey": "<your luis endpointKey>",
"authoringKey": "<your luis authoringKey>",
"region": "<your luis region, for example westus>"
},
"MicrosoftAppId": "<the appid of your bot channel registration>",
"MicrosoftAppPassword": "<the app password of your bot channel registration>"
}
}

NOTE
You should author and publish in the same region. Read more in the Authoring and publishing regions and
associated keys article.

After this step, you can move on to the publish section.


Publish
1. Select the file you want to publish from the navigation pane.

2. Select Publish to selected profile from Composer toolbar. In the pop-up window select Okay .

You will see the publishing page like this:


Test
You can open your Azure portal and test your newly published bot in Test in Web Chat .

Additional information
When publishing, if you encounter an error about your access token being expired, you can follow these steps to get
a new token:
Open a terminal window.
Run az account get-access-token .
This will result in a JSON object printed to the console, containing a new accessToken field.
Copy the value of the accessToken from the terminal into the accessToken field of the publish profile in
Composer.
A glossary of concepts and terms used in Composer
9/21/2020 • 8 minutes to read

A|B|C|D|E|F|G|H|I|J|K|L|M|
N|O|P|Q|R|S|T|U|V|W|X|Y|Z

A
Action
Actions are the main component of a trigger; they are what enable your bot to take action, whether in response to
user input or any other event that may occur. Actions are very powerful: with them you can formulate and send a
response, create properties and assign them values, manipulate the conversational flow and dialog management,
and perform many other activities.
Additional Information:
See action in the dialog concept article.
Adaptive dialogs
Adaptive dialogs are a new way to model conversations that takes the best of waterfall dialogs and prompts in the
dialogs library. Adaptive dialogs are event-based. Using adaptive dialogs simplifies sophisticated conversation
modeling primitives, like building a dialog dispatcher, and provides the ability to handle interruptions elegantly. Adaptive dialogs
derive from dialogs and interact with the rest of the Bot Framework SDK dialog system.
Additional Information:
See adaptive dialogs.
Adaptive expressions
Adaptive expressions are a new expressions language used with the Bot Framework SDK and other conversational
AI components, like Bot Framework Composer, Language Generation, Adaptive dialogs, and Adaptive Cards.
Additional Information:
See adaptive expressions
Authoring canvas
A section of the Design page where users design and author their bot.

B
Bot Responses
An option in the Composer Menu. It navigates users to the Bot Responses page, where the Language Generation
(LG) editor is located. From there users can view all the LG templates and edit them.

C
Child dialog
Every dialog that you create in Composer will be a child dialog. Dialogs can be nested multiple levels deep with the
main dialog being the root of all dialogs in Composer. Each child dialog must have a parent dialog and parent
dialogs can have zero or more child dialogs, but a child dialog can and must have only one parent dialog.
Additional Information:
The dialog concept article.
Learn to create a child dialog in the Tutorial: Adding dialogs to your bot

D
Design
An option in the Composer Menu. It navigates users to the Design page where users design and develop their bots.
Dialog
Dialogs are the basic building blocks in Composer. Each dialog represents a portion of the bot's functionality that
contains instructions for what the bot will do and how it will react to user input. Dialogs are composed of
Recognizers that help understand and extract meaningful pieces of information from user's input, a language
generator that helps generate responses to the user, triggers that enable your bot to catch and respond to events
and actions that help you put together the flow of conversation that will occur when a specific event is captured
via a Trigger. There are two types of dialogs in Composer: main dialog and child dialog.
Additional Information:
The dialog concept article.

E
Emulator
The Bot Framework Emulator is a desktop application that allows bot developers to test and debug their bots, either
locally or remotely. Using the Emulator, you can chat with your bot and inspect the messages it sends and receives.
The Emulator displays messages as they would appear in a web chat UI and logs JSON requests and responses as
you exchange messages with your bot. Before deploying your bot, you can run it locally and test it using the
Emulator.
Additional Information:
The latest release of the Bot Framework Emulator
Entity
An entity contains the important details of the user's intent. It can be anything, a location, date, time, cuisine type,
etc. An intent may have no entities, or it may have multiple entities, each providing additional details to help
understand the needs of the user.
Additional Information:
See entities in the Language Understanding concepts article.
Examples
A section in the Composer Home page listing all the example bots.
Additional Information:
Read more about how to use samples.
Export
The "export" activity will generate a copy of your bot's runtime so that it can be used for other purposes such as
adding custom actions and debugging in Visual Studio.

F
G
H
Home
An option in Menu and the start page of Composer.

I
Intent
An intent is the task that the user wants to accomplish or the problem they want to solve. Intent recognition in
Composer is its ability to determine what the user is requesting. This is accomplished by the recognizer using either
Regular Expressions or LUIS. When an intent is detected from the user's input, an event is emitted which can be
handled using the Intent recognized trigger. If the intent is not recognized by any recognizers, another event is
emitted which can be handled using the Unknown intent trigger.
Additional Information:
See intents in the Language Understanding concepts article.

J
K
L
Language Generation
Language Generation (LG) is the process of producing meaningful phrases and sentences in the form of natural
language. Language generation enables your bot to respond to a user with human-readable language.
Additional Information:
The Language Generation concept article.
LG editor
A section of the Bot Responses page. It is the language generation editor where users can view and edit all the
Language generation templates.
Language Understanding
Language Understanding (LU) deals with how the bot handles users input and converts them into something that it
can understand and respond to intelligently. It involves the use of either a LUIS or Regular Expression recognizer
along with utterances, intents and entities.
Additional Information:
The Language Understanding concept article.
LU editor
A section of the User Input page. It is the language understanding editor where users can view and edit all the
Language understanding templates.
LUIS
A recognizer type in Composer that enables you to extract intent and entities based on LUIS service.
Additional Information:
See how to use LUIS for language understanding in Composer.

M
Main dialog
The main dialog is the foundation of every bot created in Composer. There is only one main dialog and all other
dialogs are children of it. It gets initialized every time your bot runs and is the entry point into the bot.
Memory
A bot uses memory to store property values, in the same way that programming and scripting languages such as
C# and JavaScript do. A bot's memory management is contained within the following scopes: user, conversation,
dialog, and turn.
Additional Information:
See the conversation flow and memory concept article.
Menu
A list of options provided on the left side of the Composer screen from which a user can choose.

N
Navigation pane
A section of the Composer screen. It enables users to navigate to different parts of Composer.
Notifications
An option in the Composer Menu . It navigates users to the Notifications page that lists all the errors and
warnings of the current bot application.
Additional Information:
See the validation article.

O
P
Parent dialog
A parent dialog is any dialog that has one or more child dialogs, and any dialog can have zero or more child dialogs
associated with it. A parent dialog can also be a child of another dialog.
Prompt
Prompts refer to bots asking questions to users to collect information of a variety of data types (e.g. text, numbers).
Additional information:
Read more about prompts.
Property
A property is a distinct value identified by a specific address. An address is composed of two parts, the scope and
the name: scope.name. Some examples of typical properties in Composer include: user.name, turn.activity,
dialog.index, user.profile.age.
Additional information:
Read more about property in the memory concept article.
Properties pane
A section of the Design page where users can edit properties.

Q
QnA maker
A cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over
your data.
Additional information:
See the What is the QnA Maker service article.

R
Recognizer
A recognizer enables your bot to understand and extract meaningful pieces of information from a user's input. There
are currently two types of recognizers in Composer: LUIS and Regular Expression; both emit events which are
handled by triggers.
Regular Expression
A Regular Expression (regex) is a sequence of characters that define a search pattern. Regex provides a powerful,
flexible, and efficient method for processing text. The extensive pattern-matching notation of regex enables your
bot to quickly parse large amounts of text to find specific character patterns that can be used to determine user
intents, validate text to ensure that it matches a predefined pattern (such as an email address or zip codes), or
extract entities from utterances.
Root dialog
See main dialog.

S
Scope
When a property is in scope, it is visible to your bot. See memory concept article to know more about the different
scopes of memory.
Settings
An option in the Composer Menu . It navigates users to the Settings page where users manage settings for their
bot and Composer.

T
Title bar
A horizontal bar at the top of the Composer screen, bearing the name of the product and the name of current bot
project.
Toolbar
A horizontal bar under Title bar in the Composer screen. It is a strip of icons used to perform certain actions to
manipulate dialogs, triggers, and actions.
Trigger
Triggers are the main component of a dialog, they are how you catch and respond to events. Each trigger has a
condition and a collection of actions to execute when the condition is met.
Additional information:
See events and triggers concept article.
See how to define triggers article.

U
User Input
An option in the Composer Menu. It navigates users to the User Input page, where the Language Understanding
editor is located. From there users can view all the Language Understanding templates and edit them.
Utterance
An utterance can be thought of as a continuous fragment of speech that begins and ends with a clear pause.
Composer's language processing examines a user's utterance to determine the intent and extract any entities it may
contain.
Additional Information:
See utterances in the Language Understanding concepts article.

V
W
X
Y
Z
