Bot Framework Composer Documentation
Bot Framework Composer is an open-source visual authoring canvas for developers and multidisciplinary teams to
build bots. Composer integrates language understanding services such as LUIS and QnA Maker and allows
sophisticated composition of bot replies using Language Generation. Composer is available as a desktop
application as well as a web-based component.
Built with the latest features of the Bot Framework SDK, Composer provides everything you need to build a
sophisticated conversational experience:
A visual editing canvas for conversation flows
Tools to author and manage language understanding (NLU) and QnA components
Powerful language generation and templating system
A ready-to-use bot runtime executable
Additional resources
Bot Framework SDK
Adaptive dialog
Language generation
Adaptive expressions
Next steps
Read best practices for building bots using Composer.
Learn how to create an echo bot using Composer.
What's new September 2020
9/21/2020 • 2 minutes to read
Bot Framework Composer, a visual authoring tool for building Conversational AI applications, has seen strong
uptake from customers and positive feedback since entering general availability at Microsoft BUILD 2020. We
continue to invest in ensuring Composer provides the best possible experience for our customers.
Welcome to the September 2020 release of Bot Framework Composer. This article summarizes key new features
and improvements in Bot Framework Composer 1.1.1 stable release. There are a number of updates in this version
that we hope you will like. Some of the key highlights include:
QnA Maker knowledge base creation
Integrated QnA Maker knowledge base creation and management in addition to the existing LUIS
integration for language understanding. This reduces the need for a customer to leave the context of the
Composer environment.
Multilingual authoring capabilities
Internationalization of the product, broadening its accessibility, as well as introducing multilingual
capabilities for bots built with Composer. This allows our customers to reach a broader audience with their own bots.
JavaScript runtime in preview
A continued focus on the fundamentals of the application, with improved performance, enhancements to the
overall authoring experience, and broader inclusion for our user base with a preview of the Composer
runtime in JavaScript, in addition to the existing C# runtime. This enables customers to export the runtime
and use it for other purposes such as adding custom actions.
Skills manifest generation
An improved experience for generating a Bot Framework skills manifest by adding trigger and dialog selections
in the forms. This enables our customers to select the triggers and dialogs they want to include in the manifest
and add corresponding activity types to the manifest's activities property.
Deeper integration with Azure platform
Deeper integration with the Azure platform for publishing applications built with Composer, along with
management of related services.
Additional integration with Power Virtual Agents
Additional integration with Power Virtual Agents, part of the Power Platform, including improved capabilities
to extend PVA solutions by building Bot Framework skills.
Other improvements
Improved language generation editing performance
Support for UI schema fly-out menu and form
IntelliSense server for Composer text editor
Recoil refactor of state management
Insiders : Want to try new features as soon as possible? You can download the nightly Insiders build and try the
latest updates as soon as they are available!
Additional information
Read more in Composer 1.1.1 release notes here.
Install Bot Framework Composer
9/21/2020 • 2 minutes to read
You can choose to download and use Bot Framework Composer as an installable desktop application: Windows |
macOS | Linux. Make sure you install the Bot Framework Emulator and .NET Core SDK 3.1 or above. Alternatively,
you can build Composer from source.
2. After cloning the repository, navigate to the Bot Framework Composer folder. For example:
cd C:\Users\UserName\Documents\GitHub\BotFramework-Composer
3. Then run the following commands to navigate to the Composer folder and get all required packages:
cd Composer
yarn
4. Next, run the following command to build the Composer application. This command can take several
minutes to finish:
yarn build
NOTE
If you are having trouble installing or building Composer, run yarn tableflip . This removes all of the Composer
application's dependencies (node_modules) and then reinstalls and rebuilds them. Once it
completes, run yarn install and yarn build again. This process generally takes 5-10 minutes.
5. Again using Yarn, start the Composer authoring application and the bot runtime:
yarn startall
6. Once you see Composer now running at: appear in your terminal, you can run Composer in your
browser using the address http://localhost:3000.
Keep the terminal open as long as you plan to work with Composer. If you close it, Composer will stop running.
The next time you need to run Composer, all you need to do is run yarn startall from the Composer
directory.
Next steps
Create an echo bot using Composer.
Tour of Composer
9/21/2020 • 2 minutes to read
Bot Framework Composer provides Onboarding functionality to help you get familiar with the bot creation process.
This functionality consists of a product tour that includes five sections with each section containing one or more
tips.
Prerequisites
Install Composer
Create an Echo bot
6. You can navigate backwards or forward through the tips of a section by using the Previous or Next
buttons.
You can exit the tour at any time by selecting anywhere outside the overview views. If you do, you will
see a popup window asking if you would like to Leave Product Tour . If you select Yes , your onboarding
process will end. If you select Cancel , your onboarding process continues.
7. Once you complete a section, select done and you will return to the Onboarding Welcome! screen
where you can continue to the next section of the tour.
Sections that only contain a single tip will not have the Previous , Next or Done buttons but instead you can
select the Got it! button to move to the next section. Once you complete a section, you cannot go back to it
without restarting the onboarding tour.
8. Once you complete the tour, select Done! . The Onboarding switch in your settings will automatically be set
to Disabled .
You can restart the onboarding tour anytime by repeating these steps.
Next steps
Learn how to build a weather bot.
Create your first bot
9/21/2020 • 2 minutes to read
In this quickstart you will learn how to create a bot using the Echo Bot template in Composer and test it in the
Bot Framework Emulator.
Prerequisites
Download and use Bot Framework Composer as an installable desktop application: Windows | macOS | Linux.
Make sure you install the Bot Framework Emulator and .NET Core SDK 3.1 or above.
Or build Composer from source.
2. Enter a Name and Description for your bot. Choose where you want to save the bot or keep the default
location and click Next .
3. You will now see your bot's main dialog.
4. Test your bot by clicking Start Bot in the top right. You will then see the Test in Emulator button show
up. Click Test in Emulator .
5. Type anything in the Emulator to have the bot echo back your response.
Welcome to the Bot Framework Composer tutorials. These start with the creation of a simple bot, with each
successive tutorial building on the previous one to add capabilities that teach some of the
basic concepts required to build bots with the Bot Framework Composer.
In these tutorials, you will build a weather bot using Composer, starting with a simple bot and gradually
introducing more sophistication. You'll learn how to:
Create a simple bot and test it in the Emulator
Add multiple dialogs to help your bot fulfill more than one scenario
Use prompts to ask questions and get responses from an HTTP request
Handle interruptions in the conversation flow in order to add global help and the ability to cancel at any time
Use Language Generation to power your bot's responses
Send responses with cards
Use LUIS in your bot
Prerequisites
A good understanding of the material covered in the Introduction to Bot Framework Composer, including the
naming conventions used for elements in the Composer.
Next step
Tutorial: Create a new bot and test it in the Emulator
Tutorial: Create a new bot and test it in the Emulator
9/21/2020 • 3 minutes to read
This tutorial walks you through creating a basic bot with the Bot Framework Composer and testing it in the
Emulator.
In this tutorial, you learn how to:
Create a basic bot using Bot Framework Composer
Run your bot locally and test it using the Bot Framework Emulator
Prerequisites
Bot Framework Composer
Bot Framework Emulator
2. In the Create from scratch? screen, you'll be presented with different options to create your bot. For this
tutorial, select Create from Scratch , then Next .
3. In the Define conversation objective form:
a. Enter the name WeatherBot in the Name field.
b. Enter A friendly bot who can talk about the weather in the Description field.
c. Select the location to save your bot.
d. Save your changes and create your new bot by selecting Next .
TIP
Spaces and special characters are not allowed in the bot's name.
After creating your bot, Composer will load the new bot's main dialog in the editor. It should look like this:
NOTE
Each dialog contains one or more triggers that define the actions available to the bot while the dialog is active.
When you create a new bot, an Activities trigger of type Greeting (ConversationUpdate activity) is
automatically provisioned. Triggers help your dialog capture events of interest and respond to them using actions.
TIP
To help keep bots created in Composer organized, you can rename any trigger to something that better describes
what it does.
TIP
Steps 4-8 are demonstrated in the image immediately following step 8.
5. In the Properties panel on the right side of the screen, select the trigger name and type
WelcomeTheUser .
6. Next you will start adding functionality to your bot by adding Actions to the WelcomeTheUser trigger.
You do this by selecting the plus (+) icon in the Authoring canvas , then select Send a response from the
list of actions.
TIP
Selecting the plus (+) icon in the Authoring canvas is used to add Actions to the conversation flow. You can use
this to add actions to the end of a flow, or insert new actions between existing actions.
Soon the Emulator will appear, and the bot should immediately greet you with the message you just
configured:
You now have a working bot, and you're ready to add some more substantial functionality!
Next steps
Tutorial: Adding dialogs to your bot
Tutorial: Adding dialogs to your bot
9/21/2020 • 3 minutes to read
In the previous tutorial you learned how to create a new bot using Bot Framework Composer. In this tutorial you
will learn how to add additional dialogs to your bot and test them using Bot Framework Emulator.
It can be useful to group functionality into different dialogs when building the features of your bot with
Composer. This helps keep the dialogs organized and allows sub-dialogs to be combined into larger and more
complex dialogs.
A dialog contains one or more triggers. Each trigger consists of one or more actions which are the set of
instructions that the bot will execute. Dialogs can also call other dialogs and can pass values back and forth
between them.
In this tutorial, you learn how to:
Build on the basic bot created in the previous tutorial by adding an additional dialog.
Run your bot locally and test it using the Bot Framework Emulator.
Prerequisites
Completion of the tutorial Create a new bot and test it in the Emulator
Knowledge about dialogs in Composer
TIP
Create all of your bot components and make sure they work together before creating detailed functionality.
2. Fill in the Name field with getWeather and the Description field with Get the current weather
conditions , then select Next .
3. Composer will create the new dialog with a pre-configured BeginDialog trigger.
For now, we'll just add a simple message to get things hooked up, then come back to flesh out the feature in
a later tutorial.
4. In the BeginDialog trigger, select the plus (+) icon in the Authoring canvas then select the Send a
response action.
5. Once the new action is created, enter the following text into the Properties panel:
Let's check the weather
You'll have a flow that looks like this:
Each dialog can have its own recognizer, a component that lets the bot examine an incoming message
and decide what it means by choosing between a set of predefined intents. Different types of
recognizers use different techniques to determine which intent, if any, to choose.
5. Select Intent recognized from the What is the type of this trigger? drop-down list. Enter weather for
both the What is the name of this trigger (RegEx) and the Please input regex pattern fields.
NOTE
This tells the bot to look for the word "weather" anywhere in an incoming message. Regular expression patterns are
generally much more complicated, but this is adequate for the purposes of this example.
6. Next, create a new action for the Intent recognized trigger you just created. You can do this by selecting
the + sign under the trigger node in the Authoring canvas , then select Begin a new dialog from the
Dialog management menu.
7. In the Properties panel for the new Begin a new dialog action, select getWeather from the dialog
name drop-down list.
Now when a user enters weather , your bot will respond by activating the getWeather dialog.
In the next tutorial you will learn how to prompt the user for additional information, then query a weather service
and return the results to the user. But first you need to validate that the functionality developed so far works
correctly. You will do this using the Emulator.
2. Send the bot a message that says weather . The bot should respond with your test message, confirming
that your intent was recognized as expected, and the fulfillment action was triggered.
Next steps
Tutorial: Creating the weather bot - Adding actions to your dialog
Tutorial: Adding actions to your dialog
9/21/2020 • 6 minutes to read
In this tutorial you will learn how to add actions to your dialog in Composer. You will prompt the user for their zip
code, and then the bot will respond with the weather forecast for the specified location based on a query to an
external service.
In this tutorial, you learn how to:
Add actions in your trigger to prompt the user for information
Create properties with default values
Save data into properties for later use
Retrieve data from properties and use it to accomplish tasks
Make calls to external services
Prerequisites
Completion of the tutorial Adding dialogs to your bot.
Knowledge about dialogs in Composer, specifically actions.
Knowledge about conversation flow and memory.
4. Select the User Input tab in the Properties panel. This part of the prompt represents the user's response,
including where to store the value and how to pre-process it. Enter user.zipcode in the Property field.
5. Next, in the User Input tab, select expression and then enter =trim(this.value) in the Output Format
field. trim() is a prebuilt function in adaptive expressions. This function trims all leading and trailing spaces
in the user's input before the value is validated and assigned to the property defined in the Property field
user.zipcode .
6. Select the Other tab in the Properties panel. This is where you can specify your validation rules for the
prompt, as well as any error messages that will be displayed to the user if they enter an invalid value based
on the Validation Rules you create.
7. In the Unrecognized Prompt field, enter:
Sorry, I do not understand '${this.value}'. Please specify a zip code in the form 12345
8. In the Validation Rules field, enter:
length(this.value) == 5
This validation rule states that the user input must be 5 characters long. If the user input is shorter or longer
than 5 characters your bot will send an error message.
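As an illustrative sketch (this is plain JavaScript, not Composer's actual adaptive expression engine), the =trim(this.value) output format from the earlier step and this validation rule behave roughly as follows:

```javascript
// Hypothetical sketch of the prompt's input processing: trim the raw
// input, then accept it only if the trimmed value is exactly 5 characters.
function validateZipInput(rawInput) {
  const value = rawInput.trim();       // mirrors the =trim(this.value) output format
  const isValid = value.length === 5;  // mirrors the length(this.value) == 5 rule
  return { value, isValid };
}
```

For example, "  98052 " is trimmed to "98052" and accepted, while "1234" fails the length check and would trigger the Unrecognized Prompt message.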
IMPORTANT
Make sure to press the enter key after entering the validation rule. If you don't press enter the rule will not be added.
NOTE
By default, prompts are configured to ask the user for information up to Max turn count times (defaults to 3).
When the max turn count is reached, the prompt will stop and the property will be set to the value defined in the
Default value field before moving forward with the conversation.
You have created an action in your BeginDialog trigger that will prompt the user for their zip code and placed it
into the user.zipcode property. Next you will pass the value of that property in an HTTP request to a weather
service, validate the response, and if it passes your validation display the weather report to the user.
3. Next, still in the Properties panel, enter the following in the Result property field:
dialog.api_response
Result property represents the property where the result of this action will be stored. The result can
include any of the following four properties from the HTTP response:
statusCode. This can be accessed via dialog.api_response.statusCode .
reasonPhrase. This can be accessed via dialog.api_response.reasonPhrase .
content. This can be accessed via dialog.api_response.content .
headers. This can be accessed via dialog.api_response.headers .
If the Response type is Json, it will be a deserialized object available via dialog.api_response.content
property.
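As a sketch with hypothetical sample values (not an actual response from any weather service), the stored result can be pictured as a plain object whose four properties are read via the memory paths listed above:

```javascript
// Hypothetical example of what dialog.api_response might hold after a
// successful Send HTTP Request with Response type set to Json.
const dialog = {
  api_response: {
    statusCode: 200,
    reasonPhrase: "OK",
    content: { weather: "Clouds", temp: 72 }, // deserialized Json body
    headers: { "content-type": "application/json" },
  },
};

// Each property is read exactly as the memory path suggests:
const status = dialog.api_response.statusCode; // 200
const body = dialog.api_response.content;      // the deserialized object
```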
4. After making an HTTP request, you need to test the status of the response and handle errors as they occur.
You can use an If/Else branch for this purpose. To do this, select + that appears beneath the Send HTTP
Request action you just created, then select Branch: if/else from the Create a condition menu.
5. In the Proper ties panel on the right, enter the following value into the Condition field:
dialog.api_response.statusCode == 200
6. In the True branch select the + button then select Set a Property from the Manage properties menu.
7. In the Properties panel on the right, enter dialog.weather into the Property field.
8. Next, enter =dialog.api_response.content into the Value field.
9. While still in the True branch, select the + button that appears beneath the action created in the previous
step, then select Send a response .
10. In the Properties panel on the right, enter the following response to send:
- The weather is ${dialog.weather.weather} and the temp is ${dialog.weather.temp}°
You will now tell the bot what to do in the event that the statusCode returned is not 200.
11. Select the + button in the False branch, then select Send a response and set the text of the message to:
I got an error: ${dialog.api_response.content.message}
12. For the purposes of this tutorial we will assume that if you are in this branch, it is because the zip code is
invalid. If it is invalid, it should be removed so that the invalid value does not persist in the
user.zipcode property. To remove the invalid value from this property, select the + button that follows the
Send a response action you created in the previous step, then select Delete a property from the
Manage properties menu.
13. In the Properties panel on the right, enter user.zipcode into the Property field.
The flow should appear in the Authoring canvas as follows:
You have now completed adding an HTTP request to your BeginDialog trigger. The next step is to validate that
these additions to your bot work correctly. To do that you can test it in the Emulator.
2. After the greeting, send weather to the bot. The bot will prompt you for a zip code. Give it your home zip
code, and seconds later, you should see the current weather conditions.
Next steps
Tutorial: Adding Help and Cancel functionality to your bot
Tutorial: Adding Help and Cancel functionality to
your bot
9/21/2020 • 4 minutes to read
In the last tutorial you learned how to add actions to a trigger. In this tutorial you will learn how to handle
interruptions to conversation flow. In Composer you can add help topics to your bot and let users exit out of any
process at any time.
In this tutorial, you learn how to:
Create help topics that can be accessed from anywhere in the flow at any time.
Interrupt your bot's flow to enable your users to exit out of any process before it is completed.
Prerequisites
Completion of the tutorial Adding actions to your dialog.
Composer will create the new help dialog with one BeginDialog trigger pre-configured.
3. Select the BeginDialog trigger in the Navigation pane.
4. Create a new action at the bottom of the flow by selecting the plus + icon in the Authoring canvas , then
select Send a response from the list of actions.
5. Enter the following text into the Properties panel on the right side of the Composer screen:
I am a weather bot! I can tell you the current weather conditions. Just say WEATHER.
Create an Intent Recognized trigger
1. Select WeatherBot (the main dialog) from the dialog navigation pane.
2. In the Properties panel on the right side, select Regular Expression from the Recognizer Type drop-down list.
5. Next select + in the Authoring Canvas to create a new action, then select Begin a new dialog from the
Dialog management menu.
6. Next you need to specify the dialog to call when the help intent is recognized. You do this by selecting help
from the Dialog name drop-down list in the Properties panel.
Now, in addition to giving you the current weather, your bot should also offer help. You can verify this
using the Emulator.
7. Select Restart Bot and open it in the Emulator to verify you are able to call your new help dialog.
Notice that once you start the weather dialog by saying weather your bot doesn't know how to provide help since
it is still trying to resolve the zip code. This is why you need to configure your bot to allow interruptions to the
dialog flow.
Allowing interruptions
The getWeather dialog handles getting the weather forecast, so you will need to configure its flow to enable it to
handle interruptions, which will enable the new help functionality to work. The following steps demonstrate how
to do this.
1. Select the BeginDialog trigger in the getWeather dialog.
2. Select the Bot Asks (Text) action in the Authoring canvas .
3. Select the Other tab in the Properties panel. Set the Allow interruptions field to true .
This tells Bot Framework to consult the parent dialog's recognizer, which will allow the bot to respond to
help at the prompt as well.
4. Select Restart Bot and open it in the Emulator to verify you are able to call your new help dialog.
5. Say weather to your bot. It will ask for a zip code.
6. Now say help . It will now provide the global help response, even though that intent and trigger are defined
in another dialog.
You have learned how to interrupt a flow to add help functionality to your bot. Next you will learn how to add a
global cancel command that lets users exit out of a flow without completing it.
Global cancel
1. Follow the steps described in the create a new dialog section above to create a new dialog named cancel
and add a Send a response action with the response of Canceling! .
2. Add another action by selecting + at the bottom of the flow in the Authoring canvas then select Cancel
all dialogs from the Dialog management menu. When Cancel all dialogs is triggered, the bot will
cancel all active dialogs, and send the user back to the main dialog.
Next you will add a cancel intent, the same way you added the help intent in the previous section.
3. Follow steps 1 to 5 described in the create an intent recognized trigger section above to create a cancel
intent in the main dialog (WeatherBot ) and add a Begin a new dialog action in the cancel trigger. You
also need to specify the dialog to call when the cancel intent is recognized. You do this by selecting cancel
from the Dialog name drop-down list in the Properties panel.
Now, your users will be able to cancel out of the weather dialog at any point in the flow. You can verify this
using the Emulator.
4. Select Restart Bot and open it in the Emulator to verify you are able to cancel.
5. Say weather to your bot. The bot will ask for a zip code.
6. Now say help . The bot will provide the global help response.
7. Now, say cancel . Notice that the bot doesn't resume the weather dialog but instead, it confirms that you
want to cancel, then waits for your next message.
Next steps
Tutorial: Adding Language Generation to your bot to power your bot's responses.
Tutorial: Adding language generation to your bot
9/21/2020 • 3 minutes to read
Now that your bot can perform its basic tasks, it's time to improve your bot's conversational abilities. The ability to
understand what your user means conversationally and contextually, and to respond with useful information, is
often the primary challenge for a bot developer. Bot Framework Composer integrates with the Bot Framework
Language Generation library, a set of powerful templating and message formatting tools that let you include
variation, conditional messages, and dynamic content. LG gives you greater control of how your bot responds to
the user.
In this tutorial, you learn how to:
Integrate Language Generation into your bot using Composer
Prerequisites
Completion of the tutorial Adding Help and Cancel functionality to your bot
Knowledge about Language Generation
Language Generation
Let's start by adding some variation to the welcome message.
1. Go to the Navigation pane and select the WeatherBot dialog's WelcomeTheUser trigger.
2. Select the Send a response action in the Authoring Canvas .
3. Replace the response text in the Properties panel with the following:
- Hi! I'm a friendly bot that can help with the weather. Try saying WEATHER.
- Hello! I am Weather Bot! Say WEATHER to get the current conditions.
- Howdy! Weather bot is my name and weather is my game.
Your bot will randomly select any of the above phrases when responding to the user. Each phrase must
begin with the dash (- ) character on a separate line. For more information see the Template and Anatomy of
a template sections of the Language Generation article.
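As an illustrative sketch (plain JavaScript, not the actual LG runtime), the random selection among the "-" variations behaves roughly like this:

```javascript
// Hypothetical sketch: an LG template with several "-" variations picks
// one of them at random each time the template is evaluated.
const welcomeVariations = [
  "Hi! I'm a friendly bot that can help with the weather. Try saying WEATHER.",
  "Hello! I am Weather Bot! Say WEATHER to get the current conditions.",
  "Howdy! Weather bot is my name and weather is my game.",
];

function pickVariation(variations) {
  return variations[Math.floor(Math.random() * variations.length)];
}
```

Repeated evaluations return different members of the list, which is why restarting the conversation shows different greetings.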
4. To test your new phrases select the Restart Bot button in the Toolbar and open it in the Emulator. Click
Restart conversation a few times to see the results of the greetings being randomly selected.
Currently, the bot reports the weather in a very robotic manner:
You can improve the language used when delivering the weather conditions to the user by utilizing two
features of the Language Generation system: conditional messages and parameterized messages.
5. Select Bot Responses from the Composer menu.
You'll notice that every message you created in the flow editor also appears here, and these LG templates are
grouped by dialog. They're linked, and any changes you make in this view will be reflected in the flow as
well.
6. Select getWeather in the navigation pane and toggle the Edit Mode switch in the upper right hand corner
so that it turns blue. This will enable a syntax-highlighted LG editor in the main pane. You can now edit LG
templates in the selected dialog getWeather .
7. Scroll to the bottom of the editor and paste the following text:
# DescribeWeather(weather)
- IF: ${weather.weather=="Clouds"}
- It is cloudy
- ELSEIF: ${weather.weather=="Thunderstorm"}
- There's a thunderstorm
- ELSEIF: ${weather.weather=="Drizzle"}
- It is drizzling
- ELSEIF: ${weather.weather=="Rain"}
- It is raining
- ELSEIF: ${weather.weather=="Snow"}
- There's snow
- ELSEIF: ${weather.weather=="Clear"}
- The sky is clear
- ELSEIF: ${weather.weather=="Mist"}
- There's a mist in the air
- ELSEIF: ${weather.weather=="Smoke"}
- There's smoke in the air
- ELSEIF: ${weather.weather=="Haze"}
- There's a haze
- ELSEIF: ${weather.weather=="Dust"}
- There's dust in the air
- ELSEIF: ${weather.weather=="Fog"}
- It's foggy
- ELSEIF: ${weather.weather=="Ash"}
- There's ash in the air
- ELSEIF: ${weather.weather=="Squall"}
- There's a squall
- ELSEIF: ${weather.weather=="Tornado"}
- There's a tornado happening
- ELSE:
- ${weather.weather}
This creates a new Language Generation template named DescribeWeather . The template lets the LG system
use the data returned from the weather service in the weather.weather variable to generate a friendlier
response.
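The IF/ELSEIF chain above behaves roughly like a lookup with a fallback. As an illustrative sketch in plain JavaScript (showing only a subset of the conditions, not the actual LG evaluator), the logic is:

```javascript
// Hypothetical sketch of the DescribeWeather template's branching,
// covering a few of the conditions; unknown values fall through to
// the raw weather string, like the ELSE branch does.
const weatherPhrases = {
  Clouds: "It is cloudy",
  Thunderstorm: "There's a thunderstorm",
  Rain: "It is raining",
  Snow: "There's snow",
  Clear: "The sky is clear",
};

function describeWeather(weather) {
  return weatherPhrases[weather.weather] || weather.weather;
}
```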
8. Select Design from the Composer Menu.
9. Select the getWeather dialog, then its BeginDialog trigger in the Navigation pane.
10. Scroll down in the Authoring Canvas and select the Send a response action that starts with The weather
is....
11. Now replace the response with the following:
- ${DescribeWeather(dialog.weather)} and the temp is ${dialog.weather.temp}°
This syntax lets you nest the DescribeWeather template inside another template. LG templates can be
combined in this way to create more complex templates.
You are now ready to test your bot in the Emulator.
12. Select the Restart Bot button in the Toolbar then open it in the Emulator.
Now, when you say weather , the bot will send you a message that sounds much more natural than it did
previously.
Next steps
Tutorial: Incorporating cards and buttons into your bot
Tutorial: Incorporating cards and buttons into your
bot
9/21/2020 • 2 minutes to read
The previous tutorial taught how to add language generation to your bot to include variation, conditional
messages, and dynamic content that give you greater control of how your bot responds to the user. This tutorial
will build on what you learned in the previous tutorial by adding richer message content to your bot using Cards
and Buttons.
In this tutorial, you learn how to:
Add cards and buttons to your bot using Composer
Prerequisites
Completion of the tutorial Adding language generation to your bot
Knowledge about Language Generation
Knowledge about Cards
Knowledge about Sending responses with cards in Composer
Adding buttons
Buttons are added as suggested actions. You can add preset buttons to your bot that the user can select to provide
input. Suggested actions improve the user experience by letting users answer questions or make selections with
the tap of a button instead of having to type responses.
First, update the prompt for the user's zip code to include suggested actions for help and cancel actions.
1. Select the BeginDialog trigger in the getWeather dialog.
2. Select the Bot Asks (Text) action which is the second action in the flow.
Adding cards
Now you can change the weather report to also include a card.
1. Scroll to the bottom of the Authoring canvas and select the Send a response node in the True branch
that starts with ${DescribeWeather(dialog.weather)}...
2. Replace the response with this Thumbnail Card:
[ThumbnailCard
title = Weather for ${dialog.weather.city}
text = The weather is ${dialog.weather.weather} and ${dialog.weather.temp}°
image = ${dialog.weather.icon}
]
3. Click Restart Bot in the Composer Toolbar . Once your bot has restarted click Test in Emulator .
In the Emulator, go through the bot flow, say weather and enter a zip code. Notice that the bot now
responds back with a card that contains the results along with a card title and image.
Next steps
Tutorial: Adding LUIS functionality to your bot
Tutorial: Using LUIS for Language Understanding
9/21/2020 • 4 minutes to read
Up until this point in the tutorials we have been using the Regular Expression recognizer to detect user intent.
The other recognizer currently available in Composer is the LUIS recognizer. The LUIS recognizer incorporates
Language Understanding (LU) technology that is used by a bot to understand a user's response and determine what
to do next in a conversation flow. Once the LUIS recognizer is selected, you will need to provide training data in the
dialog to capture the user's intent contained in the message, which you will then pass on to the triggers
that define how the bot will respond.
In this tutorial, you learn how to:
Add the LUIS recognizer to your bot.
Determine user intent and entities and use that to generate helpful responses.
Prerequisites
Completion of the tutorial Incorporating cards and buttons into your bot
Knowledge about Language Understanding concept article
A LUIS account and a LUIS authoring key.
In the next section, you will learn to create three Intent recognized triggers using the LUIS recognizer in
WeatherBot. You can ignore or delete the Intent recognized triggers you created using Regular Expression in
the Add help and cancel command tutorial.
3. After you select Submit, you will see the trigger node in the authoring canvas.
4. In the Properties pane on the right-hand side of the Composer screen, you can set the Condition property to
#Cancel.Score >= 0.8 .
This tells your bot not to fire the cancel trigger if the confidence score returned by LUIS is lower than 80%.
LUIS is a machine learning based intent classifier and can return a variety of possible matches, so you will
want to avoid low confidence results.
5. Repeat steps 1 through 3 to create the weather trigger in the WeatherBot.Main dialog. Add the following
LU training data phrases to the Trigger phrases field:
# weather
- get weather
- weather
- how is the weather
6. Repeat steps 1 through 3 to create the help trigger in the WeatherBot dialog. Set the Condition property
to #Help.Score >= 0.5 and add the following LU training data phrases to the Trigger phrases field:
# help
- help
- I need help
- please help me
- can you help
TIP
You can find your LUIS Primary key on the LUIS home page by selecting your user account icon in the top right
of the screen, then Settings, then copying the value of the Primary key field in the Starter_Key section of the User
Settings page.
> Define a regex zipcode entity. Any time LUIS sees a five-digit number, it will flag it as a 'zipcode'
entity.
$ zipcode : /[0-9]{5}/
The next step is to create an action in the BeginDialog trigger to set the user.zipcode property to the value of the
zipcode entity.
2. Select the getWeather dialog in the Navigation pane, then the BeginDialog trigger.
3. Select + in the Authoring Canvas to insert an action after the Send a response action (the one with the
prompt Let's check the weather). Then select Set a property from the Manage Properties menu.
4. In the Properties panel, enter user.zipcode into the Property field and =@zipcode in the Value field. The
user.zipcode property will now be set to the value of the zipcode entity, and if the user's message already
included a ZIP code, they will no longer be prompted for it.
Modern conversational software has many different components, including source code, custom business logic,
cloud API, training data for language processing systems, and perhaps most importantly, the actual content used
in conversations with the bot's end users. Composer integrates all of these pieces into a single interface for
constructing the building blocks of bot functionality called Dialogs .
Each dialog represents a portion of the bot's functionality and contains instructions for how the bot will react to
the input. Simple bots will have just a few dialogs. Sophisticated bots may have dozens or hundreds of individual
dialogs.
In Composer, dialogs are functional components offered in a visual interface that do not require you to write
code. The dialog system supports building an extensible model that integrates all of the building blocks of a bot's
functionality. Composer helps you focus on conversation modeling rather than the mechanics of dialog
management.
Types of dialogs
You create a dialog in Composer to manage a conversation objective. There are two types of dialogs in
Composer: main dialog and child dialog. The main dialog is initialized by default when you create a new bot. You
can create one or more child dialogs to keep the dialog system organized. Each bot has one main dialog and can
have zero or more child dialogs. Refer to the Create a bot article on how to create a bot and its main dialog in
Composer. Refer to the Add a dialog article on how to create a child dialog and wire it up in the dialog system.
Below is a screenshot of a main dialog named MyBot and two child dialogs called Weather and Greeting .
At runtime, the main dialog is called into action and becomes the active dialog, triggering event handlers with the
actions you defined during the creation of the bot. As the conversation flows, the main dialog can call a child
dialog, and a child dialog can, in turn, call the main dialog or other child dialogs.
Anatomy of a dialog
The following diagram shows the anatomy of a dialog in Composer. Note that dialogs in Composer are based on
Adaptive dialogs.
Recognizer
The recognizer interprets what the user wants based on their input. When a dialog is invoked, its recognizer will
start to process the message and try to extract the primary intent and any entity values the message includes.
After processing the message, both the intent and entity values are passed onto the dialog's triggers.
Composer currently supports two recognizers: The LUIS recognizer, which is the default, and the Regular
Expression recognizer. You can choose only one recognizer per dialog, or you can choose not to have a recognizer
at all.
Recognizers give your bot the ability to understand and extract meaningful pieces of information from user
input. All recognizers emit events when the recognizer picks up an intent (or extracts entities ) from a given user
utterance . The recognizer of a dialog is not always called into play when a dialog is invoked. It depends on how
you design the dialog system.
Below is a screenshot of recognizers in Composer.
Action
Triggers contain a series of actions that the bot will undertake to fulfill a user's request. Actions are things like
sending messages, responding to user questions using a knowledge base, making calculations, and performing
computational tasks on behalf of the user. The path the bot follows through a dialog can branch and loop. The bot
can ask and answer questions, validate input, manipulate and store values in memory, and make decisions.
Below is a screenshot of the action menu in Composer. Select the + sign below a trigger to open the action
menu.
Language Generator
As the bot takes actions and sends messages, the Language Generator is used to create those messages from
variables and templates. Language generators can create reusable components, variable messages, macros, and
dynamic messages that are grammatically correct.
Dialog actions
A bot can have from one to several hundred dialogs, and it can get challenging to manage the dialog system and
the conversation with users. In the Add a dialog section, we covered how to create a child dialog and wire it up to
the dialog system using Begin a new dialog action. Composer provides more dialog actions to make it easier
to manage the dialog system. You can access the different dialog actions by selecting the + node under a trigger
and then selecting Dialog management .
Below is a list of the dialog actions available in Composer:
Begin a new dialog: An action that begins another dialog. When that dialog is completed, it will return to the caller.
End this dialog: A command that ends the current dialog, returning the resultProperty as the result of the dialog.
Cancel all dialogs: A command to cancel all of the current dialogs by emitting an event that must be caught to prevent cancellation from propagating.
End this turn: A command to end the current turn without ending the dialog.
Repeat this dialog: An action that repeats the current dialog.
Replace this dialog: An action that replaces the current dialog with the target dialog.
With these dialog actions, you can easily create an extensible dialog system without worrying about the
complexities of dialog management.
Further reading
Dialogs library
Adaptive dialogs
Next
Events and triggers
Events and triggers
9/21/2020 • 5 minutes to read
In Bot Framework Composer, each dialog includes one or more event handlers called triggers. Each trigger
contains one or more actions. Actions are the instructions that the bot will execute when the dialog receives an
event that it has a trigger defined to handle. Once a given event is handled by a trigger, no further action is taken
on that event. Some event handlers have a specified condition that must be met before they will handle the event;
if that condition is not met, the event is passed to the next event handler. If an event is not handled in a child
dialog, it gets passed up to its parent dialog, and this continues until the event is either handled or reaches the
bot's main dialog. If no event handler is found, the event is ignored and no action is taken.
To see the complete trigger menu in Composer, select + Add in the tool bar and + Add new trigger from the
drop-down list.
Anatomy of a trigger
The basic idea behind a trigger (event handler) is "When (event) happens, do (actions)". The trigger is a conditional
test on an incoming event, while the actions are one or more programmatic steps the bot will take to fulfill the
user's request.
A trigger contains the following properties:
A dialog can contain multiple triggers. You can view them under the specific dialog in the navigation pane. Each
trigger shows as the first node in the authoring canvas. A trigger contains actions defined to be executed. Actions
within a trigger occur in the context of the active dialog.
The screenshot below shows the properties of an Intent recognized trigger named Cancel that is configured to
fire whenever the Cancel intent is detected as shown in the properties panel. In this example the Condition field
is left blank, so no additional conditions are required in order to fire this trigger.
Types of triggers
There are different types of triggers that all work in a similar manner, and in some cases can be interchanged. This
section will cover the different types of triggers and when you should use them. See the define triggers article for
additional information.
Intent triggers
Intent triggers work with recognizers. After the first round of events is fired, the bot will pass the incoming
message through the recognizer. If an intent is detected, it will be passed into the trigger (event handler) with any
entities contained in the message. If no intent is detected by the recognizer, an Unknown intent trigger will fire,
which handles intents not handled by any trigger.
There are four different intent triggers in Composer:
Unknown intent
Intent recognized
QnA Intent recognized
Duplicated intents recognized
You should use intent triggers when you want to:
Trigger major features of your bot using natural language.
Recognize common interruptions like "help" or "cancel" and provide context-specific responses.
Extract and use entity values as parameters to your dialog.
For additional information, see how to define this type of trigger in the how to define triggers article.
Dialog events
The base type of trigger is the dialog trigger. Almost all events start as dialog events, which are related to the
"lifecycle" of the dialog. Currently there are four different dialog event triggers in Composer:
Dialog started (Begin dialog event)
Dialog cancelled (Cancel dialog event)
Error occurred (Error event)
Re-prompt for input (Reprompt dialog event)
Most dialogs include a trigger configured to respond to the BeginDialog event, which fires when the dialog
begins. This allows the bot to respond immediately.
You should use dialog triggers to:
Take actions immediately when the dialog starts, even before the recognizer is called.
Take actions when a "cancel" signal is detected.
Take actions on messages received or sent.
Evaluate the content of the incoming activity.
For additional information, see the dialog events section of the article on how to define triggers.
Activities
Activity triggers are used to handle activities such as when a new user joins and the bot begins a new
conversation. Greeting (ConversationUpdate activity) is a trigger of this type and you can use it to send a
greeting message. When you create a new bot, the Greeting (ConversationUpdate activity) trigger is
initialized by default in the main dialog. This specialized option is provided to avoid handling an event with a
complex condition attached. Message events is a type of Activity trigger used to handle message activities.
You should use Activity triggers when you want to:
Take actions when a user begins a new conversation with the bot.
Take actions on receipt of an activity with type EndOfConversation .
Take actions on receipt of an activity with type Event .
Take actions on receipt of an activity with type HandOff .
Take actions on receipt of an activity with type Invoke .
Take actions on receipt of an activity with type Typing .
Take actions when a message is received (on receipt of an activity with type MessageReceived ).
Take actions when a message is updated (on receipt of an activity with type MessageUpdate ).
Take actions when a message is deleted (on receipt of an activity with type MessageDelete ).
Take actions when a message is reacted (on receipt of an activity with type MessageReaction ).
For additional information, see Activities trigger in the article titled How to define triggers.
Custom events
You can create and emit your own events by creating an action associated with any trigger; then you can handle
that custom event in any dialog in your bot by defining a Custom event trigger.
Bots can emit user-defined events using Emit a custom event . When an Emit a custom event action fires,
a Custom event trigger in any dialog can catch it and execute the corresponding actions.
For additional information, see Custom event in the article titled How to define triggers.
Further reading
Adaptive dialog: Recognizers, rules, steps and inputs
Next
Conversation flow and memory
How to define triggers
Conversation flow and memory
9/21/2020 • 7 minutes to read
All bots built with Bot Framework Composer have a "memory", a representation of everything that is currently in
the bot's active mind. Developers can store and retrieve values in the bot's memory, and can use those values to
create loops, branches, dynamic messages and behaviors in the bot. Properties stored in memory can be used
inside templates or as part of a calculation.
The memory system makes it possible for bots built in Composer to do things like:
Store user profiles and preferences.
Remember things between sessions such as the last search query or a list of recently mentioned locations.
Pass information between dialogs.
The scope of the property determines when the property is available, and how long the value will be retained.
TIP
It's useful to establish conventions for your state properties across conversation , user , and dialog state for
consistency and to prepare for context sharing scenarios. It is also a good practice to think about the lifetime of the
property when creating it. Read more in the composer best practices article.
Prompts define the questions posed to the user and are set in the Prompt box under the Bot Asks tab in the
properties panel on the right.
Under the User Input tab you'll see Property to fill , where the user's response will be stored. Prompt
responses can be formatted before being stored by selecting an option for Output Format .
In the above example of a number prompt, the result of the prompt "What is your age?" will be stored as the
user.age property.
For more information about implementing text and other prompts, see the article Asking users for input.
Set a property
Use Set a property to set the value of a property.
The value of a property can be set to a literal value, like true , 0 , or fred , or it can be set to the result of a
computed expression. When storing simple values, it is not necessary to initialize the property.
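For example, a computed value can be entered in the Value field as an expression prefixed with = (the property names here are hypothetical):

```
Property: user.fullName
Value:    =concat(user.firstName, ' ', user.lastName)
```

concat is one of the prebuilt adaptive expressions functions; the = prefix tells Composer to evaluate the expression rather than store it as a literal string.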
Set properties
Use Set properties to set a group of properties.
The value of each property is assigned individually in the Properties panel. Select Add to set the next one.
Delete a property
Use Delete a property to remove a property from memory.
Delete properties
Use Delete properties to remove properties from memory.
Finally, the parent dialog is configured to capture the return value inside the Begin a new dialog action:
When executed, the bot will execute the profile child dialog, collect the user's name and age in a temporary
scope, then return it to the parent dialog where it is captured into the user.profile property and stored
permanently.
Automatic properties
Some properties are automatically created and managed by the bot. These are available automatically.
turn.dialogEvents.<event name>.value: Payload of a custom event fired using the Emit a custom event action.
In this second example, the value of turn.choice is used to match against multiple Switch cases. Note that, while
it looks like a raw reference to a property, this is actually an expression and since no operation is being taken on
the property, the expression evaluates to the raw value.
Memory in loops
When using For each and For each page loops, properties also come into play. Both require an Items
property that holds the array, and For each page loops also require a Page size , the number of items per page.
Memory in LG
One of the most powerful features of the Bot Framework system is Language Generation, particularly when used
alongside properties pulled from memory.
You can refer to properties in the text of any message, including prompts.
You can also refer to properties in LG templates. See Language Generation to learn more about the Language
Generation system.
To use the value of a property from memory inside a message, wrap the property reference in ${} :
${user.profile.name}
The screenshot below demonstrates how a bot can prompt a user for a value, then immediately use that value in
a confirmation message.
In addition to getting property values, it is also possible to embed properties in expressions used in a message
template. Refer to the Adaptive expressions page for the full list of pre-built functions.
Properties can also be used within an LG template to provide conditional variants of a message and can be
passed as parameters to both built-in and custom functions. Learn more about LG.
Memory shorthand notations
Bot Framework Composer provides a variety of shortcuts for referring to properties in memory. Refer to the
Managing state documentation for the complete list of memory shorthand notations.
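As a sketch, a few commonly used shorthands (see the Managing state documentation for the authoritative list):

```
@city        first value of the city entity from turn.recognized.entities
#BookFlight  shorthand for turn.recognized.intents.BookFlight
$zipcode     shorthand for dialog.zipcode (dialog scope)
```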
Further reading
Memory scopes in adaptive dialogs.
Next
Language Generation in Bot Framework Composer.
Natural Language Processing
9/21/2020 • 2 minutes to read
Natural Language Processing (NLP) is a technological process that enables computer applications, such as bots, to
derive meaning from a user's input. To do this, it attempts to identify valuable information contained in
conversations by interpreting the user's needs (intents), extracting valuable information (entities) from a sentence,
and responding back in a language the user will understand.
Why do bots need Natural Language Processing?
Bots are able to provide little to no value without NLP. It is what enables your bot to understand the messages your
users send and respond appropriately. When a user sends a message with "Hello", it is the bot's Natural Language
Processing capabilities that enable it to know that the user posted a standard greeting, which in turn allows your
bot to leverage its AI capabilities to come up with a proper response. In this case, your bot can respond with a
greeting.
Without NLP, your bot can’t meaningfully differentiate between when a user enters “Hello” or “Goodbye”. To a bot
without NLP, “Hello” and “Goodbye” will be no different than any other string of characters grouped together in
random order. NLP helps provide context and meaning to text or voice based user inputs so that your bot can come
up with the best response.
One of the most significant challenges when it comes to NLP in your bot is the fact that users have a blank slate
regarding what they can say to your bot. While you can try to predict what users will and will not say, there are
bound to be conversations that you did not anticipate. Fortunately, Bot Framework Composer makes it easy to
continually refine your bot's NLP capabilities.
The two primary components of NLP in Composer are Language Understanding (LU) that processes and
interprets user input and Language Generation (LG) that produces bot responses.
Language Understanding
Language Understanding (LU) is the subset of NLP that deals with how the bot handles user inputs and converts
them into something that it can understand and respond to intelligently.
Additional information on Language Understanding
The Language Understanding concept article.
The Advanced intent and entity definition concept article.
The Using LUIS for Language Understanding how to article.
Language Generation
Language Generation (LG), is the process of producing meaningful phrases and sentences in the form of natural
language. Simply put, it is when your bot responds to a user with human readable language.
Additional information on Language Generation
The Language Generation concept article.
The Language Generation how to article.
Summary
Natural Language Processing is at the core of what most bots do in interpreting users written or verbal inputs and
responding to them in a meaningful way using a language they will understand.
While NLP certainly can’t work miracles and ensure a bot appropriately responds to every message, it is powerful
enough to make-or-break a bot’s success. Don’t underestimate this critical and often overlooked aspect of bots.
Language Generation
9/21/2020 • 5 minutes to read
Language Generation (LG) lets you define multiple variations of a phrase, execute simple expressions based on
context, and refer to conversational memory. At the core of language generation lies template expansion and
entity substitution. You can provide one-off variation for expansion as well as conditionally expand a template.
The output from language generation can be a simple text string, multi-line response, or a complex object
payload that a layer above language generation will use to construct a complete activity. Bot Framework
Composer natively supports language generation to produce output activities using the LG templating system.
You can use Language generation to:
Achieve a coherent personality and tone of voice for your bot.
Separate business logic from presentation.
Include variations and sophisticated composition for any of your bot's replies.
Construct cards, suggested actions and attachments using a structured response template.
Language generation is achieved through:
A Markdown based .lg file that contains the templates and their composition.
Full access to the current bot's memory so you can data bind language to the state of memory.
Parser and runtime libraries that help achieve runtime resolution.
TIP
You can read the Composer best practices article for suggestions on using LG in Composer.
Templates
Templates are functions which return one of the variations of the text and fully resolve any other references to
templates for composition. You can define one or more text responses in a template. When multiple responses
are defined in the template, a single response will be selected at random.
You can also define one or more expressions using adaptive expressions, so when it is a conditional template,
those expressions control which particular collection of variations get picked. Templates can be parameterized,
meaning that different callers to the template can pass in different values for use in expansion resolution. For
additional information see .lg file format.
Composer currently supports three types of templates: simple response, conditional response, and structured
response. You can read define LG templates to learn how to define each of them.
You can split language generation templates into separate files and refer to them from one another. You can use
Markdown-style links to import templates defined in another file, like [description text](file/uri path) . Make
sure your template names are unique across files.
Anatomy of a template
A template usually consists of the name of the template, denoted with the # character, and one of the following:
A list of one-off variation text values defined using "-"
A collection of conditions, each with a:
conditional expression, expressed using adaptive expressions and
list of one-off variation text values per condition
A structure that contains:
structure-name
properties
Below is an example of a simple LG template with one-off variation text values.
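For instance (the template name and text variations are illustrative):

```lg
# GreetUser
- Hi there!
- Hello!
- Good to see you!
```

Each time the template is evaluated, one of the three variations is selected at random.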
Define LG templates
When you want to determine how your bot should respond to user input, you can define LG templates to
generate responses. For example, you can define a welcome message to the user in the Send a response
action. To do this, select the Send a response action node. You will see the inline LG editor where you can
define LG templates.
To define LG templates in Composer, you will need to know:
the aforementioned LG concepts
.lg file format
adaptive expressions
You can define LG templates either in the inline LG editor or in Bot Responses , which lists all templates. Below is
a screenshot of the LG inline editor.
Select the Bot Responses icon (or the bot icon when collapsed) in the navigation pane to see all the LG
templates defined in the bot, categorized by dialog. Select All in the navigation pane to see the templates defined
in and shared by all the dialogs. Use the [import](common.lg) syntax to import the common templates into a specific dialog.
Select any dialog or All in the navigation pane and toggle Edit Mode on the upper right corner to edit your LG
template.
Composer currently supports definitions of the following three types of templates: simple, conditional, and
structured response.
Simple response template
A simple response template generates a simple text response. A simple response template can be a single-line
response, text with memory, or a multiline text response. Use the - character before each response text or
expression. Here are a few examples of simple response templates from the
RespondingWithTextSample.
Here is an example of a single line text response:
- ${user.message}
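A simple response can also include text with memory, referencing a property stored earlier (the property name here is hypothetical):

```lg
# TextWithMemory
- Nice to meet you, ${user.name}!
```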
# multilineText
- ``` you have such alarms
alarm1: 7:am
alarm2: 9:pm
```
Below is an example of a conditional response template using SWITCH/CASE:
# TestTemplate
SWITCH: ${condition}
- CASE: ${case-expression-1}
- output1
- CASE: ${case-expression-2}
- output2
- DEFAULT:
- final output
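Conditional response templates can also be written in IF/ELSE form; a sketch with a hypothetical condition and outputs:

```lg
# TimeOfDayGreeting
IF: ${timeOfDay == 'morning'}
- Good morning!
ELSE:
- Good evening!
```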
A structured response template has the following general format:
# TemplateName
> this is a comment
[Structure-name
Property1 = <plain text> .or. <plain text with template reference> .or. <expression>
Property2 = list of values are denoted via '|'. e.g. a | b
> this is a comment about this specific property
Property3 = Nested structures are achieved through composition
]
Below is an example of a text response that includes suggested actions:
- Hello, I'm the interruption demo bot! \n \[Suggestions=Get started | Reset profile]
Below is an example of a Thumbnail card from the Responding With Cards Sample:
# ThumbnailCard
[ThumbnailCard
title = BotFramework Thumbnail Card
subtitle = Microsoft Bot Framework
text = Build and connect intelligent bots to interact with your users naturally wherever
they are, from text/sms to Skype, Slack, Office 365 mail and other popular services.
image = https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
buttons = Get Started
]
References
.lg file format
Structured response template
Adaptive expressions
Next
Language understanding
Language Understanding
9/21/2020 • 6 minutes to read
Language Understanding (LU) is used by a bot to understand language naturally and contextually to determine
what next to do in a conversation flow. In Bot Framework Composer, the process is achieved through setting up
recognizers and providing training data in the dialog so that the intents and entities contained in the message
can be captured. These values will then be passed on to triggers which define how the bot responds using the
appropriate actions.
LU has the following characteristics when used in Bot Framework Composer:
LU is training data for LUIS recognizer.
LU is authored in the inline editor or in User Input using the .lu file format.
Composer currently supports LU technologies such as LUIS.
# Greeting
- Hi
- Hello
- How are you?
#<intent-name> describes a new intent definition section. Each line after the intent definition is an example
utterance that describes that intent. You can stitch together multiple intent definitions in the language understanding
editor in Composer. Each section is identified by the #<intent-name> notation. Blank lines are skipped when parsing
the file.
Utterances
Utterances are inputs from users and may have a lot of variations. Since utterances are not always well-formed,
we need to provide example utterances for specific intents to train bots to recognize intents from different
utterances. By doing so, your bots will have some "intelligence" to understand human languages.
In Composer, utterances are always captured in a Markdown list under an intent definition. For example, the
Greeting intent with some example utterances is shown in the Intents section above.
NOTE
You may have noticed that LU format is very similar to LG format but they are different. LU is for bots to understand user's
inputs (primarily capture intent and optionally entities ) and it is associated with recognizers, while LG is for bots to
respond to users as output, and it is associated with a language generator.
Entities
Entities are a collection of objects, each consisting of data extracted from an utterance such as places, time, and
people. Entities and intents are both important data extracted from utterances. An utterance may include zero or
more entities, while an utterance usually represents one intent. In Composer, all entities are defined and managed
inline. Entities in the .lu file format are denoted using {<entityName>=<labelled value>} notation. For example:
# BookFlight
- book a flight to {toCity=seattle}
- book a flight from {fromCity=new york} to {toCity=seattle}
The example above shows the definition of a BookFlight intent with two example utterances and two entity
definitions: toCity and fromCity . When triggered, if LUIS is able to identify a destination city, the city name will
be made available as @toCity within the triggered actions; similarly, a departure city will be available as @fromCity .
The entity values can be used directly in expressions and LG templates, or stored in a property in
memory for later use. For additional information on entities, see the article advanced intents and entities.
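For example, a response template in a triggered action could reference the extracted entities directly (the template name is hypothetical):

```lg
# ConfirmFlight
- Booking a flight from ${@fromCity} to ${@toCity}.
```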
Example
The table below shows an example of an intent with its corresponding utterances and entities. All three utterances
share the same intent, BookFlight , each with a different entity. There are different types of entities; you can find
more information in .lu file format.
Intent | Utterances | Entity
Below is a similar definition of a BookFlight intent with entity specification {city=name} and a set of example
utterances. We use this example to show how they are manifested in Composer. Extracted entities are passed
along to any triggered actions or child dialogs using the syntax @city .
# BookFlight
- book a flight to {city=austin}
- travel to {city=new york}
- I want to go to {city=los angeles}
After publishing, LUIS will be able to identify a city as an entity, and the city name will be made available as @city
within the triggered actions. The entity value can be used directly in expressions and LG templates, or stored in a
property in memory for later use. Read the advanced intents and entities article for more on defining intents and entities.
Create a trigger
In the same dialog where you selected the Default recognizer, select + Add in the tool bar and then Add new
trigger .
In the pop-up trigger menu, select Intent recognized from the What is the type of this trigger? list. Fill in the
What is the name of this trigger (luis) field with an intent name and add example utterances in the Trigger
phrases field.
For example, you can create an Intent recognized trigger in the MyBot dialog with an intent named weather and a few example utterances.
After you select Submit , you will see an Intent recognized trigger named weather in the navigation pane and
the trigger node in the authoring canvas. You can edit the .lu file inline on the right side of the Composer screen.
Select User Input from the Composer menu to view all the LU templates created. Select a dialog from the
navigation pane then toggle Edit Mode to edit the LU templates.
Publish LU to LUIS
The last step is to publish your .lu files to LUIS.
Select Start Bot in the upper-right corner of Composer. Fill in your LUIS Primary key and select OK .
NOTE
If you do not have a LUIS account, you can create one on the LUIS portal. If you have a LUIS account but do not know how to find
your LUIS primary key, see the Azure resources for LUIS section of the Authoring and runtime keys article.
Any time you select Start Bot (or Restart Bot ), Composer evaluates whether your LU content has changed. If so,
Composer automatically makes the required updates to your LUIS applications, then trains and publishes them. If
you go to your LUIS app website, you will find the newly published LU model.
References
What is LUIS
Language Understanding
.lu file format
Adaptive expressions
Using LUIS for language understanding
Extract data from utterance text with intents and entities
Next
Learn how to send messages to users.
Bot Framework Composer Plugins
9/21/2020 • 14 minutes to read
It is possible to extend and customize the behavior of Composer by installing plugins. Plugins can hook into the
internal mechanisms of Composer and change the way they operate. Plugins can also "listen to" the activity inside
Composer and react to it.
Plugin endpoints
Plugins currently have access to the following functional areas:
Authentication and identity - plugins can provide a mechanism to gate access to the application, as well as
mechanisms used to provide user identity.
Storage - plugins can override the built-in filesystem storage with a new way to read, write and access bot
projects.
Web server - plugins can add additional web routes to Composer's web server instance.
Publishing - plugins can add publishing mechanisms.
Runtime templates - plugins can provide a runtime template used when "ejecting" from Composer.
Bot project templates - plugins can add items to the template list shown in the "new bot" flow.
Boilerplate content - plugins can provide content copied into all bot projects (such as a readme file or helper
scripts).
Combining these endpoints, it is possible to achieve scenarios such as:
Store content in a database
Require login via AAD or any other oauth provider
Create a custom login screen
Require login via GitHub, and use GitHub credentials to store content in a Git repo automatically
Use AAD roles to gate access to content
Publish content to external services such as remote runtimes, content repositories, and testing systems.
composer.usePassportStrategy(passportStrategy)
Configure a Passport strategy to be used by Composer. This is the equivalent of calling app.use(passportStrategy)
on an Express app. See the PassportJS docs.
In addition to configuring the strategy, plugins will also need to use composer.addWebRoute to expose login, logout
and other related routes to the browser.
Calling this method also enables a basic auth middleware that is responsible for gating access to URLs, as well as a
simple user serializer/deserializer. Developers may choose to override these components using the methods
below.
composer.useAuthMiddleware(middleware)
Provide a custom middleware for testing the authentication status of a user. This will override the built-in auth
middleware that is enabled by default when calling usePassportStrategy() .
Developers may choose to override this middleware for various reasons, such as:
Apply different access rules based on URL
Do something more than check req.isAuthenticated , such as validating or refreshing tokens, making database
calls, or providing telemetry.
composer.useUserSerializers(serialize, deserialize)
Provide custom serialize and deserialize functions for storing and retrieving the user profile and identity
information in the Composer session.
By default, the entire user profile is serialized to JSON and stored in the session. If this is not desirable, plugins
should override these methods and provide alternate methods.
For example, the code below demonstrates storing only the user ID in the session during serialization, and using a
database to load the full profile back out using that ID during deserialization.
composer.useUserSerializers(serializeUser, deserializeUser);
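The registered functions could be sketched as below. The Passport-style `(value, done)` callback signature and the in-memory user store are assumptions made purely for illustration.

```javascript
// Hypothetical in-memory user store standing in for a real database.
const usersById = new Map([['u-1', { id: 'u-1', name: 'Ada' }]]);

// Store only the user id in the session (the (value, done) callback
// signature mirrors PassportJS and is an assumption here).
function serializeUser(user, done) {
  done(null, user.id);
}

// Load the full profile back out of the store using that id.
function deserializeUser(id, done) {
  const user = usersById.get(id);
  done(user ? null : new Error('unknown user'), user);
}

// composer.useUserSerializers(serializeUser, deserializeUser);
```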
composer.addAllowedUrl(url)
Allow access to url without authentication. url can be an express-style route with wildcards ( /auth/:stuff or
/auth(.*) )
This is primarily for use with authentication-related URLs. While /login is allowed by default, any other URL
involved in auth needs to be whitelisted.
For example, when using oauth, there is a secondary URL for receiving the auth callback. This has to be whitelisted,
otherwise access will be denied to the callback URL and it will fail.
pluginLoader.loginUri
This value is used by the built-in authentication middleware to redirect the user to the login page. By default, it is
set to '/login' but it can be reset by changing this member value.
Note that if you specify an alternate URI for the login page, you must use addAllowedUrl to whitelist it.
PluginLoader.getUserFromRequest(req)
This is a static method on the PluginLoader class that extracts the user identity information provided by Passport.
This is for use in web route implementations to get the user and provide it to other components of Composer.
For example, a web route implementation can call PluginLoader.getUserFromRequest(req) to retrieve the current user before handling the request.
Storage
By default, Composer reads and writes assets to the local filesystem. Plugins may override this behavior by
providing a custom implementation of the IFileStorage interface (see the interface definition in the Composer source code).
Though this interface is modeled after filesystem interactions, the implementation of these methods does not
require using the filesystem, or a direct implementation of folder and path structure. However, the implementation
must respect that structure and respond in the expected ways -- i.e., the glob method must treat path patterns the
same way a filesystem glob would.
composer.useStorage(customStorageClass)
Provide a custom storage class that Composer will use in place of the built-in filesystem storage.
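As a sketch of what a custom storage class might look like, the in-memory implementation below covers a few illustrative methods only (the real IFileStorage interface defines more; see the Composer source) and demonstrates the key constraint noted above: glob must behave like a filesystem glob.

```javascript
// In-memory storage sketch. Only an illustrative subset of methods is shown;
// this is not the full IFileStorage interface.
class InMemoryStorage {
  constructor() {
    this.files = new Map(); // path -> content
  }
  async readFile(filePath) {
    if (!this.files.has(filePath)) throw new Error(`${filePath} not found`);
    return this.files.get(filePath);
  }
  async writeFile(filePath, content) {
    this.files.set(filePath, content);
  }
  async exists(filePath) {
    return this.files.has(filePath);
  }
  async glob(pattern, basePath) {
    // naive '*' support only; a real implementation needs full glob semantics
    const escape = (s) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    const regex = new RegExp('^' + pattern.split('*').map(escape).join('.*') + '$');
    return [...this.files.keys()].filter((p) => p.startsWith(basePath) && regex.test(p));
  }
}

// composer.useStorage(InMemoryStorage);
```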
Web server
Plugins can add routes and middlewares to the Express instance.
These routes are responsible for providing all necessary dependent assets such as browser JavaScript, CSS, etc.
Custom routes are not rendered inside the front-end React application, and currently have no access to that
application. They are independent pages -- though nothing prevents them from making calls to the Composer
server APIs.
composer.addWebRoute(method, url, callbackOrMiddleware, callback)
This is equivalent to using app.get() or app.post() . A simple route definition receives 3 parameters - the
method, URL and handler callback.
If a route-specific middleware is necessary, it should be specified as the 3rd parameter, making the handler
callback the 4th.
Signature for callbacks is (req, res) => {}
For example:
// simple route
composer.addWebRoute('get', '/hello', (req, res) => {
res.send('HELLO WORLD!');
});
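A sketch of the four-parameter form with a route-specific middleware follows. A tiny stand-in for `composer` is included so the snippet runs standalone, and the `requireUser` middleware is hypothetical; in a real plugin, `composer` is supplied by Composer.

```javascript
// Stand-in for the composer API so the sketch runs on its own.
const routes = [];
const composer = {
  addWebRoute: (method, url, middleware, handler) =>
    routes.push({ method, url, middleware, handler }),
};

// Hypothetical route-specific middleware, using the (req, res, next) signature.
function requireUser(req, res, next) {
  if (req.user) return next();
  res.statusCode = 401;
  res.end('please log in');
}

// Four-parameter form: method, URL, middleware, then handler.
composer.addWebRoute('get', '/profile', requireUser, (req, res) => {
  res.end(`hello ${req.user.name}`);
});
```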
composer.addWebMiddleware(middleware)
Bind an additional custom middleware to the web server. Middleware applied this way will be applied to all routes.
Signature for middleware is (req, res, next) => {}
For middleware dealing with authentication, plugins must use useAuthMiddleware() instead; otherwise the built-in
auth middleware will still be in place.
Publishing
composer.addPublishMethod(publishMechanism, schema, instructions)
Provide a new mechanism by which a bot project is transferred from Composer to some external service. The
mechanism can use whatever method necessary to process and transmit the bot project to the desired external
service, though it must use a standard signature for the methods. By default, the publish method will use the
name and description from the plugin's package.json file, though you may provide a customized name.
In most cases, the plugin itself does NOT include the configuration information required to communicate with the
external service. Configuration is provided by the Composer application at invocation time.
Once registered as an available method, users can configure specific target instances of that method on a per-bot
basis. For example, a user may install a "Publish to PVA" plugin, which implements the necessary protocols for
publishing to PVA. Then, in order to actually perform a publish, they would configure an instance of this
mechanism, "Publish to HR Bot Production Slot" that includes the necessary configuration information.
Publishing plugins support the following features:
publish - given a bot project, publish it. Required.
getStatus - get the status of the most recent publish. Optional.
getHistory - get a list of historical publish actions. Optional.
rollback - roll back to a previous publish (as provided by getHistory). Optional.
publish(config, project, metadata, user)
This method is responsible for publishing the project with the provided config , using whatever method the
plugin implements - for example, publishing to Azure. This method is required for all publishing plugins.
In order to publish a project, this method must perform any necessary actions such as:
The LUIS lubuild process
Calling the appropriate runtime buildDeploy method
Doing the actual deploy operation
Parameters:

| Parameter | Description |
| --- | --- |
| config | an object containing information from the publishing profile, as well as the bot's settings -- see below |
| project | an object representing the bot project |
| metadata | any comment passed by the user during publishing |
| user | a user object, if one has been provided by an authentication plugin |
Config will include:
{
templatePath: '/path/to/runtime/code',
fullSettings: {
// all of the bot's settings from project.settings, but also including sensitive keys managed in-app.
// this should be used instead of project.settings which may be incomplete
},
profileName: 'name of publishing profile',
... // All fields from the publishing profile
}
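A publish implementation might be skeletoned as below. The returned status/result shape mirrors the history example shown later in this article; the three commented steps are placeholders for the actions listed above, and the version identifier is hypothetical.

```javascript
// Sketch of a publishing plugin's publish method.
async function publish(config, project, metadata, user) {
  try {
    // 1. run lubuild for the project's LUIS models (omitted in this sketch)
    // 2. call the runtime's buildDeploy method (omitted)
    // 3. transmit the build artifacts to the external service (omitted)
    return {
      status: 202, // publish accepted and underway
      result: {
        message: `Publishing to ${config.profileName}`,
        comment: metadata,
        id: `${Date.now()}`, // hypothetical unique version identifier
      },
    };
  } catch (err) {
    return { status: 500, result: { message: err.message } };
  }
}
```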
getStatus(config, project, user)
This method is used to check for the status of the most recent publish of project to a given publishing profile
defined by the config field. This method is required for all publishing plugins.
This endpoint uses a subset of HTTP status codes to report the status of the deploy:
| Status | Meaning |
| --- | --- |
| 200 | Publish completed successfully |
| 202 | Publish is currently underway |
| 404 | No publish found for this profile |
| 500 | Publish failed |
config will be in the form below. config.profileName can be used to identify the publishing profile being queried.
{
profileName: `name of the publishing profile`,
... // all fields from the publishing profile
}
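A getStatus implementation could map a stored record of the most recent publish to those status codes, as in this sketch (the in-memory record of past publishes is hypothetical):

```javascript
// Hypothetical store of the most recent publish result per publishing profile.
const lastPublishByProfile = new Map();

// Sketch of getStatus: 404 when no publish has happened for this profile,
// otherwise return the stored record (status 200, 202, or 500).
async function getStatus(config, project, user) {
  const record = lastPublishByProfile.get(config.profileName);
  if (!record) {
    return { status: 404, result: { message: 'No publish found for this profile' } };
  }
  return record;
}
```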
getHistory(config, project, user)
This method is used to request a history of publish actions from a given project to a given publishing profile
defined by the config field. This is an optional feature - publishing plugins may exclude this functionality if it is
not supported.
config will be in the form below. config.profileName can be used to identify the publishing profile being queried.
{
profileName: `name of the publishing profile`,
... // all fields from the publishing profile
}
This method should return an array containing recent publish actions along with their status and log output.
[{
status: [200|202|404|500],
result: {
message: 'Status message to be displayed in publishing UI',
log: 'any log output from the process so far',
comment: 'the user specified comment associated with the publish',
id: 'a unique identifier of this published version',
}
}]
rollback(config, project, rollbackToVersion, user)
This method is used to request a rollback in the deployed environment to a previously published version. This
DOES NOT affect the local version of the project. This is an optional feature - publishing plugins may exclude this
functionality if it is not supported.
config will be in the form below. config.profileName can be used to identify the publishing profile being queried.
{
profileName: `name of the publishing profile`,
... // all fields from the publishing profile
}
Runtime templates
composer.addRuntimeTemplate(templateInfo)
Expose a runtime template to the Composer UI. Registered templates will become available in the "Runtime
settings" tab. When selected, the full content of the path will be copied into the project's runtime folder. Then,
when a user clicks Start Bot , the startCommand will be executed. The expected result is that a bot application
launches and is made available to communicate with the Bot Framework Emulator.
await composer.addRuntimeTemplate({
key: 'myUniqueKey',
name: 'My Runtime',
path: __dirname + '/path/to/runtime/template/code',
startCommand: 'dotnet run',
build: async(runtimePath, project) => {
// implement necessary actions that must happen before project can be run
},
buildDeploy: async(runtimePath, project, settings, publishProfileName) => {
// implement necessary actions that must happen before project can be deployed to azure
return pathToBuildArtifacts;
},
});
build(runtimePath, project)
Perform any necessary steps required before the runtime can be executed from inside Composer when a user
clicks the "Start Bot" button. Note this method should not actually start the runtime directly - only perform the
build steps.
For example, this would be used to call dotnet build in the runtime folder in order to build the application.
buildDeploy(runtimePath, project, settings, publishProfileName)

| Parameter | Description |
| --- | --- |
| runtimePath | the path to the runtime code |
| project | an object representing the bot project |
| settings | the bot's settings |
| publishProfileName | the name of the publishing profile that is the target of this build |
Perform any necessary steps required to prepare the runtime code to be deployed. This method should return a
path to the build artifacts with the expectation that the publisher can perform a deploy of those artifacts "as is" and
have them run successfully. To do this it should:
Perform any necessary build steps
Install dependencies
Write settings to the appropriate location and format
composer.getRuntimeByProject(project)
Returns a reference to the appropriate runtime template based on the project's settings.
const runtime = composer.getRuntimeByProject(project);
// run the build step from the runtime, passing in the project as a parameter
await runtime.build(project.dataDir, project);
composer.getRuntime(type)
Returns a reference to the runtime template registered with the given type key.
Bot project templates
composer.addBotTemplate(template)
Add a bot project template to the list shown in the "new bot" flow. For example:
await composer.addBotTemplate({
  id: 'name.my.template.bot',
  name: 'Display Name',
  description: 'Long description',
  path: '/path/to/template',
});
Boilerplate content
Boilerplate material is added to every new bot project. Plugins can bundle additional content
that will be copied into every project, regardless of which template is used.
composer.addBaseTemplate(template)
await composer.addBaseTemplate({
  id: 'name.my.template.bot',
  name: 'Display Name',
  description: 'Long description',
  path: '/path/to/template',
});
Accessors
composer.passport
composer.name
Plugin roadmap
These features are not currently implemented, but are planned for the near future:
Eventing - plugins will be able to emit events as well as respond to events emitted by other plugins and by
Composer core.
Front-end plugins - plugins will be able to provide React components that are inserted into the React
application at various endpoints.
Schema extensions - Plugins will be able to amend or update the schema.
Next
Learn how to extend Composer with plugins.
Best practices for building bots using Composer
9/21/2020 • 11 minutes to read
Bot Framework Composer is a visual authoring tool for building conversational AI software. By learning the
concepts described in this section, you'll be equipped to design and build a bot using Composer that aligns
with best practices. Before reading this article, you should read the introduction to Bot Framework Composer
article for an overview of what you can do with Composer.
Use the basic authoring process to build your bots:
Create a bot
Create primary conversation flows by
adding triggers to dialogs
adding actions to triggers
authoring language understanding for user input
authoring language generation for bot responses
manipulating memory
Integrate with APIs
Add greater natural language complexity using entity binding and interruption support
The following list includes the best practices we recommend and things to avoid for building bots with Composer:
| Do | Don't |
| --- | --- |
| Give your bot a bit of personality | Make your bot too chatty |
| Consider when to use dialogs | Nest conditionals more than two deep |
Design bots
Plan your bot
Before building your bot application, make a plan of the bot you want to build. Consider the following questions:
What is your bot used for? Be clear about the kind of bot you plan to build. This will determine the
functionality you want to implement in the bot.
What problems does your bot intend to solve? Be clear about the problems your bot intends to solve.
Solving problems for customers is the top factor you should consider when building bots. You should also
consider how to solve those problems easily and, of course, with the best user experience you can
provide.
Who will use your bot? Different customers will expect different user experiences. This will also determine
the complexity you should build into your bot design. Consider what language to implement the bot in.
Where will your bot run? You should decide the platforms your bot will run on. For example, a bot designed
to run on a mobile device may need additional features to implement, such as sending SMS.
Give your bot a bit of personality
If your bot responses are too robotic, users will find chatting with your bot boring and confusing. Here are some
tips to give your bot a bit of personality:
Use language generation to create multiple variations of messages. However, a little bit of personality goes a
long way. Don't overuse it; otherwise you will end up creating a bot with too much personality.
Consider the context where your bot will be used. Bots used in private scenarios can be more conversational
than bots in public ones. A bot will talk more to a new user than to an experienced user.
Define language generation templates and reuse them across the bot consistently. This will make your bot's
personality consistent.
Use cards to give your bot a bit of personality, if your platform supports them.
Don't make your bot too chatty
People don't like chatty bots that send lots of messages and do not solve their problems.
Being concise and clear in messages is highly recommended in your bot design. Make the messages your bot
sends relevant and information-dense; don't say less or more than the conversation requires.
Design dialogs
Consider when to use dialogs
Think of dialogs as modular pieces with specific functionalities. Each dialog contains instructions for how the bot
will react to the input. Dialogs give you more granular control of where you start and restart, and they allow you
to hide details of the "building blocks" that people do not need to know.
Consider using dialogs when you want to:
Reuse things.
Have interruptions that are local to that flow (for example, contextual help inside a date collection flow).
Have a place in your conversation that you need to jump to easily from other places.
Would otherwise nest conditionals more than two deep within a dialog.
The following example shows a bot that nests two switch statements. This is inefficient and hard to read.
Instead of using nested switch statements, you can use dialogs to encapsulate the functionality.
TIP
The Allow Interruptions property is located under the Other tab of the Proper ties panel of any prompt action. You can set
the value to true or false .
NOTE
Don't use spaces and special characters in dialog names.
# acknowledgePhrase
- I'm sorry you are having this problem. Let's see if there is anything we can do.
- I know it is frustrating – let's see how we can help…
- I completely understand your situation. Let me try my best to help.
# welcomeUser(name)
- ${greeting()}, ${personalize(name)}
# greeting
- Hello
- Howdy
- Hi
# personalize(name)
- IF: ${name != ''}
  - ${name}
- ELSE:
  - HUMAN
Having set up these templates, you can now reuse them in a variety of situations across your bot.
TIP
Read more in .lg file format and structured response template.
Design inputs
Make your prompt text clear
Make sure your prompt text is clear and unambiguous. Ambiguity is a problem in natural language, and it is
something you should avoid when phrasing the text of a prompt.
Consider giving your user input hints, including suggested responses. This will help make your prompt clear
and avoid ambiguity. For example, instead of asking "What is your birthday?", you can ask "What is your birthday?
Please include the day, month, and year in the form DD/MM/YYYY".
Prepare for ambiguity in the responses
While ambiguity is something you try to avoid in outgoing messages, you should be prepared for ambiguity
in the incoming responses from users. This not only helps your bot perform better, but also prepares you for
platforms like voice, where users more commonly add words.
When people talk out loud, they tend to add more words to their responses than when they are typing into a text
box. For example, in a text box a user might type "my birthday is 1/25/78", while the spoken input can be something
like "my birthday is in January, it's the 25th".
Sometimes when people make their bot's personality rich, they introduce language ambiguity. For example, be
cautious when you use greeting messages such as "What's up?", which is a question that users will try to answer. If
you don't prepare your bot for responses like "Nothing", it will end up confused.
Add prompt properties
Make use of the prompt features such as Unrecognized prompt and Invalid prompt. These are powerful properties
that give you a lot of control over how your bot responds to unrecognized and invalid answers. Access these
properties under the Other tab of any type of input (Ask a question ) action.
Add guidance to your re-prompts; otherwise the bot will keep asking the same question without
telling the user why it is asking again.
Use validations when possible. The Invalid prompt fires when the input does not pass the defined validation rules.
Here are two examples of how to phrase in the Unrecognized prompt and Invalid prompt fields.
Unrecognized prompt
Sorry, I do not understand '${this.value}'. Please enter a zip code in the form of 12345.
Invalid prompt
Sorry, '${this.value}' is not valid. I'm looking for a 5 digit number as zip code. Please specify a zip code in
the form 12345.
Design recognizers
Use LUIS prebuilt entities
LUIS provides a list of prebuilt entities that are very handy to use. When you think about defining entities, check the
list of LUIS prebuilt entities first instead of reinventing the wheel. Some commonly used prebuilt entities
include: time , date and number .
For best practices on building LUIS models, read the best practices for building a language
understanding (LUIS) app article.
Additional information
Best practices for building a language understanding (LUIS) app
Best practices for a QnA Maker knowledge base
How to use samples in Composer
9/21/2020 • 3 minutes to read
Bot Framework Composer provides example bots designed to illustrate the scenarios you are most likely to
encounter when developing your own bots. This article is designed to help you make the best use of these
examples. You will learn how to create a new bot based off any of the examples, which you can use to learn from
or as a starting point when creating your own bots in Composer.
Prerequisites
Install Bot Framework Composer.
(Optional) LUIS account and a LUIS authoring key.
Open a sample
The Examples can be found on the right side of the Composer home page.
To open a bot sample from Composer, follow these steps:
1. Select the sample you want to open from the Examples list.
NOTE
When you select a bot from the Examples list, a copy of the original sample is created in the Location you specify
in the Define conversation objective form. Any changes you make to that bot will be saved without affecting
the original example. You can create as many bots based off the examples as you want, without impacting the
original examples and you are free to modify and use them as a starting point when creating your own bots.
4. Select the Star t Bot button located on the Composer toolbar, then select Test in Emulator to test your
new bot in the Bot Framework Emulator.
Learn from Samples
Composer currently provides eleven bot samples with different specialties. These samples are a good resource to
learn how to build your own bot using Composer. You can use the samples to learn how to send text messages,
how to ask questions, how to control conversation flow, and more.
Below is a table of the eleven bot samples in Composer and their respective descriptions.
| Sample | Description |
| --- | --- |
| Echo Bot | A bot that echoes whatever message the user enters. |
| Simple Todo | A sample bot that shows how to use the Regex recognizer to define intents, and allows you to add, list and remove items. |
| Todo with LUIS | A sample bot that shows how to use the LUIS recognizer to define intents, and allows you to add, list and remove items. A LUIS authoring key is required to run this sample. |
| Asking Questions | A sample bot that shows how to prompt the user for different types of input. |
| Controlling Conversation Flow | A sample bot that shows how to use branching actions to control a conversation flow. |
| Dialog Actions | A sample bot that shows how to use actions in Composer (does not include Ask a question actions, which are covered in the Asking Questions sample). |
| QnA Maker and LUIS | A sample bot that shows how to use both QnA Maker and LUIS. A LUIS authoring key and a QnA Maker knowledge base are required to run this sample. |
| Responding with Cards | A sample bot that shows how to send different cards using language generation. |
| Responding with Text | A sample bot that shows how to send different text messages to users using language generation. |
Next
Learn how to send text messages.
Send text messages to users
9/21/2020 • 7 minutes to read
The primary way a bot communicates with users is through message activities. Some messages may simply
consist of plain text, while others may contain richer content such as cards. In this article, you will learn the
different types of text messages you can use in Bot Framework Composer and how to use them.
3. After the sample loads in Composer, select Design from the left side menu and then select the Dialog
star ted trigger in the main dialog to get an idea of how this sample works.
4. Select Bot Responses from the Composer Menu to see the templates that are called when the user selects
one of the items from the choices they are presented with when the Multiple choice action executes. You
will be referring to these templates throughout this article as each potential text message type is discussed
in detail.
![bot responses](./media/send-messages/responding-with-text-sample-bot-responses.png)
You can also define a simple text message with multiple variations. When you do this, the bot will respond
randomly with any of the simple text messages, for example:
# SimpleText
- Hi, this is simple text
- Hey, this is simple text
- Hello, this is simple text
TIP
You reference a parameter using the syntax ${user.message} .
You reference a template using the syntax ${templateName()} .
To learn more about setting properties in Composer, refer to the Conversation flow and memory article. To learn
more about using expressions in your responses, refer to the Adaptive expressions article.
LG with parameter
You can think of an LG template with a parameter like a function with parameters. For example, the template in the .lg file
(entered in the LG editor in the properties panel or on the Bot Responses page) looks like the following:
# LGWithParam(user)
- Hello ${user.name}, nice to talk to you!
In this LG template:
| Element | Description |
| --- | --- |
| # LGWithParam(user) | The template name, with user as its parameter. |
| ${user.name} | A reference to the name property of the user parameter passed in. |
LG composition
An LG composition message is a template composed of one or more existing LG templates. To define an LG
Composition template you need to first define the component template(s) then call them from your LG
composition template. For example:
# Greeting
- nice to talk to you!
# LGComposition(user)
- ${user.name} ${Greeting()}
In this template # LGComposition(user) , the # Greeting template is used to compose a portion of the new
template. The syntax to include a pre-defined template is ${templateName()} .
The LG composition message is shown in the Dialog star ted trigger of the LGComposition dialog in the
Responding with Text example.
Structured LG
A Structured LG message uses the structured response template format. Structured response templates enable
you to define complex structures such as cards.
For bot applications, the structured response template format natively supports
Activity definition. This is used by the Structured LG message.
Card definition. See the Sending responses with cards article for more information.
Any chatdown style constructs. For information on chatdown see the chatdown readme.
The Responding with Text example demonstrates using the Activity definition, for example:
# StructuredText
[Activity
Text = text from structured
]
This is a simple structured LG template whose output is the response text from structured . The general definition of a
structured template is as follows:
# TemplateName
> this is a comment
[Structure-name
Property1 = <plain text> .or. <plain text with template reference> .or. <expression>
Property2 = list of values are denoted via '|'. e.g. a | b
> this is a comment about this specific property
Property3 = Nested structures are achieved through composition
]
To learn more about structured response templates, you can refer to the structured response template article.
To see how the activity definition is used in messages using cards, see the AdaptiveCard and [AllCards](./how-to-send-cards.md#allcards) sections of the Sending responses with cards article.
For a detailed explanation of the activity definition see the Bot Framework -- Activity readme on GitHub.
Multiline text
If you need your response to contain multiple lines, you can include multi-line text enclosed in triple backticks
(```), for example:
# multilineText
- ``` you have such alarms
alarm1: 7:am
alarm2: 9:pm
```
TIP
A multi-line variation can request template expansion and entity substitution by enclosing the requested operation in ${} .
With multi-line support, you can have the language generation sub-system fully resolve complex JSON or XML (for example,
SSML-wrapped text to control the bot's spoken reply).
If/Else condition
Instead of using branching actions, you can define a conditional template to generate text responses based on
the user's input. For example:
# timeOfDayGreeting(timeOfDay)
- IF: ${timeOfDay == 'morning'}
  - good morning
- ELSEIF: ${timeOfDay == 'afternoon'}
  - good afternoon
- ELSE:
  - good evening
In this If/Else conditional template, the bot will respond with good morning , good afternoon , or good evening ,
depending on which condition the value of timeOfDay matches.
Switch condition
The Switch condition template is similar to the If/Else condition template: you can define a Switch condition
template to generate text messages in response to the user's input, or based on a prebuilt function that requires no
user interaction. For example, the Responding with Text example creates a Switch condition that
calls the template # greetInAWeek , which uses the dayOfWeek and utcNow functions:
# greetInAWeek
- SWITCH: ${dayOfWeek(utcNow())}
  - CASE: ${0}
    - Happy Sunday!
  - CASE: ${6}
    - Happy Saturday!
  - DEFAULT:
    - Working day!
In this Switch condition template, the bot will respond with one of the following: Happy Sunday!, Happy Saturday!,
or Working day!, based on the value returned by the ${dayOfWeek(utcNow())} expression. utcNow() is a prebuilt
function that returns the current timestamp as a string, and dayOfWeek() is a prebuilt function that returns the day of the
week for a given timestamp. Read more about prebuilt functions in Adaptive expressions.
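Like any other template, a Switch condition template is invoked with ${}. For example, a sketch (the welcomeBack template name is illustrative) that combines the switch result with additional text:

```lg
# welcomeBack
- ${greetInAWeek()} Nice to see you again!
```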
Further reading
Language generation
.lg file format
Structured response template
Adaptive expressions
Next
Learn how to ask for user input.
Send responses with cards
9/21/2020 • 10 minutes to read
Cards enable you to create bots that can communicate with users in a variety of ways, as opposed to simply using
plain text messages. You can think of a card as an object with a standard set of rich user controls that you can
choose from to communicate with and gather input from users. There are times when you need messages that
consist of plain text, and times when you need richer message content such as images, animated
GIFs, video clips, audio clips, and buttons. If you are looking for examples of sending text messages to users,
read the send text messages to users article. If you need rich message content, cards offer several options,
which are detailed in this article. If you are new to the concept of cards, it may be helpful to read the Cards
section of the design the user experience article.
# TemplateName
[Card-name
title = title of the card
subtitle = subtitle of the card
text = description of the card
image = url of your image
buttons = name of the button you want to show in the card
]
| Template component | Description |
| --- | --- |
| # TemplateName | The template name. Always starts with "#". This is used when invoking the card. |
| title | The title that will appear in the card when displayed to the user. |
| subtitle | The subtitle that will appear in the card when displayed to the user. |
| text | The text that will appear in the card when displayed to the user. |
| image | The URL pointing to the image that will appear in the card when displayed to the user. |
| buttons | The buttons that will appear in the card when displayed to the user. |
Card types
Composer currently supports the following Card types:
| Card type | Description |
| --- | --- |
| Hero Card | A card that typically contains a single large image, one or more buttons, and simple text. |
| Thumbnail Card | A card that typically contains a single thumbnail image, one or more buttons, and simple text. |
| Signin Card | A card that enables a bot to request that a user sign in. It typically contains text and one or more buttons that the user can click to initiate the sign-in process. |
| Animation Card | A card that can play animated GIFs or short videos. |
| Video Card | A card that can play video files. |
| Audio Card | A card that can play audio files. |
| Adaptive Card | A customizable card that can contain any combination of text, speech, images, buttons, and input fields. |
Now that you have it loaded in Composer, take a look to see how it works.
3. Select Design from the Composer Menu.
4. Select the Unknown intent trigger in the main dialog to get an idea of how this sample works.
NOTE
In this sample, the Unknown intent trigger contains a Multiple choice action (from the Ask a question menu)
where the User Input list style is set to List and the user's selection is stored in the user.choice property.
The user.choice property is passed to the next action, which is a Branch: switch (multiple options) action
(from the Create a condition menu). The item that the user selects from the list determines which flow is taken.
For example, if HeroCardWithMemory is selected, HeroCardWithMemory() is called, which calls the
HeroCardWithMemory template in the .lg file that can be found by selecting Bot Responses from the Composer
Menu, as shown in the following image.
5. Select Bot Responses from the Composer Menu to see the templates that are called when the user selects
one of the items from the choices they are presented with when the Multiple choice action executes. You
will be referring to these templates throughout this article as each card is discussed in detail.
NOTE
LG provides some variability in card definition, which will eventually be converted to be aligned with the SDK card definition.
For example, both image and images fields are supported in all the card definitions in LG even though only images are
supported in the SDK card definition. For HeroCard and Thumbnail cards in LG, the values defined in either image or
images field will be converted to an images list. For the other types of cards, the last defined value will be assigned to the
image field. The values you assign to the image/images field can be in one of the following formats: string, adaptive
expression, or array in the format using | . Read more here.
HeroCard
A Hero card is a basic card type that allows you to combine images, text and interactive elements such as buttons
in one object and present a mixture of them to the user. A HeroCard is defined using structured template as
follows:
# HeroCard
[HeroCard
title = BotFramework Hero Card
subtitle = Microsoft Bot Framework
text = Build and connect intelligent bots to interact with your users naturally wherever they are, from
text/sms to Skype, Slack, Office 365 mail and other popular services.
image = https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
buttons = ${cardActionTemplate('imBack', 'Show more cards', 'Show more cards')}
]
This hero card example enables your bot to send an image from a designated URL back to users when an
event to send a hero card is triggered. The hero card includes a button that, when pressed, shows more cards.
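The buttons field above calls a cardActionTemplate helper defined elsewhere in the sample's .lg file. A minimal sketch of what such a parameterized CardAction template might look like (the parameter names and exact fields here are illustrative, not the sample's exact definition):

```lg
# cardActionTemplate(type, title, value)
[CardAction
    type = ${type}
    title = ${title}
    value = ${value}
    text = ${title}
]
```

Defining card actions once in a parameterized template lets every card in the .lg file reuse the same button logic.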
HeroCardWithMemory
A HeroCardWithMemory is a HeroCard that demonstrates how to call Simple response templates in the .lg file just
as you would call a function.
# HeroCardWithMemory(name)
[Herocard
title=${TitleText(name)}
subtitle=${SubText()}
text=${DescriptionText()}
images=${CardImages()}
buttons=${cardActionTemplate('imBack', 'Show more cards', 'Show more cards')}
]
If you look in the Bot Responses page you will see where the values come from that populate the HeroCard:
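A minimal sketch of what those simple response templates might contain (the exact text in the sample's .lg file may differ; the image URL is reused from the HeroCard example above):

```lg
# TitleText(name)
- Hello ${name}

# SubText()
- Microsoft Bot Framework

# DescriptionText()
- This is a hero card populated from simple response templates

# CardImages()
- https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
```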
ThumbnailCard
A Thumbnail card is another basic card type that combines a mixture of images, text, and buttons. Unlike
Hero cards, which present designated images in a large banner, Thumbnail cards present images as thumbnails. It is a
card that typically contains a single thumbnail image, one or more buttons, and simple text. A ThumbnailCard is
defined using a structured template as follows:
# ThumbnailCard
[ThumbnailCard
title = BotFramework Thumbnail Card
subtitle = Microsoft Bot Framework
text = Build and connect intelligent bots to interact with your users naturally wherever they are, from
text/sms to Skype, Slack, Office 365 mail and other popular services.
image = https://sec.ch9.ms/ch9/7ff5/e07cfef0-aa3b-40bb-9baa-7c9ef8ff7ff5/buildreactionbotframework_960.jpg
buttons = Get Started
]
SigninCard
A Signin card is a card that enables a bot to request that a user sign in. A SigninCard is defined using a structured
template as follows:
# SigninCard
[SigninCard
text = BotFramework Sign-in Card
buttons = ${cardActionTemplate('signin', 'Sign-in', 'https://login.microsoftonline.com/')}
]
AnimationCard
Animation cards contain animated image content (such as .gif ). Typically this content does not contain sound,
and is presented with minimal transport controls (e.g., pause/play) or no transport controls at all.
Animation cards follow all shared rules defined for Media cards. An AnimationCard is defined using a structured template as follows:
# AnimationCard
[AnimationCard
title = Microsoft Bot Framework
subtitle = Animation Card
image = https://docs.microsoft.com/en-us/bot-framework/media/how-it-works/architecture-resize.png
media = http://i.giphy.com/Ki55RUbOV5njy.gif
]
VideoCard
Video cards contain video content in video format such as .mp4 . Typically this content is presented to the user
with advanced transport controls (e.g. rewind/restart/pause/play). Video cards follow all shared rules defined for
Media cards. A VideoCard is defined using structured template as follows:
# VideoCard
[VideoCard
title = Big Buck Bunny
subtitle = by the Blender Institute
text = Big Buck Bunny (code-named Peach) is a short computer-animated comedy film by the Blender Institute
image = https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Big_buck_bunny_poster_big.jpg/220px-
Big_buck_bunny_poster_big.jpg
media = http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
buttons = Learn More
]
AudioCard
Audio cards contain audio content in audio format such as .mp3 and .wav . Audio cards follow all shared rules
defined for Media cards. An AudioCard is defined using structured template as follows:
# AudioCard
[AudioCard
title = I am your father
subtitle = Star Wars: Episode V - The Empire Strikes Back
text = The Empire Strikes Back (also known as Star Wars: Episode V – The Empire Strikes Back)
image = https://upload.wikimedia.org/wikipedia/en/3/3c/SW_-_Empire_Strikes_Back.jpg
media = http://www.wavlist.com/movies/004/father.wav
buttons = Read More
]
AdaptiveCard
Adaptive cards are an open card exchange format adopted by Composer that enables developers to define their
card content in a common and consistent way using JSON. Once defined, an adaptive card can be used in any
supported channel, automatically adapting to the look and feel of the host.
Adaptive cards not only support custom text formatting, they also support the use of containers, speech, images,
buttons, customizable backgrounds, user input controls for dates, numbers, text, and even customizable drop-
down lists.
An AdaptiveCard is defined as follows:
# AdaptiveCard
[Activity
Attachments = ${json(adaptivecardjson())}
]
This tells Composer that it is referencing a template named adaptivecardjson that is in JSON format. If you
look in Bot Responses you will see that template; it is the template used to generate the AdaptiveCard.
{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.0",
"type": "AdaptiveCard",
"speak": "Your flight is confirmed for you and 3 other passengers from San Francisco to Amsterdam on Friday,
October 10 8:30 AM",
"body": [
{
"type": "TextBlock",
"text": "Passengers",
"weight": "bolder",
"isSubtle": false
},
{
"type": "TextBlock",
"text": "${PassengerName()}",
"separator": true
},
{
"type": "TextBlock",
"text": "${PassengerName()}",
"spacing": "none"
},
{
"type": "TextBlock",
"text": "${PassengerName()}",
"spacing": "none"
},
{
"type": "TextBlock",
"text": "2 Stops",
"weight": "bolder",
"spacing": "medium"
},
{
"type": "TextBlock",
"text": "Fri, October 10 8:30 AM",
"weight": "bolder",
"spacing": "none"
},
{
"type": "ColumnSet",
"separator": true,
"columns": [
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"text": "San Francisco",
"isSubtle": true
},
{
"type": "TextBlock",
"size": "extraLarge",
"color": "accent",
"text": "SFO",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": "auto",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "http://adaptivecards.io/content/airplane.png",
"size": "small",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "right",
"text": "Amsterdam",
"isSubtle": true
},
{
"type": "TextBlock",
"horizontalAlignment": "right",
"size": "extraLarge",
"color": "accent",
"text": "AMS",
"spacing": "none"
}
]
}
]
},
{
"type": "TextBlock",
"text": "Non-Stop",
"weight": "bolder",
"spacing": "medium"
},
{
"type": "TextBlock",
"text": "Fri, October 18 9:50 PM",
"weight": "bolder",
"spacing": "none"
},
{
"type": "ColumnSet",
"separator": true,
"columns": [
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"text": "Amsterdam",
"isSubtle": true
},
{
"type": "TextBlock",
"size": "extraLarge",
"color": "accent",
"text": "AMS",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": "auto",
"items": [
{
"type": "TextBlock",
"text": " "
},
{
"type": "Image",
"url": "http://adaptivecards.io/content/airplane.png",
"size": "small",
"spacing": "none"
}
]
},
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "right",
"text": "San Francisco",
"isSubtle": true
},
{
"type": "TextBlock",
"horizontalAlignment": "right",
"size": "extraLarge",
"color": "accent",
"text": "SFO",
"spacing": "none"
}
]
}
]
},
{
"type": "ColumnSet",
"spacing": "medium",
"columns": [
{
"type": "Column",
"width": "1",
"items": [
{
"type": "TextBlock",
"text": "Total",
"size": "medium",
"isSubtle": true
}
]
},
{
"type": "Column",
"width": 1,
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "right",
"text": "$4,032.54",
"size": "medium",
"weight": "bolder"
}
]
}
]
}
]
}
AdaptiveCard References
Adaptive Cards overview
Adaptive Cards Sample
Adaptive Cards for bot developers
AllCards
The "#AllCards" template displays all of the cards as Attachments of the Activity object.
# AllCards
[Activity
Attachments = ${HeroCard()} | ${ThumbnailCard()} | ${SigninCard()} | ${AnimationCard()} |
${VideoCard()} | ${AudioCard()} | ${AdaptiveCard()}
AttachmentLayout = ${AttachmentLayoutType()}
]
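The AttachmentLayoutType template referenced above simply returns the layout value for the activity's AttachmentLayout field. A minimal sketch (carousel and list are the two layout values the Bot Framework Activity schema supports):

```lg
# AttachmentLayoutType
- carousel
```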
Further reading
Bot Framework - Cards
Add media to messages
Language generation
Structured response template
Next
Learn how to define triggers and events.
Asking for user input
9/21/2020 • 11 minutes to read
Bot Framework Composer makes it easier to collect and validate a variety of data types, and handle instances
when users input invalid or unrecognized data.
Now that you have it loaded in Composer, take a look to see how it works.
3. Select Design from the Composer Menu.
4. Select the Greeting trigger in the main dialog to get an idea of how this sample works.
5. In this sample, the Greeting trigger is always the first thing that runs when the bot starts. This trigger
executes the Send a response action. The Send a response action calls the WelcomeUser template:
${WelcomeUser()} . To see what the WelcomeUser template does, select Bot Responses from the Composer
Menu and search for #WelcomeUser in the Name column.
IMPORTANT
When the bot first starts, it executes the greeting trigger. The Send a response action associated with the
greeting trigger starts to execute and calls the ${WelcomeUser()} template where different options are defined
and presented to the user. When the user responds by typing in the name or number of the item they wish to select,
the bot searches the user input for known patterns. When a pattern is found the bot sends an event with the
corresponding intent. That intent is captured by the Intent recognized trigger that was created to handle that
intent. For example, if the user enters '01' or 'TextInput' the TextInput trigger handles that event and calls the
TextInput dialog. The remaining steps in this section will walk you through this process.
6. Select the main dialog and look at the Properties pane; note that the Recognizer Type is set to Regular
Expression and RegEx patterns to intents has a list of all of the intents that includes the Intent name
and its corresponding Pattern. The following image shows the correlation between the list of RegEx
patterns to intents in the main dialog and the message displayed to the user when the bot first starts.
In each of the following sections you will learn how to create each of these user input types, using the
corresponding dialog as an example.
Text input
The Text input prompts users for their name then responds with a greeting using the name provided. This is
demonstrated in the Asking Questions example in the TextInput dialog. You create a text input prompt by
selecting the + icon in the Authoring canvas then selecting Text input from the Ask a question menu.
Optionally, the next section details how to create an entire text input dialog or you can go directly to the Number
Input section.
Create a text input action
To create a text input action:
1. Select the + icon, then select Text input from the Ask a Question menu.
2. Enter Hello, I'm Zoidberg. What is your name? (This can't be interrupted) into the Prompt field in
the Properties panel.
3. Select the User Input tab, then enter user.name into the Property to fill field.
4. Create a new action by selecting the + icon in the Authoring canvas, then select Send a response from
the list of actions.
5. Enter Hello ${user.name}, nice to talk to you! into the LG editor in the Properties panel.
NumberInput
The NumberInput example prompts the user for their age and other numerical values using the Number input
action.
As seen in the NumberInput dialog, the user is prompted for two numbers: their age, stored as user.age, and the
result of 2*2.2, stored as user.result. When using number prompts you can set the Output Format to either
float or integer.
Create a number input action
To create a number input action:
1. Select the + icon in the Authoring canvas. When the list of actions appears, select Number input from the
Ask a question menu.
In the Bot Asks tab of the Properties panel, enter - What is your age?
Select the User Input tab, then enter user.age into the Property to fill field.
NOTE
You can set the Output Format field in the User Input tab to either float or integer. float is the
default.
Select the Other tab, then enter - Please input a number. into the Invalid Prompt field.
2. Create another action by selecting the + icon in the Authoring panel and selecting Send a response, and
enter -Hello, your age is ${user.age}! into the prompt field. This will cause the bot to respond back to the
user with their age.
3. Next, follow step 1 in this section to create another Number Input action.
In the Bot Asks tab of the Properties panel, enter - 2 * 2.2 equals?
Select the User Input tab, then enter user.result into the Property to fill field.
Select the Other tab, then enter - Please input a number. into the Invalid Prompt field.
4. Create a Branch: If/Else action by selecting the + icon in the Authoring panel and selecting Branch:
If/Else from the Create a condition menu.
5. Select the + icon in the true branch and select Send a response .
6. Enter -2 * 2.2 equals ${user.result}, that's right! into the prompt field. This will cause the bot to
respond back to the user with "2 * 2.2 equals 4.4, that's right!".
7. Create a conditional action by selecting the + icon in the false branch. This will execute when the user
enters an invalid answer.
8. Create another action by selecting the + icon in the Authoring panel and selecting Send a response .
9. Enter -2 * 2.2 equals ${user.result}, that's wrong! into the prompt field.
Confirmation
Confirmation prompts are useful after you've asked the user a question and want to confirm their answer.
Unlike the Multiple choice action that enables your bot to present the user with a list to choose from,
confirmation prompts ask the user to make a binary (yes/no) decision.
Create a confirmation action
To create a confirmation action:
1. Select the + icon, then select Confirmation from the Ask a Question menu.
2. Enter -Would you like ice cream? in the Prompt field of the Properties panel.
3. Switch to the User Input tab.
TIP
You can also switch to the User Input tab by selecting the User answers action in the Authoring canvas.
Multiple choice
Multiple choice enables you to present your users with a list of options to choose from.
Create a multiple choice action
To create a prompt with a list of options that the user can choose from:
1. Select the + icon then select Multiple choice from the Ask a Question menu.
2. Select the Bot Asks tab and enter - Please select a value from below: in the Prompt with multi-choice
field.
3. Switch to the User Input tab.
Enter user.style in the Property field.
Scroll down to the Array of choices section and select one of the three options (simple choices,
structured choices, expression) to add your choices. For example, if you choose simple choices,
you can add the choices one at a time in the field. Every time you add a choice option, make sure you
press Enter.
a. Test1
b. Test2
c. Test3
Additional information: The User Input tab
The Output Format field is set to value by default. This means the value, not the index, will be returned;
for this example that means any one of these three values will be returned: 'Test1', 'Test2', 'Test3'.
By default the locale is set to en-us . The locale sets the language the recognizer should expect from the
user (US English in this sample).
By default the List style is set to Auto . The List style sets the style for how the choice options are
displayed. The table below shows the differences in appearance for the three choices:
There are three fields related to inline separation, or how your bot separates the text of your choices:
Inline separator - the character used to separate individual choices when there are more than two choices,
usually a comma ( , ).
Inline or - the separator used when there are only two choices, usually or .
Inline or more - the separator between the last two choices when there are more than two options, usually
, or .
The Include numbers option allows you to use plain or numbered lists when the List Style is set to List .
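With the default separator settings and Include numbers enabled, the inline list style would render the three choices from this example roughly as follows (the exact punctuation depends on the separator fields above):

```text
Please select a value from below: (1) Test1, (2) Test2, or (3) Test3
```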
Attachment
The Attachment Input example demonstrates how to enable users to upload images, videos, and other media.
When running this example bot in the Emulator, once this option is selected from the main menu you will be
prompted to "Please send an image.", select the paperclip icon next to the text input area and select an image file.
Create an attachment input action
To implement an Attachment Input action:
1. Select the + icon then select File or attachment from the Ask a Question menu.
2. Enter - Please send an image. in the Prompt field of the Properties panel.
3. Switch to the User Input tab.
Enter dialog.attachments in the Property to fill field.
Enter all in the Output Format field.
TIP
You can set the Output Format to first (only the first attachment will be output, even if multiple were selected)
or all (all attachments will be output when multiple were selected).
5. Select the + icon in the Authoring panel and select Send a response .
6. Enter -${ShowImage(dialog.attachments[0].contentUrl, dialog.attachments[0].contentType)} into the prompt
field.
DateTimeInput
The DateTimeInput sample demonstrates how to get date and time information from your users using Date or
time prompt.
Create a date time input action
To prompt a user for a date:
1. Select the + icon then select Date or time from the Ask a Question menu.
2. Enter -Please enter a date. in the Prompt field of the Properties panel.
3. Switch to the User Input tab and enter user.date in the Property to fill field.
4. Switch to the Other tab and enter - Please enter a date. in the Invalid Prompt field.
5. Select the + icon in the Authoring panel and select Send a response .
6. Enter -You entered: ${user.date[0].value}
IMPORTANT
The value to be validated is present in the this.value property. this is a memory scope that pertains to the active
action's properties. Read more in the memory concept article.
Unrecognized Prompt : This is the message that is sent to a user if the response entered was not
recognized. It is a good practice to add some guidance along with the prompt. For example when a user
input is the name of a city but a five-digit zip code is expected, the Unrecognized Prompt can be the
following:
Sorry, I do not understand '${this.value}'. Please enter a zip code in the form of 12345.
Validation Rules : This is the rule defined in adaptive expressions to validate the user's response. The input
is considered valid only if the expression evaluates to true . An example validation rule specifying that the
user input be 5 characters long can look like the following:
length(this.value) == 5
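Validation rules can combine prebuilt adaptive expressions functions. For example, a sketch that requires the input to be exactly five digits, using the prebuilt isMatch function (the pattern shown is illustrative):

```text
length(this.value) == 5 && isMatch(this.value, '^\d{5}$')
```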
Invalid Prompt : This is the message that is sent to a user if the response entered is invalid according to
the Validation Rules. It is a good practice to specify in the message that it is not valid and what is expected.
For example:
Sorry, '${this.value}' is not valid. I'm looking for a 5 digit number as zip code. Please specify a
zip code in the form 12345
.
Default Value Response : The value that is returned after the max turn count has been reached. This will be sent
to the user after the last failed attempt. If this is not specified, the prompt will simply end and move on
without telling the user a default value has been selected. In order for the default value response to be used,
you must specify both the default value and the default value response.
Max turn count : The maximum number of re-prompt attempts before the default value is selected. When
the Max turn count limit is reached, the property will be set to null unless a default value is
specified. Note that if your dialog is not designed to handle a null value, it may crash the bot.
Default value : The value returned when no value is supplied. When a default value is specified, you should
also specify the default value response.
Allow interruptions (true/false): Determines whether the parent dialog can interrupt the child dialog.
Consider using the Allow Interruptions property to handle either a global interruption or a local
interruption within the context of the dialog.
Always prompt (true/false): Collect information even if the specified property isn't empty.
Next
Learn how to manage conversation flow using conditionals and dialogs.
Best practices for building bots using Composer.
Controlling conversation flow
9/21/2020 • 11 minutes to read
The conversations a bot has with its users are controlled by the content of its dialog. Dialogs contain templates for
messages the bot will send, along with instructions for the bot to carry out tasks. While some dialogs are linear -
just one message after the other - more complex interactions will require dialogs that branch and loop based on
what the user says and the choices they make. This article explains how to add both simple and complex
conversation flow using examples from the sample bot provided in the Composer examples.
Now that you have it loaded in Composer, take a look to see how it works.
3. Select Design from the Composer Menu.
4. Select the Greeting trigger in the main dialog to get an idea of how this sample works.
5. In this sample, the Greeting trigger is always the first thing that runs when the bot starts. This trigger
executes the Send a response action. The Send a response action calls the WelcomeUser template:
${WelcomeUser()} . To see what the WelcomeUser template does, select Bot Responses from the Composer Menu
and search for #WelcomeUser in the Name column.
IMPORTANT
When the bot first starts, it executes the greeting trigger. The greeting trigger presents the user with different
options using SuggestedActions . When the user selects one of them, the bot sends an event with the
corresponding intent. That intent is captured by the Intent recognized trigger that was created to handle that
intent. For example, if the user enters 'IfCondition' the IfCondition trigger handles that event and calls the
IfCondition dialog. The remaining steps in this section will walk you through this process.
6. Select the main dialog and look at the Properties pane; note that the Recognizer Type is set to Regular
Expression.
IMPORTANT
In each of the following sections you will learn how to create each of these different ways to control the conversation
flow, using the corresponding dialog as an example.
7. To see the intents for each trigger, select the trigger and look in the Properties panel. The following image
shows a regular expression such that if the user enters either IfCondition or 01, the IfCondition trigger will
execute; the (?i) enables case-insensitive matching in the regular expression.
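Such a pattern might look like the following (illustrative; the sample's exact pattern may differ slightly). Both "ifcondition" and "01" would match and fire the IfCondition intent:

```text
(?i)IfCondition|01
```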
Conditional branching
Composer offers several mechanisms for controlling the flow of the conversation. These building blocks instruct
the bot to make a decision based on a property in memory or the result of an expression. Below is a screenshot of
the Create a Condition menu:
Branch: If/Else instructs the bot to choose between one of two paths based on a yes / no or true /
false type value.
Branch: Switch (multiple options) branch instructs the bot to choose the path associated with a specific
value - for example, a switch can be used to build a multiple-choice menu.
Branch: If/Else
The Branch: If/Else action creates a decision point for the bot, after which it will follow one of two possible
branches. To create a Branch: If/Else action, select the + icon in the Authoring canvas, then select Branch:
If/Else from the Create a Condition menu.
The decision is controlled by the Condition field in the Properties panel, which must contain an expression that
evaluates to true or false. For example, in the screenshot below the bot is evaluating whether user.age is greater
than or equal to 18.
Once the condition has been set, the corresponding branches can be built. The editor will now display two parallel
paths in the flow: one that will be used if the condition evaluates to true, and one if it evaluates to
false. Below, the bot will Send a response based on whether user.age >= 18 evaluates to true or false.
Branch: Switch
In a Branch: Switch, the value of the parameter defined in the Condition field of the Properties panel is
compared to each of the values defined in the Cases section that immediately follows the Condition field. When
a match is found, the flow continues down that path, executing the actions it contains. To create a Branch: Switch
action, select the + icon in the Authoring canvas, then select Branch: Switch from the Create a Condition
menu.
Like Branch: If/Else, you set the Condition to be evaluated in the Properties panel. Underneath, you can create
branches in your switch condition by entering a value and pressing Enter. As each case is added, a new branch
will appear in the flow, which can then be customized with actions. See below how the Nick and Tom branches are
added, both in the property panel on the right and in the authoring canvas. In addition, there will always be a
"default" branch that executes if no match is found.
Loops
Below is a screenshot of the Looping menu:
Loop: for each item instructs the bot to loop through a set of values stored in an array and carry out the
same set of actions with each one.
Loop: for each page (multiple items) can be used to step through a very large array one page at a time.
Continue loop instructs the bot to stop executing this template and continue with the next iteration of the
loop.
Break out of loop instructs the bot to stop executing this loop.
Loop: for each item
The Loop: for each item action instructs the bot to loop through a set of values stored in an array and carry out
the same set of actions with each element of the array.
For the sample in this section you will first create and populate an array, then create the for each item loop.
Create and populate an array
To create and populate an array:
1. Select Edit an Array property from the Manage properties menu.
2. In the Properties panel, edit the following fields:
Type of change : Push
Items property : dialog.ids
Value : 10000+1000+100+10+1
3. Repeat the previous two steps to add two more elements to the array, setting the Value to:
200*200
888888/4
4. (Optional) Send a response to the user. You do that by selecting the + in the Authoring canvas, then Send
a response. Enter -Pushed dialog.id into a list into the Properties panel.
Now that you have an array to loop through, you can create the loop.
Loop through the array
To create the for each loop:
1. Select the + icon in the Authoring Canvas then Loop: for each item from the Looping menu.
2. Enter the name of the array you created, dialog.ids, into the Items property field.
3. To show the results when the bot is running, add a new action to occur with each iteration of the loop to
display the results. You do that by selecting the + in the Authoring canvas then Send a response . Enter
- ${dialog.foreach.index}: ${dialog.foreach.value} into the Properties panel.
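The loop built above can be simulated in Python to show what the transcript will contain (a sketch of the runtime behavior, not Composer code; `dialog_ids` stands in for the dialog.ids memory property, and the three adaptive expressions are evaluated before being pushed):

```python
# Simulation of the three "Edit an array property: Push" steps followed by
# the Loop: for each item action.
dialog_ids = [
    10000 + 1000 + 100 + 10 + 1,  # first pushed value: 11111
    200 * 200,                    # second pushed value: 40000
    888888 // 4,                  # third pushed value: 222222
]

# Loop: for each item exposes dialog.foreach.index and dialog.foreach.value;
# each iteration sends "- <index>: <value>" to the user.
responses = [f"- {index}: {value}" for index, value in enumerate(dialog_ids)]
for response in responses:
    print(response)
```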
Once the loop begins, it will repeat once for each item in the array. Note that it is not currently possible to end the
loop before all items have been processed. If the bot needs to process only a subset of the items, use Branch:
If/Else and Branch: Switch branches within the loop to create nested conditional paths.
Loop: for each page
Loop: for each page (multiple items) loops are useful for situations in which you want to loop through a large
array one page at a time. Like Loop: for each item , the bot iterates over an array; the difference is that Loop:
for each page executes its actions once per page of items instead of once per item in the array.
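The paging behavior can be sketched in Python (a simulation assuming a Page size of 2 over a six-element array like the one used in this sample; inside the real action, dialog.foreach.value holds the current page rather than a single item):

```python
def pages(items, page_size):
    """Yield successive pages (sub-lists) of the array, the way
    Loop: for each page (multiple items) steps through them."""
    for start in range(0, len(items), page_size):
        yield items[start:start + page_size]

# A six-element array; actions inside the loop run once per page.
dialog_ids = [1, 2, 3, 4, 5, 6]
page_list = list(pages(dialog_ids, 2))
```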
For the sample in this section you will first create and populate an array, then create the for each page loop.
Create and populate an array
To create and populate an array:
1. Select Edit an Array property from the Manage properties menu.
2. Add properties to the array you just created by selecting + in the Authoring canvas , then Edit an Array
property from the Manage properties menu.
3. In the Properties panel, edit the following fields:
Type of change : Push
Items property : dialog.ids
Value : 1
4. Repeat step 3, incrementing the Value by 1 each time, until you have six properties in your array.
IMPORTANT
You will notice that this differs from the Controlling Conversation Flow example; the reason is to demonstrate
the Page size field in the loop that you will create next.
In the screenshot above, ChildDialog will be started and passed two options:
The first will contain the value of the key foo and be available inside the child dialog as dialog.<field> , in
this case, dialog.foo .
The second will contain the value of the key value and will be available inside the child dialog as dialog.<field>
, in this case, dialog.value .
Note that it is not necessary to map memory properties that would otherwise be available automatically - that is,
the user and conversation scopes will automatically be available for all dialogs. However, values stored in the
turn and dialog scope do need to be explicitly passed.
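In the underlying .dialog file, passing options like these corresponds roughly to the following sketch (the $kind and field names follow the adaptive dialog schema, but the option value expressions here are hypothetical placeholders, and Composer's exact output may differ by version):

```json
{
  "$kind": "Microsoft.BeginDialog",
  "dialog": "ChildDialog",
  "options": {
    "foo": "=turn.foo",
    "value": "=dialog.value"
  }
}
```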
In addition to passing these key/value pairs into a child dialog, it is also possible to receive a return value from the
child dialog. This return value is specified as part of the End this dialog action, as described below.
In addition to Begin a new dialog , there are a few other ways to launch a child dialog.
Replace this Dialog
Replace this Dialog works just like Begin a new dialog , with one major difference: the parent dialog does not
resume when the child finishes. To replace a dialog select the + icon in the Authoring canvas then select
Replace this Dialog from the Dialog management menu.
Repeat this Dialog
Repeat this Dialog causes the current dialog to repeat from the beginning. Note that this does not reset any
properties that may have been set during the course of the dialog's first run. To repeat a dialog select the + icon in
the Authoring canvas then select Repeat this Dialog from the Dialog management menu.
Ending Dialogs
Any dialog called will naturally end and return control to its parent dialog when it reaches the last action in its flow.
While it is not necessary to explicitly call End this dialog , it is sometimes desirable to end a dialog before it
reaches the end of the flow - for example, you may want to end a dialog if a certain condition is met.
Another reason to call the End this dialog action is to pass a return value back to the parent dialog. The return
value of a dialog can be a property in memory or an expression, allowing developers to return complex values
when necessary. To do this, select the + icon in the Authoring canvas then select End this Dialog from the
Dialog management menu.
Imagine a child dialog used to collect a display name for a user profile. It asks the user a series of questions about
their preferences, finally helping them enter a valid user name. Rather than returning all of the information
collected by the dialog, it can be configured to return only the user name value, as seen in the example below. The
dialog's End this dialog action is configured to return the value of dialog.new_user_name to the parent dialog.
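In the .dialog file, that End this dialog action corresponds roughly to this sketch (field names per the adaptive dialog schema; treat it as illustrative rather than the exact output of Composer):

```json
{
  "$kind": "Microsoft.EndDialog",
  "value": "=dialog.new_user_name"
}
```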
Further Reading
Adaptive dialogs
Adaptive expressions
Next
Language Generation
Adding LUIS for language understanding
9/21/2020 • 2 minutes to read
This article shows how to integrate language understanding into your bot using the cloud-based service LUIS.
LUIS lets your bots identify valuable information from user input by interpreting user needs (intents) and
extracting key information (entities). Understanding user intent makes it possible for your bot to know how to
respond with helpful information using language generation.
Prerequisites
Knowledge of language understanding and events and triggers
A LUIS account and a LUIS authoring key.
NOTE
The Default recognizer can be one of the following recognizers:
None - do not use a recognizer.
LUIS recognizer - to extract intents and entities from a user's utterance based on the defined LUIS application.
QnA Maker recognizer - to extract intents from a user's utterance based on the defined QnAMaker application.
Cross-trained recognizer set - to compare recognition results from more than one recognizer to decide a winner.
- Hi!
- Hello.
- Hey there.
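In the .lu file behind the recognizer, utterances such as these sit under an intent header. A minimal sketch, assuming the intent is named Greeting (the name is illustrative):

```
# Greeting
- Hi!
- Hello.
- Hey there.
```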
After you define your trigger and configure it for a specific intent, you can add actions to be executed after the
trigger fires. One option is sending a response message.
3. Create a new action by selecting the plus (+) icon in the Authoring canvas , then Send a response from
the drop-down list.
4. Enter This is a greeting intent! in the LG editor in the Properties panel.
TIP
The response message in the LG editor is governed by the rules defined in the .lg file format.
Publish
After you are done with all previous steps, you are ready to publish your language understanding data to LUIS.
1. Select the Start Bot button located in the Composer toolbar.
2. Every time you add or edit anything in your LU model, your data will be saved to LUIS. The first time, you
will be prompted for your LUIS Primary Key ; enter it when prompted by the Publish LUIS models
form, then select OK .
TIP
If you go to your LUIS account, you will find the newly published application.
Test in Emulator
It's always a good idea to verify that your bot works correctly when you add new functionality. You can test your
bot's new language understanding capabilities using the Emulator.
1. Select Test in Emulator in the Composer tool bar.
2. When the Emulator is running, send it messages using the various utterances you created to see if they
fire your Intent recognized triggers.
Next
Try the ToDoBotWithLuisSample in Composer to see how LUIS is used in a bot.
Learn how to add a QnA Maker knowledge base to your bot.
Creating QnA Maker knowledge base in Composer
9/21/2020 • 3 minutes to read
In Bot Framework Composer, you can create your own QnA Maker knowledge base (KB) and publish it to
https://www.qnamaker.ai. This article shows how to start from a QnA Maker knowledge base before creating a bot,
add a QnA Maker knowledge base while developing a bot, and publish your QnA Maker knowledge base.
Prerequisites
A basic bot built using Composer.
A subscription to Microsoft Azure.
A basic understanding of QnA Maker service and how to create a QnA Maker resource in the Azure portal.
A QnA Maker subscription key, obtained when you create your QnA Maker resource.
IMPORTANT
If you built Composer from source, you need to run a command before you can create a QnA Maker knowledge base in
Composer. Before running yarn startall to start your Composer:
On Windows, set QNA_SUBSCRIPTION_KEY=<Your_QnA_Subscription_Key>
On macOS or Linux, export QNA_SUBSCRIPTION_KEY=<Your_QnA_Subscription_Key>
If you are using the desktop application version of Composer, this step is not necessary.
Additional information
Manage QnA Maker resources.
Add a QnA Maker knowledge base to your bot in Composer.
How to add a QnA Maker knowledge base to your bot
9/21/2020 • 4 minutes to read
This article will teach you how to add a QnA Maker knowledge base to a bot created using Bot Framework Composer.
You will find this helpful when you want to send a user's question to your bot and have the QnA Maker knowledge base
provide the answer.
Prerequisites
A basic bot built using Composer
A QnA Maker knowledge base
Review settings
Review the QnA Maker settings panel when selecting the QnA Maker dialog. While you can edit settings in the panel, a
security best practice is to edit security-related settings (such as the endpoint key, knowledge base ID and hostname) from
the Settings menu. This menu writes the values to the appsettings.json file and persists the values in the browser
session. If you edit the settings from the QnA Maker settings panel, these settings are less secure because they are written
to the dialog file.
The values for KnowledgeBase id , Endpoint Key , and Hostname shown in the preceding screenshot are references to the
values in the appsettings.json file. Do not change these values in this panel. Changes made in this panel are saved to a
file on disk; if you manage the Composer files with source control, the security settings saved in the panel will also be
checked into source control.
Editing from the Settings menu of Composer saves the changes to the appsettings.json file which should be ignored by
your source control software.
Required: Knowledge base ID - provided by appsettings.json as settings.qna.knowledgebaseid . Found in the QnA
Maker portal's Settings for the knowledge base, after the knowledge base is published (for example,
12345678-MMMM-ZZZZ-AAAA-123456789012 ). You shouldn't need to provide this value.
Required: Endpoint key - provided by appsettings.json as settings.qna.endpointkey . Found in the QnA Maker
portal's Settings for the knowledge base, after the knowledge base is published (for example,
12345678-AAAA-BBBB-CCCC-123456789012 ). You shouldn't need to provide this value.
Optional: Active learning card title - text to display to the user before providing follow-up prompts, for
example: Did you mean: .
Optional: Card no match text - text to display as a card to the user at the end of the list of follow-up
prompts, to indicate that none of the prompts match the user's need. For example: None of the above .
Edit settings
Edit the QnA Maker settings securely from Settings . These values are held in the browser
session only.
1. Select the cog in the side menu. This provides the ability to edit the Dialog settings .
2. Edit the values for the knowledge base ID, the endpoint key, and the host name. The endpoint key and host name
are available from the QnA Maker portal's Publish page.
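The resulting section of appsettings.json can be sketched as follows (the nesting mirrors the settings.qna.* paths referenced above; the placeholders mark values you supply, and the exact shape may vary by Composer version):

```json
{
  "qna": {
    "knowledgebaseid": "<your-knowledge-base-id>",
    "endpointkey": "<your-endpoint-key>",
    "hostname": "<your-qna-hostname>"
  }
}
```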
Each dialog in Bot Framework Composer includes a set of triggers (event handlers) that contain actions
(instructions) for how the bot will respond to inputs received when the dialog is active. There are several different
types of triggers in Composer. They all work in a similar manner and can even be interchanged in some cases. This
article explains how to define each type of trigger. Before you walk through this article, please read the events and
triggers concept article.
The table below lists different types of triggers in Composer and their descriptions.
Unknown intent The Unknown intent trigger fires when an intent is defined
and recognized but there is no Intent recognized trigger
defined for that intent.
QnA Intent recognized When an intent (QnAMaker) is recognized the QnA Intent
recognized trigger fires.
Duplicated intents recognized The Duplicated intents recognized trigger fires when
multiple intents are recognized. It compares recognition
results from more than one recognizer to decide a winner.
Dialog events When a dialog event such as BeginDialog occurs this trigger
fires.
Custom event When an Emit a custom event occurs the Custom event
trigger will fire.
Unknown intent
This is a trigger used to define actions to take when there is no Intent recognized trigger to handle an existing
intent.
Follow the steps to define an Unknown intent trigger:
1. Select the desired dialog. Select + Add and then Add new trigger from the toolbar. Choose Unknown
intent in the Create a trigger window, then select Submit . You will then see an empty Unknown intent
trigger in the authoring canvas.
2. Select the + sign under the trigger node to add any action node(s) you want to include. For example, you
can select Send a response to send a message "This is an unknown intent trigger!". When this trigger is
fired, the response message will be sent to the user.
Intent recognized
This is a trigger type used to define actions to take when an intent is recognized. This trigger works in conjunction
with LUIS recognizer and Regular Expression recognizer.
NOTE
Please note that the Default recognizer can work as a LUIS recognizer when you define LUIS models. Read more in the
recognizers section of the dialogs concept article.
Follow the steps to define an Intent recognized trigger with Regular Expression recognizer:
1. Select the desired dialog in the Navigation pane of Composer's Design page.
2. In the Properties panel of your selected dialog, choose Regular Expression as the recognizer type for your
dialog.
3. Create an Intent recognized trigger. Select New Trigger in the Navigation pane then Intent
recognized from the drop-down list.
Enter a name in the What is the name of this trigger field. This is also the name of the intent.
Enter a Regular Expression pattern in the Please input regex pattern field.
The following image shows the definition of an Intent recognized trigger named BookFlight . User input
that matches the Regex pattern will fire this trigger.
A regular expression is a special text string for describing a search pattern that can be used to match simple or
sophisticated patterns in a string. Composer exposes the ability to define intents using regular expressions and
also allows regular expressions to extract simple entity values. While LUIS offers the flexibility of a more fully
featured language understanding technology, the Regular Expression recognizer works well when you need to
match a narrow set of highly structured commands or keywords.
In the example above, a book-flight intent is defined. However, this will only match the very narrow pattern "book
flight to [somewhere]", whereas the LUIS recognizer will be able to match a much wider variety of messages.
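The narrowness of a regex intent can be illustrated in Python (a sketch; the pattern is hypothetical and uses Python's (?P<name>...) named-group syntax, whereas Composer/.NET uses (?<name>...) groups):

```python
import re

# A hypothetical BookFlight pattern with a named "city" entity group.
book_flight = re.compile(r"book flight to (?P<city>.+)", re.IGNORECASE)

# A message matching the narrow pattern fires the intent and extracts the entity...
match = book_flight.search("book flight to London")
city = match.group("city") if match else None

# ...but a natural paraphrase does not match, which is where LUIS helps.
miss = book_flight.search("I'd like to fly to London tomorrow")
```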
Learn how to define an Intent recognized trigger with LUIS recognizer in the how to use LUIS article.
NOTE
Please note that the Default recognizer can work as a QnA recognizer when you define a QnA Maker knowledge base. Read
more in the recognizers section of the dialogs concept article.
3. Select + Add and then + New Trigger in the tool bar. Select QnA Intent recognized trigger from the
drop-down list.
NOTE
Please note that the Default recognizer can work as a CrossTrained recognizer when you have both LUIS and QnA intents
defined. Read more in the recognizers section of the dialogs concept article.
3. Select + Add and then + New Trigger in the tool bar. Select Duplicated intents recognized trigger
from the drop-down list.
Dialog events
This is a trigger type used to define actions to take when a dialog event such as BeginDialog is fired. Most dialogs
will include a trigger configured to respond to the BeginDialog event, which fires when the dialog begins and
allows the bot to respond immediately. Follow the steps below to define a Dialog started trigger:
1. Select the desired dialog. Select + Add and then Add new trigger from the toolbar.
2. In the Create a trigger window, select Dialog events from the drop-down list.
3. Select Dialog started (Begin dialog event) from the Which event? drop-down list, then select Submit .
4. Select the + sign under the Dialog started node and then select Begin a new dialog from the Dialog
management menu.
5. Before you can use this trigger you must associate a dialog with it. You do this by selecting a dialog from the
Dialog name drop-down list in the Properties panel on the right side of the Composer window. You can
select an existing dialog or create a new one. The example below demonstrates selecting an existing dialog
named weather.
Activities
This type of trigger is used to handle activity events, such as your bot receiving a ConversationUpdate activity. This
indicates a new conversation began, and you use a Greeting (ConversationUpdate activity) trigger to handle
it.
The following steps demonstrate how to create a Greeting (ConversationUpdate activity) trigger to send a
welcome message:
1. Select the desired dialog. Select + Add and then Add new trigger from the tool bar.
2. In the Create a trigger window, select Activities from the drop-down list.
3. Select Greeting (ConversationUpdate activity) from the Which activity type? drop-down list then
select Submit .
4. After you select Submit , you will see the trigger node in the authoring canvas.
5. Select the + sign under the ConversationUpdate Activity node and add any desired action such as Send
a response .
Custom event
The Custom event trigger will only fire when a matching Emit a custom event occurs. It is a trigger that any
dialog in your bot can consume. To define and consume a Custom event trigger, you need to create an Emit a
custom event first. Follow the steps below to create an Emit a custom event :
1. Select the trigger you want to associate your Custom event with. Select the + sign and then select Emit a
custom event from the Access external resources drop-down list.
2. In the Properties panel on the right side of the Composer window, enter a name ("weather") into the
Event name field, then set Bubble event to true .
TIP
When Bubble event is set to true , any event that is not handled in the current dialog will bubble up to that
dialog's parent dialog, which will continue to look for handlers for the custom event.
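In the .dialog file, the Emit a custom event action above corresponds roughly to this sketch (field names per the adaptive dialog schema; illustrative only, as Composer's exact output may differ by version):

```json
{
  "$kind": "Microsoft.EmitEvent",
  "eventName": "weather",
  "bubbleEvent": true
}
```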
Now that your Emit a custom event has been created, you can create a Custom event trigger to handle this
event. When the Emit a custom event occurs, any matching Custom event trigger at any dialog level will fire.
Follow the steps to create a Custom event trigger to be associated with the previously defined Emit a custom
event .
3. Select + Add and then + Add new trigger from the toolbar.
4. In the pop-up window, select Custom events from the drop-down list and enter a name ("weather") into
the What is the name of the custom event field. Select Submit .
5. Now you can add an action to your custom event trigger; this defines what will happen when it is triggered.
Do this by selecting the + sign and then Send a response from the actions menu. Enter the desired
response for this action in the Language Generation editor; for this example enter "This is a custom
trigger!".
Now you have completed both of the required steps needed to create and execute a custom event. When Emit a
custom event fires, your custom event trigger will fire and handle this event, sending the response you defined.
Next
Learn how to control conversation flow.
Define intents with entities
9/21/2020 • 6 minutes to read
Conversations do not always progress in a linear fashion. Users may want to provide specific information, present
information out of order, or make corrections. Bot Framework Composer supports language understanding in
these advanced scenarios through the advanced dialog capabilities offered by adaptive dialogs and LUIS applications.
In this article, we cover some details of how the LUIS recognizer extracts the intents and entities you define in
Composer. The code snippets come from the To do with LUIS example. Read the How to use samples article to
learn how to open the example bot in Composer.
Prerequisites
A basic understanding of the intent and entity concepts.
A basic understanding of how to define an Intent Recognized trigger.
A basic understanding of how to use LUIS in Composer.
A LUIS account and a LUIS authoring key.
Now that you have it loaded in Composer, take a look to see how it works.
When triggered, if LUIS is able to identify a city, the city name will be made available as @city within the
triggered actions. The entity value can be used directly in expressions and LG templates, or stored into a memory
property for later use. The JSON view of the query "book me a flight to London" in LUIS app looks like this:
{
"query": "book me a flight to london",
"prediction": {
"normalizedQuery": "book me a flight to london",
"topIntent": "BookFlight",
"intents": {
"BookFlight": {
"score": 0.9345866
}
},
"entities": {
"city": [
"london"
],
"$instance": {
"city": [
{
"type": "city",
"text": "london",
"startIndex": 20,
"length": 6,
"score": 0.834206,
"modelTypeId": 1,
"modelType": "Entity Extractor",
"recognitionSources": [
"model"
]
}
]
}
}
}
}
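Walking that JSON shows where the shorthand values come from; a Python sketch (the prediction dict below abbreviates the response above):

```python
# Abbreviated form of the LUIS prediction shown above.
prediction = {
    "topIntent": "BookFlight",
    "intents": {"BookFlight": {"score": 0.9345866}},
    "entities": {"city": ["london"]},
}

top_intent = prediction["topIntent"]             # the recognized intent
city = prediction["entities"]["city"][0]         # what @city resolves to
score = prediction["intents"][top_intent]["score"]
```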
"property": "user.name"
"value": "=coalesce(@userName, @personName)"
"allowInterruptions": "!@userName && !@personName"
There are two key properties in the example above: value and allowInterruptions .
The expression specified in the value property will be evaluated every time the user responds to the specific
input. In this case, the expression =coalesce(@userName, @personName) attempts to take the first non-null entity
value of userName or personName and assigns it to user.name . The input will issue a prompt if the property
user.name is still null after the value assignment, unless always prompt evaluates to true .
The next property of interest is allowInterruptions . This is set to the following expression:
!@userName && !@personName . This expression means exactly what it reads: allow an interruption if we did not
find a value for the entity userName or the entity personName .
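The two expressions can be paraphrased in Python (a sketch of their semantics, not runtime code; None stands in for an unrecognized entity):

```python
def coalesce(*values):
    """First non-null argument, like =coalesce(@userName, @personName)."""
    return next((v for v in values if v is not None), None)

def allow_interruptions(user_name, person_name):
    """!@userName && !@personName: interrupt only if neither entity was found."""
    return user_name is None and person_name is None
```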
Notice that you can just focus on things the user can say to respond to this specific input in the
Expected responses . With these capabilities, you get to provide labelled examples of the entity and use it no
matter where or how it was expressed in the user input.
If a specific user input does not work, simply try adding that utterance to the Expected responses .
The user could have answered multiple questions in the same response; here is an example.
By including the value property on each of these inputs, we can pick up any entities recognized by the recognizer
even if they were specified out of order.
Interruption
Interruptions can be handled at two levels: locally within a dialog, and as a global interruption that re-routes
the conversation. By default, an adaptive dialog does the following for any input:
1. On every user response to an input action's prompt, run the recognizer configured on the parent adaptive
dialog that holds the input action.
2. Evaluate the allowInterruptions expression.
a. If it evaluates to true , evaluate the triggers that are tied to the parent adaptive dialog that holds the
input action. If any triggers match, execute the actions associated with that trigger and then issue a
re-prompt when the input action resumes.
b. If it evaluates to false , evaluate the value property and assign it as a value to the property . If it is
null, run the internal entity recognizer for that input action (for example, the number recognizer for a
number input) to resolve a value for that input action.
The allowInterruptions property is located on the Other tab of the Properties panel of an input action. You can
set the value to true or false .
Handling interruptions locally
With this, you can add contextual responses to inputs via OnIntent triggers within a dialog. Consider this
example:
user: hi
bot: hello, what is your name?
user: why do you need my name?
bot: I need your name to address you correctly.
bot: what is your name?
user: I will not give you my name
bot: Ok. You can say "My name is <your name>" to re-introduce yourself to me.
bot: I have your name as "Human"
bot: what is your age?
You can see the Why , NoValue or Cancel triggers, which are under the userprofile dialog in the ToDoWithLuis
example.
Notice that the bot understood interruption and presented the help response. You can see the UserProfile and
Help dialogs in the ToDoWithLuis example.
Further reading
Entities and their purpose in LUIS
.lu file format
Use OAuth
9/21/2020 • 3 minutes to read
In Bot Framework Composer, you can use the OAuth login action to enable your bot to access external resources
using permissions granted by the end user. This article explains how to use basic OAuth to authenticate your bot
with an external service such as GitHub.
NOTE
It is not necessary to deploy your bot to Azure for the authentication to work.
Prerequisites
Microsoft Azure subscription.
A basic bot built using Composer.
Install ngrok.
A service provider your bot is authenticating with such as GitHub.
Basic knowledge of user authentication within a conversation.
Note the Name of your connection - you will need to enter this value in Composer exactly as it is displayed
in this setting.
2. Enter the values of Client ID , Client Secret , and optionally Scopes depending on the service you are
authenticating with. In this example of GitHub, follow the steps to get these values:
a. Go to GitHub's developer settings webpage and select New OAuth App in the upper-right corner. This
redirects you to the GitHub OAuth App registration page, where you fill in the values as
instructed in the following:
Application name : a name you would like to give to your OAuth application, e.g. Composer
Homepage URL : the full URL to your application homepage, e.g. http://microsoft.com
b. Select Register application . You will then see the Client ID and Client Secret values generated on the
application webpage, as in the following:
c. Copy the Client ID and Client Secret values and paste them to your Azure's Service Provider
Connection Setting. These values configure the connection between your Azure resource and GitHub.
Optionally, enter user, repo, admin in Scopes . This field specifies the permissions you want to grant
to the caller. Save this setting.
Now, with the Name , Client ID , Client Secret , and Scopes of your new OAuth connection setting in
Azure, you are ready to configure your bot.
You'll be asked to log in to whatever external resource you've specified. Once complete, the window will close
automatically, and your bot will continue with the dialog.
The results of the OAuth action will now be stored into the property you specified. To reference the user's OAuth
token, use <scope.name>.token -- so for example, if the OAuth prompt is bound to dialog.oauth , the token will be
dialog.oauth.token .
To use this to access the protected resources, pass the token into any API calls you make with the HTTP Request
action. You can refer to the token value in URL, body or headers of the HTTP request using the normal LG syntax,
for example: ${dialog.oauth.token} .
Next
Learn how to send an HTTP request and use OAuth.
Send an HTTP request and use OAuth
9/21/2020 • 2 minutes to read
This article will teach you how to send an HTTP request using OAuth for authorization. It is not necessary to deploy
your bot to Azure for this to work.
Prerequisites
A basic bot you build using Composer
A target API for your bot to call
Basic knowledge of How to send an HTTP request without OAuth
Basic knowledge of How to use OAuth in Composer
2. Select the Restart Bot button in the Composer toolbar, then Test in Emulator . You should be able to see
the authentication token in the Emulator as shown below:
Now, with the OAuth setup ready and token successfully obtained, you are ready to add the HTTP request in your
bot.
2. In the Properties panel, set the method to GET and set the URL to your target API. For example, a typical
GitHub API URL such as https://api.github.com/users/your-username/orgs .
3. Add headers to include more info in the request. For example, we can add two headers to pass the
authentication values in this request.
a. In the first header line, enter Authorization in the Key field and bearer ${dialog.token.token} in the
Value field. Press Enter.
b. In the second header line, enter User-Agent in the Key field and Vary in the Value field. Press Enter.
4. Finally, set the Result property to dialog.api_response and the Response type to json .
NOTE
The HTTP action sets the following information in the Result property : statusCode, reasonPhrase, content, and headers.
Setting the Result property to dialog.api_response means we can access those values via
dialog.api_response.statusCode , dialog.api_response.reasonPhrase , dialog.api_response.content , and
dialog.api_response.headers . If the response is JSON, it will be a deserialized object available via
dialog.api_response.content .
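Put together, the HTTP action configured above corresponds roughly to this .dialog sketch (field names follow the adaptive dialog schema for Microsoft.HttpRequest; treat it as illustrative, since Composer's exact output may differ by version):

```json
{
  "$kind": "Microsoft.HttpRequest",
  "method": "GET",
  "url": "https://api.github.com/users/your-username/orgs",
  "headers": {
    "Authorization": "bearer ${dialog.token.token}",
    "User-Agent": "Vary"
  },
  "resultProperty": "dialog.api_response",
  "responseType": "json"
}
```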
Test
You can add an If/Else branch to test the response of this HTTP request.
1. Set Condition to dialog.api_response.statusCode == 200 in the Properties panel.
2. Add two Send a response actions to be fired based on the result (true/false) of the condition. This
means if dialog.api_response.statusCode == 200 evaluates to true , send the response
called with success! ${dialog.api_response} ; otherwise, send the response api failed .
3. Restart your bot and test it in the Emulator. After logging in successfully, you should be able to see the response
content of the HTTP request.
Connecting to a skill
9/21/2020 • 7 minutes to read
Since Bot Framework SDK version 4.7, you can extend your bot using another bot called a skill bot. A skill is a bot
that can perform a set of tasks for another bot. A skill consumer is a bot that can invoke one or more skills. In Bot
Framework Composer, you can export a bot built with Composer as a skill; you can also use the Connect to a
skill action to enable a bot to connect to a skill. This article explains how to do both tasks.
IMPORTANT
Connecting to a skill in Composer is a technical process that involves many steps such as setting up Composer and
configuring Azure resources. A high level of technical proficiency will be necessary to execute this process.
Prerequisites
Microsoft Azure subscription.
A basic bot built with Composer.
Install the Bot Framework Emulator version 4.7.0 or later.
A good understanding of skills in the Bot Framework SDK.
4. Copy the value from the Value field of the displayed table. This is the generated password of your Bot
Channels Registration .
For more information about creating a Bot Channels Registration , refer to the Register a bot with Azure Bot
Service article.
5. In this step, you will enter values in the different forms to generate your skill's manifest.
TIP
When you select a trigger to include in the manifest, the editor adds the corresponding activity type that
the trigger handles to the manifest's activities property. Also, if the trigger is an on intent handler, the intent is added
to the intents array in the dispatch models property. When you select a dialog to include, an event activity
gets added to the activities property with the dialog's Dialog Interface.
6. After you select Save , select Restart Bot in the toolbar. You can find the manifest folder in your bot's
project folder, such as C:\Users\UserName\Documents\Composer\SkillBotName\manifests . You can also test whether the
skill manifest works by entering http://localhost:<port>/manifests/<your-skill-manifest-file-name>.json in
your browser. Now you have created a local skill bot! Record your skill manifest URL; you will need it
in the add a connect to a skill action section.
7. Publish skill (optional)
If you want to publish your skill bot, you can follow the instructions in the publish a bot article. An example
remote skill manifest may look like this:
https://SkillBot-dev.scm.azurewebsites.net/manifests/SkillBot-manifest.json .
NOTE
If you publish a local skill to a remote host such as Azure web app, you may need to update the endpointUrl and
msAppId values in your skill manifest to make the skill callable, because endpointUrl should no longer point to
localhost and msAppId should be updated.
IMPORTANT
Note that your skill bot and your consumer bot will have different port numbers. Use the correct port numbers in the settings to avoid errors.
NOTE
If your skill is remote, you need to follow the next steps to install and run ngrok . If your skill is local, you can skip to
the configure settings in Composer section directly.
3. Open a terminal and run ngrok with the following command to create a new tunnel (you may need to navigate to where the ngrok executable is in your filesystem). The port specified must be the same port your consumer bot is running on:
OSX
Windows
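A typical invocation resembles the following sketch; the exact flags can vary by ngrok version, and the port is a placeholder for your consumer bot's port:

```shell
# macOS / Linux (run from the folder containing the ngrok executable)
./ngrok http <port-of-consumer-bot> -host-header=localhost:<port-of-consumer-bot>

# Windows
ngrok.exe http <port-of-consumer-bot> -host-header=localhost:<port-of-consumer-bot>
```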
2. In the Connect to a skill properties panel, select Add a new Skill Dialog from the Skill Dialog Name
field.
3. Enter the skill manifest URL in the Manifest url field. If your skill is local, the URL will be like this:
http://localhost:<port>/manifests/<your-skill-manifest-file-name>.json , where port is the port number
your skill bot is running on, and your-skill-manifest-file-name is the name of your skill bot's manifest file.
4. Select Default from the Skill Endpoint drop down list.
5. In the Activity field, configure the activity you want to send to the skill. Depending on the skill manifest
definition, it can be a message, an event or invoke type.
6. (Optional) Enter dialog.result in the Property field at the bottom. When the skill dialog ends, its return value (if any) is stored in this property.
Your consumer bot is now connected to a skill!
Additional information
Call a sample skill bot from Composer
If you use the sample skill bot in the Bot Framework Samples repository, you should consider the following:
You should update the MicrosoftAppId and MicrosoftAppPassword in your bot's appsettings.json file (
80.skills-simple-bot-to-bot\EchoSkillBot ) with the values you created for the skill bot in the create Azure
Registration resources section.
You should update the manifest file that lives in this directory:
80.skills-simple-bot-to-bot\EchoSkillBot\wwwroot\manifest .
The endpointUrl can be http://localhost:<port-of-skill-bot-running>/api/messages .
msAppId can be the Microsoft App ID you created for the skill bot in the create Azure Registration
resources section.
The manifest URL of your sample skill bot can be:
http://localhost:<port>/manifest/echoskillbot-manifest-1.0.json .
Make sure your sample skill bot is running when you test in the Emulator.
Further reading
About skills
A skills-simple-bot-to-bot sample.
Adding custom actions
9/21/2020 • 7 minutes to read
In Bot Framework Composer, actions are the main contents of a trigger. Actions help maintain the conversation flow and instruct bots to fulfill users' requests. Composer provides different types of actions, such as Send a response , Ask a question , and Create a condition . Besides these built-in actions, you can create and customize your own actions in Composer.
This article will walk you through how to include a sample custom action named MultiplyDialog that multiplies
two numbers passed as inputs. The sample custom action lives inside the runtime/customaction subfolder of the
bot, and can be viewed here on GitHub.
Prerequisites
A basic understanding of actions in Composer.
A basic bot built using Composer.
A sample custom action called MultiplyDialog in the customaction folder.
Bot Framework CLI 4.10 or later.
TIP
For more information about Bot Framework SDK schemas, read here. For more information about how to create schema
files, read here.
Open a command line and follow these steps to set up the bf-dialog tool:
To point npm to nightly builds
npm i -g @microsoft/botframework-cli
bf plugins:install @microsoft/bf-dialog
{
"$schema": "https://raw.githubusercontent.com/microsoft/botframework-
sdk/master/schemas/component/component.schema",
"$role": "implements(Microsoft.IDialog)",
"title": "Multiply",
"description": "This will return the result of arg1*arg2",
"type": "object",
"additionalProperties": false,
"properties": {
"arg1": {
"$ref": "schema:#/definitions/integerExpression",
"title": "Arg1",
"description": "Value from callers memory to use as arg 1"
},
"arg2": {
"$ref": "schema:#/definitions/integerExpression",
"title": "Arg2",
"description": "Value from callers memory to use as arg 2"
},
"resultProperty": {
"$ref": "schema:#/definitions/stringExpression",
"title": "Result",
"description": "Value from callers memory to store the result"
}
}
}
Bot Framework Schemas are specifications for JSON data. They define the shape of the data and can be
used to validate JSON. All of Bot Framework's Adaptive Dialogs are defined using this JSON schema. The
schema files tell Composer what capabilities the bot runtime supports. Composer uses the schema to help
it render the user interface when using the action in a dialog.
IMPORTANT
You can follow instructions here to create schema files.
An Action folder that contains the MultiplyDialog.cs class, which defines the business logic of the custom action; in this example, multiplying two numbers passed as inputs and outputting the result.
using System;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
using AdaptiveExpressions.Properties;
using Microsoft.Bot.Builder.Dialogs;
using Newtonsoft.Json;
namespace Microsoft.BotFramework.Composer.CustomAction
{
/// <summary>
/// Custom command which takes 2 data bound arguments (arg1 and arg2) and multiplies them,
/// returning that as a data bound result.
/// </summary>
public class MultiplyDialog : Dialog
{
[JsonConstructor]
public MultiplyDialog([CallerFilePath] string sourceFilePath = "", [CallerLineNumber] int sourceLineNumber = 0)
: base()
{
// enable instances of this command as debug break point
this.RegisterSourceLocation(sourceFilePath, sourceLineNumber);
}
[JsonProperty("$kind")]
public const string Kind = "MultiplyDialog";
/// <summary>
/// Gets or sets memory path to bind to arg1 (ex: conversation.width).
/// </summary>
/// <value>
/// Memory path to bind to arg1 (ex: conversation.width).
/// </value>
[JsonProperty("arg1")]
public NumberExpression Arg1 { get; set; }
/// <summary>
/// Gets or sets memory path to bind to arg2 (ex: conversation.height).
/// </summary>
/// <value>
/// Memory path to bind to arg2 (ex: conversation.height).
/// </value>
[JsonProperty("arg2")]
public NumberExpression Arg2 { get; set; }
/// <summary>
/// Gets or sets caller's memory path to store the result of this step in (ex: conversation.area).
/// </summary>
/// <value>
/// Caller's memory path to store the result of this step in (ex: conversation.area).
/// </value>
[JsonProperty("resultProperty")]
public StringExpression ResultProperty { get; set; }
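The listing above ends before the dialog's run logic. A minimal sketch of the remaining BeginDialogAsync override (plus the closing braces for the class and namespace), under the assumption that the sample multiplies the two bound values and ends the dialog with the result:

```csharp
        public override Task<DialogTurnResult> BeginDialogAsync(DialogContext dc, object options = null, CancellationToken cancellationToken = default)
        {
            // Resolve the two bound arguments from the caller's memory.
            var arg1 = Arg1.GetValue(dc.State);
            var arg2 = Arg2.GetValue(dc.State);

            var result = Convert.ToInt32(arg1) * Convert.ToInt32(arg2);

            // Optionally store the result back into the caller's memory.
            if (this.ResultProperty != null)
            {
                dc.State.SetValue(this.ResultProperty.GetValue(dc.State), result);
            }

            // End the dialog, returning the result to the caller.
            return dc.EndDialogAsync(result: result, cancellationToken: cancellationToken);
        }
    }
}
```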
Export runtime
The first step to add a custom action is to export the bot runtime through the Runtime Config in Composer. This
process will generate a copy of your bot's runtime so that you can modify the code and add your custom action.
NOTE
Currently Composer supports the C# runtime and JavaScript (preview) runtime.
Once you have the exported bot runtime, you can make changes to the schema. The exported runtime folder will
broadly have the following structure.
bot
/bot.dialog
/language-generation
/language-understanding
/dialogs
/runtime
/azurewebapp
/azurefunctions
/schemas
sdk.schema
NOTE
The following steps assume you are using azurewebapp as your deployment solution. If you use azurefunctions, the steps are similar.
3. Then still in the azurewebapp folder, open the Startup.cs file. Uncomment the following two lines to register
this action.
using Microsoft.BotFramework.Composer.CustomAction;
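Only one of the two lines is shown above. The second is the registration call inside ConfigureServices; as an assumption based on typical Composer custom-action samples, it may look like the following (the exact call depends on your runtime version):

```csharp
// Registers the custom action component with the adaptive runtime so that
// declarative .dialog files can resolve the "MultiplyDialog" $kind.
// (Sketch; check the commented-out lines in your own Startup.cs.)
ComponentRegistration.Add(new CustomActionComponentRegistration());
```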
4. Run the command dotnet build on the azurewebapp project to verify if it passes build after adding custom
actions to it. You should be able to see the "Build succeeded" message after this command.
Navigate to the C:\Users\UserName\Composer\Bot\schemas folder. This folder contains a PowerShell script and a bash
script. Run either one of the following commands:
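Assuming the scripts in the schemas folder are named update-schema (check the actual file names in your project), the commands resemble:

```shell
# PowerShell
.\update-schema.ps1 -runtime azurewebapp

# bash
sh ./update-schema.sh -runtime azurewebapp
```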
NOTE
Please note that the runtime azurewebapp is chosen by default if no argument is passed.
You can validate that the partial schema ( MultiplyDialog.schema inside the customaction/Schema folder) has been
appended to the default sdk.schema file to generate one single consolidated sdk.schema file.
The above steps should have generated a new sdk.schema file inside the schemas folder for Composer to use.
Reload the bot and you should be able to include your custom action!
Test
Reopen the bot project in Composer and you should be able to test your added custom action!
1. Open your bot in Composer. Select a trigger you want to associate this custom action with.
2. Select + under the trigger node to see the actions menu. You will see Custom Actions added to the menu.
Select Multiply from the menu.
3. On the Properties panel on the right side, enter two numbers in the argument fields: Arg1 and Arg2 .
Enter dialog.result in the Result property field. For example, you can enter the following:
4. Add a Send a response action. Enter 99*99=${dialog.result} in the Language Generation editor.
5. Select Restart Bot and you can see the testing result in the Emulator.
Additional information
Bot Framework SDK Schemas
Create schema files
Extending Composer with plugins
9/21/2020 • 4 minutes to read
Composer plugins are JavaScript modules. When loaded into Composer, the module is given access to a set of
Composer APIs which can then be used by the plugin to provide new functionality to the application. You can
extend and customize the behavior of Composer by installing plugins which can hook into the internal
mechanisms of Composer and change the way they operate. Plugins can also "listen to" the activity inside
Composer and react to it. In this article you will learn the following:
Set up multi-user authentication via plugins.
Set up customized storage and make storage user-aware.
Change the samples and templates in Composer's “new bot” flow.
Provide an alternate version of the runtime template.
Change the boilerplate content added to each project.
Prerequisites
A basic understanding of Composer plugins.
A fork of the Bot Framework Composer GitHub repository.
To add new samples or templates, add a new plugin in the Composer/plugins folder that calls composer.addBotTemplate() .
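Since Composer plugins are plain JavaScript modules, a template-registering plugin can be sketched as below. The initialize entry point and the addBotTemplate payload fields shown here are assumptions; check the Composer plugin documentation for the current API surface:

```javascript
// A minimal sketch of a Composer plugin that registers a bot template.
const plugin = {
  initialize: (composer) => {
    composer.addBotTemplate({
      id: 'EchoSample',                // hypothetical template id
      name: 'Echo Sample',
      description: 'A bot that echoes user input back.',
      path: './assets/EchoSample',     // hypothetical path to template assets
    });
  },
};

// Exercise the plugin against a mock of the Composer API:
const registered = [];
plugin.initialize({ addBotTemplate: (t) => registered.push(t) });
console.log(registered.map((t) => t.id)); // logs the registered template ids

module.exports = plugin;
```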
To remove or modify the templates that ship with Composer, modify or remove the code inside the
Composer/plugins/samples/ folder.
Bot Framework Composer is a visual authoring tool for building conversational AI software. Composer is available
as an open-source project. While the primary way it is distributed is as a bundled desktop application, it is possible
to use Composer in a variety of ways including as a shared, hosted service.
This article covers an approach to hosting Composer in the cloud as a service. It also covers topics related to
customizing and extending the behaviors of Composer in this environment.
Prerequisites
A subscription to Microsoft Azure.
Knowledge of Linux and familiarity with package management.
Familiarity with nginx and configuring an nginx web server and operating in a command line environment.
NOTE
You can choose the type of VM to host Composer, but this article is specific to hosting Composer in an Ubuntu VM.
TIP
Node.js (12.18.3 or later)
NVM (v0.35.1)
npm (6.14.6 or later)
Yarn (1.22.4 or later)
.NET Core (3.1 or later)
Install node: use nvm to install the long term support version of node (currently 12.18.x).
sudo apt-get update; \
sudo apt-get install -y apt-transport-https && \
sudo apt-get update && \
sudo apt-get install -y dotnet-sdk-3.1
2. Create a fork of the Composer repo. With a fork of Composer, you will be able to make small modifications to the codebase and still pull in upstream changes.
3. Follow the instructions below to build Composer and run it.
cd Composer
yarn
yarn build
yarn start
4. In your VM instance, load Composer in a browser at localhost:3000 and verify that you can use Composer
in it. Outside your VM, load Composer in a browser at http://<IP ADDRESS OF VM>:3000 and verify that you
can use Composer at this URL.
Set up nginx
Now you have deployed Composer into your VM and it runs at this URL: http://<IP ADDRESS OF VM>:3000 . Let's
make Composer run on port 80 instead of :3000 (difference between mycomposer.com:3000 and mycomposer.com )
using nginx. Nginx is a web server and proxy service. It can sit in front of the Composer service and pass requests
into Composer. It can also be used to enable SSL on the domain without binding with Composer, and to proxy the
individual bot processes instead of exposing their ports to the Internet.
TIP
HAProxy is also an option you may consider, but this documentation is specific to nginx.
1. Install nginx.
2. Edit the default nginx config to proxy all requests to the Composer app running at :3000 .
a. Find the section that says:
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
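Replace the try_files line so that requests are proxied to Composer instead. A minimal sketch of the proxied location block; the upgrade headers are typical additions for WebSocket support, not taken from the original document:

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```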
3. Now you can load http://<ip address of VM>/ and you should see Composer. No port number is required.
You should be able to create and edit bots in Composer. You should also be able to start the bot – but the
URL for the bot will be "localhost" (mouse over Test in Emulator ). In the next step we will show you how to
fix this by patching the code of Composer in two small places.
5. If you want to allow bots to run and be connected on this instance of Composer, you should open network ports. In your Azure portal, go to the VM's networking tab and add an inbound security rule.
Set up Composer to run after you log out
You can set up Composer to run even after you log out. Follow these steps:
1. Install pm2 process manager.
2. Start Composer using pm2. This will allow the app to continue running even once you log out.
pm2 list
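The pm2 setup above might look like the following sketch; the process name composer is arbitrary, and pm2 is assumed to be installed globally via npm:

```shell
npm install -g pm2

# From the Composer source directory, run "yarn start" under pm2
cd Composer
pm2 start yarn --name composer -- start

# Verify the process is running
pm2 list
```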
IMPORTANT
Without additional steps, anyone can access this instance of Composer. Before you leave it running, take measures to secure access, either by installing an auth plugin as covered in this article or by turning on service-level access controls via the Azure portal.
Next steps
Extend Composer with plugins.
Multilingual support
9/21/2020 • 3 minutes to read
Bot Framework Composer provides multilingual support for bot development in different languages, with English
as the default. With simple changes to the settings in Composer, you can author .lg and .lu files in your preferred
language, and give your bot the ability to talk to users in different languages.
This article shows how to build a basic bot in English ( en-us ) and walks through the process of authoring the bot in Chinese ( zh-cn ).
NOTE
If your bot has LUIS or QnA integrations, you'll also need to consider additional constraints of LUIS supported languages
and QnA supported languages.
Prerequisites
Install Composer
/coolbot
coolbot.dialog
/language-generation
/en-us
common.en-us.lg
coolbot.en-us.lg
/language-understanding
/en-us
coolbot.en-us.lu
When adding languages, Composer creates copies of the language files. For example, if you add Chinese ( zh-cn ), your bot's file structure will look like the following:
/coolbot
coolbot.dialog
/language-generation
/en-us
common.en-us.lg
coolbot.en-us.lg
/zh-cn
common.zh-cn.lg
coolbot.zh-cn.lg
/language-understanding
/en-us
coolbot.en-us.lu
/zh-cn
coolbot.zh-cn.lu
NOTE
Both en-us and zh-cn are locales . A locale is a set of parameters that defines the user's language, region and any
special variant preferences that the user wants to see in their user interface.
After adding the languages, you can add manual translations with your source language files as reference. When you are done with the translation process, you must set the locale in the Default language field. This tells your bot which language to use when talking to users. However, this locale setting will be overridden by the client's (for example, the Bot Framework Emulator's) locale setting.
In the next sections, we will use a basic bot in English and walk through the steps to author bots in multiple
languages.
When you test the bot in the Emulator, you get the following responses:
Update language settings
The first step to author bots in other languages is to add languages. You can add as many languages as you need in
the Settings page.
1. In the Settings page, select Edit on the top toolbar. Then select Add language from the drop-down menu.
2. In the pop-up window, there are three settings that need to be updated:
a. The first setting is the language to copy resources from. You can leave this as English .
b. The second setting is the preferred bot authoring languages. You can select multiple languages. Let's select Chinese (Simplified, China) . Hover your mouse over the selection and you'll see the locale.
c. The final setting is a check box. When checked, your selected language will be the active authoring language. If you selected multiple languages in the previous setting, the first selected language becomes the active authoring language. Let's check the box and select Done .
You'll see the language being added to the following language drop-down lists:
Bot language is the language you choose to author your bot.
Default language is the locale you set as your bot's runtime language. This language setting will be
overwritten by the client's locale setting.
You'll also see the locale changed from en-us to zh-cn in the Composer title bar.
2. Go to the User Input page. Select the dialog whose language you want to edit and toggle Edit mode to
add manual translations for user inputs in your selected authoring language.
NOTE
Make sure you select all the dialogs and add manual translations for all user input.
Test
After you finish translation, you need to go back to the Settings page and select your preferred language as your
bot's runtime language.
NOTE
Make sure your Emulator locale setting is consistent with your Default language setting in Composer. Alternatively, you can leave your Emulator locale setting empty.
Testing in Emulator:
Capture your bot's telemetry
9/21/2020 • 3 minutes to read
Bot Framework Composer enables your bot applications to send event data to a telemetry service such as Application Insights. Telemetry offers insights into your bot by showing which features are used the most, detecting unwanted behavior, and offering visibility into availability, performance, and usage. In this article you will learn how to implement telemetry in your bot using Application Insights.
Prerequisites
A subscription to Microsoft Azure.
A basic bot built using Composer.
Basic knowledge of Kusto queries.
How to use Log Analytics in the Azure portal to write Azure Monitor log queries.
The basic concepts of Log queries in Azure Monitor.
TIP
You can learn more about how to create an Application Insights resource and get the instrumentation key by reading this article.
By default you can track a number of events, including bot messages sent or received, LUIS results, dialog events (started / completed / cancelled), and QnA Maker events. Specifically for QnA Maker, you can filter down to events named QnAMakerRecognizerResult, which include the original query, the top answers from the QnA Maker knowledge base, the score, and so on.
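For example, such filtering can be expressed as a Kusto query over the customEvents table. The field names under customDimensions below are assumptions and may differ in your Application Insights instance:

```kusto
customEvents
| where name == "QnAMakerRecognizerResult"
| extend question = tostring(customDimensions.question),
         answer = tostring(customDimensions.answer),
         score = todouble(customDimensions.score)
| project timestamp, question, answer, score
| order by timestamp desc
```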
Once you are gathering telemetry from your bot, you can also try using the Power BI template, which contains some QnA tabs, to view your data. The template was built for use with the Virtual Assistant template, and you can find details of it here.
Additional information
In Composer, there are two additional settings in the app settings that you need to be aware of: logActivities and logPersonalInformation. logActivities, which is set to true by default, determines whether your incoming and outgoing activities are logged. logPersonalInformation, which is set to false by default, determines whether more sensitive information is logged. You may see some fields blank if you do not enable it.
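In your bot's settings, these two flags might appear as follows. This is a sketch; the exact nesting in your app settings may differ:

```json
{
  "telemetry": {
    "logActivities": true,
    "logPersonalInformation": false
  }
}
```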
Since the Composer 1.1.1 release, Composer features a new action for sending additional events to Application Insights, alongside those that are automatically captured as described above. Wherever you want to track a custom event, you can add the Emit a telemetry track event action, which can be found under the Debugging Options menu. Once added to your authoring canvas, you specify a custom name for the event, which is the name that will appear in the customEvents table referenced above, along with optionally specifying one or more additional properties to attach to the event.
Further reading
Analyze your bot's telemetry data.
Validation
9/21/2020 • 4 minutes to read
This article introduces the validation functionality provided in Bot Framework Composer. The validation functionality helps you identify syntax errors and provides suggested fixes when you author .lg templates, .lu templates, and expressions while developing a bot using Composer. With the help of the validation functionality, your bot-authoring experience will be improved and you can more easily build a functional bot that can "run".
NOTE
This article only covers the validation functionality implemented in Composer so far. More user scenarios will be added with
the progress of the project.
Prerequisites
Install Bot Framework Composer using Yarn.
A basic understanding of the Language Generation concepts and how to define LG templates.
A basic understanding of Language Understanding concepts.
A basic understanding of Adaptive expressions.
Error notifications
In Composer, there are a couple of error indicators when your bot has errors. Usually when you run a bot in Composer, you should be able to select the Start Bot button (when starting for the first time) or the Restart Bot button on the upper right corner of the toolbar. However, sometimes you will see the Start Bot (or Restart Bot ) button grayed out and not clickable. This indicates the bot application has errors that must be fixed before the bot can run.
The number with an error icon on the left side of the Start Bot (or Restart Bot ) button indicates the number of errors. Selecting the error icon will navigate you to the Notifications page, which lists all the errors and warnings the bot application has.
NOTE
You can also access the Notifications page by selecting Notifications on the Composer menu.
Errors in .lg and .lu templates will show in both the Language Generation and Language Understanding inline editors, as well as in Bot Responses and User Input .
.lg files
When you author an .lg template that has syntax errors, a red squiggly line will show under the error in the Language Generation inline editor.
In the example .lg template above, abc is invalid. There are two things you can do to diagnose and fix the error:
1. Read the error message beneath the editor and click here to refer to the syntax documentation.
2. Hover your mouse over the erroneous part and read the detailed error message with suggested fixes.
NOTE
If you find the error message not helpful, you should read the .lg file format and use the correct syntax to compose
the language generation template.
Select Bot Responses on the Composer menu on the left side and toggle Edit Mode ; you will find the error is also saved and updated in Bot Responses .
The tiny red rectangle on the right end of the editor helps you to identify where the error is. This is especially
helpful when you have a long list of templates.
The error message at the bottom of the editor indicates the line numbers of the error. In this example, line3:0 - line3:3 means the error is located on the third line of the editor, from the first character (indexed 0 ) to the fourth character (indexed 3 ).
Hover your mouse over the erroneous part and you will see the detailed error message with suggested fixes.
In this example, the error message indicates a - is missing in the template. After you add the - sign in the .lg template, the error message disappears.
If you go back to the Language Generation inline editor, you will see the change is updated and the error disappears as well.
.lu files
When you create an Intent recognized trigger and your .lu file has syntax errors, a red squiggly line will show under the error in the Language Understanding inline editor.
Similar to the Language Generation editor, there are two things you can do to diagnose and fix the error:
1. Expand the error message at the bottom of the Language Understanding inline editor to read more about
the error including the line of errors and possible fixes. You can select here in the error message and refer to
the .lu file format syntax documentation.
2. Hover your mouse over the erroneous part and read the detailed error message with suggested fixes.
NOTE
If you find the error message not helpful, you should read the .lu file format and use the correct syntax to compose
the language understanding template.
Expressions
When you fill in property fields with invalid expressions, the form in the Properties panel will be outlined in red, with error messages under it.
Selecting the double arrow icon in the upper right corner of the message will expand the error message.
To diagnose and fix the error, read the error message and select here to refer to the syntax documentation. In this
example, the error message indicates that there is a mismatch of the operator = . The correct operator should be !=
if it indicates not equal and == if it indicates equal. Read more about the Adaptive expressions syntax here.
After you fix the error, the form in the Properties panel turns from red to blue. This indicates that the expression entered in this field is syntactically correct.
Publish a bot
9/21/2020 • 5 minutes to read
In this article we will show you how to publish your bot to Azure Web App and Azure Functions. Bot Framework Composer includes instructions and scripts to help make this process easier. Follow the steps in this article to complete the publish process, or refer to the README file in your bot's project folder, for example under C:\Users\UserName\Documents\Composer\BotName . Note that the processes to publish your bot to Azure Web App and to Azure Functions differ slightly.
Prerequisites
A subscription to Microsoft Azure.
Node.js. Use version 12.13.0 or later.
A basic bot built using Composer.
Azure CLI.
cd C:\Users\UserName\Documents\Composer\BotName\scripts
npm install
If you publish your bot to Azure Functions, run the following command:
PROPERTY | VALUE
Name of your resource group | The name you give to the resource group you are creating.
App password | At least 16 characters with at least one number, one letter, and one special character.
Name for environment | The name you give to the publish environment.
Note that if you see an "InsufficientQuota" error message, you need to add the parameter --createLuisAuthoringResource false and run the script again. For example:
For Azure Web App:
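Assuming the provisioning script in your bot's scripts folder is named provisionComposer.js (check your project's README for the exact name), the command may look like:

```shell
node provisionComposer.js --subscriptionId=<your-subscription-id> --name=<resource-group-name> --appPassword=<app-password> --environment=<environment-name> --createLuisAuthoringResource false
```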
NOTE
If you use --createLuisAuthoringResource false in this step, you will need to manually add the LUIS authoring
key to the publish configuration in the deploy to new Azure resources section, otherwise, the bot will not work. The
default region is westus . If you want to provision to other regions, you can add --location region .
After running the last command, you will see the following. The process will take a few minutes.
5. After the previous step is completed, you will see a generated JSON file in the command line.
{
"accessToken": "<SOME VALUE>",
"name": "<NAME OF YOUR RESOURCE GROUP>",
"environment": "<ENVIRONMENT>",
"hostname": "<NAME OF THE HOST>",
"luisResource": "<NAME OF YOUR LUIS RESOURCE>"
"settings": {
"applicationInsights": {
"InstrumentationKey": "<SOME VALUE>"
},
"cosmosDb": {
"cosmosDBEndpoint": "<SOME VALUE>",
"authKey": "<SOME VALUE>",
"databaseId": "botstate-db",
"collectionId": "botstate-collection",
"containerId": "botstate-container"
},
"blobStorage": {
"connectionString": "<SOME VALUE>",
"container": "transcripts"
},
"luis": {
"endpointKey": "<SOME VALUE>",
"authoringKey": "<SOME VALUE>",
"region": "westus"
},
"qna": {
"endpoint": "<SOME VALUE>",
"subscriptionKey": "<SOME VALUE>"
},
"MicrosoftAppId": "<SOME VALUE>",
"MicrosoftAppPassword": "<SOME VALUE>"
}
}
NOTE
If you use --createLuisAuthoringResource false in the 4th step of the create Azure resources section, you will need to
manually add the LUIS authoring key to the publish configuration, otherwise, the bot will not work. Also, the default region is
westus . If you want to provision to other regions, you can add --location region .
luisResource - this is the hostname of your LUIS endpoint resource, if not in the form <name>-<env>-<luis> .
You can exclude cosmosDb, applicationInsights, blobStorage, or luis if you don't want those features enabled or used. This is the primary way you can opt in to or out of those features in the runtime.
Examples :
Deploy your bot without configuring any other services:
{
"accessToken": "<your access token>",
"hostname": "<your web app name>",
"settings": {
"MicrosoftAppId": "<the appid of your bot channel registration>",
"MicrosoftAppPassword": "<the app password of your bot channel registration>"
}
}
If you have LUIS configured in your Composer bot, you should use this:
{
"accessToken": "<your access token>",
"hostname": "<your web app name>",
"luisResource": "<your luis service name>",
"settings": {
"luis": {
"endpointKey": "<your luis endpointKey>",
"authoringKey": "<your luis authoringKey>",
"region": "<your luis region, for example westus>"
},
"MicrosoftAppId": "<the appid of your bot channel registration>",
"MicrosoftAppPassword": "<the app password of your bot channel registration>"
}
}
NOTE
You should author and publish in the same region. Read more in the Authoring and publishing regions and
associated keys article.
2. Select Publish to selected profile from the Composer toolbar. In the pop-up window select Okay .
Additional information
When publishing, if you encounter an error about your access token being expired, you can follow these steps to get a new token:
Open a terminal window.
Run az account get-access-token .
This will result in a JSON object printed to the console, containing a new accessToken field.
Copy the value of accessToken from the terminal into the accessToken field of the publish profile in Composer.
A glossary of concepts and terms used in Composer
9/21/2020 • 8 minutes to read
A|B|C|D|E|F|G|H|I|J|K|L|M|
N|O|P|Q|R|S|T|U|V|W|X|Y|Z
A
Action
Actions are the main component of a trigger; they are what enable your bot to take action, whether in response to user input or any other event that may occur. Actions are very powerful: with them you can formulate and send a response, create properties and assign them values, manipulate the conversational flow and dialog management, and perform many other activities.
Additional Information:
See action in the dialog concept article.
Adaptive dialogs
Adaptive Dialogs are a new way to model conversations that take the best of waterfall dialogs and prompts in the dialogs library. Adaptive Dialogs are event-based. Using adaptive dialogs simplifies sophisticated conversation modelling primitives, like building a dialog dispatcher, and provides the ability to handle interruptions elegantly. Adaptive dialogs derive from dialogs and interact with the rest of the Bot Framework SDK dialog system.
Additional Information:
See adaptive dialogs.
Adaptive expressions
Adaptive expressions are a new expressions language used with the Bot Framework SDK and other conversational
AI components, like Bot Framework Composer, Language Generation, Adaptive dialogs, and Adaptive Cards.
Additional Information:
See adaptive expressions
Authoring canvas
A section of the Design page where users design and author their bot.
B
Bot Responses
An option in the Composer Menu. It navigates users to the Bot Responses page, where the Language Generation
(LG) editor is located. From there users can view and edit all the LG templates.
C
Child dialog
Every dialog that you create in Composer will be a child dialog, with the main dialog being the root of all dialogs
in Composer. Dialogs can be nested multiple levels deep. A parent dialog can have zero or more child dialogs, but
each child dialog must have exactly one parent dialog.
Additional Information:
The dialog concept article.
Learn to create a child dialog in the Tutorial: Adding dialogs to your bot
D
Design
An option in the Composer Menu. It navigates users to the Design page where users design and develop their bots.
Dialog
Dialogs are the basic building blocks in Composer. Each dialog represents a portion of the bot's functionality and
contains instructions for what the bot will do and how it will react to user input. Dialogs are composed of
recognizers, which help understand and extract meaningful pieces of information from the user's input; a language
generator, which helps generate responses to the user; triggers, which enable your bot to catch and respond to
events; and actions, which help you put together the flow of conversation that occurs when a specific event is
captured via a trigger. There are two types of dialogs in Composer: main dialogs and child dialogs.
Additional Information:
The dialog concept article.
E
Emulator
The Bot Framework Emulator is a desktop application that allows bot developers to test and debug their bots, either
locally or remotely. Using the Emulator, you can chat with your bot and inspect the messages it sends and receives.
The Emulator displays messages as they would appear in a web chat UI and logs JSON requests and responses as
you exchange messages with your bot. Before deploying your bot, you can run it locally and test it using the
Emulator.
Additional Information:
The latest release of the Bot Framework Emulator
Entity
An entity contains the important details of the user's intent. It can be anything: a location, date, time, cuisine type,
and so on. An intent may have no entities, or it may have multiple entities, each providing additional details that
help the bot understand the needs of the user.
Additional Information:
See entities in the Language Understanding concepts article.
Examples
A section in the Composer Home page listing all the example bots.
Additional Information:
Read more about how to use samples.
Export
Exporting generates a copy of your bot's runtime so that it can be used for other purposes, such as
adding custom actions or debugging in Visual Studio.
F
G
H
Home
An option in Menu and the start page of Composer.
I
Intent
An intent is the task that the user wants to accomplish or the problem they want to solve. Intent recognition is
Composer's ability to determine what the user is requesting; this is accomplished by a recognizer using either
Regular Expressions or LUIS. When an intent is detected in the user's input, an event is emitted that can be
handled using the Intent recognized trigger. If no recognizer detects an intent, a different event is
emitted that can be handled using the Unknown intent trigger.
Additional Information:
See intents in the Language Understanding concepts article.
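To make the idea concrete, here is a minimal Python sketch of regex-based intent recognition with an unknown-intent fallback. The intent names and patterns are illustrative examples; this is not Composer's Regular Expression recognizer schema or implementation.

```python
import re

# Illustrative intent patterns; the names and expressions are made up,
# in the spirit of Composer's Regular Expression recognizer.
INTENT_PATTERNS = {
    "AddToDo": re.compile(r"\badd\b.*\btodo\b", re.IGNORECASE),
    "Help": re.compile(r"\bhelp\b", re.IGNORECASE),
}

def recognize(utterance: str) -> str:
    """Return the first matching intent, or the unknown-intent fallback."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "UnknownIntent"  # in Composer, handled by the Unknown intent trigger

print(recognize("please add a todo"))   # AddToDo
print(recognize("what's the weather"))  # UnknownIntent
```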
J
K
L
Language Generation
Language Generation (LG) is the process of producing meaningful phrases and sentences in the form of natural
language. Language generation enables your bot to respond to a user with human-readable language.
Additional Information:
The Language Generation concept article.
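To illustrate the idea of templated responses with variations, here is a small Python sketch. It is not LG template syntax or the LG runtime; real LG templates live in .lg files, and the template name and text here are made up.

```python
import random

# Illustrative only: a response template with variations, echoing the idea
# behind LG templates; real templates use the .lg file format.
templates = {
    "GreetUser": [
        "Hi ${name}!",
        "Hello ${name}, nice to see you.",
    ],
}

def generate(template_name: str, **values: str) -> str:
    """Pick one variation at random and substitute property values."""
    variation = random.choice(templates[template_name])
    for key, value in values.items():
        variation = variation.replace("${" + key + "}", value)
    return variation

print(generate("GreetUser", name="Ada"))
```

Picking among variations at random is one simple way to keep a bot's replies from sounding repetitive, which is part of what LG's variation lists provide.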
LG editor
A section of the Bot Responses page. It is the language generation editor, where users can view and edit all the
Language Generation templates.
Language Understanding
Language Understanding (LU) deals with how the bot handles user input and converts it into something it
can understand and respond to intelligently. It involves the use of either a LUIS or Regular Expression recognizer
along with utterances, intents, and entities.
Additional Information:
The Language Understanding concept article.
LU editor
A section of the User Input page. It is the language understanding editor, where users can view and edit all the
Language Understanding templates.
LUIS
A recognizer type in Composer that enables you to extract intents and entities based on the LUIS service.
Additional Information:
See how to use LUIS for language understanding in Composer.
M
Main dialog
The main dialog is the foundation of every bot created in Composer. There is only one main dialog and all other
dialogs are children of it. It gets initialized every time your bot runs and is the entry point into the bot.
Memory
A bot uses memory to store property values, in the same way that programming and scripting languages such as
C# and JavaScript do. A bot's memory is organized into the following scopes: user, conversation,
dialog, and turn.
Additional Information:
See the conversation flow and memory concept article.
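A minimal sketch of these scopes follows. It is illustrative only: real scope lifetimes (per user, per conversation, per dialog, per turn) are managed by the Bot Framework SDK, not by a plain dictionary, and the property names are examples.

```python
# Illustrative sketch of a bot's memory scopes; the SDK manages the real
# lifetimes, this dictionary only shows the scope.name addressing idea.
memory = {
    "user": {},          # persists across conversations with a user
    "conversation": {},  # persists for the life of one conversation
    "dialog": {},        # visible while the current dialog is active
    "turn": {},          # cleared at the end of each turn
}

def set_property(path: str, value) -> None:
    """Store a value at a scope.name address, e.g. 'user.name'."""
    scope, name = path.split(".", 1)
    memory[scope][name] = value

def get_property(path: str):
    scope, name = path.split(".", 1)
    return memory[scope].get(name)

set_property("user.name", "Ada")
print(get_property("user.name"))  # Ada
```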
Menu
A list of options provided on the left side of the Composer screen from which a user can choose.
N
Navigation pane
A section of the Composer screen. It enables users to navigate to different parts of Composer.
Notifications
An option in the Composer Menu. It navigates users to the Notifications page, which lists all the errors and
warnings of the current bot application.
Additional Information:
See the validation article.
O
P
Parent dialog
A parent dialog is any dialog that has one or more child dialogs, and any dialog can have zero or more child dialogs
associated with it. A parent dialog can also be a child of another dialog.
Prompt
Prompts refer to a bot asking the user questions to collect information of a variety of data types (for example, text or numbers).
Additional information:
Read more about prompts.
Property
A property is a distinct value identified by a specific address. An address is made up of two parts, the scope and
the name: scope.name. Some examples of typical properties in Composer include user.name, turn.activity,
dialog.index, and user.profile.age.
Additional information:
Read more about property in the memory concept article.
Properties pane
A section of the Design page where users can edit properties.
Q
QnA Maker
A cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over
your data.
Additional information:
See the What is the QnA Maker service article.
R
Recognizer
A recognizer enables your bot to understand and extract meaningful pieces of information from the user's input.
There are currently two types of recognizers in Composer: LUIS and Regular Expression. Both emit events that are
handled by [triggers](#trigger).
Regular Expression
A Regular Expression (regex) is a sequence of characters that define a search pattern. Regex provides a powerful,
flexible, and efficient method for processing text. The extensive pattern-matching notation of regex enables your
bot to quickly parse large amounts of text to find specific character patterns that can be used to determine user
intents, validate text to ensure that it matches a predefined pattern (such as an email address or a ZIP code), or
extract entities from utterances.
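The validation and extraction uses above can be sketched in Python's `re` module. The patterns here are deliberately simplified illustrations (the email pattern is not RFC-complete), not patterns used by Composer itself.

```python
import re

# Illustrative patterns for validating or extracting entities with regex.
# Both are simplified examples, not production-grade or Composer-supplied.
EMAIL = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")
ZIP_CODE = re.compile(r"\b\d{5}(?:-\d{4})?\b")

# Validate that a whole string matches a predefined pattern.
print(bool(EMAIL.match("ada@example.com")))             # True

# Extract entities (ZIP codes) from a larger utterance.
print(ZIP_CODE.findall("Ship to 98052 or 10001-0001"))  # ['98052', '10001-0001']
```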
Root dialog
See main dialog.
S
Scope
When a property is in scope, it is visible to your bot. See the memory concept article to learn more about the
different scopes of memory.
Settings
An option in the Composer Menu. It navigates users to the Settings page, where users manage settings for their
bot and for Composer.
T
Title bar
A horizontal bar at the top of the Composer screen, bearing the name of the product and the name of the current
bot project.
Toolbar
A horizontal bar under the Title bar in the Composer screen. It is a strip of icons used to perform actions that
manipulate dialogs, triggers, and actions.
Trigger
Triggers are the main component of a dialog; they are how you catch and respond to events. Each trigger has a
condition and a collection of actions to execute when the condition is met.
Additional information:
See the events and triggers concept article.
See the how to define triggers article.
U
User Input
An option in the Composer Menu. It navigates users to the User Input page, where the Language Understanding
editor is located. From there users can view and edit all the Language Understanding templates.
Utterance
An utterance can be thought of as a continuous fragment of speech that begins and ends with a clear pause.
Composer's language processing examines a user's utterance to determine the intent and extract any entities it may
contain.
Additional Information:
See utterances in the Language Understanding concepts article.
V
W
X
Y
Z