AI-900 WhizCard: Quick Exam Reference - Hand-Picked for You

Describe Artificial Intelligence workloads and considerations

Identify features of common AI workloads

Artificial Intelligence (AI) is software that imitates human cognitive abilities and functions.

There are five key elements of Microsoft Artificial Intelligence:
- Machine Learning (ML) – tools and services to produce predictions based on the patterns in input data. ML is the foundation of AI systems.
- Anomaly Detection – tools and services for the identification of unusual activities.
- Natural Language Processing (NLP) – tools and services for the understanding of written and spoken language.
- Computer Vision – tools and services for detection, recognition, and analysis of objects, faces, and text in images and videos.
- Conversational AI – tools and services for intelligent conversation.

Identify guiding principles for responsible AI

The principle of Fairness guides AI solutions to treat everybody fairly, with no bias.
The principle of Reliability and safety guides AI solutions to be reliable in operation, to resist harmful manipulations, and to be safe for the users.
The principle of Privacy and security guides AI solutions to be secure and to follow privacy rules.
The principle of Inclusiveness guides AI solutions to provide their benefits to everyone, with no barriers or limitations. Solutions should follow the three Inclusive Design Principles: Recognize exclusion; Solve for one, extend to many; and Learn from diversity.
The principle of Transparency guides AI solutions to provide full information on their operations, behavior, and limitations.
The principle of Accountability guides AI solutions to follow governance and organizational norms.
Source: Microsoft Learn

Describe features of computer vision workloads on Azure

Identify Azure tools and services for computer vision tasks

Computer Vision uses Machine Learning models trained for images and videos. There are six common Computer Vision tasks:
- Image classification – classifies images based on their content.
- Object detection – identifies objects and their boundaries within the image.
- Semantic segmentation – classifies pixels to the objects they belong to.
- Image analysis – extracts information from images, tags them, and creates a descriptive image summary.
- Face detection, analysis, and recognition – detects, analyzes, and recognizes human faces.
- Optical character recognition (OCR) – detects and recognizes text in images and documents.

The Image classification model classifies images based on their content. This Computer Vision technique helps doctors find cancer and other medical conditions on X-ray or MRI images, and it supports visual product search. Disaster investigation benefits from classifying engineering structures, like bridges, on aerial photos.

Object detection helps recognize objects on images. It places each recognizable object in a bounding box with the class name and a probability score.

Semantic segmentation classifies pixels that belong to a particular object, like flooded areas on aerial images, and highlights them.

An Azure Face API call returns information about face attributes with a confidence level. Face attributes include age, gender, smile, glasses, emotion, makeup, hair, etc.

Optical character recognition includes two sets of APIs: the Read API and the OCR API.

The Read API helps "read" text within predominantly document images. The Read API is an asynchronous service. Microsoft designed it for text-heavy images or documents with many distortions. The Read API returns page information for each page, including page size and orientation; then information about each line on the page; and finally, information about each word in each line, including the bounding box of each word.

The OCR API extracts small amounts of text within an image. It is a synchronous service designed for an immediate result. The OCR API returns regions on the image with text, defined by bounding box coordinates; then lines of text in each region, with bounding box coordinates; and finally, words in each line, with bounding box coordinates.
[Image: an example of the OCR API in action - the service reads the plate of each scooter; the results appear in the Preview section.]
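A minimal Python sketch of the asynchronous Read API call pattern described above, using the requests library. The endpoint, key, and v3.2 path are illustrative assumptions; check your resource's documentation for the exact values:

```python
import time
import requests

# Placeholder values - assumptions, not real credentials.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def read_image_text(image_url: str) -> list[str]:
    """Submit an image to the Read API and poll for the result."""
    # Step 1: submit the image; the Read API is asynchronous, so the
    # response only carries an Operation-Location header to poll.
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    operation_url = resp.headers["Operation-Location"]

    # Step 2: poll until the analysis succeeds or fails.
    while True:
        result = requests.get(
            operation_url, headers={"Ocp-Apim-Subscription-Key": KEY}
        ).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(1)

    # Step 3: flatten pages -> lines, mirroring the page/line/word
    # hierarchy described above.
    lines = []
    for page in result["analyzeResult"]["readResults"]:
        lines.extend(line["text"] for line in page["lines"])
    return lines
```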
Describe fundamental principles of machine learning on Azure

Identify common machine learning types

Machine Learning (ML) is software or a system that builds and trains models based on input data to predict outcomes.

Microsoft Machine Learning is the foundation of the Artificial Intelligence service. It includes four features and capabilities:
- Automated machine learning – automated creation of ML models for non-experts.
- Azure Machine Learning designer – a graphical interface for no-code creation of ML solutions.
- Data and Compute management – cloud-based tools for data science professionals.
- Pipelines – a visual designer for creating ML task workflows.

A Machine Learning algorithm is a program that includes instructions for discovering patterns within the provided data set. ML algorithms are generally grouped by ML technique. There are three main groups:
- The Supervised group makes predictions based on information from previous outcomes (labeled data).
- The Unsupervised group makes predictions without any prior knowledge of the possible outcomes.
- The Reinforcement group learns from the outcomes and decides the next move based on this knowledge.

The Supervised model types rely on structured data, where the input columns or fields are called features and the output is the label (or labels). There are two Supervised model types:
- The Regression model produces a numeric value prediction for the label, like a game score or a stock price.
- The Classification model predicts a class (dog or cat) or multi-class (dog, cat, or rabbit) label based on the incoming data (features).

The Clustering model is the only one that belongs to the Unsupervised model type. The Clustering model predicts which data points belong to which cluster or group. There is no prior knowledge about the data clusters or groups that can be used for prediction: the Clustering algorithm learns the common cluster properties first, then calculates the cluster "membership" probability for each data point.
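For intuition only, a minimal scikit-learn sketch contrasting the three model types on toy data (scikit-learn is our stand-in for illustration; the card itself covers Azure ML):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # features

# Supervised regression: numeric label (e.g., a game score).
y_numeric = np.array([10.0, 20.0, 30.0, 40.0])
reg = LinearRegression().fit(X, y_numeric)
print(reg.predict([[5.0]]))                  # ~[50.]

# Supervised classification: class label (dog or cat).
y_class = np.array(["dog", "dog", "cat", "cat"])
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[3.5]]))                  # ['cat']

# Unsupervised clustering: no labels; K-means groups the points itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                            # cluster membership per point
```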
Describe features of computer vision workloads on Azure

Identify common types of computer vision solution

Computer Vision is one of the key elements of Artificial Intelligence. It includes the following services:
- Computer Vision – analyzes images and video, detects objects and text, extracts descriptions, and creates tags.
- Custom Vision – trains custom models for image classification and custom object detection.
- Face – detects, analyzes, and recognizes faces.
- Form Recognizer – extracts information from scanned forms and invoices.
- Video Indexer – analyzes and indexes video and audio content.

The Computer Vision service works with images. This service brings sense to the image pixels by using them as features for ML models. These predefined models help categorize and classify images, detect and recognize objects, and tag and identify them. Computer Vision can "read" text in images in 25 languages and recognize landmarks.

The Custom Vision service helps create your own computer vision models. These models are based on image classification; as for any classification model, there should be a set of images for each known class or category. The Custom Vision service relies on deep learning techniques, which use convolutional neural networks (CNNs); a CNN links pixels to the classes or categories.

For the creation of a Custom Vision solution, users can use a general Azure Cognitive Services resource, which includes both the training and prediction resources. Or they can create separate Custom Vision resources for training and prediction only; such separation is useful only for resource-tracking purposes.

After provisioning the resources, users train the model at the Custom Vision portal: https://www.customvision.ai. Here they can create applications and submit images; there should be enough images, showing the object classes from various angles. When a model is created, the service assesses the model performance with the following metrics:
- Precision defines the percentage of the class predictions that the model makes correctly. For example, if the model predicts that ten images are bananas, and there are actually only seven bananas, the model precision is 70%.
- Recall defines the percentage of the class identifications that the model makes correctly. For example, if there are ten apple images, and the model identifies only eight, the model recall is 80%.
- Average Precision (AP) is a combined metric of both Precision and Recall.
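A small sketch of the two formulas, using the banana and apple numbers from the examples above:

```python
def precision(true_positives: int, predicted_positives: int) -> float:
    """Share of the model's class predictions that are correct."""
    return true_positives / predicted_positives

def recall(true_positives: int, actual_positives: int) -> float:
    """Share of the actual class instances the model finds."""
    return true_positives / actual_positives

# Ten images predicted as bananas, only seven really are: precision 70%.
print(precision(7, 10))   # 0.7
# Ten apple images, the model identifies eight of them: recall 80%.
print(recall(8, 10))      # 0.8
```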
For automated document processing, Form Recognizer uses two models: a Custom model and a pre-built receipt model. With the Custom model approach, you train the Form Recognizer model on your own forms and data; you need only 5 samples of your form to start. The pre-built receipt model is the Form Recognizer default model, trained to work with receipts. It helps recognize receipts and extract data from them: the date of the transaction, the time of the transaction, merchant information, taxes paid, and the receipt total. The service also recognizes all the text on the receipt and returns it.

The Azure Cognitive Face service currently includes the following functionality: face detection, face verification, finding similar faces, grouping faces by similarities, and person identification.

Describe fundamental principles of machine learning on Azure

Describe core machine learning concepts. Regression Model Metrics.

Azure ML uses model evaluation for the measurement of the trained model accuracy. For Regression models, the Evaluate Model module provides the following five metrics:
- Mean absolute error (MAE) (1) produces a score that measures how close the model is to the actual values: the lower the score, the better the model performance.
- Root Mean Squared Error (RMSE) (2) represents the square root of the mean of the squared errors between predicted and actual values.
- Relative squared error (RSE) (3) is based on the square of the differences between predicted and true values. The value is between 0 and 1; the closer this value is to 0, the better the model performance. The relativity of this metric helps to compare the performance of models whose labels are in different units.
- Relative absolute error (RAE) (4) is based on the absolute differences between predicted and true values. The value is between 0 and 1; the closer this value is to 0, the better the model performance. Like RSE, this relative metric helps to compare models whose labels are in different units.
- Coefficient of determination (R2) (5) reflects the model performance: the closer R2 is to 1, the better the model fits the data.
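The (1)-(5) markers refer to formula images that do not survive in text form. The formulas below are the standard definitions, which we assume match the card's originals (here y_i are the actual values, \hat{y}_i the predictions, and \bar{y} the mean of the actual values):

```latex
\begin{align}
\text{MAE}  &= \tfrac{1}{n}\textstyle\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert \tag{1} \\
\text{RMSE} &= \sqrt{\tfrac{1}{n}\textstyle\sum_{i=1}^{n} (y_i - \hat{y}_i)^2} \tag{2} \\
\text{RSE}  &= \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \tag{3} \\
\text{RAE}  &= \frac{\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert}{\sum_{i=1}^{n} \lvert y_i - \bar{y} \rvert} \tag{4} \\
R^2         &= 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \tag{5}
\end{align}
```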
The Predicted vs. True chart presents the differences between predicted and true values. The dotted line outlines the ideal model performance, and the solid line reflects the average model predictions; the closer these lines are to each other, the better the model performance.

The Residual histogram presents the frequency distribution of residual values. A residual is the difference between the predicted and actual values, and it represents the amount of error in the model. For high-performance models, we should expect most of the errors to be small; they will cluster around 0 on the Residual histogram.

Azure ML Studio provides explanations for the best-fitting model. Part of these explanations is the Global Importance histogram. It presents the importance of each feature in the label prediction.

Describe core machine learning concepts. Classification Model Metrics.

The Confusion matrix (or error matrix) provides a tabulated view of predicted and actual values for each class. It is usually used as a performance assessment for Classification models, but it can also be used for fast visualization of Clustering model results.

A binary confusion matrix is divided into four squares that represent the following values:
- True positive (TP) – the number of positive cases that the model predicted correctly.
- True negative (TN) – the number of negative cases that the model predicted correctly.
- False positive (FP) – the number of negative cases that the model incorrectly predicted as positive.
- False negative (FN) – the number of positive cases that the model incorrectly predicted as negative.

Azure ML uses model evaluation for the measurement of the trained model accuracy. For Classification models, the Evaluate Model module provides the following metrics:
- Accuracy defines how many predictions (positive and negative) are actually predicted right. Formula: (TP+TN)/Total number of cases.
- Precision defines how many of the predicted positive cases are actually positive. Formula: TP/(TP+FP).
- Recall, or True Positive Rate (TPR), defines how many of the actual positive cases the model predicts right. Formula: TP/(TP+FN).
- F1 Score combines Precision and Recall. Formula: 2TP/(2TP+FP+FN).
- Fall-out, or False Positive Rate (FPR), defines how many of the actual negative cases the model incorrectly predicts as positive. Formula: FP/(FP+TN).
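A minimal sketch computing all five metrics from the four confusion-matrix counts, exactly as the formulas above prescribe (the example counts are made up):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the Evaluate Model metrics from confusion-matrix counts."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),
        "recall_tpr": tp / (tp + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "fallout_fpr": fp / (fp + tn),
    }

# Example: 70 TP, 20 TN, 5 FP, 5 FN (illustrative counts only).
print(classification_metrics(70, 20, 5, 5))
```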
The Receiver Operator Characteristic, or ROC Curve, is the relation between FPR (Fall-out) and TPR (Recall). The ROC Curve produces the Area Under Curve (AUC). AUC is a classification model performance metric reflecting how well the model fits the data. For binary classification models, an AUC value of 0.5 represents a model whose predictions are the same as randomly selected "Yes" or "No" values. If the AUC value is below 0.5, the model performance is worse than random. Ideally, the best-fitted model has an AUC of 1; such an ideal model predicts all the values correctly.
Describe core machine learning concepts. Clustering Model Metrics.

Azure ML uses model evaluation for the measurement of the trained model accuracy. For Clustering models, the Evaluate Model module provides the following five metrics:
- Average Distance to Other Center (1) reflects the average distance from each data point in a cluster to the centroids of all other clusters.
- Average Distance to Cluster Center (2) reflects the average distance from each data point in a cluster to the centroid of its own cluster.
- Number of Points (3) is the number of points assigned to each cluster.
- Maximal Distance to Cluster Center (4) reflects the cluster spread. Its value is the maximum of the distances from each point in the cluster to the cluster's centroid.
- Combined Evaluation (5) combines the above per-cluster metrics into one combined model evaluation metric.
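A small numpy sketch of how the per-cluster distance metrics above can be computed (the points and cluster assignments are made up for illustration; Azure ML computes these for you):

```python
import numpy as np

# Toy 2-D points and their cluster assignments.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
labels = np.array([0, 0, 0, 1, 1])

# Centroid of each cluster.
centroids = np.array([points[labels == k].mean(axis=0) for k in (0, 1)])

for k in (0, 1):
    cluster = points[labels == k]
    d = np.linalg.norm(cluster - centroids[k], axis=1)
    print(f"cluster {k}: points={len(cluster)}, "
          f"avg distance to center={d.mean():.3f}, "
          f"max distance to center={d.max():.3f}")
```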
Describe fundamental principles of machine learning on Azure

Describe core machine learning concepts

The feature is a generic name for an input column or field in structured data. The label is a generic name for the model output, a numeric value or a class.

Azure ML Designer provides several Regression ML algorithm modules, like Linear Regression and Decision Forest Regression. The Linear Regression algorithm is based on a linear regression model; the Decision Forest Regression algorithm is based on a decision forest algorithm. There are also several modules for Classification algorithms, like Two-Class Logistic Regression, Multiclass Logistic Regression, or Two-Class Neural Network. The Two-Class and Multiclass Logistic Regression algorithms are based on a logistic regression model; Two-Class Neural Network is based on a neural network algorithm. And there is only one algorithm for Clustering: K-Means Clustering.

The Regression algorithm family has the word "regression" in the module names, without "class," like Linear Regression or Decision Forest Regression. All algorithms in the ML Classification family include the word "class" in their names, like Two-Class Logistic Regression, Multiclass Logistic Regression, or Multiclass Decision Forest.

To help navigate the Azure ML algorithm options, Microsoft provides a guide for selecting the best algorithm for your solution: the Machine Learning Algorithm Cheat Sheet for Azure Machine Learning designer.

Describe fundamental principles of machine learning on Azure

Identify core tasks in creating a machine learning solution

There are several core tasks for building ML solutions.

1. Data ingestion. Data ingestion is the process of bringing data from different sources into a common repository or storage. After ingestion, the data is accessible to various services. There are three general ways to get the data: upload a dataset, manual input, and import data. Azure ML Studio has four options to import data: From local files, From datastore, From web files, and From Open Datasets.

2. Data preparation and data transformation. Before we can use the data for ML modeling, we need to prepare, or pre-process, it: find and correct data errors, remove outliers, and impute missing data with appropriate values. Azure Auto ML executes data preparation during the run. Azure ML Designer provides several modules for these tasks: Clip Values (for outliers), Clean Missing Data (for missing data), Remove Duplicate Rows, Apply SQL Transformation, and Python and R scripts. These prebuilt modules come from the Data Transformation and Languages groups. [Figure: Azure ML Designer data transformation modules]

3. Feature selection and engineering. Before model training, we need to review the data, select the features that influence the prediction (the label), and discard the other features from the final dataset. If the dataset has numeric fields/columns on different scales, like one column with all values from 0 to 0.5 and another column from 100 to 500, we need to bring them to a common scale. This process is named data normalization. [Figure: dataset before and after data normalization] If, for better model performance, the ML solution requires generating a new feature based on the current features, that operation is called feature engineering. Users can use the Apply SQL Transformation, Python, and R modules to script these operations. In Azure ML, featurization is the generic name for the application of all data pre-processing techniques, like scaling, normalization, or feature engineering.
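A minimal sketch of min-max scaling, one common way to perform the data normalization described above (the column values echo the 0-0.5 and 100-500 example; Azure ML's modules handle this without code):

```python
import numpy as np

# Two feature columns on very different scales.
col_a = np.array([0.05, 0.20, 0.35, 0.50])
col_b = np.array([100.0, 250.0, 400.0, 500.0])

def min_max_normalize(col: np.ndarray) -> np.ndarray:
    """Rescale a column to the common 0-1 range (min-max normalization)."""
    return (col - col.min()) / (col.max() - col.min())

print(min_max_normalize(col_a))  # [0.    0.333 0.667 1.   ]
print(min_max_normalize(col_b))  # [0.    0.375 0.75  1.   ]
```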
scale. This process is named data normalization. If, for better model Azure ML Designer Data transformation Modules Azure ML Designer is the "low-code" option. It helps users to create
performance, the ML solution requires to generate a new feature based on the ML workflow using Pipelines. To create a Pipeline, users drag&drop Notebooks is the "coding" option. Users can use Python language
current features, this operation is called feature engineering. Users can use modules from the library on the Designer's canvas. The library and Python SDKs for coding their ML solutions.
Apply SQL Transformation, Python, and R modules to script these operations. includes Data Transformation, ML Algorithms,Model Scoring &
In Azure ML, featurization is the name for the generic application of all Evaluation, and other prebuilt modules.
data-preprocessing techniques, like scaling, normalization, or feature
engineering.

4. Model training. After data-preprocessing, data is almost ready for model


training. We need to have two sets of data: one for training and one for test or
validation. Auto ML splits the original dataset into training and validation sets
automatically. However, users have the option to upload a validation set. Azure
ML Designer provides a Split Data module for creating training and test
datasets. Before the training, we need to connect the left output of a Split Data Dataset before and after data normalization
module to a Training module's right input.We also need to connect the selected
for the solution ML Algorithm module to the Training module's left input. For
Regression and Classification models, we need to mark a label column in the
Training module. And we can run the training.

Important note: Azure ML Studio requires to create a compute


resource for model training. We must provision Training Clusters
Describe capabilities of no-code machine learning with Azure Machine Learning

Azure ML Studio provides all the essential tools to create and work with models. A collection of runs, or trials, in Azure ML is called an experiment. There are three main Author options for creating the experiments: Automated ML (no-code), Azure ML Designer (low-code), and Notebooks (coding).

Automated ML is the "no-code" option. It doesn't require any specific data science knowledge. A simple wizard-type interface helps users set up an Auto ML run. It includes dataset selection, experiment and compute resource setup, and learning task choice. The wizard presents three task types: Classification, Regression, and Time series forecasting. After task selection, users click "Finish" and start the run. The Auto ML run settles on the best algorithm and creates the model suited to the user's goals.

Azure ML Designer is the "low-code" option. It helps users create an ML workflow using Pipelines. To create a Pipeline, users drag and drop modules from the library onto the Designer's canvas. The library includes Data Transformation, ML Algorithms, Model Scoring & Evaluation, and other prebuilt modules.

Notebooks is the "coding" option. Users can use the Python language and Python SDKs to code their ML solutions.

Describe features of Natural Language Processing (NLP) workloads on Azure
Identify features of common NLP Workload Scenarios
Natural Language Processing (NLP) is one of the key elements of Artificial Intelligence. It includes four services:
- Text Analytics – helps analyze text documents, detect a document's language, extract key phrases, determine entities, and provide sentiment analysis.
- Translator Text – helps translate text in real time between 70+ languages.
- Speech – helps recognize and synthesize speech, recognize and identify speakers, and translate live or recorded speech.
- Language Understanding Intelligent Service (LUIS) – helps to understand voice or text commands.

Azure Cognitive Services provide two types of translation: Text and Speech. The Azure Translator service supports multi-language, near real-time text translation between 70 languages. It uses Neural Machine Translation (NMT) technology as its backbone; the significant benefit of NMT is that it assesses the entire sentence before translating the words. Custom Translator extends Translator with custom, domain-specific language models: it customizes NMT systems for the translation of specific domain terminology, and the custom models can benefit both the Translator and Speech services.

The Translator Text API service has two options for fine-tuning the results:
- Profanity filtering controls the translation of profanity words by marking them as profanity or by omitting them.
- Selective Translation allows a user to tag a word or phrase that need not be translated, like a brand name.

Users can easily integrate Translator and Custom Translator with their applications. It is important to know that Translator doesn't store any user's data. If we need to translate the same text into several languages, we can do it in one API request: we submit the text for translation in the API call body and a sequence of language codes as parameters.
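A minimal sketch of that single-request, multi-language pattern against the Translator Text v3.0 REST API; the key and region values are placeholder assumptions:

```python
import requests

KEY = "<your-translator-key>"        # assumption, not a real credential
REGION = "<your-resource-region>"    # assumption

# One request, several target languages: the codes go in repeated
# "to" query parameters, and the text travels in the request body.
resp = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": ["fr", "de", "es"]},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Ocp-Apim-Subscription-Region": REGION,
    },
    json=[{"Text": "Hello, world!"}],
)
for translation in resp.json()[0]["translations"]:
    print(translation["to"], "->", translation["text"])
```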

Describe features of Natural Language Processing (NLP) workloads on Azure

Identify features of common NLP Workload Scenarios

Azure Text Analytics is a part of Natural Language Processing (NLP). It includes the following four services:
- Language detection – helps to identify the language of the text.
- Sentiment analysis – helps to analyze text and returns sentiment scores and labels for each sentence.
- Key phrase extraction – helps to extract the key phrases from unstructured text.
- Named entity recognition – helps to identify entities in the text and group them into categories.

All Text Analytics services, except Language detection, utilize the same JSON body format for API calls. This format includes three fields for each document in the collection: the language, the document id, and the text for analysis. The Language detection service accepts only two: the document id and the text. Each text should have fewer than 5,120 characters, and the documents collection can handle up to 1,000 items (ids). Documents in the collection can be in different languages.
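A sketch of that shared JSON body, wrapped in a Python call to the sentiment endpoint; the endpoint, key, and v3.0 path are illustrative assumptions:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # assumption
KEY = "<your-text-analytics-key>"                                 # assumption

# The shared body format: language, id, and text per document.
# Language detection uses the same shape minus the "language" field.
documents_body = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "Peter was surprised and very happy to meet Sara in Paris."},
        {"id": "2", "language": "fr", "text": "Bonjour tout le monde."},
    ]
}

resp = requests.post(
    f"{ENDPOINT}/text/analytics/v3.0/sentiment",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=documents_body,
)
print(resp.json())  # per-sentence sentiment labels and scores
```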
Named Entity Recognition (NER) is a Text Analytics service. It identifies entities in the text and groups them into categories, like person, organization, location, event, and others. The NER service has two types of API calls: general entity recognition and entity linking. For example, there are 18 meanings for the word "bank": it can be "bank of the river," "agent bank," "food bank," etc. The entity linking service analyzes the links between entities to resolve this kind of meaning ambiguity. To do this effectively, the service uses Wikipedia as a knowledge base for entity linkage and identification in different languages. Currently, entity linking supports only the English and Spanish languages; general entity recognition, on the contrary, supports 23 languages.

As an example, consider the entity linking call (entities/linking) for the sentence "After they met at the bank of the Seine, they walked to the Bank of France building." The word "bank" is used twice: as a river bank and as a financial institution. The service recognizes the difference by providing Wikipedia links for the Seine river and the Bank of France. If we call NER without the entity linking option (entities/recognition/general), we get just two locations.

Sentiment analysis is a Text Analytics service. It analyzes text and returns sentiment scores (between 0 and 1) and labels ("positive," "neutral," and "negative") for each sentence. A score close to 0 means a negative sentiment, and a score close to 1 a positive one; in cases of neutral or undefined sentiment, the score is 0.5. An example is the Sentiment Analysis output for the phrase "Peter was surprised and very happy to meet Sara in Paris." Currently, the Sentiment Analysis service supports 20 languages. The service helps analyze social media, customer reviews, or other media that carry people's opinions.

Key Phrase Extraction is a part of Azure Text Analytics. It helps to extract the key phrases from unstructured text. This functionality is beneficial when you need to create a summary or a catalog from document content or understand the key points of customer reviews. Key Phrase Extraction works best with long documents. Using the Text Analytics APIs, we submit the documents in the same simple JSON format described above: the language, the document id, and the text for key phrase extraction; each text under 5,120 characters, up to 1,000 items per collection, with documents possibly in different languages. Currently, the Key Phrase Extraction service supports 15 languages.

Identify Azure tools and services for NLP workloads

The biggest challenge in processing language is to understand the meaning of the text or speech. Language understanding models resolve this issue. The Azure Language Understanding service, or LUIS, helps users to work with language models. The primary goal of LUIS-based applications is to understand the user's intention. LUIS examines the user's input, or utterance, and extracts the keywords, or entities. It then uses a compiled list of entities linked to intents and outputs the probable action or task that the user wants to execute.

Azure provides a LUIS portal (https://www.luis.ai) for creating solutions based on language models. There are two stages in this process: authoring and prediction.

Authoring is the process of language understanding model creation and training. To train these models, we need to supply the following key elements:
- Entity – the word or phrase that is the focus of the utterance, like the word "light" in the utterance "Turn the lights on." There are four types of entities that we can create: Machine-Learned, List, RegEx, and Pattern.any.
- Intent – the action or task that the user wants to execute, reflected in the utterance as a goal or purpose. We can define the intent "TurnOn" for the utterance "Turn the lights on."
- Utterance – the user's input that your model needs to interpret, like "Turn the lights on" or "Turn on the lights."

We can use pre-built LUIS collections of intents and entities for common domains, like Calendar, Places, Utilities, etc. Every LUIS model also has the default "None" intent. It is empty by default and can't be deleted; this intent matches utterances outside of the application domain. After we define the intents and entities, we can iteratively train our model by using sample utterances. When we are satisfied with the model performance, we publish the LUIS application.

Prediction is the process of publishing and using the model. Clients connect to the prediction resource's endpoint by providing an authentication key. Before creating a LUIS application, users need to choose what type of Azure resources to provision for their solution. There are two types: dedicated LUIS resources (authoring, prediction, or both) and the general Azure Cognitive Services resource (prediction only). This flexibility helps the user manage resources and access to different Cognitive Services, but it adds some overhead for developers.
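A minimal sketch of the prediction stage: querying a published LUIS app's production slot with an utterance. The endpoint, app id, key, and v3.0 path are placeholder assumptions:

```python
import requests

ENDPOINT = "https://<your-prediction-resource>.cognitiveservices.azure.com"
APP_ID = "<your-luis-app-id>"        # assumption
KEY = "<your-prediction-key>"        # assumption

# Send the utterance to the published app's production slot.
resp = requests.get(
    f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict",
    params={"subscription-key": KEY, "query": "Turn the lights on"},
)
prediction = resp.json()["prediction"]
print(prediction["topIntent"])   # e.g., "TurnOn"
print(prediction["entities"])    # e.g., the "light" entity
```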
Azure Speech includes the following services:
- Speech-to-Text – transcribes audio data into text.
- Text-to-Speech – synthesizes human-like voice audio data from the input text.
- Speech Translation – provides real-time, multi-language translation of spoken-language audio data into speech or text.
- Voice Recognition – recognizes and authenticates a speaker by specific voice characteristics.

The Azure Voice Recognition service helps to identify and verify speakers by the unique characteristics of their voices. Speakers train the service by using their voice, and the service creates an enrollment profile. Based on this profile, the system can identify a speaker by his or her voice. The Speaker Recognition APIs can identify speakers in voice recordings, real-time chats, and video streams.

The Azure Speech-to-Text service uses the Universal language model. This Microsoft proprietary model is optimized for conversational and dictation scenarios. Users can use the service for real-time or batch transcription of audio data into text format. Real-time Speech-to-Text transcribes or translates audio streams or files into text. Batch Speech-to-Text transcription is an asynchronous service; it works with large audio data stored in Azure Blob Storage.

Speech Translation is a part of the Speech services. It is powered by Translator and combines the Translator Speech APIs with Custom Speech services. It provides real-time, multi-language translation functionality for users' applications, for both Speech-to-Speech and Speech-to-Text translations.

The Azure Speech service provides two APIs: Speech-to-Text and Text-to-Speech. These APIs include the speech recognition and speech synthesis functionalities. Speech recognition and speech synthesis help determine the content of spoken language and generate audio content with a synthetic voice. Speech recognition uses many models, but two are essential: the Acoustic model and the Language model. The Acoustic model converts audio into phonemes; the Language model matches phonemes with words. Examples of speech recognition applications include closed captions, transcripts of phone calls or meetings, and text dictation.

Speech synthesis is the "opposite" service to speech recognition: it requires text content and a voice to vocalize the content, and it works in reverse to recognition. First, speech synthesis tokenizes the text into individual words and matches them with phonetic sounds. Then it puts the sounds together into prosodic units, like phrases or sentences, and creates phonemes from them. After that, the service converts the phonemes into an audio sequence that the voice synthesizer outputs. Speech synthesis is used in many areas, like personal voice assistants, phone voice menus, or public announcements in airports and train stations.

The Azure Text-to-Speech service gives users an option to select between standard and neural voice generation. Neural voices sound very close to human, reproducing the stress and intonation of spoken language. Users also can create custom voices. We can control voice output options with the Speech Synthesis Markup Language (SSML). SSML, an XML-based language, can change the voice speed and pitch, or how the text or its parts should be read.
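A minimal SSML sketch matching the description above; the voice name is illustrative, and the string would be submitted to the Text-to-Speech service (for example, via the Speech SDK's speak_ssml_async):

```python
# An SSML document (the voice name is an illustrative assumption) that
# slows the speaking rate and raises the pitch, as described above.
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <prosody rate="slow" pitch="+10%">
      Welcome aboard. The train to Paris departs from platform two.
    </prosody>
  </voice>
</speak>
"""
# Passing this string to the Text-to-Speech endpoint controls how
# the text is read, instead of using the default voice settings.
```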
Describe features of conversational AI workloads on Azure

Identify common use cases for conversational AI

Azure Conversational AI supports agents, or bots, that can keep a conversation in turns with users. Examples of such systems are web chat agents and smart home devices that can answer your questions and act on your commands.

Every organization is trying to keep its costs low, and customer service is usually one of the most expensive operations. The challenge is how to lower customer support costs without lowering service quality. Conversational AI agents became a popular solution for this problem, as an alternative or an addition to human customer service.

A simple example of a Conversational AI agent is a WebChat Bot. A WebChat Bot can conversationally answer customers' questions in real time. Customers can interact with the bot through many channels, like a web browser, a chat application, phone calls, emails, text messages, social media, and others. To create a WebChat Bot, we need two components: a Knowledge base and a Bot Service. The Knowledge base stores the information that the bot accesses and provides answers from; we can build it from website information, FAQ documents, chit-chat lists, etc. Usually, the Knowledge base is a list of question-and-answer pairs. The Bot Service provides an interface for users to interact with the Knowledge base through communication channels.

Another popular customer service solution is telephone voice menus. The telephone voice menu functionality is a good example of a Speech Synthesis service. Speech Synthesis is a part of the Azure Text-to-Speech services. Text-to-Speech provides two options for the voice: Standard and Neural. The Neural voice uses deep neural networks for speech synthesis and makes the output sound very close to a human voice. It reduces listening fatigue when people interact with automated attendants.

Users can create their own bots using the Microsoft Bot Framework. The Bot Framework provides conversation control and integrates with QnA Maker. The Microsoft Azure Bot Service is based on the Bot Framework. Users can provision a Web App Bot in Azure. During the creation of the Web App Bot, users have to choose between two pre-defined templates:
- Echo Bot – a simple bot that echoes the customer's messages.
- Basic Bot – includes "out of the box" integration with LUIS and Bot Analytics services.

Using Bot Framework Skills, users can extend a bot's capabilities. Skills are like standalone bots that focus on specific functions: Calendar, To Do, Point of Interest, etc. (Source: Microsoft Bot Framework Documentation)

When users want to embed a Web Bot within their website, they use the Web Chat control. This control requires a secret key for bot access. The secret key is a master key that allows access to all of the bot's conversations; its exposure in a production environment creates a significant security risk. To limit this risk, users need to generate a token based on the secret key. A token gives access only to a single conversation and has an expiration term.
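One common way to do this exchange is the Direct Line token endpoint; a minimal sketch, assuming the bot uses the Direct Line channel (the secret is a placeholder and must stay server-side):

```python
import requests

SECRET = "<your-direct-line-secret>"  # assumption - never ship this to the browser

# Exchange the master secret for a short-lived, single-conversation token.
resp = requests.post(
    "https://directline.botframework.com/v3/directline/tokens/generate",
    headers={"Authorization": f"Bearer {SECRET}"},
)
token = resp.json()["token"]  # scoped to one conversation, expires after a set term
```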
Identify Azure services for conversational AI

Conversational AI is one of the key elements of Artificial Intelligence. It includes two services:
- QnA Maker – helps to create a knowledge base, the foundation for a conversation between humans and AI agents.
- Azure Bot Service – helps to create, publish, and manage Conversational AI agents, or bots.

The Microsoft Bot Framework supports two models of bot integration with agent engagement platforms, like a customer support service: Bot as agent and Bot as proxy.
The Bot as agent model integrates the bot on the same level as live agents: the bot is engaged in interactions the same way as customer support personnel, and a handoff protocol regulates the bot's disengagement and the transfer of the user's communication to a live person. This is the most straightforward model to implement.
The Bot as proxy model integrates the bot as a primary filter before the user interacts with a live agent: the bot's logic decides when to transfer a conversation and where to route it. This model is more complicated to implement. (Source: Microsoft Documentation)

A Personal Digital Assistant Bot Framework solution is based on three major components: Azure Bot Service, the Bot Framework, and a Knowledge base. During the creation of the Azure Web App Bot, users can select the Basic Bot template; this basic bot can be extended to a Virtual Assistant using Enterprise Bot Services. A Digital Assistant incorporates many services, like LUIS, QnA Maker, Cosmos DB, Content Moderation, and others. (Source: Microsoft Documentation)

The Azure QnA Maker service transforms semi-structured text information into structured question-and-answer pairs. Azure stores these pairs as a Knowledge base (KB). When the service receives a customer's question, it matches the question with answers from the Knowledge base and outputs the most appropriate answer with a confidence score.

Before users create a new Knowledge base, they need to provision the QnA Maker resource in an Azure subscription. Contrary to other Cognitive Services, QnA Maker depends on several Azure services:
- QnA Maker Management service – the model training and publishing service.
- Azure Search – stores the data submitted to QnA Maker.
- Azure App Service – hosts QnA Maker's endpoint.
Azure creates all these services with the provisioning of the QnA Maker instance. After resource deployment, users can create a Knowledge base and connect it to the instance using the Azure QnA portal (https://www.qnamaker.ai/). The portal helps populate the KB with information from online FAQs, product manuals, different file documents, etc.

If users want to attach a personality to the conversations, they can add Chit-chat – lists of small-talk pairs. The portal provides several pre-built Chit-chat lists: professional, friendly, witty, caring, and enthusiastic. Azure adds the selected Chit-chat to the user's Knowledge base.

A QnA Maker resource can be connected to several Knowledge bases, but the language of the first KB defines the language for the rest of the bases within the QnA Maker instance.

After creating a Knowledge base, users need to train and test their models, and then publish them for clients to use over the REST API. Applications accessing a published Knowledge base must provide the KB id, the KB endpoint address, and the KB authorization key. Users can deliver the KB by creating a bot; they can also use QnA Maker's bot generation functionality for their Knowledge base.
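A minimal sketch of such a client query, assuming the standard QnA Maker generateAnswer runtime call with the three required values above as placeholders:

```python
import requests

# Placeholders for the three values a client must supply (see above).
KB_ID = "<your-knowledge-base-id>"
KB_ENDPOINT = "https://<your-app-service>.azurewebsites.net"  # hosted on App Service
KB_KEY = "<your-endpoint-key>"

resp = requests.post(
    f"{KB_ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
    headers={"Authorization": f"EndpointKey {KB_KEY}"},
    json={"question": "How do I reset my password?"},
)
answer = resp.json()["answers"][0]
print(answer["answer"], answer["score"])  # best match plus confidence score
```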
AZURE COGNITIVE SERVICES REST API

An Azure Cognitive Services resource includes access to a list of Cognitive Services. Customers can use a common endpoint and authentication key to access Computer Vision, Content Moderator, Face, Language Understanding, Speech, Text Analytics, Translator, and other services. The endpoint address consists of the Azure Cognitive Services resource name and the cognitiveservices.azure.com domain, like https://cognsrv-wl.cognitiveservices.azure.com/, where "cognsrv-wl" is the resource name. Users can utilize the REST API or an SDK to call the services. Some services are not on this list and require a separate resource, like Custom Vision, QnA Maker, or Web Chat Bot.

The REST API helps users access Azure AI applications over HTTP. A REST API is a set of rules for Web services; it stands for "REpresentational State Transfer Application Programming Interface." The HTTP REST protocol comprises two parts: request and response. A user makes a request to an AI endpoint to process the input data; after the ML service processes the request, it sends back a response with the results.

The request includes four key parts: the service endpoint URL, a method, a header, and a body. Every endpoint has a service root URL, a service path, and optional query parameters. The HTTP protocol defines five main action methods for the service: GET, POST, PUT, PATCH, and DELETE. The header contains the authentication key for the service, and the body includes the data that the user wants the AI service to process, like the text to analyze. The request can be sent using Postman or the cURL command. The response contains the service output in JSON format.
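Assembling those four request parts in Python's requests library, with placeholder values throughout (the language-detection path is an illustrative choice):

```python
import requests

root_url = "https://cognsrv-wl.cognitiveservices.azure.com"   # service root URL
path = "/text/analytics/v3.0/languages"                       # service path (assumption)
params = {}                                                   # optional query parameters
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}         # auth key in the header
body = {"documents": [{"id": "1", "text": "Hello world"}]}    # data to process

# POST request to the endpoint; the service output comes back as JSON.
resp = requests.post(root_url + path, params=params, headers=headers, json=body)
print(resp.json())
```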
- QnA Maker Management service ? the model training and publishing Abou t Wh izlabs an d Wh izCar d
service. QnA Maker resource can be connected to several Knowledge bases. But
- Azure Search ? stores data that submitted to QnA Maker. Whizlabs is the premier provider of educational and training content. For the last 20 years, millions of IT specialists have been using Whizlabs. Whizlabs
the language of the first KB defines the language for the rest of the bases
- Azure App Service ? hosts QnA Maker's endpoint. practice tests, video courses, and labs help them prepare and pass the toughest certification exams from companies like Microsoft, AWS, Google, Oracle,
within the QnA Maker instance.
IBM, and others.
WhizCard summarizes the required knowledge for the certification and serves as guidance for the exam preparation. Every section header and
sub-headers reflect the Microsoft Skills Measured document for the exam. They are also linked to the appropriate Microsoft documents for in-dept study.
After creating Knowledge base, users need to train and test their models. Then published for the clients to use over the REST API. Applications accessing Please let us know if you like our WhizCard or how we can improve it. Visit us at www.whizlabs.com or connect with us on Twitter, LinkedIn, Facebook
published Knowledge base must provide the KB id, KB endpoint address, and KB authorization key. Users can deliver KB by creating a Bot. They also can or Slack.
use the QnA Maker functionality of bot generation for their Knowledge base. Good luck on your learning journey, and may the power of knowledge be with you.
Disclosure: The information provided in WhizCards is for educational purposes only; created in our efforts to help aspirants prepare for the AI-900 certification exam. Though references have
been taken from Microsoft documentation, it?s not intended as a substitute for the official docs. The document can be reused, reproduced, and printed in any form; ensure that appropriate
sources are credited and required permissions are received.

www.whizlabs.com | Whizlabs Education, Inc. © 2020 | Ver. 01.02.1129-20 | Created by Dr. Andrei Sergeev
