Azure 2

The document discusses principles and practices for building responsible AI systems, emphasizing reliability, safety, fairness, and transparency. It outlines various Azure services for machine learning, computer vision, and natural language processing, detailing their functionalities and applications. Additionally, it covers the process of creating machine learning models, including data preparation, model training, and deployment within Azure Machine Learning services.


Question #10 Topic 1

HOTSPOT -
To complete the sentence, select the appropriate option in the answer area.
Hot Area:

Correct Answer:

Reliability and safety: To build trust, it's critical that AI systems operate reliably, safely, and consistently under normal
circumstances and in unexpected conditions.
These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and
resist harmful manipulation.
Reference:
https://docs.microsoft.com/en-us/learn/modules/responsible-ai-principles/4-guiding-principles
Question #11 Topic 1
You are building an AI system.
Which task should you include to ensure that the service meets the Microsoft transparency principle for responsible AI?
 A. Ensure that all visuals have an associated text that can be read by a screen reader.
 B. Enable autoscaling to ensure that a service scales based on demand.
 C. Provide documentation to help developers debug code.
 D. Ensure that a training dataset is representative of the population.
Correct Answer: C
Reference:
https://docs.microsoft.com/en-us/learn/modules/responsible-ai-principles/4-guiding-principles
Question #12 Topic 1
DRAG DROP -
Match the types of AI workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type
may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.
Select and Place:

9. What are the features and capabilities of Azure Machine Learning Service?
Automated machine learning
This feature enables non-experts to quickly create an effective machine learning model from data.
Azure Machine Learning designer
A graphical interface enabling no-code development of machine learning solutions.
Data and compute management
Cloud-based data storage and compute resources that professional data scientists can use to run data experiment code at scale.
Pipelines
Data scientists, software engineers, and IT operations professionals can define pipelines to orchestrate model training, deployment, and management tasks.
10. In which scenarios would you use anomaly detection — a machine learning-based technique that analyzes data over time and identifies unusual changes?
1. Monitoring credit card transactions to detect unusual usage patterns that might indicate fraud.
2. An application that tracks activity in an automated production line and identifies failures.
3. A racing car telemetry system that uses sensors to proactively warn engineers about potential mechanical failures before they happen.
11. Is there a service for anomaly detection in Microsoft Azure?
The Anomaly Detector service provides an application programming interface (API) that developers can use to create anomaly detection solutions. To learn more, view the Anomaly Detector service web site.
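As an illustration of how a client might call this API, here is a minimal sketch using Python's requests library. The endpoint host, key, and API path are placeholders and assumptions, and the exact request shape (including the minimum number of data points) should be confirmed against the Anomaly Detector documentation.

```python
# Minimal sketch: calling the Anomaly Detector REST API with the requests library.
# The resource name, key, and API path below are placeholders, not verified values.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-anomaly-detector-key>"

body = {
    "granularity": "daily",
    "series": [  # a real request needs a longer series of timestamp/value points
        {"timestamp": "2024-01-01T00:00:00Z", "value": 120.0},
        {"timestamp": "2024-01-02T00:00:00Z", "value": 118.5},
        {"timestamp": "2024-01-03T00:00:00Z", "value": 410.0},
    ],
}

response = requests.post(
    f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
response.raise_for_status()
print(response.json().get("isAnomaly"))  # one boolean per point, for example [False, False, True]
```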
12. Can you name an app that is based on computer vision?
The Seeing AI app is a great example of the power of computer vision. Designed for the blind and low vision community,
the Seeing AI app harnesses the power of AI to open up the visual world and describe nearby people, text and objects.
13. What are the tasks that come under Computer Vision?
Image classification
Image classification involves training a machine learning model to classify images based on their contents. For example, in a traffic monitoring solution you might use an image classification model to classify images based on the type of vehicle they contain, such as taxis, buses, cyclists, and so on.
Object detection
Object detection machine learning models are trained to classify individual objects within an image, and identify their location with a bounding box. For example, a traffic monitoring solution might use object detection to identify the location of different classes of vehicle.
Semantic segmentation
Semantic segmentation is an advanced machine learning technique in which individual pixels in the image are classified according to the object to which they belong. For example, a traffic monitoring solution might overlay traffic images with "mask" layers to highlight different vehicles using specific colors.
Image analysis
You can create solutions that combine machine learning models with advanced image analysis techniques to extract information from images, including "tags" that could help catalog the image or even descriptive captions that summarize the scene shown in the image.
Face detection, analysis, and recognition
Face detection is a specialized form of object detection that locates human faces in an image. This can be combined with classification and facial geometry analysis techniques to infer details such as gender, age, and emotional state, and even recognize individuals based on their facial features.
Optical character recognition (OCR)
Optical character recognition is a technique used to detect and read text in images. You can use OCR to read text in photographs (for example, road signs or store fronts) or to extract information from scanned documents such as letters, invoices, or forms.
14. What are the Computer Vision services in Microsoft Azure?
Computer Vision
You can use this service to analyze images and video, and extract descriptions, tags, objects, and text.
Custom Vision
Use this service to train custom image classification and object detection models using your own images.
Face
The Face service enables you to build face detection and facial recognition solutions.
Form Recognizer
Use this service to extract information from scanned forms and invoices.
15. What can you do with NLP?
* Analyze and interpret text in documents, email messages, and other sources.
* Interpret spoken language, and synthesize speech responses.
* Automatically translate spoken or written phrases between languages.
* Interpret commands and determine appropriate actions.
16. What are NLP services in Microsoft Azure?
Text Analytics
Use this service to analyze text documents and extract key phrases, detect entities (such as places, dates, and people), and evaluate sentiment (how positive or negative a document is).
Translator Text
Use this service to translate text between more than 60 languages.
Speech
Use this service to recognize and synthesize speech, and to translate spoken languages.
Language Understanding Intelligent Service (LUIS)
Use this service to train a language model that can understand spoken or text-based commands.
17. What are the Conversational AI services in Microsoft Azure?
QnA Maker
This cognitive service enables you to quickly build a knowledge base of questions and answers that can form the basis of a dialog between a human and an AI agent.
Azure Bot Service
This service provides a platform for creating, publishing, and managing bots. Developers can use the Bot Framework to create a bot and manage it with Azure Bot Service, integrating back-end services like QnA Maker and LUIS, and connecting to channels for web chat, email, Microsoft Teams, and others.
18. What is responsible AI?
Artificial Intelligence is a powerful tool that can be used to greatly benefit the world. However, like any tool, it must be used responsibly. At Microsoft, AI software development is guided by a set of six principles, designed to ensure that AI applications provide amazing solutions to difficult problems without any unintended negative consequences.
19. What are the six guiding principles of responsible AI?
Fairness
AI systems should treat all people fairly. For example, suppose you create a machine learning model to support a loan approval application for a bank. The model should make predictions of whether or not the loan should be approved without incorporating any bias based on gender, ethnicity, or other factors that might result in an unfair advantage or disadvantage to specific groups of applicants.
Reliability and safety
AI systems should perform reliably and safely. For example, consider an AI-based software system for an autonomous vehicle, or a machine learning model that diagnoses patient symptoms and recommends prescriptions. Unreliability in these kinds of systems can result in substantial risk to human life.
Privacy and security
AI systems should be secure and respect privacy. The machine learning models on which AI systems are based rely on large volumes of data, which may contain personal details that must be kept private. Even after the models are trained and the system is in production, it uses new data to make predictions or take action that may be subject to privacy or security concerns.
Inclusiveness
AI systems should empower everyone and engage people. AI should bring benefits to all parts of society, regardless of physical ability, gender, sexual orientation, ethnicity, or other factors.
Transparency
AI systems should be understandable. Users should be made fully aware of the purpose of the system, how it works, and what limitations may be expected.
Accountability
People should be accountable for AI systems. Designers and developers of AI-based solutions should work within a framework of governance and organizational principles that ensure the solution meets ethical and legal standards that are clearly defined.
20. You want to create a model to predict sales of ice cream based on historic data that includes daily ice cream
sales totals and weather measurements. Which Azure service should you use?
Azure Machine Learning
21. You want to train a model that classifies images of dogs and cats based on a collection of your own digital
photographs. Which Azure service should you use?
Custom Vision
22. You are designing an AI application that uses computer vision to detect cracks in car windshields, and warns
drivers when a windshield should be repaired or replaced. When tested in good lighting conditions, the
application successfully detects 99% of dangerously damaged glass. Which of the following statements should
you include in the application’s user interface?
When used in good lighting conditions, this application can be used to identify potentially dangerous cracks and defects in
windshields. If you suspect your windshield is damaged, even if the application does not detect any defects, you should
have it inspected by a professional.
23. You create a machine learning model to support a loan approval application for a bank. The model should
make predictions of whether or not the loan should be approved without incorporating any bias based on gender,
ethnicity, or other factors that might result in an unfair advantage or disadvantage to specific groups of
applicants. Which principle of responsible AI does this come under?
Fairness
24. AI-based software application development must be subjected to rigorous testing and deployment
management processes to ensure that they work as expected before release. Which principle of responsible AI
does this come under?
Reliability and safety
25. The machine learning models on which AI systems are based rely on large volumes of data, which may
contain personal details that must be kept private. Which principle of responsible AI does this come under?
Privacy and security
26. AI systems should empower everyone and engage people. AI should bring benefits to all parts of society,
regardless of physical ability, gender, sexual orientation, ethnicity, or other factors. Which principle of
responsible AI does this come under?
Inclusiveness
27. AI systems should be understandable. Users should be made fully aware of the purpose of the system, how it
works, and what limitations may be expected. Which principle of responsible AI does this come under?
Transparency
28. Designers and developers of AI-based solutions should work within a framework of governance and
organizational principles that ensure the solution meets ethical and legal standards that are clearly defined.
Which principle of responsible AI does this come under?
Accountability
Describe fundamental principles of machine learning on Azure (30–35%)
Practice questions based on these concepts
 Identify common machine learning types
 Describe core machine learning concepts
 Identify core tasks in creating a machine learning solution
 Describe the capabilities of no-code machine learning with Azure Machine Learning
29. Adventure Works Cycles is a business that rents cycles in a city. The business could use historic data to train
a model that predicts daily rental demand in order to make sure sufficient staff and cycles are available. Which
service should you use?
Azure Machine Learning
30. What are the various kinds of machine learning models?
Regression (supervised machine learning)
We use historic data to train the model to predict a numerical value.
Classification (supervised machine learning)
We fit the features into the model and predict the classification of the label.
Unsupervised machine learning
You don't have a label to predict; you only have features. You have to create clusters based on the features.
31. What is the process of machine learning regardless of the model?
Data ingestion
You need to get the data to train your model.
Data pre-processing
Identify the features that help the model to predict, and discard the others.
Data cleaning
Fix any errors and remove the items that contain errors.
Replacing feature values
Find replacement values for any missing feature values. In this process you might use existing feature engineering to find the value.
Apply algorithms
Apply algorithms to this data and iterate until you are happy with the model's predictions.
Deploy model
Finally, you deploy your model to the machine learning service so that applications can connect to it.
32. To use Azure Machine Learning, you create a workspace in your Azure subscription. Is this true?
True. You can then use this workspace to manage data, compute resources, code, models, and other artifacts related to your machine learning workloads.
33. What is the benefit of using the Azure Machine learning service?
Data scientists expend a lot of effort exploring and pre-processing data, and trying various types of model-training algorithms to produce accurate models, which is time consuming and often makes inefficient use of expensive compute hardware.
Azure Machine Learning is a cloud-based platform for building and operating machine learning solutions in Azure. It includes a wide range of features and capabilities that help data scientists prepare data, train models, publish predictive services, and monitor their usage. Most importantly, it helps data scientists increase their efficiency by automating many of the time-consuming tasks associated with training models; and it enables them to use cloud-based compute resources that scale effectively to handle large volumes of data while incurring costs only when actually used.
34. What are the settings you need when creating a Machine learning workspace?
Workspace Name: A unique name of your choice
Subscription: Your Azure subscription
Resource group: Create a new resource group with a unique name
Location: Choose any available location
Workspace edition: Enterprise
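For reference, the same settings can be supplied in code. A minimal sketch using the v1 azureml-core Python SDK, with placeholder names and IDs (not an official sample; workspace editions have since been retired in newer versions):

```python
# Minimal sketch (assumptions: the v1 azureml-core SDK and placeholder Azure IDs).
# Creating a workspace in code mirrors the portal settings listed above.
from azureml.core import Workspace

ws = Workspace.create(
    name="my-aml-workspace",              # Workspace name: a unique name of your choice
    subscription_id="<subscription-id>",  # Subscription: your Azure subscription
    resource_group="my-aml-rg",           # Resource group: new or existing
    create_resource_group=True,
    location="eastus",                    # Location: any available region
)
ws.write_config()  # saves a config.json so later scripts can call Workspace.from_config()
```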
35. How many kinds of workspace editions are there?
Basic
Enterprise
36. The automated machine learning interface is available only in the Enterprise edition. Is this true?
True
37. What other resources are added automatically when creating a Machine Learning workspace?
Azure Storage
Azure Application Insights
Azure Key Vault
38. What is Machine Learning Studio?
You can manage your workspace using the Azure portal, but for data scientists and Machine Learning operations engineers, Azure Machine Learning studio provides a more focused user interface for managing workspace resources.
On the Overview page for your workspace, launch Azure Machine Learning studio (or open a new browser tab and navigate to https://ml.azure.com), and sign into Azure Machine Learning studio using your Microsoft account.
39. How many kinds of compute resources can data scientists use to train their models?
Compute Instances: Development workstations that data scientists can use to work with data and models.
Compute Clusters: Scalable clusters of virtual machines for on-demand processing of experiment code.
Inference Clusters: Deployment targets for predictive services that use your trained models.
Attached Compute: Links to existing Azure compute resources, such as Virtual Machines or Azure Databricks clusters.
40. What are the settings you need to create a compute instance?
Compute name: enter a unique name
Virtual Machine type: CPU
Virtual Machine size: Standard_DS2_v2
41. What are the settings you need to create a Compute Clusters?
Compute name: enter a unique name
Virtual Machine size: Standard_DS2_v2
Virtual Machine priority: Dedicated
Minimum number of nodes: 2
Maximum number of nodes: 2
Idle seconds before scale down: 120
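The same cluster settings expressed in code, as a minimal sketch with the v1 azureml-core SDK; here ws is assumed to be an existing Workspace object and the cluster name is a placeholder:

```python
# Minimal sketch (assumptions: the v1 azureml-core SDK and an existing Workspace object `ws`).
from azureml.core.compute import AmlCompute, ComputeTarget

config = AmlCompute.provisioning_configuration(
    vm_size="Standard_DS2_v2",
    vm_priority="dedicated",
    min_nodes=2,                        # 0 in production so the cluster scales down when idle
    max_nodes=2,
    idle_seconds_before_scaledown=120,
)
cluster = ComputeTarget.create(ws, "my-cluster", config)
cluster.wait_for_completion(show_output=True)
```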
42. How do you make sure that the compute starts only when it is needed when creating compute clusters in a production environment?
In a production environment, you'd typically set the minimum number of nodes value to 0 so that compute is only started
when it is needed.
43. How do you reduce the amount of time you spend waiting for the compute to start?
To reduce the amount of time you spend waiting for it, you initialize the cluster with two permanently running nodes.
44. In the Machine Learning Studio, where do you register the data to train the model?
Assets > Datasets
45. In how many ways can you import data when creating datasets?
From Local files
From datastore
From web files
From open datasets
46. You have created a dataset and you want to see a sample of the data. Where do you view this in the Machine Learning Studio?
After the dataset has been created, open it and view the Explore page to see a sample of the data.
47. Where do you run experiments in ML Studio?
Author > Automated ML page
Create a new Automated ML run:
select dataset
Configure run
Task type and settings
48. A _______ model is used to predict a numeric value. Fill in the blank.
Regression
49. Which kind of model produces the Predicted vs. True chart?
Regression
49. An automobile dealership wants to use historic car sales data to train a machine learning model. The model
should predict the price of a pre-owned car based on characteristics like its age, engine size, and mileage. What
kind of machine learning model does the dealership need to create?
Regression
50. A bank wants to use historic loan repayment records to categorize loan applications as low-risk or high-risk
based on characteristics like the loan amount, the income of the borrower, and the loan period. What kind of
machine learning model does the bank need to create?
Classification
50. Which of the following types of machine learning is an example of unsupervised machine learning?
Clustering
51. You are creating a model with the Azure Machine Learning designer. As a first step, you import the raw data. What are the next steps you need to take to prepare the data for modeling?

52. You have created a model with the Azure Machine Learning designer using linear regression. What are the missing steps in the diagram below?
Clean Missing Data
Linear Regression
53. What is Mean Absolute Error (MAE)?
The average difference between predicted values and true values. This value is based on the same units as the label, in
this case dollars. The lower this value is, the better the model is predicting.
54. What is Root Mean Squared Error (RMSE)?
The square root of the mean squared difference between predicted and true values. The result is a metric based on the same unit as the label (dollars). When compared to the MAE (above), a larger difference indicates greater variance in the individual errors (for example, with some errors being very small, while others are large). If the MAE and RMSE are approximately the same, then all individual errors are of a similar magnitude.
55. What is Relative Squared Error (RSE)?
A relative metric between 0 and 1 based on the square of the differences between predicted and true values. The closer to
0 this metric is, the better the model is performing. Because this metric is relative, it can be used to compare models where
the labels are in different units.
56. What is Relative Absolute Error (RAE)?
A relative metric between 0 and 1 based on the absolute differences between predicted and true values. The closer to 0
this metric is, the better the model is performing. Like RSE, this metric can be used to compare models where the labels
are in different units.
57. What is the Coefficient of Determination (R2)?
This metric is more commonly referred to as R-Squared, and summarizes how much of the variance between predicted
and true values is explained by the model. The closer to 1 this value is, the better the model is performing.
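These definitions map directly onto a few lines of Python. A minimal sketch using scikit-learn and NumPy on made-up numbers (illustrative only, not tied to any Azure dataset):

```python
# Minimal sketch: computing the regression metrics described above on made-up predictions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([12.0, 15.0, 20.0, 22.0])   # true label values (e.g. dollars)
y_pred = np.array([11.0, 16.5, 18.0, 23.0])   # model predictions

mae = mean_absolute_error(y_true, y_pred)            # average absolute difference
rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # square root of the mean squared difference
r2 = r2_score(y_true, y_pred)                        # coefficient of determination, closer to 1 is better

# Relative metrics, computed directly from their definitions:
rse = np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
rae = np.sum(np.abs(y_true - y_pred)) / np.sum(np.abs(y_true - y_true.mean()))

print(f"MAE={mae:.3f} RMSE={rmse:.3f} R2={r2:.3f} RSE={rse:.3f} RAE={rae:.3f}")
```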
58. You plan to use the Azure Machine Learning designer to create and publish a regression model. Which edition
should you choose when creating an Azure Machine Learning workspace?
Enterprise
59. You are creating a training pipeline for a regression model, using a dataset that has multiple numeric columns in which the values are on different scales. You want to transform the numeric columns so that the values are all on a similar scale relative to the minimum and maximum values in each column. Which module should you add to the pipeline?
Normalize Data
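Conceptually, min-max normalization rescales each column relative to its own minimum and maximum. A minimal sketch of the same idea outside the designer, using scikit-learn on made-up values:

```python
# Minimal sketch: min-max scaling, the idea behind the designer's "Normalize Data" module.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, 20000.0],
              [2.0, 50000.0],
              [3.0, 80000.0]])   # two numeric columns on very different scales

X_scaled = MinMaxScaler().fit_transform(X)
print(X_scaled)   # every column is now rescaled to the 0-1 range
```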
60. You use the Azure Machine Learning designer to create a training pipeline and an inference pipeline for a
regression model. Now you plan to deploy the inference pipeline as a real-time service. What kind of compute
target should you create to host the service?
Inference Cluster
61. _______ is a form of machine learning that is used to predict which category, or class, an item belongs to.
Classification
62. A health clinic might use the characteristics of a patient (such as age, weight, blood pressure, and so on) to
predict whether the patient is at risk of diabetes. In this case, the characteristics of the patient are the features,
and the label is a classification of either 0 or 1, representing non-diabetic or diabetic. What kind of model is this?
Classification
63. You are using the Azure Machine Learning designer to create a training pipeline for a binary classification
model. You have added a dataset containing features and labels, a Two-Class Decision Forest module, and a
Train Model module. You plan to use Score Model and Evaluate Model modules to test the trained model with a
subset of the dataset that was not used for training. Which additional kind of module should you add?
Split Data
64. You use an Azure Machine Learning designer pipeline to train and test a binary classification model. You
review the model’s performance metrics in an Evaluate Model module and note that it has an AUC score of 0.3.
What can you conclude about the model?
The model performs worse than random guessing.
65. You use the Azure Machine Learning designer to create a training pipeline for a classification model. What
must you do before deploying the model as a service?
Create an inference pipeline from the training pipeline
66. What is the Accuracy metric in the classification model?
The ratio of correct predictions (true positives + true negatives) to the total number of predictions. In other words, what
proportion of diabetes predictions did the model get right?
67. What is the F1 score metric in the classification model?
An overall metric that essentially combines precision and recall.
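A minimal sketch of both metrics computed with scikit-learn on made-up binary labels (1 = diabetic, 0 = non-diabetic in the earlier example):

```python
# Minimal sketch: accuracy and F1 for a binary classifier on made-up labels.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))  # correct predictions / total predictions
print("F1 score:", f1_score(y_true, y_pred))        # combines precision and recall
```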
68. _______ is a form of machine learning that is used to group similar items into clusters based on their
features?
Clustering
69. To train a clustering model, you need to apply a clustering algorithm to the data, using only the features that you have selected for clustering. You'll train the model with a subset of the data, and use the rest to test the trained model. Given this complete pipeline for clustering, what are the missing modules in the following pipeline?
Normalize Data
K-Means Clustering
70. You are using an Azure Machine Learning designer pipeline to train and test a K-Means clustering model. You
want your model to assign items to one of three clusters. Which configuration property of the K-Means Clustering
module should you set to accomplish this?
Set Number of Centroids to 3
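The Number of Centroids property corresponds to the number of clusters requested from the algorithm. A minimal sketch of the same idea with scikit-learn's KMeans on made-up 2-D features:

```python
# Minimal sketch: K-Means with three clusters, mirroring "Number of Centroids = 3".
from sklearn.cluster import KMeans

features = [[1.0, 2.0], [1.2, 1.9], [8.0, 8.1], [8.2, 7.9], [15.0, 1.0], [14.8, 1.2]]

model = KMeans(n_clusters=3, n_init=10, random_state=0)  # assign items to one of three clusters
labels = model.fit_predict(features)
print(labels)   # for example [0 0 1 1 2 2], the cluster assigned to each item
```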
71. You use the Azure Machine Learning designer to create a training pipeline for a clustering model. Now you
want to use the model in an inference pipeline. Which module should you use to infer cluster predictions from the
model?
Assign Data to Clusters
Describe features of computer vision workloads on Azure (15–20%)
Practice questions based on these concepts
 Identify common types of computer vision solution
 Identify Azure tools and services for computer vision tasks
72. What can the Computer Vision cognitive service do?
Interpret an image and suggest an appropriate caption.
Suggest relevant tags that could be used to index an image.
Categorize an image.
Identify objects in an image.
Detect faces and people in an image.
Recognize celebrities and landmarks in an image.
Read text in an image.
73. When provisioning Computer Vision, what is the difference between a Computer Vision resource and a Cognitive Services resource?
Computer Vision: A specific resource for the Computer Vision service. Use this resource type if you don't intend to use any other cognitive services, or if you want to track utilization and costs for your Computer Vision resource separately.
Cognitive Services: A general cognitive services resource that includes Computer Vision along with many other cognitive services, such as Text Analytics, Translator Text, and others. Use this resource type if you plan to use multiple cognitive services and want to simplify administration and development.
74. If a client application wants to use the Computer Vision service, what does it need?
A key that is used to authenticate client applications.
An endpoint that provides the HTTP address at which your resource can be accessed.
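A minimal sketch of how the key and endpoint are used from a client application, based on the azure-cognitiveservices-vision-computervision Python package; the resource name, key, and image URL are placeholders:

```python
# Minimal sketch: authenticating with a key and endpoint, then requesting an image description.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-computer-vision-key>"

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

analysis = client.describe_image("https://example.com/street-scene.jpg")  # hypothetical image URL
for caption in analysis.captions:
    print(caption.text, caption.confidence)   # suggested caption and its confidence score
```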
75. Can Computer Vision describe the images?
Yes
76. Computer Vision detects the objects in the image. Is this true?
True. The object detection capability is similar to tagging, in that the service can identify common objects; but rather than tagging, or providing tags for the recognized objects only, this service can also return what is known as bounding box coordinates.
77. Computer Vision detects the brands in the image. Is this true?
True. This feature provides the ability to identify commercial brands. The service has an existing database of thousands of globally recognized logos from commercial brands of products.
78. With Computer Vision you can categorize the people in the image. Is this true?
True
79. When categorizing an image, the Computer Vision service supports two specialized domain models. What are
these?
Celebrities — The service includes a model that has been trained to identify thousands of well-known celebrities from the worlds of sports, entertainment, and business.
Landmarks — The service can identify famous landmarks, such as the Taj Mahal and the Statue of Liberty.
80. The Computer Vision service can use ________ capabilities to detect printed and handwritten text in images.
optical character recognition (OCR)
81. You want to detect images that contain adult content or depict violent, gory scenes. Can the Computer Vision service help in this scenario?
Yes. Moderate content - detecting images that contain adult content or depict violent, gory scenes.
82. You want to use the Computer Vision service to analyze images. You also want to use the Text Analytics
service to analyze text. You want developers to require only one key and endpoint to access all of your services.
What kind of resource should you create in your Azure subscription?
Cognitive Services
83. You want to use the Computer Vision service to identify the location of individual items in an image. Which of
the following features should you retrieve?
Objects
84. You want to use the Computer Vision service to analyze images of locations and identify well-known buildings. What should you do?
Retrieve the categories for the image, specifying the landmarks domain
85. _______ is a machine learning technique in which the object being classified is an image, such as a
photograph.
Image classification
86. What are the uses of Image classification?
Product identification — performing visual searches for specific products in online searches or even in-store using a mobile device.
Disaster investigation — evaluating key infrastructure for major disaster preparation efforts. For example, aerial surveillance images may show bridges and classify them as such. Anything classified as a bridge could then be marked for emergency preparation and investigation.
Medical diagnosis — evaluating images from X-ray or MRI devices could quickly classify specific issues found as cancerous tumors, or many other medical conditions related to medical imaging diagnosis.
87. What are the resources available for Custom Vision in Azure?
Custom Vision: A dedicated resource for the Custom Vision service, which can be either a training or a prediction resource.
Cognitive Services: A general cognitive services resource that includes Custom Vision along with many other cognitive services. You can use this type of resource for training, prediction, or both.
88. The model training process is an iterative process in which the Custom Vision service repeatedly trains the
model using some of the data, but holds some back to evaluate the model. What are the evaluation metrics?
Precision: What percentage of the class predictions made by the model were correct? For example, if the model predicted that 10 images are oranges, of which eight were actually oranges, then the precision is 0.8 (80%).
Recall: What percentage of the class members did the model correctly identify? For example, if there are 10 images of apples, and the model found 7 of them, then the recall is 0.7 (70%).
Average Precision (AP): An overall metric that takes into account both precision and recall.
89. Once you publish the model to your prediction resource, what information do client application developers need in order to use your model?
Project ID: The unique ID of the Custom Vision project you created to train the model.
Model name: The name you assigned to the model during publishing.
Prediction endpoint: The HTTP address of the endpoint for the prediction resource to which you published the model (not the training resource).
Prediction key: The authentication key for the prediction resource to which you published the model (not the training resource).
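A minimal sketch showing how those four pieces of information come together in a client call, using the azure-cognitiveservices-vision-customvision Python package; all IDs, keys, and URLs are placeholders, and method names may differ slightly between SDK versions:

```python
# Minimal sketch: calling a published Custom Vision classification model from a client app.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

prediction_endpoint = "https://<your-prediction-resource>.cognitiveservices.azure.com/"
prediction_key = "<your-prediction-key>"
project_id = "<custom-vision-project-id>"
model_name = "my-published-iteration"   # the name assigned when publishing the model

credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(prediction_endpoint, credentials)

results = predictor.classify_image_url(project_id, model_name, url="https://example.com/fruit.jpg")
for prediction in results.predictions:
    print(prediction.tag_name, f"{prediction.probability:.2%}")
```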
90. You plan to use the Custom Vision service to train an image classification model. You want to create a
resource that can only be used for model training, and not for prediction. Which kind of resource should you
create in your Azure subscription?
Custom Vision
91. You train an image classification model that achieves less than satisfactory evaluation metrics. How might
you improve it?
Add more images to the training set.
92. You have published an image classification model. What information must you provide to developers who
want to use it?
The project ID, the model name, and the key and endpoint for the prediction resource
93. _______ is a form of machine learning-based computer vision in which a model is trained to recognize
individual types of object in an image, and to identify their location in the image.
Object detection
94. What information does an object detection model return?
The class of each object identified in the image.
The probability score of the object classification (which you can interpret as the confidence of the predicted class being correct).
The coordinates of a bounding box for each object.
95. What is the difference between Object detection and Image classification?
Image classification is a machine learning based form of computer vision in which a model is trained to categorize images based on the primary subject matter they contain.
Object detection goes further than this to classify individual objects within the image, and to return the coordinates of a bounding box that indicates the object's location.
96. What are the uses of object detection?
Evaluating the safety of a building by looking for fire extinguishers or other emergency equipment.
Creating software for self-driving cars or vehicles with lane assist capabilities.
Medical imaging, such as MRI or X-rays, that can detect known objects for medical diagnosis.
97. What are the key considerations when tagging training images for object detection?
Ensuring that you have sufficient images of the objects, preferably from multiple angles;
Making sure that the bounding boxes are defined tightly around each object.
98. Which of the following results does an object detection model typically return for an image?
A class label, probability, and bounding box for each object in the image
99. You plan to use a set of images to train an object detection model, and then publish the model as a predictive
service. You want to use a single Azure resource with the same key and endpoint for training and prediction.
What kind of Azure resource should you create?
Cognitive Services
100. _________ is an area of artificial intelligence (AI) in which we use algorithms to locate and analyze human
faces in images or video content.
Face detection and analysis
101. The facial landmarks can be used as features with which to train a machine learning model from which you
can infer information about a person, such as their perceived age or perceived emotional state. Is this true?
True
102. What are the uses of face detection and analysis?
Security — facial recognition can be used in building security applications, and increasingly it is used in smartphone operating systems for unlocking devices.
Social media — facial recognition can be used to automatically tag known friends in photographs.
Intelligent monitoring — for example, an automobile might include a system that monitors the driver's face to determine if the driver is looking at the road, looking at a mobile device, or shows signs of tiredness.
Advertising — analyzing faces in an image can help direct advertisements to an appropriate demographic audience.
Missing persons — using public camera systems, facial recognition can be used to identify if a missing person is in the image frame.
Identity validation — useful at ports of entry kiosks where a person holds a special entry permit.
103. What are the cognitive services that you can use to detect and analyze faces from Microsoft Azure?
Computer Vision, which offers face detection and some basic face analysis, such as determining age.
Video Indexer, which you can use to detect and identify faces in a video.
Face, which offers pre-built algorithms that can detect, recognize, and analyze faces.
104. What information do client applications need in order to use the Face service?
A key that is used to authenticate client applications.
An endpoint that provides the HTTP address at which your resource can be accessed.
105. What are some of the tips that can help improve the accuracy of detection in images when using the Face service?
Image format — supported images are JPEG, PNG, GIF, and BMP.
File size — 4 MB or smaller.
Face size range — from 36 x 36 up to 4096 x 4096 pixels. Smaller or larger faces will not be detected.
Other issues — face detection can be impaired by extreme face angles or occlusion (objects blocking the face, such as sunglasses or a hand). Best results are obtained when the faces are full-frontal or as near as possible to full-frontal.
106. You plan to use Face to detect human faces in an image. How does the service indicate the location of the
faces it detects?
A set of coordinates for each face, defining a rectangular bounding box around the face
107. What is one aspect that may impair facial detection?
Extreme angles
108. You want to use Face to identify named individuals. What must you do?
Use Face to create a group containing multiple images of each named individual, and train a model based on the group
109. What are the uses of OCR?
Note-taking
Digitizing forms, such as medical records or historical documents
Scanning printed or handwritten checks for bank deposits
110. The basic foundation of processing printed text is _______?
optical character recognition (OCR)
111. _________ is an AI capability in which a system not only reads the text characters but can use a semantic model to interpret what the text is about.
machine reading comprehension (MRC)
112. What is the OCR API?
The OCR API is designed for quick extraction of small amounts of text in images. It operates synchronously to provide
immediate results, and can recognize text in numerous languages.
113. What is the information that the OCR API returns?
Regions in the image that contain text
Lines of text in each region
Words in each line of text
For each of these elements, the OCR API also returns bounding box coordinates that define a rectangle to indicate the location in the image where the region, line, or word appears.
114. What is the Read API?
The Read API uses the latest recognition models and is optimized for images that have a significant amount of text or have considerable visual noise.
115. The Read API is a better option for scanned documents that have a lot of text. Is this true?
True
116. What is the information that the Read API returns?
Pages — One for each page of text, including information about the page size and orientation.
Lines — The lines of text on a page.
Words — The words in a line of text.
Each line and word includes bounding box coordinates indicating its position on the page.
117. The OCR API works synchronously and the Read API works asynchronously. Is this correct?
True
118. Why does the Read API work asynchronously?
Because the Read API can work with larger documents
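A minimal sketch of the asynchronous pattern: submit the document, then poll for the result. It assumes the azure-cognitiveservices-vision-computervision Python package and placeholder key, endpoint, and image URL:

```python
# Minimal sketch: Read API usage, submitting a document and polling the operation until it completes.
import time
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Submit the read operation; the response headers contain the operation location.
read_response = client.read("https://example.com/scanned-letter.jpg", raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

# Poll until the asynchronous operation completes.
while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:   # one entry per page
        for line in page.lines:                        # lines of text on the page
            print(line.text)
```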
119. You want to extract text from images and then use the Text Analytics service to analyze the text. You want
developers to require only one key and endpoint to access all of your services. What kind of resource should you
create in your Azure subscription?
Cognitive Services
120. You plan to use the Computer Vision service to read the text in a large PDF document. Which API should you
use?
The Read API
121. The _________ in Azure provides intelligent form processing capabilities that you can use to automate the
processing of data in documents such as forms, invoices, and receipts.
Form Recognizer
122. In how many ways does Form Recognizer support automated document processing?
Two ways:
A pre-built receipt model that is provided out-of-the-box, and is trained to recognize and extract data from sales receipts.
Custom models, which enable you to extract what are known as key/value pairs and table data from forms. Custom models are trained using your own data, which helps to tailor this model to your specific forms. Starting with only five samples of your forms, you can train the custom model. After the first training exercise, you can evaluate the results and consider if you need to add more samples and re-train.
123. Currently, the pre-built receipt model is designed to recognize common receipts, in English, that are common
to the USA. Is this true?
True
124. What are the guidelines to get the best results when using a custom model?
Images must be in JPEG, PNG, BMP, PDF, or TIFF format
File size must be less than 20 MB
Image size must be between 50 x 50 pixels and 10000 x 10000 pixels
PDF documents must be no larger than 17 inches x 17 inches
125. You plan to use the Form Recognizer pre-built receipt model. Which kind of Azure resource should you
create?
Form Recognizer
126. You are using the Form Recognizer service to analyze receipts that you have scanned into JPG format
images. What is the maximum file size of the JPG file you can submit to the pre-built receipt model?
20 MB
Describe features of Natural Language Processing (NLP) workloads on Azure (15–20%)
Practice questions based on these concepts
 Identify features of common NLP Workload Scenarios
 Identify Azure tools and services for NLP workloads
127. What is Text Analytics?
Text analytics is a process where an artificial intelligence (AI) algorithm, running on a computer, evaluates attributes of text to determine specific insights.
128. You need to use a service from Azure that determines the language of a document or text (for example,
French or English). Which one should you use?
Text Analytics cognitive service
129. You need to use a service from Azure that performs sentiment analysis on text to determine a positive or
negative sentiment. Which one should you use?
Text Analytics cognitive service
130. You need to use a service from Azure that extracts key phrases from the text that might indicate its main
talking points. Which one should you use?
Text Analytics cognitive service
131. You need to use a service from Azure to identify and categorize entities in text. Entities can be people,
places, organizations, or even everyday items such as dates, times, quantities, and so on. Which one should you
use?
Text Analytics cognitive service
132. You are planning to analyze text information only. Which resource should you provision?
A Text Analytics resource - choose this resource type if you only plan to use the Text Analytics service, or if you want to
manage access and billing for the resource separately from other services.
133. You are planning to analyze text information and also detect objects in images. Which resource should you provision?
A Cognitive Services resource - choose this resource type if you plan to use the Text Analytics service in combination
with other cognitive services, and you want to manage access and billing for these services together.
134. The Text Analytics service has language detection capability and you can submit multiple documents at a
time for analysis. Is this true?
True
135. You have submitted multiple documents to the Text Analytics service. What is the output for each
document?
* The language name (for example "English")
* The ISO 639-1 language code (for example, "en")
* A score indicating a level of confidence in the language detection.
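A minimal sketch of language detection on multiple documents using the azure-ai-textanalytics Python package; the endpoint and key are placeholders:

```python
# Minimal sketch: detecting the language of several documents in one call.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "A fantastic place for lunch. The soup was delicious.",
    "Comida maravillosa y gran servicio.",
]

for result in client.detect_language(documents):
    lang = result.primary_language
    print(lang.name, lang.iso6391_name, lang.confidence_score)
```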
136. Consider a scenario where you own and operate a restaurant where customers can complete surveys and
provide feedback on the food, the service, staff, and so on. Suppose you have received the following reviews
from customers:
Review 1: “A fantastic place for lunch. The soup was delicious.”
Review 2: “Comida maravillosa y gran servicio.”
Review 3: “The croque monsieur avec frites was terrific. Bon appetit!”
You can use the Text Analytics service to detect the language for each of these reviews, and it might respond with the
following results:

What does the information in the above table mean?


Review 1: It detected English with 1.0 confidence
Review 2: It detected Spanish with 1.0 confidence
Review 3: The language detection service will focus on the predominant language in the text. The service uses an algorithm to determine the predominant language, such as length of phrases or total amount of text for the language compared to other languages in the text. The predominant language will be the value returned, along with the language code. The confidence score may be less than 1 as a result of the mixed language text.
137. When the text in the document is ambiguous or contains mixed language content, what is the output of the Text Analytics service?
An ambiguous content example would be a case where the document contains limited text, or only punctuation. For
example, using the service to analyze the text ":-)", results in a value of unknown for the language name and the language
identifier, and a score of NaN (which is used to indicate not a number).
138. What does the confidence score of NaN Text Analytics service output mean?
Ambiguous or mixed language content
139. What is the Sentiment Analysis?
The Text Analytics service can evaluate text and return sentiment scores and labels for each sentence. This capability is
useful for detecting positive and negative sentiment in social media, customer reviews, discussion forums and more.
140. What are the score ranges of Sentiment Analysis from the Text Analytics service?
Using the pre-built machine learning classification model, the service evaluates the text and returns a sentiment score in
the range of 0 to 1, with values closer to 1 being a positive sentiment. Scores that are close to the middle of the range (0.5)
are considered neutral or indeterminate.
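A minimal sketch of sentiment analysis with the azure-ai-textanalytics Python package (endpoint and key are placeholders). Note that the current SDK reports a sentiment label with per-class confidence scores rather than the single 0-to-1 score described here:

```python
# Minimal sketch: sentiment analysis on a couple of made-up reviews.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "A fantastic place for lunch. The soup was delicious.",
    "We waited forty minutes and the food was cold.",
]

for result in client.analyze_sentiment(reviews):
    print(result.sentiment, result.confidence_scores)  # label plus positive/neutral/negative scores
```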
141. What does the sentiment analysis score of 0.5 mean?
Indeterminate sentiment. A score of 0.5 might indicate that the sentiment of the text is indeterminate, and could result from text that does not have sufficient context to discern a sentiment or insufficient phrasing. For example, a list of words in a sentence that has no structure could result in an indeterminate score.
142. You are using the Text Analytics service for sentiment analysis, but you have used the wrong language code (a language code such as "en" for English or "fr" for French is used to inform the service which language the text is in). What score does the service return?
The service will return a score of precisely 0.5.
143. What is key phrase extraction?
Key phrase extraction is the concept of evaluating the text of a document, or documents, and then identifying the main
talking points of the document(s).
144. You are running a restaurant and have collected thousands of reviews through a number of surveys. You don't have time to go through each review, but you want to know the main talking points. What feature of Text Analytics would help here?
Key phrase extraction: you can use the key phrases to identify important elements of the review.
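A minimal sketch of key phrase extraction with the azure-ai-textanalytics Python package; the endpoint, key, reviews, and printed output are illustrative placeholders only:

```python
# Minimal sketch: extracting key phrases from a couple of made-up reviews.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "The soup was delicious and the staff were very attentive.",
    "Great location, but the portions were small for the price.",
]

for result in client.extract_key_phrases(reviews):
    print(result.key_phrases)   # the main talking points found in each review
```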
145. What is Entity Recognition?
You can provide the Text Analytics service with unstructured text and it will return a list of entities in the text that it recognizes. The service can also provide links to more information about that entity on the web. An entity is essentially an item of a particular type or category, and in some cases, subtype.
146. You want to use the Text Analytics service to determine the key talking points in a text document. Which
feature of the service should you use?
Key phrase extraction
147. You use the Text Analytics service to perform sentiment analysis on a document, and a score of 0.99 is
returned. What does this score indicate about the document sentiment?
The document is positive.
148. When might you see NaN returned for a score in Language Detection?
When the language is ambiguous
149. What is Speech recognition?
The ability to detect and interpret spoken input. Speech recognition is concerned with taking the spoken word and converting it into data that can be processed, often by transcribing it into a text representation. The spoken words can be in the form of a recorded voice in an audio file, or live audio from a microphone.
150. What is Speech synthesis?
The ability to generate spoken output. Speech synthesis is in many respects the reverse of speech recognition. It is concerned with vocalizing data, usually by converting text to speech.
151. What are the models you use to accomplish Speech recognition?
An acoustic model that converts the audio signal into phonemes (representations of specific sounds).
A language model that maps phonemes to words, usually using a statistical algorithm that predicts the most probable sequence of words based on the phonemes.
152. What are some of the use cases for speech recognition?
* Providing closed captions for recorded or live videos
* Creating a transcript of a phone call or meeting
* Automated note dictation
* Determining intended user input for further processing
153. What are some of the use cases for speech synthesis?
* Generating spoken responses to user input.
* Creating voice menus for telephone systems.
* Reading email or text messages aloud in hands-free scenarios.
* Broadcasting announcements in public locations, such as railway stations or airports.
154. What are the required elements for speech synthesis?
The text to be spoken.
The voice to be used to vocalize the speech.
To synthesize speech, the system typically tokenizes the text to break it down into individual words, and assigns phonetic sounds to each word. It then breaks the phonetic transcription into prosodic units (such as phrases, clauses, or sentences) to create phonemes that will be converted to audio format. These phonemes are then synthesized as audio by applying a voice, which will determine parameters such as pitch and timbre, and generating an audio waveform that can be output to a speaker or written to a file.
155. What are the services for speech recognition and speech synthesis from Azure?
The Speech-to-Text API
The Text-to-Speech API
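A minimal sketch of one call to each API using the azure-cognitiveservices-speech Python package; the key and region are placeholders, and the default microphone and speaker are assumed:

```python
# Minimal sketch: one speech-to-text call and one text-to-speech call.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="eastus")

# Speech-to-text: transcribe a single utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print("Recognized:", result.text)

# Text-to-speech: speak a short response through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Your meeting starts in five minutes.").get()
```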
156. You want to use an Azure service just to convert user speech into text. Which resource should be provisioned in your Azure subscription?
A Speech resource - choose this resource type if you only plan to use the Speech service, or if you want to manage
access and billing for the resource separately from other services.
157. You can use the speech-to-text API to perform real-time or batch transcription of audio into a text format.
What does it mean?
Real-time speech-to-text allows you to transcribe text in audio streams. You can use real-time transcription for presentations, demos, or any other scenario where a person is speaking.
Not all speech-to-text scenarios are real time. You may have audio recordings stored on a file share, a remote server, or even on Azure storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results.
158. You have a person speaking right now and you want to transcribe that into written output. Which
transcription should you use?
Real-time transcription
159. You have thousands of stored audio files and you want to transcribe that into written output. Which
transcription should you use?
Batch transcription
160. Why is batch transcription asynchronous?
Batch transcription should be run in an asynchronous manner because the batch jobs are scheduled on a best-effort basis.
Normally a job will start executing within minutes of the request but there is no estimate for when a job changes into the
running state.
161. You plan to build an application that uses the Speech service to transcribe audio recordings of phone calls
into text and then submits the transcribed text to the Text Analytics service to extract key phrases. You want to
manage access and billing for the application services in a single Azure resource. Which type of Azure resource
should you create?
Cognitive Services
162. You want to use the Speech service to build an application that reads incoming email message subjects
aloud. Which API should you use?
Text-to-Speech
163. What is Text Translation?
Text translation can be used to translate documents from one language to another, translate email communications that come from foreign governments, and even provide the ability to translate web pages on the Internet. Many times you will see a Translate option for posts on social media sites, and the Bing search engine can offer to translate entire web pages that are returned in search results.
164. What is Speech Translation?
Speech translation is used to translate between spoken languages, sometimes directly (speech-to-speech translation) and
sometimes by translating to an intermediary text format (speech-to-text translation).
165. What is the service from Microsoft Azure for Text Translation?
The Translator Text service, which supports text-to-text translation.
166. What is the service from Microsoft Azure for Speech Translation?
The Speech service, which enables speech-to-text and speech-to-speech translation.
167. What is the output if you use the Text Analytics service to detect entities in the following restaurant review
extract:
“I ate at the restaurant in Seattle last week.”

168. What are the services you should provision in your Azure subscription if you want to manage access and
billing for each service individually?
There are dedicated Translator Text and Speech resource types
169. The Translator Text service supports text-to-text translation between more than 60 languages. Is this correct?
True
170. Using the Translator Text service, you can specify one "from" language with multiple "to" languages, enabling you to simultaneously translate a source document into multiple languages. Is this true?
True
171. How do you handle brand names that should remain the same in all languages when using the Translator Text service?
Selective translation. You can tag content so that it isn't translated.
172. When using the Translator Text service, you can control profanity translation by either marking the translated text as profane or by omitting it from the results. Is this correct?
True. Profanity filtering: without any configuration, the service will translate the input text without filtering out profanity. Profanity levels are typically culture-specific, but you can control profanity translation by either marking the translated text as profane or by omitting it from the results.
173. ________ is used to transcribe speech from an audio source to text format?
Speech-to-text
174. ________ is used to generate spoken audio from a text source?
Text-to-speech
175. ________ is used to translate speech in one language to text or speech in another?
Speech Translation
176. You are developing an application that must take English input from a microphone and generate a real-time
text-based transcription in Hindi. Which service should you use?
Speech
177. You need to use the Translator Text service to translate email messages from Spanish into both English and
French? What is the most efficient way to accomplish this goal?
Make a single call to the service; specifying a "from" language of "es", a "to" language of "en", and another "to" language of
"fr".
178. On Microsoft Azure, language understanding is supported through the ___________?
Language Understanding Intelligent Service
179. To work with Language Understanding, you need to take into account three core concepts. What are these
concepts?
utterances, entities, and intents.
180. What are Utterances?
An utterance is an example of something a user might say, and which your application must interpret. For example, when using a home automation system, a user might use the following utterances: "Switch the fan on." "Turn on the light."
181. What are the Entities?
An entity is an item to which an utterance refers. For example, fan and light in the following utterances: "Switch the fan on." "Turn on the light."
182. What are Intents?
An intent represents the purpose, or goal, expressed in a user's utterance. For example, for both of the previously
considered utterances, the intent is to turn a device on; so in your Language Understanding application, you might define a
TurnOn intent that is related to these utterances.
183. What is the None intent?
In a Language Understanding application, the None intent is created but left empty on purpose. The None intent is a
required intent and can't be deleted or renamed. Fill it with utterances that are outside of your domain.
184. Creating a language understanding application with Language Understanding consists of two main tasks.
What are these tasks?
First you must define entities, intents, and utterances with which to train the language model - referred to as authoring the
model.

Then you must publish the model so that client applications can use it for intent and entity prediction based on user input.
185. How many types of entities are there, and what are they?
There are four types of entities:
* Machine-Learned: entities that are learned by your model during training from context in the sample utterances you provide.
* List: entities that are defined as a hierarchy of lists and sublists. For example, a device list might include sublists for
light and fan. For each list entry, you can specify synonyms, such as lamp for light.
* RegEx: entities that are defined as a regular expression that describes a pattern. For example, you might define a pattern
like [0-9]{3}-[0-9]{3}-[0-9]{4} for telephone numbers of the form 555-123-4567.
* Pattern.any: entities that are used with patterns to define complex entities that may be hard to extract from sample utterances.
186. You need to provision an Azure resource that will be used to author a new Language Understanding
application. What kind of resource should you create?
Language Understanding
187. You are authoring a Language Understanding application to support an international clock. You want users
to be able to ask for the current time in a specified city, for example, “What is the time in London?”. What should
you do?
Define a "city" entity and a "GetTime" intent with utterances that indicate the city intent.
188. You have published your Language Understanding application. What information does a client application
developer need to get predictions from it?
The endpoint and key for the application's prediction resource
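To make question 188 concrete, here is a minimal sketch of calling a published Language Understanding app through its V3 prediction REST endpoint using exactly those two pieces of information. The endpoint, app ID, key, intent name, and entity name are placeholder assumptions based on the clock example above.

```python
# Sketch: query a published LUIS app's prediction endpoint. All values are placeholders.
import requests

prediction_endpoint = "https://<your-prediction-resource>.cognitiveservices.azure.com"
app_id = "<your-luis-app-id>"
prediction_key = "<your-prediction-key>"

url = f"{prediction_endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict"
params = {"subscription-key": prediction_key, "query": "What is the time in London?"}

prediction = requests.get(url, params=params).json()["prediction"]
print("Top intent:", prediction["topIntent"])  # e.g. GetTime
print("Entities:", prediction["entities"])     # e.g. {'city': ['London']}
```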
Describe features of conversational AI workloads on Azure (15–20%)
Practice questions based on these concepts
* Identify common use cases for conversational AI
* Identify Azure services for conversational AI
189. Name one example of Conversational AI?
chat interface
190. What do you need to implement a conversational AI chatbot?
* A knowledge base of question and answer pairs, usually with some built-in natural language processing model to enable
questions that can be phrased in multiple ways to be understood with the same semantic meaning.
* A bot service that provides an interface to the knowledge base through one or more channels.
191. What is the Azure service to create and publish a knowledge base with built-in natural language processing
capabilities?
QnA Maker
192. What is the Azure service that provides a framework for developing, publishing, and managing bots on
Azure?
Azure Bot Service.
193. You can write code to create and manage knowledge bases using the QnA Maker REST API or SDK. Is this
true?
True. In most scenarios it is easier to use the QnA Maker portal.
194. To create a knowledge base, you must first provision a QnA Maker resource in your Azure subscription. Is
this true?
True
195. After provisioning a QnA Maker resource, you can use the QnA Maker portal to create a knowledge base that
consists of question-and-answer pairs. What are the ways to get this knowledge base?
* Generated from an existing FAQ document or web page.
* Imported from a pre-defined chit-chat data source.
* Entered and edited manually.
196. Most of the time the knowledge base is created from FAQs only. Is this true?
False. A knowledge base is created using a combination of all of these techniques: starting with a base dataset of questions
and answers from an existing FAQ document, adding common conversational exchanges from a chit-chat source, and
extending the knowledge base with additional manual entries.
197. There are many alternative ways of asking the same question. How do you handle this when creating a
knowledge base?
Questions in the knowledge base can be assigned alternative phrasing to help consolidate questions with the same
meaning. For example, you might include a question like “What is your head office location?” and anticipate different
ways it could be asked by adding an alternative phrasing such as “Where is your head office located?”
198. How do you train the knowledge base?
After creating a set of question-and-answer pairs, you must train your knowledge base. This process analyzes your literal
questions and answers and applies a built-in natural language processing model to match appropriate answers to
questions, even when they are not phrased exactly as specified in your question definitions.
199. How do you test the knowledge base?
After training, you can use the built-in test interface in the QnA Maker portal to test your knowledge base by submitting
questions and reviewing the answers that are returned.
200. When should you publish the knowledge base?
When you're satisfied with your trained knowledge base, you can publish it so that client applications can use it over its
REST interface.
201. What do client applications need to access the published knowledge base?
* The knowledge base ID
* The knowledge base endpoint
* The knowledge base authorization key
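As a sketch of how a client application uses those three values, the following illustrative call posts a question to the knowledge base's generateAnswer endpoint; the host name, knowledge base ID, and endpoint key shown are placeholders.

```python
# Sketch: query a published QnA Maker knowledge base. All values are placeholders.
import requests

kb_host = "https://<your-qna-resource>.azurewebsites.net"
kb_id = "<your-knowledge-base-id>"
endpoint_key = "<your-endpoint-key>"

url = f"{kb_host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
headers = {"Authorization": f"EndpointKey {endpoint_key}"}
body = {"question": "Where is your head office located?"}

answers = requests.post(url, headers=headers, json=body).json()["answers"]
print(answers[0]["answer"], "(score:", answers[0]["score"], ")")
```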
202. You have created and published a knowledge base. You want to deliver it to users through a custom bot.
What should you do to accomplish this?
You can create a custom bot by using the Microsoft Bot Framework SDK to write code that controls conversation flow
and integrates with your QnA Maker knowledge base.
203. In how many ways can you create bots for your knowledge base?
1. Custom bot by Microsoft Bot Framework SDK
2. Automatic bot creation functionality of QnA Maker
204. What is the Automatic bot creation functionality of QnA Maker?
The automatic bot creation functionality of QnA Maker enables you to create a bot for your published knowledge base and
publish it as an Azure Bot Service application with just a few clicks.
205. Can you extend and configure the bot?
Yes. After creating your bot, you can manage it in the Azure portal, where you can:
* Extend the bot's functionality by adding custom code.
* Test the bot in an interactive test interface.
* Configure logging, analytics, and integration with other services.
206. When your bot is ready, you can connect it to only one channel at a time. Is this true?
False. When your bot is ready to be delivered to users, you can connect it to multiple channels, making it possible for users
to interact with it through web chat, email, Microsoft Teams, and other common communication media.
207. Your organization has an existing frequently asked questions (FAQ) document. You need to create a QnA
Maker knowledge base that includes the questions and answers from the FAQ with the least possible effort. What
should you do?
Import the existing FAQ document into a new knowledge base.
208. You need to deliver a support bot for internal use in your organization. Some users want to be able to submit
questions to the bot using Microsoft Teams, others want to use a web chat interface on an internal web site. What
should you do?
Create a knowledge base. Then create a bot for the knowledge base and connect the Web Chat and Microsoft Teams
channels for your bot
209. Bots are designed to interact with users in a conversational manner, as shown in this example of a chat
interface. What kind of Azure resource should we use to accomplish this?
Azure Bot Service.
Conclusion
The AI Fundamentals exam contains multiple-choice, multiple-answer, text-based, drag-and-drop, and fill-in-the-blank questions. These
sample questions will definitely help you prepare for the certification. I would recommend that you go through the documentation
first and then refer to these questions afterward, or right before the exam.

Microsoft AI-900 Sample Questions:

01. What are two tasks that can be performed by using computer vision?
Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
a) Predict stock prices.
b) Detect brands in an image.
c) Detect the color scheme in an image
d) Translate text between languages.
e) Extract key phrases.

02. What is a use case for classification?


a) predicting how many cups of coffee a person will drink based on how many hours the person slept the
previous night.
b) analyzing the contents of images and grouping images that have similar colors
c) predicting whether someone uses a bicycle to travel to work based on the distance from home to work
d) predicting how many minutes it will take someone to run a race based on past race times

03. Which AI service can you use to interpret the meaning of a user input such as “Call me back
later?”
a) Translator Text
b) Speech
c) Text Analytics
d) Language Understanding (LUIS)

04. You are designing an AI system that empowers everyone, including people who have
hearing, visual, and other impairments. This is an example of which Microsoft guiding principle
for responsible AI?
a) fairness
b) inclusiveness
c) reliability and safety
d) accountability

05. Which two components can you drag onto a canvas in Azure Machine Learning designer?
Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
a) dataset
b) compute
c) pipeline
d) module
06. You have a dataset that contains information about taxi journeys that occurred during a
given period. You need to train a model to predict the fare of a taxi journey. What should you
use as a feature?
a) the number of taxi journeys in the dataset
b) the trip distance of individual taxi journeys
c) the fare of individual taxi journeys
d) the trip ID of individual taxi journeys

07. Which AI service should you use to create a bot from a frequently asked questions (FAQ)
document?
a) Speech
b) Language Understanding (LUIS)
c) Text Analytics
d) QnA Maker

08. You have a frequently asked questions (FAQ) PDF file. You need to create a conversational
support system based on the FAQ. Which service should you use?
a) QnA Maker
b) Text Analytics
c) Computer Vision
d) Language Understanding (LUIS)

09. Which metric can you use to evaluate a classification model?


a) root mean squared error (RMSE)
b) mean absolute error (MAE)
c) coefficient of determination (R2)
d) true positive rate

10. Which two of these sources can you translate from one language into another?
a) Image
b) Handwriting
c) Text
d) Video
e) Speech

Answers:

Question 01: b, c
Question 02: c
Question 03: d
Question 04: b
Question 05: a, d
Question 06: b
Question 07: d
Question 08: a
Question 09: d
Question 10: c, e

AI-900

Number: AI-900
Passing Score: 800
Time Limit: 120 min
File Version: 1

Sections
1. Describe Artificial Intelligence workloads and considerations
2. Describe fundamental principles of machine learning on Azure
3. Describe features of computer vision workloads on Azure
4. Describe features of Natural Language Processing (NLP) workloads on Azure
5. Describe features of conversational AI workloads on Azure
Exam A

QUESTION 1
A company employs a team of customer service agents to provide telephone and email support to customers.

The company develops a webchat bot to provide automated answers to common customer queries.

Which business benefit should the company expect as a result of creating the webchat bot solution?


A. increased sales
B. a reduced workload for the customer service agents
C. improved product reliability

Correct Answer: B
Section: Describe Artificial Intelligence workloads
and considerations Explanation

Explanation/Reference:

QUESTION 2
For a machine learning process, how should you split data for training and evaluation?

A. Use features for training and labels for evaluation.


B. Randomly split the data into rows for training and rows for evaluation.
C. Use labels for training and features for evaluation.
D. Randomly split the data into columns for training and columns for evaluation.

Correct Answer: B
Section: Describe Artificial Intelligence workloads
and considerations Explanation
Explanation/
Reference:
Answer: B
Explanation:
In Azure Machine Learning, percentage split is the technique used to split the data: a random selection of rows at the given
percentage is assigned to the training set and the remainder to the test set.

Reference: https://www.sqlshack.com/prediction-
in-azure-machine-learning/
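The same idea can be illustrated outside Azure Machine Learning with a short scikit-learn sketch: the rows (not the columns, and not features versus labels) are split randomly into a training subset and an evaluation subset. The sample data below is invented purely for illustration.

```python
# Sketch: random row-wise split into training and evaluation sets.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "trip_distance":   [1.2, 3.4, 0.8, 5.6, 2.1, 4.3],
    "passenger_count": [1, 2, 1, 3, 1, 2],
    "fare_amount":     [6.5, 14.0, 5.0, 21.5, 9.0, 17.0],
})

# 70% of the rows for training, 30% held back to evaluate the trained model.
train_rows, test_rows = train_test_split(data, test_size=0.3, random_state=42)
print(len(train_rows), "training rows,", len(test_rows), "evaluation rows")
```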

QUESTION 3
You build a machine learning model by using the automated machine learning user interface (UI).

You need to ensure that the model meets the Microsoft transparency principle for responsible AI.

What should you do?

A. Set Validation type to Auto.


B. Enable Explain best model.
C. Set Primary metric to accuracy.
D. Set Max concurrent iterations to 0.

Correct Answer: B
Section: Describe Artificial Intelligence workloads
and considerations Explanation

Explanation/Reference:
Explanation:
Model explainability:
Most businesses run on trust, and being able to open the ML “black box” helps build transparency and trust. In heavily
regulated industries like healthcare and banking, it is critical to comply with regulations and best practices. One key
aspect of this is understanding the relationship between input variables (features) and model output. Knowing both the
magnitude and direction of the impact each feature has on the predicted value (feature importance) helps you better
understand and explain the model. With model explainability, you can view feature importance as part of automated ML runs.

Reference: https://azure.microsoft.com/en-us/blog/new-automated-machine-learning-
capabilities-in-azure-machine-learning-service/

QUESTION 4
You are designing an AI system that empowers everyone, including people who have hearing, visual, and other
impairments.

This is an example of which Microsoft guiding principle for responsible AI?


A. fairness
B. inclusiveness
C. reliability and safety
D. accountability

Correct Answer: B
Section: Describe Artificial Intelligence workloads
and considerations Explanation

Explanation/Reference:
Explanation:
Inclusiveness: At Microsoft, we firmly believe everyone should benefit from intelligent technology, meaning it must
incorporate and address a broad range of human needs and experiences. For the 1 billion people with disabilities around
the world, AI technologies can be a game-changer.

Reference: https://docs.microsoft.com/en-us/learn/modules/responsible-
ai-principles/4-guiding-principles

QUESTION 5
You are building an AI system.

Which task should you include to ensure that the service meets the Microsoft transparency principle for responsible AI?

A. Ensure that all visuals have an associated text that can be read by a screen reader.
B. Enable autoscaling to ensure that a service scales based on demand.
C. Provide documentation to help developers debug code.
D. Ensure that a training dataset is representative of the population.

Correct Answer: C
Section: Describe Artificial Intelligence workloads
and considerations Explanation

Explanation/Reference:
Reference: https://docs.microsoft.com/en-us/learn/modules/responsible-
ai-principles/4-guiding-principles

QUESTION 6
Your company is exploring the use of voice recognition technologies in its smart home devices. The company wants to
identify any barriers that might unintentionally leave out specific user groups.

This is an example of which Microsoft guiding principle for responsible AI?


A. accountability
B. fairness
C. inclusiveness
D. privacy and security

Correct Answer: C
Section: Describe Artificial Intelligence workloads
and considerations Explanation

Explanation/Reference:
Reference: https://docs.microsoft.com/en-us/learn/modules/responsible-
ai-principles/4-guiding-principles

QUESTION 7
What are three Microsoft guiding principles for responsible AI? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. knowledgeability
B. decisiveness
C. inclusiveness
D. fairness
E. opinionatedness
F. reliability and safety

Correct Answer: CDF


Section: Describe Artificial Intelligence workloads
and considerations Explanation
Explanation/Reference:
Reference: https://docs.microsoft.com/en-us/learn/modules/responsible-
ai-principles/4-guiding-principles

QUESTION 8
You run a charity event that involves posting photos of people wearing sunglasses on Twitter.

You need to ensure that you only retweet photos that meet the following requirements:

Include one or more faces.


Contain at least one person wearing sunglasses.

What should you use to analyze the images?

A. the Verify operation in the Face service


B. the Detect operation in the Face service
C. the Describe Image operation in the Computer Vision service
D. the Analyze Image operation in the Computer Vision service

Correct Answer: B
Section: Describe Artificial Intelligence workloads
and considerations Explanation

Explanation/Reference:
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-
services/face/overview
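A rough sketch of that Detect call follows; the endpoint, key, and image URL are placeholders, and note that face attributes such as glasses may require approved (Limited Access) use of the Face service.

```python
# Sketch: detect faces and check the "glasses" attribute for sunglasses.
import requests

face_endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
url = f"{face_endpoint}/face/v1.0/detect"
params = {"returnFaceAttributes": "glasses"}
headers = {"Ocp-Apim-Subscription-Key": "<your-face-key>"}
body = {"url": "https://example.com/photos/charity-event.jpg"}  # placeholder image

faces = requests.post(url, params=params, headers=headers, json=body).json()
wearing_sunglasses = any(
    str(face["faceAttributes"]["glasses"]).lower() == "sunglasses" for face in faces
)
print(len(faces), "face(s) detected; sunglasses present:", wearing_sunglasses)
```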

QUESTION 9
Which metric can you use to evaluate a classification model?

A. true positive rate


B. mean absolute error (MAE)
C. coefficient of determination (R2)
D. root mean squared error (RMSE)

Correct Answer: A
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
What does a good model look like?
An ROC curve that approaches the top left corner with 100% true positive rate and 0% false positive rate will be the best
model. A random model would display as a flat line from the bottom left to the top right corner. Worse than random would
dip below the y=x line.

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-
ml#classification

QUESTION 10
Which two components can you drag onto a canvas in Azure Machine Learning designer? Each correct answer presents a
complete solution.
NOTE: Each correct selection is worth one point.

A. dataset
B. compute
C. pipeline
D. module

Correct Answer: AD
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
You can drag-and-drop datasets and modules onto the canvas.

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/concept-designer

QUESTION 11
You need to create a training dataset and validation dataset from an existing dataset.

Which module in the Azure Machine Learning designer should you use?

A. Select Columns in Dataset


B. Add Rows
C. Split Data
D. Join Data

Correct Answer: C
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
A common way of evaluating a model is to divide the data into a training and test set by using Split Data, train the model
on the training data, and then validate it on the test data. Use the Split Data module to divide a dataset into two distinct sets.
The studio currently supports training/validation data splits

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-cross-validation-
data-splits2

QUESTION 12
You have the Predicted vs. True chart shown in the following exhibit.

Which type of model is the chart used to evaluate?

A. classification
B. regression
C. clustering

Correct Answer: B
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
What is a Predicted vs. True chart?
Predicted vs. True shows the relationship between a predicted value and its correlating true value for a regression
problem. This graph can be used to measure performance of a model as the closer to the y=x line the predicted values
are, the better the accuracy of a predictive model.

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-
understand-automated-m

QUESTION 13
Which type of machine learning should you use to predict the number of gift cards that will be sold next month?

A. classification
B. regression
C. clustering

Correct Answer: B
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
In the most basic sense, regression refers to prediction of a numeric target.

Linear regression attempts to establish a linear relationship between one or more independent variables and a numeric
outcome, or dependent variable.

You use this module to define a linear regression method, and then train a model using a labeled dataset. The trained
model can then be used to make predictions.

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/linear-
regression

QUESTION 14
You have a dataset that contains information about taxi journeys that occurred during a given period.

You need to train a model to predict the fare of a taxi journey.

What should you use as a feature?

A. the number of taxi journeys in the dataset


B. the trip distance of individual taxi journeys
C. the fare of individual taxi journeys
D. the trip ID of individual taxi journeys

Correct Answer: B
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
The label is the column you want to predict. The identified Features are the inputs you give the model to predict the Label.
Example:
The provided data set contains the following columns:

* vendor_id: The ID of the taxi vendor is a feature.
* rate_code: The rate type of the taxi trip is a feature.
* passenger_count: The number of passengers on the trip is a feature.
* trip_time_in_secs: The amount of time the trip took. You want to predict the fare of the trip before the trip is completed.
At that moment, you don't know how long the trip would take. Thus, the trip time is not a feature and you'll exclude this
column from the model.
* trip_distance: The distance of the trip is a feature.
* payment_type: The payment method (cash or credit card) is a feature.
* fare_amount: The total taxi fare paid is the label.

Reference:
https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/predict-prices
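A tiny pandas sketch of that feature/label split (the column values here are invented): fare_amount is the label, columns such as trip_distance are features, and identifiers with no predictive signal are dropped.

```python
# Sketch: separate features from the label in a taxi-fare style dataset.
import pandas as pd

trips = pd.DataFrame({
    "trip_id":         ["a1", "a2", "a3"],   # identifier only - not a feature
    "trip_distance":   [1.2, 3.4, 0.8],
    "passenger_count": [1, 2, 1],
    "fare_amount":     [6.5, 14.0, 5.0],     # the value to predict
})

features = trips[["trip_distance", "passenger_count"]]  # model inputs
label = trips["fare_amount"]                            # model output (label)
```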

QUESTION 15
You need to predict the sea level in meters for the next 10 years.

Which type of machine learning should you use?

A. classification
B. regression
C. clustering
Correct Answer: B
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
In the most basic sense, regression refers to prediction of a numeric target.

Linear regression attempts to establish a linear relationship between one or more independent variables and a numeric
outcome, or dependent variable.

You use this module to define a linear regression method, and then train a model using a labeled dataset. The trained
model can then be used to make predictions.

Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/linear-
regression

QUESTION 16
Which service should you use to extract text, key/value pairs, and table data automatically from scanned documents?
A. Form Recognizer
B. Text Analytics
C. Ink Recognizer
D. Custom Vision

Correct Answer: A
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
Accelerate your business processes by automating information extraction. Form Recognizer applies advanced machine
learning to accurately extract text, key/value pairs, and tables from documents. With just a few samples, Form Recognizer
tailors its understanding to your documents, both on-premises and in the cloud. Turn forms into usable data at a fraction of
the time and cost, so you can focus more time acting on the information rather than compiling it.

Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/
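As an illustration of that capability, the sketch below uses the azure-ai-formrecognizer Python package (3.x) with its prebuilt receipt model; the endpoint, key, document URL, and field names shown are assumptions for the example, not values from this document.

```python
# Sketch: extract key/value fields from a scanned receipt with Form Recognizer.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    endpoint="https://<your-form-recognizer>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-form-recognizer-key>"),
)

poller = client.begin_recognize_receipts_from_url(
    "https://example.com/scans/receipt-001.jpg"  # placeholder document URL
)
for receipt in poller.result():
    merchant = receipt.fields.get("MerchantName")
    total = receipt.fields.get("Total")
    print("Retailer:", merchant.value if merchant else None)
    print("Total:", total.value if total else None)
```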

QUESTION 17
You use Azure Machine Learning designer to publish an inference pipeline.

Which two parameters should you use to consume the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. the model name


B. the training endpoint
C. the authentication key
D. the REST endpoint

Correct Answer: CD
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
You can consume a published pipeline in the Published pipelines page. Select a published pipeline and find the REST
endpoint of it.

To consume the pipeline, you need:

* The REST endpoint for your service
* The Primary Key for your service

Reference: https://docs.microsoft.com/en-in/learn/modules/create-regression-model-azure-
machine-learning-designer/deploy-service
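A hedged sketch of consuming such a published real-time endpoint is shown below; the endpoint URL, authentication key, and input schema are placeholders, since the actual input columns depend on the pipeline you designed.

```python
# Sketch: call a real-time inference endpoint using its REST URL and primary key.
import requests

rest_endpoint = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
primary_key = "<your-primary-key>"                                               # placeholder

headers = {
    "Authorization": f"Bearer {primary_key}",
    "Content-Type": "application/json",
}
# Input schema is illustrative; it must match the dataset your pipeline expects.
payload = {"Inputs": {"input1": [{"trip_distance": 3.4, "passenger_count": 2}]}}

response = requests.post(rest_endpoint, headers=headers, json=payload)
print(response.json())
```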

QUESTION 18
A medical research project uses a large anonymized dataset of brain scan images that are categorized into predefined
brain haemorrhage types.

You need to use machine learning to support early detection of the different brain haemorrhage types in the images before
the images are reviewed by a person.

This is an example of which type of machine learning?

A. clustering
B. regression
C. classification

Correct Answer: C
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/
Reference:
Reference:
https://docs.microsoft.com/en-us/learn/modules/create-classification-model-azure-machine-
learning-designer/introduction

QUESTION 19
When training a model, why should you randomly split the rows into separate subsets?

A. to train the model twice to attain better accuracy


B. to train multiple models simultaneously to attain better performance
C. to test the model by using data that was not used to train the model

Correct Answer: C
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:

QUESTION 20
You are evaluating whether to use a basic workspace or an enterprise workspace in Azure Machine Learning.

What are two tasks that require an enterprise workspace? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Use a graphical user interface (GUI) to run automated machine learning experiments.
B. Create a compute instance to use as a workstation.
C. Use a graphical user interface (GUI) to define and run machine learning experiments from Azure Machine Learning
designer.
D. Create a dataset from a comma-separated value (CSV) file.

Correct Answer: AC
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
Note: Enterprise workspaces are no longer available as of September 2020. The basic workspace now has all the
functionality of the enterprise workspace.

Reference:
https://www.azure.cn/en-us/pricing/details/machin
e-learning/
https://docs.microsoft.com/en-us/azure/machine-
learning/concept-workspace

QUESTION 21
You need to predict the income range of a given customer by using the following dataset.
Which two fields should you use as features? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Education Level
B. Last Name
C. Age
D. Income Range
E. First Name

Correct Answer: AC
Section: Describe fundamental principles of
machine learning on Azure Explanation

Explanation/Reference:
Explanation:
First Name, Last Name, Age and Education Level are features. Income range is a label (what you want to predict). First
Name and Last Name are irrelevant in that they have no bearing on income. Age and Education level are the features you
should use.

QUESTION 22
You need to develop a mobile app for employees to scan and store their expenses while travelling.

Which type of computer vision should you use?

A. semantic segmentation
B. image classification
C. object detection
D. optical character recognition (OCR)

Correct Answer: D
Section: Describe features of computer vision
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Azure's Computer Vision API includes Optical Character Recognition (OCR) capabilities that extract printed or handwritten
text from images. You can extract text from images, such as photos of license plates or containers with serial numbers, as
well as from documents - invoices, bills, financial reports, articles, and more.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-
recognizing-text

QUESTION 23
You need to determine the location of cars in an image so that you can estimate the distance between the cars.

Which type of computer vision should you use?


A. optical character recognition (OCR)
B. object detection
C. image classification
D. face detection

Correct Answer: B
Section: Describe features of computer vision
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found.
For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their
coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It
also lets you determine whether there are multiple instances of the same tag in an image.

The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal
relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only
finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized
with bounding boxes.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/
concept-object-detection

QUESTION 24
You send an image to a Computer Vision API and receive back the annotated image shown in the exhibit.

Which type of computer vision was used?

A. object detection
B. semantic segmentation
C. optical character recognition (OCR)
D. image classification

Correct Answer: A
Section: Describe features of computer vision
workloads on Azure Explanation

Explanation/
Reference:
Explanation:
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found.
For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their
coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It
also lets you determine whether there are multiple instances of the same tag in an image.

The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal
relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only
finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized
with bounding boxes.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/
concept-object-detection

QUESTION 25
What are two tasks that can be performed by using the Computer Vision service? Each correct answer presents a
complete solution.

NOTE: Each correct selection is worth one point.

A. Train a custom image classification model.


B. Detect faces in an image.
C. Recognize handwritten text.
D. Translate the text in an image between languages.

Correct Answer: BC
Section: Describe features of computer vision
workloads on Azure Explanation

Explanation/Reference:
Explanation:
B: Azure's Computer Vision service provides developers with access to advanced algorithms that process images and
return information based on the visual features you're interested in. For example, Computer Vision can determine whether
an image contains adult content, find specific brands or objects, or find human faces.

C: Computer Vision includes Optical Character Recognition (OCR) capabilities. You can use the new Read API to extract
printed and handwritten text from images and documents.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home

QUESTION 26
What is a use case for classification?

A. predicting how many cups of coffee a person will drink based on how many hours the person slept the previous
night.
B. analyzing the contents of images and grouping images that have similar colors
C. predicting whether someone uses a bicycle to travel to work based on the distance from home to work
D. predicting how many minutes it will take someone to run a race based on past race times
Correct Answer: C
Section: Describe features of computer vision
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Two-class classification provides the answer to simple two-choice questions such as Yes/No or True/False.

Incorrect Answers:
A: This is Regression.
B: This is Clustering.
D: This is Regression.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-
reference/linear-regression
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/
machine-learning-initialize-model-clustering

QUESTION 27
What are two tasks that can be performed by using computer vision? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Predict stock prices.


B. Detect brands in an image.
C. Detect the color scheme in an image
D. Translate text between languages.
E. Extract key phrases.

Correct Answer: BC
Section: Describe features of computer vision
workloads on Azure Explanation
Explanation/
Reference:
Explanation:

B: Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature,
for example, to discover which brands are most popular on social media or most prevalent in media product placement.

C: Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and,
for color images, identify the dominant and accent colors.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
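Both tasks map onto visual features of the Computer Vision Analyze Image operation; a minimal sketch follows, with the endpoint, key, and image URL as placeholders.

```python
# Sketch: request brand detection and color analysis in one Analyze Image call.
import requests

vision_endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"
url = f"{vision_endpoint}/vision/v3.2/analyze"
params = {"visualFeatures": "Brands,Color"}
headers = {"Ocp-Apim-Subscription-Key": "<your-vision-key>"}
body = {"url": "https://example.com/images/product-shot.jpg"}  # placeholder image

analysis = requests.post(url, params=params, headers=headers, json=body).json()
print("Brands:", [brand["name"] for brand in analysis.get("brands", [])])
print("Dominant colors:", analysis["color"]["dominantColors"])
```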

QUESTION 28
Your company wants to build a recycling machine for bottles. The recycling machine must automatically identify bottles of
the correct shape and reject all other items.

Which type of AI workload should the company use?

A. anomaly detection
B. conversational AI
C. computer vision
D. natural language processing

Correct Answer: C
Section: Describe features of computer vision
workloads on Azure Explanation

Explanation/
Reference:
Explanation:
Azure's Computer Vision service gives you access to advanced algorithms that process images and return information
based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains
adult content, find specific brands or objects, or find human faces.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview

QUESTION 29
In which two scenarios can you use the Form Recognizer service? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Extract the invoice number from an invoice.


B. Translate a form from French to English.
C. Find an image of a product in a catalog.
D. Identify the retailer from a receipt.

Correct Answer: AD
Section: Describe features of computer vision
workloads on Azure Explanation
Explanation/Reference:
Reference:
https://azure.microsoft.com/en-gb/services/cognitive-services/form-recognizer/#features

QUESTION 30
Your website has a chatbot to assist customers.

You need to detect when a customer is upset based on what the customer types in the chatbot.

Which type of AI workload should you use?

A. anomaly detection
B. semantic segmentation
C. regression
D. natural language processing

Correct Answer: D
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation
Explanation/Reference:
Explanation:
Natural language processing (NLP) is used for tasks such as sentiment analysis, topic detection, language detection, key
phrase extraction, and document categorization.

Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral.

Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/natural-
language-processing

QUESTION 31
Which AI service can you use to interpret the meaning of a user input such as “Call me back later?”

A. Translator Text
B. Text Analytics
C. Speech
D. Language Understanding (LUIS)

Correct Answer: D
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Language Understanding (LUIS) is a cloud-based AI service, that applies custom machine-learning intelligence to a user's
conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-
services/luis/what-is-luis

QUESTION 32
You are developing a chatbot solution in Azure.

Which service should you use to determine a user’s intent?

A. Translator Text
B. QnA Maker
C. Speech
D. Language Understanding (LUIS)

Correct Answer: D
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Language Understanding (LUIS) is a cloud-based API service that applies custom machine-learning intelligence to a user's
conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.

Design your LUIS model with categories of user intentions called intents. Each intent needs examples of user utterances.
Each utterance can provide data that needs to be extracted with machine-learning entities.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-
services/luis/what-is-luis

QUESTION 33
You need to make the press releases of your company available in a range of languages.

Which service should you use?


A. Translator Text
B. Text Analytics
C. Speech
D. Language Understanding (LUIS)

Correct Answer: A
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation
Explanation/Reference:
Explanation:
Translator is a cloud-based machine translation service you can use to translate text in near real-time through a simple
REST API call. The service uses modern neural machine translation technology and offers statistical machine translation
technology. Custom Translator is an extension of Translator, which allows you to build neural translation systems.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/translator/

QUESTION 34
You are developing a natural language processing solution in Azure. The solution will analyze customer reviews and
determine how positive or negative each review is.

This is an example of which type of natural language processing workload?

A. language detection
B. sentiment analysis
C. key phrase extraction
D. entity recognition

Correct Answer: B
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation

Explanation/
Reference:
Explanation:
Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral.

Reference: https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-
choices/natural-language-processing

QUESTION 35
You use natural language processing to process text from a Microsoft news story.

You receive the output shown in the following exhibit.

Which type of natural languages processing was performed?

A. entity recognition
B. key phrase extraction
C. sentiment analysis
D. translation

Correct Answer: A
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation
Explanation/
Reference:
Explanation:
Named Entity Recognition (NER) is the ability to identify different entities in text and categorize them into pre-defined
classes or types such as: person, location, event, product, and organization.

In this question, the square brackets indicate the entities such as DateTime, PersonType, Skill.
Reference:
https://docs.microsoft.com/en-in/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking?
tabs=version-3-preview

QUESTION 36
You are developing a solution that uses the Text Analytics service.

You need to identify the main talking points in a collection of documents.

Which type of natural language processing should you use?

A. entity recognition
B. key phrase extraction
C. sentiment analysis
D. language detection

Correct Answer: B
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Key phrase extraction / broad entity extraction: Identify important concepts in text, including key phrases and named
entities such as people, places, and organizations.

Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/natural-
language-processing
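A short sketch of that call against the Text Analytics REST API is shown below; the endpoint, key, and review text are placeholders.

```python
# Sketch: extract the main talking points (key phrases) from a document.
import requests

ta_endpoint = "https://<your-text-analytics-resource>.cognitiveservices.azure.com"
url = f"{ta_endpoint}/text/analytics/v3.0/keyPhrases"
headers = {"Ocp-Apim-Subscription-Key": "<your-text-analytics-key>"}
body = {
    "documents": [
        {
            "id": "1",
            "language": "en",
            "text": "The new espresso machine is reliable and the support team responds quickly.",
        }
    ]
}

result = requests.post(url, headers=headers, json=body).json()
print(result["documents"][0]["keyPhrases"])
```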

QUESTION 37
In which two scenarios can you use speech recognition? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. an in-car system that reads text messages aloud


B. providing closed captions for recorded or live videos
C. creating an automated public address system for a train station
D. creating a transcript of a telephone call or meeting

Correct Answer: BD
Section: Describe features of Natural Language Processing (NLP) workloads on Azure
Explanation

Explanation/Reference:
Reference: https://azure.microsoft.com/en-gb/services/cognitive-
services/speech-to-text/#features

QUESTION 38
You need to build an app that will read recipe instructions aloud to support users who have reduced vision.

Which service should you use?

A. Text Analytics
B. Translator Text
C. Speech
D. Language Understanding (LUIS)

Correct Answer: C
Section: Describe features of Natural Language Processing (NLP)
workloads on Azure Explanation

Explanation/Reference:
Reference: https://azure.microsoft.com/en-us/services/cognitive-
services/text-to-speech/#features

QUESTION 39
Which two scenarios are examples of a conversational AI workload? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. a telephone answering service that has a pre-recorded message


B. a chatbot that provides users with the ability to find answers on a website by themselves
C. telephone voice menus to reduce the load on human resources
D. a service that creates frequently asked questions (FAQ) documents by crawling public websites

Correct Answer: BC
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:
Explanation:
B: A bot is an automated software program designed to perform a particular task. Think of it as a robot without a body.

C: Automated customer interaction is essential to a business of any size. In fact, 61% of consumers prefer to communicate
via speech, and most of them prefer self-service. Because customer satisfaction is a priority for all businesses, self-service
is a critical facet of any customer-facing communications strategy.

Incorrect Answers:
D: Early bots were comparatively simple, handling repetitive and voluminous tasks with relatively straightforward
algorithmic logic. An example would be web crawlers used by search engines to automatically explore and catalog web
content.

Reference: https://docs.microsoft.com/en-us/azure/architecture/data-
guide/big-data/ai-overview
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/
articles/interactive-voice-response-bot

QUESTION 40
You need to provide content for a business chatbot that will help answer simple user queries.

What are three ways to create question and answer text by using QnA Maker? Each correct answer presents a complete
solution.

NOTE: Each correct selection is worth one point.

A. Generate the questions and answers from an existing webpage.


B. Use automated machine learning to train a model based on a file that contains the questions.
C. Manually enter the questions and answers.
D. Connect the bot to the Cortana channel and ask questions by using Cortana.
E. Import chit-chat content from a predefined data source.

Correct Answer: ACE


Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Automatic extraction
Extract question-answer pairs from semi-structured content, including FAQ pages, support websites, excel files,
SharePoint documents, product manuals and policies.

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/content-
types

QUESTION 41
You have a frequently asked questions (FAQ) PDF file.

You need to create a conversational support system based on the FAQ.

Which service should you use?


A. QnA Maker
B. Text Analytics
C. Computer Vision
D. Language Understanding (LUIS)

Correct Answer: A
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:
Explanation:
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your
existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content,
including FAQs, manuals, and documents.

Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/qna-maker/

QUESTION 42
You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with
predefined answers.

Which two AI services should you use to achieve the goal? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Text Analytics
B. QnA Maker
C. Azure Bot Service
D. Translator Text

Correct Answer: BC
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:
Explanation:
Bots are a popular way to provide support through multiple communication channels. You can use the QnA Maker service
and Azure Bot Service to create a bot that answers user questions.

Reference:
https://docs.microsoft.com/en-us/learn/modules/build-faq-chatbot-qna-maker-azure-bot-
service/

QUESTION 43
Which two scenarios are examples of a conversational AI workload? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. a smart device in the home that responds to questions such as “What will the weather be like today?”
B. a website that uses a knowledge base to interactively respond to users’ questions
C. assembly line machinery that autonomously inserts headlamps into cars
D. monitoring the temperature of machinery to turn on a fan when the temperature reaches a specific threshold

Correct Answer: AB
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:

QUESTION 44
You have the process shown in the following exhibit.
Which type of AI solution is shown in the diagram?

A. a sentiment analysis solution


B. a chatbot
C. a machine learning model
D. a computer vision application

Correct Answer: B
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:

QUESTION 45
You need to develop a web-based AI solution for a customer support system. Users must be able to interact with a web
app that will guide them to the best resource or answer.

Which service should you use?

A. Custom Vision
B. QnA Maker
C. Translator Text
D. Face

Correct Answer: B
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:
Explanation:
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your
existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content,
including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your
knowledge base—automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.

Incorrect Answers:
A: Azure Custom Vision is a cognitive service that lets you build, deploy, and improve your own image classifiers. An
image classifier is an AI service that applies labels (which represent classes) to images, according to their visual
characteristics. Unlike the Computer Vision service, Custom Vision allows you to specify the labels to apply.
D: Azure Cognitive Services Face Detection API: At a minimum, each detected face corresponds to a faceRectangle field
in the response. This set of pixel coordinates for the left, top, width, and height mark the located face. Using these
coordinates, you can get the location of the face and its size. In the API response, faces are listed in size order from
largest to smallest.

Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/qna-maker/

QUESTION 46
Which AI service should you use to create a bot from a frequently asked questions (FAQ) document?
A. QnA Maker
B. Language Understanding (LUIS)
C. Text Analytics
D. Speech

Correct Answer: A
Section: Describe features of conversational AI
workloads on Azure Explanation

Explanation/Reference:

QUESTION 47
Which scenario is an example of a webchat bot?

A. Determine whether reviews entered on a website for a concert are positive or negative, and then add a thumbs up
or thumbs down emoji to the reviews.
B. Translate into English questions entered by customers at a kiosk so that the appropriate person can call the
customers back.
C. Accept questions through email, and then route the email messages to the correct person based on the content of
the message.
D. From a website interface, answer common questions about scheduled events and ticket purchases for a music
festival.

Correct Answer: D
