Real Amazon AIF-C01 Study Questions by Armstrong
Question 1
Question Type: MultipleChoice
Which metric measures the runtime efficiency of operating AI models?
Options:
A- Customer satisfaction score (CSAT)
B- Training time for each epoch
C- Average response time
D- Number of training instances
Answer:
C
Explanation:
Average response time is the correct metric for measuring the runtime efficiency of operating AI models. It refers to the time the model takes to generate an output after receiving an input and is a key metric for evaluating the performance and efficiency of AI models in production. A lower average response time indicates a more efficient model that can handle queries quickly.
Measures Runtime Efficiency: Directly indicates how fast the model processes inputs and delivers
outputs, which is critical for real-time applications.
Performance Indicator: Helps identify potential bottlenecks and optimize model performance.
A. Customer satisfaction score (CSAT): Measures customer satisfaction, not model runtime efficiency.
B. Training time for each epoch: Measures training efficiency, not runtime efficiency during model operation.
D. Number of training instances: Refers to data used during training, not operational efficiency.
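As a rough illustration of this metric, the sketch below times repeated calls to a model and reports the mean latency. It is plain Python with a stand-in invoke_model function; the function, prompts, and simulated delay are placeholders, not part of any specific AWS API.

```python
import time

def invoke_model(prompt: str) -> str:
    """Stand-in for a real inference call (for example, an HTTP request to a model endpoint)."""
    time.sleep(0.05)  # simulate inference latency
    return "model output"

def average_response_time(prompts: list[str]) -> float:
    """Return the mean wall-clock latency, in seconds, across all invocations."""
    latencies = []
    for prompt in prompts:
        start = time.perf_counter()
        invoke_model(prompt)
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    test_prompts = ["What is ML?", "Summarize this text.", "Translate this to French."]
    print(f"Average response time: {average_response_time(test_prompts):.3f} s")
```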
Question 2
Question Type: MultipleChoice
A company has terabytes of data in a database that the company can use for business analysis.
The company wants to build an AI-based application that can build a SQL query from input text
that employees provide. The employees have minimal experience with technology.
Which solution meets these requirements?
Options:
A- Generative pre-trained transformers (GPT)
B- Residual neural network
C- Support vector machine
D- WaveNet
Answer:
A
Explanation:
Generative Pre-trained Transformers (GPT) are suitable for building an AI-based application that
can generate SQL queries from natural language input provided by employees.
GPT models are designed for understanding and generating human-like text based on natural
language input.
They can be fine-tuned to interpret specific tasks, such as converting natural language queries
into SQL queries.
Natural Language Understanding: GPT is highly effective for tasks that require understanding of
human language and generating structured outputs like SQL.
User-Friendly: Requires minimal technology experience from employees, as they provide simple
text input.
B. Residual neural network: Typically used in computer vision tasks, not for natural language-to-SQL conversion.
C. Support vector machine: Used for classification tasks, not for generating structured queries from text.
D. WaveNet: A deep generative model for audio data, unrelated to text-to-SQL tasks.
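For illustration only, here is one way such an application might call a GPT-style foundation model through the Amazon Bedrock Converse API to turn plain-English input into SQL. The model ID, table schema, and prompt wording are assumptions to adapt to your own environment.

```python
import boto3

# Assumed values: substitute your own model ID, region, and table schema.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"
SCHEMA = "orders(order_id INT, customer_name TEXT, total DECIMAL, order_date DATE)"

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def text_to_sql(question: str) -> str:
    """Ask the model to translate an employee's plain-English question into a SQL query."""
    prompt = (
        f"Given the table {SCHEMA}, write a single SQL query that answers: {question} "
        "Return only the SQL."
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(text_to_sql("What was the total revenue in March 2024?"))
```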
Question 3
Question Type: MultipleChoice
Which strategy evaluates the accuracy of a foundation model (FM) that is used in image
classification tasks?
Options:
A- Calculate the total cost of resources used by the model.
B- Measure the model's accuracy against a predefined benchmark dataset.
C- Count the number of layers in the neural network.
D- Assess the color accuracy of images processed by the model.
Answer:
B
Explanation:
Measuring the model's accuracy against a predefined benchmark dataset is the correct strategy
to evaluate the accuracy of a foundation model (FM) used in image classification tasks.
In image classification, the accuracy of a model is typically evaluated by comparing the predicted
labels with the true labels in a benchmark dataset that is representative of the real-world data
the model will encounter.
This approach provides a quantifiable measure of how well the model performs on known data
and is a standard practice in machine learning.
Benchmarking Accuracy: Using a predefined dataset allows for consistent and reliable evaluation
of model performance.
Standard Practice: It is a widely accepted method for assessing the effectiveness of image
classification models.
A. Total cost of resources: Does not measure model accuracy but rather the cost of operation.
C. Number of layers in the neural network: Does not directly correlate with the accuracy or performance of the model.
D. Color accuracy of images processed by the model: Is unrelated to the model's classification accuracy.
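A minimal sketch of this kind of evaluation, using toy labels in place of a real benchmark dataset:

```python
def accuracy(predicted_labels, true_labels):
    """Fraction of benchmark images whose predicted class matches the ground-truth label."""
    correct = sum(p == t for p, t in zip(predicted_labels, true_labels))
    return correct / len(true_labels)

# Toy benchmark: ground-truth labels versus model predictions for five images.
true_labels = ["cat", "dog", "cat", "bird", "dog"]
predicted = ["cat", "dog", "dog", "bird", "dog"]
print(f"Benchmark accuracy: {accuracy(predicted, true_labels):.2%}")  # 80.00%
```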
Question 4
Question Type: MultipleChoice
A social media company wants to use a large language model (LLM) for content moderation. The
company wants to evaluate the LLM outputs for bias and potential discrimination against specific
groups or individuals.
Which data source should the company use to evaluate the LLM outputs with the LEAST
administrative effort?
Options:
A- User-generated content
B- Moderation logs
C- Content moderation guidelines
D- Benchmark datasets
Answer:
D
Explanation:
Benchmark datasets are pre-validated datasets specifically designed to evaluate machine
learning models for bias, fairness, and potential discrimination. These datasets are the most
efficient tool for assessing an LLM's performance against known standards with minimal
administrative effort.
Option D (Correct): 'Benchmark datasets': This is the correct answer because using standardized
benchmark datasets allows the company to evaluate model outputs for bias with minimal
administrative overhead.
Option A: 'User-generated content' is incorrect because it is unlabeled and unstructured, so it would require significant curation and annotation before it could be used to evaluate bias.
Option B: 'Moderation logs' is incorrect because they represent historical data and do not provide a standardized basis for evaluating bias.
Option C: 'Content moderation guidelines' is incorrect because they provide qualitative criteria rather than a quantitative basis for evaluation.
Evaluating AI Models for Bias on AWS: AWS supports using benchmark datasets to assess model
fairness and detect potential bias efficiently.
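As a hedged sketch, the snippet below shows how benchmark records tagged by demographic group could be run through a moderation call and compared for differing false-flag rates. The record structure and the moderate() placeholder are assumptions for illustration, not a specific AWS dataset or API.

```python
from collections import defaultdict

# Assumed benchmark format: each record pairs a prompt with the group it references and the
# expected moderation label. A real evaluation would load a vetted bias benchmark instead.
benchmark = [
    {"text": "Sample benign post about group A.", "group": "group_a", "expected": "allow"},
    {"text": "Sample benign post about group B.", "group": "group_b", "expected": "allow"},
]

def moderate(text: str) -> str:
    """Placeholder for the LLM-based moderation call; returns 'allow' or 'flag'."""
    return "allow"

def false_flag_rate_by_group(records):
    """Compare how often benign benchmark content is flagged for each group."""
    flags, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["expected"] == "allow" and moderate(rec["text"]) == "flag":
            flags[rec["group"]] += 1
    return {group: flags[group] / totals[group] for group in totals}

print(false_flag_rate_by_group(benchmark))  # large gaps between groups suggest bias
```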
Question 5
Question Type: MultipleChoice
An education provider is building a question and answer application that uses a generative AI
model to explain complex concepts. The education provider wants to automatically change the
style of the model response depending on who is asking the question. The education provider will
give the model the age range of the user who has asked the question.
Which solution meets these requirements with the LEAST implementation effort?
Options:
A- Fine-tune the model by using additional training data that is representative of the various age
ranges that the application will support.
B- Add a role description to the prompt context that instructs the model of the age range that the
response should target.
C- Use chain-of-thought reasoning to deduce the correct style and complexity for a response
suitable for that user.
D- Summarize the response text depending on the age of the user so that younger users receive
shorter responses.
Answer:
B
Explanation:
Adding a role description to the prompt context is a straightforward way to instruct the
generative AI model to adjust its response style based on the user's age range. This method
requires minimal implementation effort as it does not involve additional training or complex logic.
Option B (Correct): 'Add a role description to the prompt context that instructs the model of the
age range that the response should target': This is the correct answer because it involves the
least implementation effort while effectively guiding the model to tailor responses according to
the age range.
Option A: 'Fine-tune the model by using additional training data' is incorrect because it requires
significant effort in gathering data and retraining the model.
Option C: 'Use chain-of-thought reasoning' is incorrect as it involves complex reasoning that may
not directly address the need to adjust response style based on age.
Option D: 'Summarize the response text depending on the age of the user' is incorrect because it
involves additional processing steps after generating the initial response, increasing complexity.
Prompt Engineering Techniques on AWS: AWS recommends using prompt context effectively to
guide generative models in providing tailored responses based on specific user attributes.
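A minimal sketch of this approach using the Amazon Bedrock Converse API, where the role description is passed as a system prompt. The model ID and wording are assumptions; no fine-tuning or extra processing is involved.

```python
import boto3

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model ID; any Bedrock text model works
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def explain(concept: str, age_range: str) -> str:
    """Steer the response style with a role description instead of retraining the model."""
    role = (
        f"You are a tutor answering a student aged {age_range}. "
        "Match your vocabulary, examples, and level of detail to that age range."
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        system=[{"text": role}],
        messages=[{"role": "user", "content": [{"text": f"Explain {concept}."}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(explain("photosynthesis", "8-10"))
```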
Question 6
Question Type: MultipleChoice
A medical company deployed a disease detection model on Amazon Bedrock. To comply with
privacy policies, the company wants to prevent the model from including personal patient
information in its responses. The company also wants to receive notification when policy
violations occur.
Which solution meets these requirements?
Options:
A- Use Amazon Macie to scan the model's output for sensitive data and set up alerts for potential
violations.
B- Configure AWS CloudTrail to monitor the model's responses and create alerts for any detected
personal information.
C- Use Guardrails for Amazon Bedrock to filter content. Set up Amazon CloudWatch alarms for
notification of policy violations.
D- Implement Amazon SageMaker Model Monitor to detect data drift and receive alerts when
model quality degrades.
Answer:
C
Explanation:
Guardrails for Amazon Bedrock provide mechanisms to filter and control the content generated
by models to comply with privacy and policy requirements. Using guardrails ensures that
sensitive or personal information is not included in the model's responses. Additionally,
integrating Amazon CloudWatch alarms allows for real-time notification when a policy violation
occurs.
Option C (Correct): 'Use Guardrails for Amazon Bedrock to filter content. Set up Amazon
CloudWatch alarms for notification of policy violations': This is the correct answer because it
directly addresses both the prevention of policy violations and the requirement to receive
notifications when such violations occur.
Option A: 'Use Amazon Macie to scan the model's output for sensitive data' is incorrect because
Amazon Macie is designed to monitor data in S3, not to filter real-time model outputs.
Option B: 'Configure AWS CloudTrail to monitor the model's responses' is incorrect because
CloudTrail tracks API activity and is not suited for content moderation.
Option D: 'Implement Amazon SageMaker Model Monitor to detect data drift' is incorrect because
data drift detection does not address content moderation or privacy compliance.
Guardrails in Amazon Bedrock: AWS provides guardrails to ensure AI models comply with content
policies, and using CloudWatch for alerting integrates monitoring capabilities.
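A rough sketch of this setup with boto3 is shown below. The guardrail ID, model ID, and SNS topic ARN are placeholders, and the CloudWatch namespace and metric name are assumptions to verify against the guardrail metrics that Bedrock publishes in your account.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholders: a guardrail configured with a sensitive-information (PII) filter, plus a model ID.
GUARDRAIL_ID = "gr-1234567890"
GUARDRAIL_VERSION = "1"
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

# Invoke the model with the guardrail applied so personal patient information is filtered out.
response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Summarize the patient's case history."}]}],
    guardrailConfig={"guardrailIdentifier": GUARDRAIL_ID, "guardrailVersion": GUARDRAIL_VERSION},
)

# Alarm whenever the guardrail intervenes; the namespace and metric name below are assumptions.
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-guardrail-interventions",
    Namespace="AWS/Bedrock/Guardrails",
    MetricName="InvocationsIntervened",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:policy-violation-alerts"],
)
```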