0% found this document useful (0 votes)
118 views12 pages

Aif-C01 3

AIF Exam questions

Uploaded by

Puneet Jaggi
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
118 views12 pages

Aif-C01 3

AIF Exam questions

Uploaded by

Puneet Jaggi
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 12

100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader

https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

AIF-C01 Dumps

AWS Certified AI Practitioner

https://www.certleader.com/AIF-C01-dumps.html

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

NEW QUESTION 1
An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store
invocation logs to monitor model input and output data.
Which strategy should the AI practitioner use?

A. Configure AWS CloudTrail as the logs destination for the model.


B. Enable invocation logging in Amazon Bedrock.
C. Configure AWS Audit Manager as the logs destination for the model.
D. Configure model invocation logging in Amazon EventBridge.

Answer: B

Explanation:
Amazon Bedrock provides an option to enable invocation logging to capture and store the input and output data of the models used. This is essential for
monitoring and auditing purposes, particularly when handling customer data.
? Option B (Correct): "Enable invocation logging in Amazon Bedrock": This is the correct answer as it directly enables the logging of all model invocations, ensuring
transparency and traceability.
? Option A: "Configure AWS CloudTrail" is incorrect because CloudTrail logs API
calls but does not provide specific logging for model inputs and outputs.
? Option C: "Configure AWS Audit Manager" is incorrect as Audit Manager is used for compliance reporting, not specific invocation logging for AI models.
? Option D: "Configure model invocation logging in Amazon EventBridge" is incorrect as EventBridge is for event-driven architectures, not specifically designed for
logging AI model inputs and outputs.
AWS AI Practitioner References:
? Amazon Bedrock Logging Capabilities: AWS emphasizes using built-in logging features in Bedrock to maintain data integrity and transparency in model
operations.

NEW QUESTION 2
A company has documents that are missing some words because of a database error. The company wants to build an ML model that can suggest potential words
to fill in the missing text.
Which type of model meets this requirement?

A. Topic modeling
B. Clustering models
C. Prescriptive ML models
D. BERT-based models

Answer: D

Explanation:
BERT-based models (Bidirectional Encoder Representations from Transformers) are suitable for tasks that involve understanding the context of words in a
sentence and suggesting missing words. These models use bidirectional training, which considers the context from both directions (left and right of the missing
word) to predict the appropriate word to fill in the gaps.
? BERT-based Models:
? Why Option D is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 3
A company uses a foundation model (FM) from Amazon Bedrock for an AI search tool. The company wants to fine-tune the model to be more accurate by using
the company's data.
Which strategy will successfully fine-tune the model?

A. Provide labeled data with the prompt field and the completion field.
B. Prepare the training dataset by creating a .txt file that contains multiple lines in .csv format.
C. Purchase Provisioned Throughput for Amazon Bedrock.
D. Train the model on journals and textbooks.

Answer: A

Explanation:
Providing labeled data with both a prompt field and a completion field is the correct strategy for fine-tuning a foundation model (FM) on Amazon Bedrock.
? Fine-Tuning Strategy:
? Why Option A is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 4
A company wants to make a chatbot to help customers. The chatbot will help solve technical problems without human intervention. The company chose a
foundation model (FM) for the chatbot. The chatbot needs to produce responses that adhere to company tone.
Which solution meets these requirements?

A. Set a low limit on the number of tokens the FM can produce.


B. Use batch inferencing to process detailed responses.
C. Experiment and refine the prompt until the FM produces the desired responses.
D. Define a higher number for the temperature parameter.

Answer: C

Explanation:
Experimenting and refining the prompt is the best approach to ensure that the chatbot using a foundation model (FM) produces responses that adhere to the

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

company's tone.
? Prompt Engineering:
? Why Option C is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 5
An AI practitioner wants to use a foundation model (FM) to design a search application. The search application must handle queries that have text and images.
Which type of FM should the AI practitioner use to power the search application?

A. Multi-modal embedding model


B. Text embedding model
C. Multi-modal generation model
D. Image generation model

Answer: A

Explanation:
A multi-modal embedding model is the correct type of foundation model (FM) for powering a search application that handles queries containing both text and
images.
? Multi-Modal Embedding Model:
? Why Option A is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 6
A company built an AI-powered resume screening system. The company used a large dataset to train the model. The dataset contained resumes that were not
representative of all demographics. Which core dimension of responsible AI does this scenario present?

A. Fairness.
B. Explainability.
C. Privacy and security.
D. Transparency.

Answer: A

Explanation:
Fairness refers to the absence of bias in AI models. Using non- representative datasets leads to biased predictions, affecting specific demographics unfairly.
Explainability, privacy, and transparency are important but not directly related to this scenario. References: AWS Responsible AI Framework.

NEW QUESTION 7
A research company implemented a chatbot by using a foundation model (FM) from Amazon Bedrock. The chatbot searches for answers to questions from a large
database of research papers.
After multiple prompt engineering attempts, the company notices that the FM is performing poorly because of the complex scientific terms in the research papers.
How can the company improve the performance of the chatbot?

A. Use few-shot prompting to define how the FM can answer the questions.
B. Use domain adaptation fine-tuning to adapt the FM to complex scientific terms.
C. Change the FM inference parameters.
D. Clean the research paper data to remove complex scientific terms.

Answer: B

Explanation:
Domain adaptation fine-tuning involves training a foundation model (FM) further using a specific dataset that includes domain-specific terminology and content,
such as scientific terms in research papers. This process allows the model to better understand and handle complex terminology, improving its performance on
specialized tasks.
? Option B (Correct): "Use domain adaptation fine-tuning to adapt the FM to complex
scientific terms": This is the correct answer because fine-tuning the model on domain-specific data helps it learn and adapt to the specific language and terms used
in the research papers, resulting in better performance.
? Option A: "Use few-shot prompting to define how the FM can answer the
questions" is incorrect because while few-shot prompting can help in certain
scenarios, it is less effective than fine-tuning for handling complex domain-specific terms.
? Option C: "Change the FM inference parameters" is incorrect because adjusting
inference parameters will not resolve the issue of the model's lack of understanding of complex scientific terminology.
? Option D: "Clean the research paper data to remove complex scientific terms" is
incorrect because removing the complex terms would result in the loss of important information and context, which is not a viable solution.
AWS AI Practitioner References:
? Domain Adaptation in Amazon Bedrock: AWS recommends fine-tuning models with domain-specific data to improve their performance on specialized tasks
involving unique terminology.

NEW QUESTION 8
A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are
stored as PDF files.
Which solution meets these requirements MOST cost-effectively?

A. Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.
B. Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.
C. Use all the PDF documents to fine-tune a model with Amazon Bedroc
D. Use the fine- tuned model to process user prompts.
E. Upload PDF documents to an Amazon Bedrock knowledge bas

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

F. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.

Answer: A

Explanation:
Using Amazon Bedrock with large language models (LLMs) allows for efficient utilization of AI to answer queries based on context provided in product manuals. To
achieve this cost- effectively, the company should avoid unnecessary use of resources.
? Option A (Correct): "Use prompt engineering to add one PDF file as context to the
user prompt when the prompt is submitted to Amazon Bedrock": This is the most cost-effective solution. By using prompt engineering, only the relevant content
from one PDF file is added as context to each query. This approach minimizes the amount of data processed, which helps in reducing costs associated with LLMs'
computational requirements.
? Option B: "Use prompt engineering to add all the PDF files as context to the user
prompt when the prompt is submitted to Amazon Bedrock" is incorrect. Including
all PDF files would increase costs significantly due to the large context size processed by the model.
? Option C: "Use all the PDF documents to fine-tune a model with Amazon Bedrock"
is incorrect. Fine-tuning a model is more expensive than using prompt engineering, especially if done for multiple documents.
? Option D: "Upload PDF documents to an Amazon Bedrock knowledge base" is
incorrect because Amazon Bedrock does not have a built-in knowledge base feature for directly managing and querying PDF documents.
AWS AI Practitioner References:
? Prompt Engineering for Cost-Effective AI: AWS emphasizes the importance of using prompt engineering to minimize costs when interacting with LLMs. By
carefully selecting relevant context, users can reduce the amount of data processed and save on expenses.

NEW QUESTION 9
A company built a deep learning model for object detection and deployed the model to production.
Which AI process occurs when the model analyzes a new image to identify objects?

A. Training
B. Inference
C. Model deployment
D. Bias correction

Answer: B

Explanation:
Inference is the correct answer because it is the AI process that occurs when a deployed model analyzes new data (such as an image) to make predictions or
identify objects.
? Inference:
? Why Option B is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 10
A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.
Which solution will meet these requirements?

A. Deploy optimized small language models (SLMs) on edge devices.


B. Deploy optimized large language models (LLMs) on edge devices.
C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.

Answer: A

Explanation:
To achieve the lowest latency possible for inference on edge devices, deploying optimized small language models (SLMs) is the most effective solution. SLMs
require fewer
resources and have faster inference times, making them ideal for deployment on edge devices where processing power and memory are limited.
? Option A (Correct): "Deploy optimized small language models (SLMs) on edge
devices": This is the correct answer because SLMs provide fast inference with low latency, which is crucial for edge deployments.
? Option B: "Deploy optimized large language models (LLMs) on edge devices" is
incorrect because LLMs are resource-intensive and may not perform well on edge devices due to their size and computational demands.
? Option C: "Incorporate a centralized small language model (SLM) API for
asynchronous communication with edge devices" is incorrect because it introduces network latency due to the need for communication with a centralized server.
? Option D: "Incorporate a centralized large language model (LLM) API for
asynchronous communication with edge devices" is incorrect for the same reason, with even greater latency due to the larger model size.
AWS AI Practitioner References:
? Optimizing AI Models for Edge Devices on AWS: AWS recommends using small, optimized models for edge deployments to ensure minimal latency and efficient
performance.

NEW QUESTION 10
A retail store wants to predict the demand for a specific product for the next few weeks by using the Amazon SageMaker DeepAR forecasting algorithm.
Which type of data will meet this requirement?

A. Text data
B. Image data
C. Time series data
D. Binary data

Answer: C

Explanation:
Amazon SageMaker's DeepAR is a supervised learning algorithm designed for forecasting scalar (one-dimensional) time series data. Time series data consists of

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

sequences of data points indexed in time order, typically with consistent intervals between them. In the context of a retail store aiming to predict product demand,
relevant time series data might include historical sales figures, inventory levels, or related metrics recorded over regular time intervals (e.g., daily or weekly). By
training the DeepAR model on this historical time series data, the store can generate forecasts for future product demand. This capability is
particularly useful for inventory management, staffing, and supply chain optimization. Other data types, such as text, image, or binary data, are not suitable for time
series forecasting tasks and would not be appropriate inputs for the DeepAR algorithm.
Reference: Amazon SageMaker DeepAR Algorithm

NEW QUESTION 13
A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with
common prompt engineering techniques to perform undesirable actions or expose sensitive information.
Which action will reduce these risks?

A. Create a prompt template that teaches the LLM to detect attack patterns.
B. Increase the temperature parameter on invocation requests to the LLM.
C. Avoid using LLMs that are not listed in Amazon SageMaker.
D. Decrease the number of input tokens on invocations of the LLM.

Answer: A

Explanation:
Creating a prompt template that teaches the LLM to detect attack patterns is the most effective way to reduce the risk of the model being manipulated through
prompt engineering.
? Prompt Templates for Security:
? Why Option A is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 15
An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential
harms.
What should the firm do when developing and deploying the LLM? (Select TWO.)

A. Include fairness metrics for model evaluation.


B. Adjust the temperature parameter of the model.
C. Modify the training data to mitigate bias.
D. Avoid overfitting on the training data.
E. Apply prompt engineering techniques.

Answer: AC

Explanation:
To implement a large language model (LLM) responsibly, the firm should focus on fairness and mitigating bias, which are critical for ethical AI deployment.
? A. Include Fairness Metrics for Model Evaluation:
? C. Modify the Training Data to Mitigate Bias:
? Why Other Options are Incorrect:

NEW QUESTION 17
A company is building a customer service chatbot. The company wants the chatbot to improve its responses by learning from past interactions and online
resources.
Which AI learning strategy provides this self-improvement capability?

A. Supervised learning with a manually curated dataset of good responses and bad responses
B. Reinforcement learning with rewards for positive customer feedback
C. Unsupervised learning to find clusters of similar customer inquiries
D. Supervised learning with a continuously updated FAQ database

Answer: B

Explanation:
Reinforcement learning allows a model to learn and improve over time based on feedback from its environment. In this case, the chatbot can improve its
responses by being rewarded for positive customer feedback, which aligns well with the goal of self- improvement based on past interactions and new information.
? Option B (Correct): "Reinforcement learning with rewards for positive customer
feedback": This is the correct answer as reinforcement learning enables the chatbot to learn from feedback and adapt its behavior accordingly, providing self-
improvement capabilities.
? Option A: "Supervised learning with a manually curated dataset" is incorrect
because it does not support continuous learning from new interactions.
? Option C: "Unsupervised learning to find clusters of similar customer inquiries" is incorrect because unsupervised learning does not provide a mechanism for
improving responses based on feedback.
? Option D: "Supervised learning with a continuously updated FAQ database" is incorrect because it still relies on manually curated data rather than self-
improvement from feedback.
AWS AI Practitioner References:
? Reinforcement Learning on AWS: AWS provides reinforcement learning
frameworks that can be used to train models to improve their performance based on feedback.

NEW QUESTION 19
A company wants to create an application by using Amazon Bedrock. The company has a limited budget and prefers flexibility without long-term commitment.
Which Amazon Bedrock pricing model meets these requirements?

A. On-Demand
B. Model customization

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

C. Provisioned Throughput
D. Spot Instance

Answer: A

Explanation:
Amazon Bedrock offers an on-demand pricing model that provides flexibility without long- term commitments. This model allows companies to pay only for the
resources they use, which is ideal for a limited budget and offers flexibility.
? Option A (Correct): "On-Demand": This is the correct answer because on-demand
pricing allows the company to use Amazon Bedrock without any long-term commitments and to manage costs according to their budget.
? Option B: "Model customization" is a feature, not a pricing model.
? Option C: "Provisioned Throughput" involves reserving capacity ahead of time, which might not offer the desired flexibility and could lead to higher costs if the
capacity is not fully used.
? Option D: "Spot Instance" is a pricing model for EC2 instances and does not apply to Amazon Bedrock.
AWS AI Practitioner References:
? AWS Pricing Models for Flexibility: On-demand pricing is a key AWS model for services that require flexibility and no long-term commitment, ensuring cost-
effectiveness for projects with variable usage patterns.

NEW QUESTION 21
A company wants to deploy a conversational chatbot to answer customer questions. The chatbot is based on a fine-tuned Amazon SageMaker JumpStart model.
The application must comply with multiple regulatory frameworks.
Which capabilities can the company show compliance for? (Select TWO.)

A. Auto scaling inference endpoints


B. Threat detection
C. Data protection
D. Cost optimization
E. Loosely coupled microservices

Answer: BC

Explanation:
To comply with multiple regulatory frameworks, the company must ensure data protection and threat detection. Data protection involves safeguarding sensitive
customer information, while threat detection identifies and mitigates security threats to the application.
? Option C (Correct): "Data protection": This is correct because data protection is
critical for compliance with privacy and security regulations.
? Option B (Correct): "Threat detection": This is correct because detecting and mitigating threats is essential to maintaining the security posture required for
regulatory compliance.
? Option A: "Auto scaling inference endpoints" is incorrect because auto-scaling does not directly relate to regulatory compliance.
? Option D: "Cost optimization" is incorrect because it is focused on managing expenses, not compliance.
? Option E: "Loosely coupled microservices" is incorrect because this architectural approach does not directly address compliance requirements.
AWS AI Practitioner References:
? AWS Compliance Capabilities: AWS offers services and tools, such as data protection and threat detection, to help companies meet regulatory requirements for
security and privacy.

NEW QUESTION 25
A security company is using Amazon Bedrock to run foundation models (FMs). The company wants to ensure that only authorized users invoke the models. The
company needs to identify any unauthorized access attempts to set appropriate AWS Identity and Access Management (IAM) policies and roles for future
iterations of the FMs.
Which AWS service should the company use to identify unauthorized users that are trying to access Amazon Bedrock?

A. AWS Audit Manager


B. AWS CloudTrail
C. Amazon Fraud Detector
D. AWS Trusted Advisor

Answer: B

Explanation:
AWS CloudTrail is a service that enables governance, compliance, and operational and risk auditing of your AWS account. It tracks API calls and identifies
unauthorized access attempts to AWS resources, including Amazon Bedrock.
? AWS CloudTrail:
? Why Option B is Correct:
? Why Other Options are Incorrect:
Thus, B is the correct answer for identifying unauthorized users attempting to access Amazon Bedrock.

NEW QUESTION 26
An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains confidential data. The AI practitioner wants to ensure that
the custom model does not generate inference responses based on confidential data.
How should the AI practitioner prevent responses based on confidential data?

A. Delete the custom mode


B. Remove the confidential data from the training datase
C. Retrain the custom model.
D. Mask the confidential data in the inference responses by using dynamic data masking.
E. Encrypt the confidential data in the inference responses by using Amazon SageMaker.
F. Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).

Answer: A

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

Explanation:
When a model is trained on a dataset containing confidential or sensitive data, the model may inadvertently learn patterns from this data, which could then be
reflected in its inference responses. To ensure that a model does not generate responses based on confidential data, the most effective approach is to remove the
confidential data from the training dataset and then retrain the model.
Explanation of Each Option:
? Option A (Correct): "Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model."This option is correct because
it directly addresses the core issue: the model has been trained on confidential data. The only way to ensure that the model does not produce inferences based on
this data is to remove the confidential information from the training dataset and then retrain the model from scratch. Simply deleting the model and retraining it
ensures that no confidential data is learned or retained by the model. This approach follows the best practices recommended by AWS for handling sensitive data
when using machine learning services like Amazon Bedrock.
? Option B: "Mask the confidential data in the inference responses by using dynamic data masking."This option is incorrect because dynamic data masking is
typically used to mask or obfuscate sensitive data in a database. It does not address the core problem of the model being trained on confidential data. Masking
data in inference responses does not prevent the model from using confidential data it learned during training.
? Option C: "Encrypt the confidential data in the inference responses by using Amazon SageMaker."This option is incorrect because encrypting the inference
responses does not prevent the model from generating outputs based on confidential data. Encryption only secures the data at rest or in transit but does not affect
the model's underlying knowledge or training process.
? Option D: "Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS)."This option is incorrect as well because
encrypting the data within the model does not prevent the model from generating responses based on the confidential data it learned during training. AWS KMS
can encrypt data, but it does not modify the learning that the model has already performed.
AWS AI Practitioner References:
? Data Handling Best Practices in AWS Machine Learning: AWS advises practitioners to carefully handle training data, especially when it involves sensitive or
confidential information. This includes preprocessing steps like data anonymization or removal of sensitive data before using it to train machine learning models.
? Amazon Bedrock and Model Training Security: Amazon Bedrock provides foundational models and customization capabilities, but any training involving sensitive
data should follow best practices, such as removing or anonymizing confidential data to prevent unintended data leakage.

NEW QUESTION 28
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent
responses to the same input prompt.
Which adjustment to an inference parameter should the company make to meet these requirements?

A. Decrease the temperature value


B. Increase the temperature value
C. Decrease the length of output tokens
D. Increase the maximum generation length

Answer: A

Explanation:
The temperature parameter in a large language model (LLM) controls the randomness of the model's output. A lower temperature value makes the output more
deterministic and consistent, meaning that the model is less likely to produce different results for the same input prompt.
? Option A (Correct): "Decrease the temperature value": This is the correct answer
because lowering the temperature reduces the randomness of the responses, leading to more consistent outputs for the same input.
? Option B: "Increase the temperature value" is incorrect because it would make the
output more random and less consistent.
? Option C: "Decrease the length of output tokens" is incorrect as it does not directly affect the consistency of the responses.
? Option D: "Increase the maximum generation length" is incorrect because this adjustment affects the output length, not the consistency of the model??s
responses.
AWS AI Practitioner References:
? Understanding Temperature in Generative AI Models: AWS documentation explains that adjusting the temperature parameter affects the model??s output
randomness, with lower values providing more consistent outputs.

NEW QUESTION 30
Which option is a use case for generative AI models?

A. Improving network security by using intrusion detection systems


B. Creating photorealistic images from text descriptions for digital marketing
C. Enhancing database performance by using optimized indexing
D. Analyzing financial data to forecast stock market trends

Answer: B

Explanation:
Generative AI models are used to create new content based on existing data. One common use case is generating photorealistic images from text descriptions,
which is particularly useful in digital marketing, where visual content is key to engaging potential customers.
? Option B (Correct): "Creating photorealistic images from text descriptions for digital
marketing": This is the correct answer because generative AI models, like those offered by Amazon Bedrock, can create images based on text descriptions,
making them highly valuable for generating marketing materials.
? Option A: "Improving network security by using intrusion detection systems" is
incorrect because this is a use case for traditional machine learning models, not generative AI.
? Option C: "Enhancing database performance by using optimized indexing" is
incorrect as it is unrelated to generative AI.
? Option D: "Analyzing financial data to forecast stock market trends" is incorrect because it typically involves predictive modeling rather than generative AI.
AWS AI Practitioner References:
? Use Cases for Generative AI Models on AWS: AWS highlights the use of generative AI for creative content generation, including image creation, text generation,
and more, which is suited for digital marketing applications.

NEW QUESTION 33
An AI practitioner has a database of animal photos. The AI practitioner wants to automatically identify and categorize the animals in the photos without manual
human effort.
Which strategy meets these requirements?

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

A. Object detection
B. Anomaly detection
C. Named entity recognition
D. Inpainting

Answer: A

Explanation:
Object detection is the correct strategy for automatically identifying and categorizing animals in photos.
? Object Detection:
? Why Option A is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 35
A digital devices company wants to predict customer demand for memory hardware. The company does not have coding experience or knowledge of ML
algorithms and needs to develop a data-driven predictive model. The company needs to perform analysis on internal data and external data.
Which solution will meet these requirements?

A. Store the data in Amazon S3. Create ML models and demand forecast predictions by using Amazon SageMaker built-in algorithms that use the data from
Amazon S3.
B. Import the data into Amazon SageMaker Data Wrangle
C. Create ML models and demand forecast predictions by using SageMaker built-in algorithms.
D. Import the data into Amazon SageMaker Data Wrangle
E. Build ML models and demand forecast predictions by using an Amazon Personalize Trending-Now recipe.
F. Import the data into Amazon SageMaker Canva
G. Build ML models and demand forecast predictions by selecting the values in the data from SageMaker Canvas.

Answer: D

Explanation:
Amazon SageMaker Canvas is a visual, no-code machine learning interface that allows users to build machine learning models without having any coding
experience or knowledge of machine learning algorithms. It enables users to analyze internal and external data, and make predictions using a guided interface.
? Option D (Correct): "Import the data into Amazon SageMaker Canvas. Build ML
models and demand forecast predictions by selecting the values in the data from SageMaker Canvas": This is the correct answer because SageMaker Canvas is
designed for users without coding experience, providing a visual interface to build predictive models with ease.
? Option A: "Store the data in Amazon S3 and use SageMaker built-in algorithms" is
incorrect because it requires coding knowledge to interact with SageMaker's built- in algorithms.
? Option B: "Import the data into Amazon SageMaker Data Wrangler" is incorrect.
Data Wrangler is primarily for data preparation and not directly focused on creating ML models without coding.
? Option C: "Use Amazon Personalize Trending-Now recipe" is incorrect as Amazon
Personalize is for building recommendation systems, not for general demand forecasting.
AWS AI Practitioner References:
? Amazon SageMaker Canvas Overview: AWS documentation emphasizes Canvas as a no-code solution for building machine learning models, suitable for
business analysts and users with no coding experience.

NEW QUESTION 39
A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company's
review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing.
Which AWS service meets this requirement?

A. Amazon Textract
B. Amazon Personalize
C. Amazon Lex
D. Amazon Transcribe

Answer: A

Explanation:
Amazon Textract is a service that automatically extracts text and data from scanned documents, including PDFs. It is the best choice for converting resumes from
PDF format to plain text for further processing.
? Amazon Textract:
? Why Option A is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 43
An e-commerce company wants to build a solution to determine customer sentiments based on written customer reviews of products.
Which AWS services meet these requirements? (Select TWO.)

A. Amazon Lex
B. Amazon Comprehend
C. Amazon Polly
D. Amazon Bedrock
E. Amazon Rekognition

Answer: BD

Explanation:
To determine customer sentiments based on written customer reviews, the company can use Amazon Comprehend and Amazon Bedrock.
? Amazon Comprehend:
? Amazon Bedrock:

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

? Why Other Options are Incorrect:

NEW QUESTION 44
A company wants to use AI to protect its application from threats. The AI solution needs to check if an IP address is from a suspicious source.
Which solution meets these requirements?

A. Build a speech recognition system.


B. Create a natural language processing (NLP) named entity recognition system.
C. Develop an anomaly detection system.
D. Create a fraud forecasting system.

Answer: C

Explanation:
An anomaly detection system is suitable for identifying unusual patterns or behaviors, such as suspicious IP addresses, which might indicate a potential threat.
? Anomaly Detection:
? Why Option C is Correct:
? Why Other Options are Incorrect:
Thus, C is the correct answer for detecting suspicious IP addresses.

NEW QUESTION 45
A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-
trained models to create models for new, related tasks.
Which ML strategy meets these requirements?

A. Increase the number of epochs.


B. Use transfer learning.
C. Decrease the number of epochs.
D. Use unsupervised learning.

Answer: B

Explanation:
Transfer learning is the correct strategy for adapting pre-trained models for new, related tasks without creating models from scratch.
? Transfer Learning:
? Why Option B is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 47
A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other
languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.
Which model evaluation strategy meets these requirements?

A. Bilingual Evaluation Understudy (BLEU)


B. Root mean squared error (RMSE)
C. Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
D. F1 score

Answer: A

Explanation:
BLEU (Bilingual Evaluation Understudy) is a metric used to evaluate the accuracy of machine-generated translations by comparing them against reference
translations. It is commonly used for translation tasks to measure how close the generated output is to professional human translations.
? Option A (Correct): "Bilingual Evaluation Understudy (BLEU)": This is the correct answer because BLEU is specifically designed to evaluate the quality of
translations, making it suitable for the company's use case.
? Option B: "Root mean squared error (RMSE)" is incorrect because RMSE is used for regression tasks to measure prediction errors, not translation quality.
? Option C: "Recall-Oriented Understudy for Gisting Evaluation (ROUGE)" is incorrect as it is used to evaluate text summarization, not translation.
? Option D: "F1 score" is incorrect because it is typically used for classification tasks, not for evaluating translation accuracy.
AWS AI Practitioner References:
? Model Evaluation Metrics on AWS: AWS supports various metrics like BLEU for specific use cases, such as evaluating machine translation models.

NEW QUESTION 48
A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and
use an AI model responsibly to minimize bias that could negatively affect some customers.
Which actions should the company take to meet these requirements? (Select TWO.)

A. Detect imbalances or disparities in the data.


B. Ensure that the model runs frequently.
C. Evaluate the model's behavior so that the company can provide transparency to stakeholders.
D. Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate.
E. Ensure that the model's inference time is within the accepted limits.

Answer: AC

Explanation:
To build an AI model responsibly and minimize bias, it is essential to ensure fairness and transparency throughout the model development and deployment
process. This involves detecting and mitigating data imbalances and thoroughly evaluating the model's behavior to understand its impact on different groups.
? Option A (Correct): "Detect imbalances or disparities in the data": This is correct because identifying and addressing data imbalances or disparities is a critical
step in reducing bias. AWS provides tools like Amazon SageMaker Clarify to detect bias during data preprocessing and model training.

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

? Option C (Correct): "Evaluate the model's behavior so that the company can provide transparency to stakeholders": This is correct because evaluating the
model's behavior for fairness and accuracy is key to ensuring that stakeholders understand how the model makes decisions. Transparency is a crucial aspect of
responsible AI.
? Option B: "Ensure that the model runs frequently" is incorrect because the frequency of model runs does not address bias.
? Option D: "Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate" is incorrect because
ROUGE is a metric for evaluating the quality of text summarization models, not for minimizing bias.
? Option E: "Ensure that the model's inference time is within the accepted limits" is incorrect as it relates to performance, not bias reduction.
AWS AI Practitioner References:
? Amazon SageMaker Clarify: AWS offers tools such as SageMaker Clarify for detecting bias in datasets and models, and for understanding model behavior to
ensure fairness and transparency.
? Responsible AI Practices: AWS promotes responsible AI by advocating for fairness, transparency, and inclusivity in model development and deployment.

NEW QUESTION 53
A company is building an application that needs to generate synthetic data that is based on existing data.
Which type of model can the company use to meet this requirement?

A. Generative adversarial network (GAN)


B. XGBoost
C. Residual neural network
D. WaveNet

Answer: A

Explanation:
Generative adversarial networks (GANs) are a type of deep learning model used for generating synthetic data based on existing datasets. GANs consist of two
neural networks (a generator and a discriminator) that work together to create realistic data.
? Option A (Correct): "Generative adversarial network (GAN)": This is the correct
answer because GANs are specifically designed for generating synthetic data that closely resembles the real data they are trained on.
? Option B: "XGBoost" is a gradient boosting algorithm for classification and
regression tasks, not for generating synthetic data.
? Option C: "Residual neural network" is primarily used for improving the performance of deep networks, not for generating synthetic data.
? Option D: "WaveNet" is a model architecture designed for generating raw audio waveforms, not synthetic data in general.
AWS AI Practitioner References:
? GANs on AWS for Synthetic Data Generation: AWS supports the use of GANs for creating synthetic datasets, which can be crucial for applications like training
machine learning models in environments where real data is scarce or sensitive.

NEW QUESTION 55
An AI practitioner is building a model to generate images of humans in various professions. The AI practitioner discovered that the input data is biased and that
specific attributes affect the image generation and create bias in the model.
Which technique will solve the problem?

A. Data augmentation for imbalanced classes


B. Model monitoring for class distribution
C. Retrieval Augmented Generation (RAG)
D. Watermark detection for images

Answer: A

Explanation:
Data augmentation for imbalanced classes is the correct technique to address bias in input data affecting image generation.
? Data Augmentation for Imbalanced Classes:
? Why Option A is Correct:
? Why Other Options are Incorrect:

NEW QUESTION 60
A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon
S3 bucket.
The data is encrypted with Amazon S3 managed keys (SSE-S3).
The FM encounters a failure when attempting to access the S3 bucket data. Which solution will meet these requirements?

A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data withthe correct encryption key.
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.

Answer: A

Explanation:
Amazon Bedrock needs the appropriate IAM role with permission to access and decrypt data stored in Amazon S3. If the data is encrypted with Amazon S3
managed keys (SSE- S3), the role that Amazon Bedrock assumes must have the required permissions to access and decrypt the encrypted data.
? Option A (Correct): "Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key": This is the correct
solution as it ensures that the AI model can access the encrypted data securely without changing the encryption settings or compromising data security.
? Option B: "Set the access permissions for the S3 buckets to allow public access" is incorrect because it violates security best practices by exposing sensitive
data to the public.
? Option C: "Use prompt engineering techniques to tell the model to look for information in Amazon S3" is incorrect as it does not address the encryption and
permission issue.
? Option D: "Ensure that the S3 data does not contain sensitive information" is incorrect because it does not solve the access problem related to encryption.
AWS AI Practitioner References:
? Managing Access to Encrypted Data in AWS: AWS recommends using proper IAM roles and policies to control access to encrypted data stored in S3.

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

NEW QUESTION 62
......

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version AIF-C01 Questions & Answers shared by Certleader
https://www.certleader.com/AIF-C01-dumps.html (97 Q&As)

Thank You for Trying Our Product

* 100% Pass or Money Back


All our products come with a 90-day Money Back Guarantee.
* One year free update
You can enjoy free update one year. 24x7 online support.
* Trusted by Millions
We currently serve more than 30,000,000 customers.
* Shop Securely
All transactions are protected by VeriSign!

100% Pass Your AIF-C01 Exam with Our Prep Materials Via below:

https://www.certleader.com/AIF-C01-dumps.html

The Leader of IT Certification visit - https://www.certleader.com


Powered by TCPDF (www.tcpdf.org)

You might also like