AWS SysOps Question 3
Question 1:
Skipped
A multi-national retail company uses AWS Organizations to manage its users across
different divisions. Even though CloudTrail is enabled on the member AWS accounts,
managers have noticed that access issues for CloudTrail logs across different divisions
and AWS Regions are becoming a bottleneck in troubleshooting issues. They have
decided to use the organization trail to keep things simple.
What are the important points to remember when configuring an organization trail?
(Select two)
Member accounts will be able to see the Organization trail, but cannot modify
or delete it
(Correct)
There is nothing called Organization Trail. The master account can, however,
enable CloudTrail logging, to keep track of all activities across AWS accounts
Explanation
Correct option:
If you have created an organization in AWS Organizations, you can also create a trail
that will log all events for all AWS accounts in that organization. This is referred to as an
organization trail.
Member accounts will be able to see the organization trail, but cannot modify or delete
it - Organization trails must be created in the master account, and when specified as
applying to an organization, are automatically applied to all member accounts in the
organization. Member accounts will be able to see the organization trail, but cannot
modify or delete it. By default, member accounts will not have access to the log files for
the organization trail in the Amazon S3 bucket.
Organization trail:
via - https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html
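As a rough illustration, the organization's management (master) account could create such a trail with the AWS SDK for Python (boto3); the trail and bucket names below are placeholders, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it:

```python
import boto3

# Run with credentials from the organization's management (master) account.
cloudtrail = boto3.client("cloudtrail")

# Placeholder names; the S3 bucket must already exist and allow CloudTrail writes.
response = cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="example-org-trail-logs",
    IsMultiRegionTrail=True,       # capture events from all Regions
    IsOrganizationTrail=True,      # apply the trail to every member account
)

# Creating a trail does not start logging; that is a separate call.
cloudtrail.start_logging(Name=response["TrailARN"])
```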
Incorrect options:
There is nothing called Organization Trail. The master account can, however, enable
CloudTrail logging, to keep track of all activities across AWS accounts - This statement
is incorrect. AWS offers Organization Trail for easy management and monitoring.
Member accounts do not have access to the organization trail, neither do they have
access to the Amazon S3 bucket that logs the files - This statement is only partially
correct. Member accounts will be able to see the organization trail, but cannot modify or
delete it. By default, member accounts will not have access to the log files for the
organization trail in the Amazon S3 bucket.
By default, CloudTrail event log files are not encrypted - This is an incorrect statement.
By default, CloudTrail event log files are encrypted using Amazon S3 server-side
encryption (SSE).
References:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html
https://aws.amazon.com/about-aws/whats-new/2016/11/aws-cloudtrail-supports-s3-data-events/
https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/
Question 2:
Skipped
A banking service uses Amazon EC2 instances and Amazon RDS databases to run its
core business functionalities. The Chief Technology Officer (CTO) of the company has
requested granular OS-level metrics from the database service for benchmarking.
As a SysOps Administrator, which solution will you recommend for this requirement?
Enable Enhanced Monitoring for your RDS DB instance
(Correct)
Subscribe to CloudWatch metrics that track CPU utilization of the instances the RDS is hosted on
Explanation
Correct option:
Enable Enhanced Monitoring for your RDS DB instance - Amazon RDS provides metrics
in real time for the operating system (OS) that your DB instance runs on. You can view
the metrics for your DB instance using the console. Also, you can consume the
Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring
system of your choice.
By default, Enhanced Monitoring metrics are stored for 30 days in the CloudWatch Logs,
which are different from typical CloudWatch metrics. Enhanced Monitoring for RDS provides the following OS metrics:
1. Free Memory
2. Active Memory
3. Swap Free
4. Processes Running
5. File System Used
You can use these metrics to understand the environment's performance, and these
metrics are ingested by Amazon CloudWatch Logs as log entries. You can use
CloudWatch to create alarms based on metrics. These alarms run actions, and you can
publish these metrics from within your infrastructure, device, or application into
CloudWatch as a custom metric. By using Enhanced Monitoring and CloudWatch
together, you can automate tasks by creating a custom metric for the CloudWatch Logs
RDS ingested date from the Enhanced Monitoring metrics. Enhanced Monitoring
metrics are useful when you want to see how different processes or threads on a DB
instance use the CPU.
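A minimal boto3 sketch of turning on Enhanced Monitoring for an existing DB instance; the instance identifier and monitoring role ARN are placeholders, and the role must already grant RDS permission to publish to CloudWatch Logs:

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifiers; the IAM role must allow RDS to publish Enhanced
# Monitoring metrics to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="core-banking-db",
    MonitoringInterval=60,  # granularity in seconds: 1, 5, 10, 15, 30, or 60
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```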
Incorrect options:
Subscribe to CloudWatch metrics that track CPU utilization of the instances the RDS is
hosted on - CloudWatch gathers metrics about CPU utilization from the hypervisor for a
DB instance, and Enhanced Monitoring gathers its metrics from an agent on the
instance. As a result, you might find differences between the measurements, because
the hypervisor layer performs a small amount of work. The differences can be greater if
your DB instances use smaller instance classes because then there are likely more
virtual machines (VMs) that are managed by the hypervisor layer on a single physical
instance. Enhanced Monitoring metrics are useful when you want to see how different
processes or threads on a DB instance use the CPU.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/custom-cloudwatch-metrics-rds/
Question 3:
Skipped
The development team at your company wants to upload files to S3 buckets using the
SSE-KMS encryption mechanism. However, the team is receiving permission errors
while trying to push the objects over HTTP.
Which of the following headers should the team include in the request?
'x-amz-server-side-encryption': 'SSE-S3'
'x-amz-server-side-encryption': 'AES256'
'x-amz-server-side-encryption': 'SSE-KMS'
'x-amz-server-side-encryption': 'aws:kms'
(Correct)
Explanation
Correct option:
'x-amz-server-side-encryption': 'aws:kms'
Server-side encryption is the encryption of data at its destination by the application or
service that receives it. AWS Key Management Service (AWS KMS) is a service that
combines secure, highly available hardware and software to provide a key management
system scaled for the cloud. Amazon S3 uses AWS KMS customer master keys (CMKs)
to encrypt your Amazon S3 objects. AWS KMS encrypts only the object data. Any object
metadata is not encrypted.
If the request does not include the x-amz-server-side-encryption header, then the
request is denied.
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
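For illustration, this is how the header is supplied when uploading with boto3 (the SDK translates the ServerSideEncryption argument into the x-amz-server-side-encryption request header); the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object with SSE-KMS; boto3 maps ServerSideEncryption to the
# 'x-amz-server-side-encryption: aws:kms' request header.
s3.put_object(
    Bucket="example-bucket",          # placeholder bucket name
    Key="reports/report.csv",
    Body=b"example payload",
    ServerSideEncryption="aws:kms",   # use 'AES256' instead for SSE-S3
)
```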
Incorrect options:
'x-amz-server-side-encryption': 'SSE-S3' and 'x-amz-server-side-encryption': 'SSE-KMS' - These are not valid values for the x-amz-server-side-encryption header.
'x-amz-server-side-encryption': 'AES256' - This is a valid header value, but it requests Amazon S3-managed encryption (SSE-S3) rather than SSE-KMS.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
Question 4:
Skipped
Owing to lapses in security, a development team has deleted a Secrets Manager secret.
Now, when the team tried to create a new secret with the same name, they ended up
with an error - You can't create this secret because a secret with this name is
already scheduled for deletion . The secret has to be created with the same name to
avoid issues in their application.
How will you recreate the secret with the same name?
Use AWS Command Line Interface (AWS CLI) to permanently delete a secret
without any recovery window, run the DeleteSecret API call with
the ForceDeleteWithoutRecovery parameter
(Correct)
When you delete a secret, the Secrets Manager deprecates it with a seven-day
recovery window. It is not possible to create a new secret with the same name
for this duration
Use AWS Management Console to delete the key permanently. You will be
allowed to create a new key with the same name after the older one is
successfully deleted
Explanation
Correct option:
Use AWS Command Line Interface (AWS CLI) to permanently delete a secret without
any recovery window, run the DeleteSecret API call with
the ForceDeleteWithoutRecovery parameter
When you delete a secret, the Secrets Manager deprecates it with a seven-day recovery
window. This means that you can't recreate a secret using the same name using the
AWS Management Console until seven days have passed. You can permanently delete a
secret without any recovery window using the AWS Command Line Interface (AWS CLI).
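A hedged boto3 sketch of the same DeleteSecret call with ForceDeleteWithoutRecovery, followed by recreating the secret; the secret name and value are placeholders:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Permanently delete the old secret, skipping the recovery window.
secretsmanager.delete_secret(
    SecretId="prod/app/db-credentials",   # placeholder secret name
    ForceDeleteWithoutRecovery=True,
)

# Recreate the secret under the same name. Deletion is processed by the
# service, so a short wait may be needed before the name is free again.
secretsmanager.create_secret(
    Name="prod/app/db-credentials",
    SecretString='{"username": "admin", "password": "example-only"}',
)
```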
Incorrect options:
Use AWS Management Console to delete the key permanently. You will be allowed to
create a new key with the same name after the older one is successfully deleted -
When you delete a secret, Secrets Manager deprecates it with a seven-day recovery
window. This means that you can't recreate a secret using the same name using the
AWS Management Console until seven days have passed.
When you delete a secret, the Secrets Manager deprecates it with a seven-day
recovery window. It is not possible to create a new secret with the same name for this
duration - This statement is true, but the secret can be deleted from CLI, and hence this
option is incorrect.
The secret key deletion is an asynchronous process. There might be a short delay
before updates are received. Try again after a few minutes for successful completion - This is
a made-up option, given only as a distractor.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/delete-secrets-manager-secret/
Question 5:
Skipped
A data analytics company wants to seamlessly integrate its on-premises data center
with AWS cloud-based IT systems which would be critical to manage as well as scale-
up the complex planning and execution of every stage of its analytics workflows. As
part of a pilot program, the company wants to integrate data files from its on-premises
servers into AWS via an NFS interface.
Which of the following AWS services is the MOST efficient solution for the given use-case?
AWS Storage Gateway - File Gateway
(Correct)
Explanation
Correct option:
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises
access to virtually unlimited cloud storage. The service provides three different types of
gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly
connect on-premises applications to cloud storage, caching data locally for low-latency
access.
AWS Storage Gateway's file interface, or file gateway, offers you a seamless way to
connect to the cloud in order to store application data files and backup images as
durable objects on Amazon S3 cloud storage. File gateway offers SMB or NFS-based
access to data in Amazon S3 with local caching. As the company wants to integrate
data files from its analytical instruments into AWS via an NFS interface, therefore AWS
Storage Gateway - File Gateway is the correct answer.
Incorrect options:
AWS Storage Gateway - Volume Gateway - You can configure the AWS Storage
Gateway service as a Volume Gateway to present cloud-based iSCSI block storage
volumes to your on-premises applications. Volume Gateway does not support NFS
interface, so this option is not correct.
AWS Storage Gateway - Tape Gateway - AWS Storage Gateway - Tape Gateway allows
moving tape backups to the cloud. Tape Gateway does not support NFS interface, so
this option is not correct.
AWS Site-to-Site VPN - AWS Site-to-Site VPN enables you to securely connect your on-
premises network or branch office site to your Amazon Virtual Private Cloud (Amazon
VPC). You can securely extend your data center or branch office network to the cloud
with an AWS Site-to-Site VPN (Site-to-Site VPN) connection. It uses internet protocol
security (IPSec) communications to create encrypted VPN tunnels between two
locations. You cannot use AWS Site-to-Site VPN to integrate data files via the NFS
interface, so this option is not correct.
References:
https://aws.amazon.com/storagegateway/
https://aws.amazon.com/storagegateway/volume/
https://aws.amazon.com/storagegateway/file/
https://aws.amazon.com/storagegateway/vtl/
Question 6:
Skipped
A streaming services company has created an audio streaming application and it would
like their Australian users to be served by the company's Australian servers. Other users
around the globe should not be able to access the servers through DNS queries.
Which Route 53 routing policy will you use for this requirement?
Failover
Geolocation
(Correct)
Weighted
Latency
Explanation
Correct option:
Geolocation
Geolocation routing lets you choose the resources that serve your traffic based on the
geographic location of your users, meaning the location that DNS queries originate
from. For example, you might want all queries from Europe to be routed to an ELB load
balancer in the Frankfurt region. You can also use geolocation routing to restrict
distribution of content to only the locations in which you have distribution rights
You can create a default record that handles both queries from IP addresses that aren't
mapped to any location and queries that come from locations that you haven't created
geolocation records for. If you don't create a default record, Route 53 returns a "no
answer" response for queries from those locations.
via - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
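As an illustrative boto3 sketch, a geolocation record for Australia could be created as shown below; the hosted zone ID, record name, and IP address are placeholders, and no default ('*') record is created, so queries from other locations receive a "no answer" response:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000EXAMPLE",   # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {   # Queries originating from Australia resolve to the Australian servers.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "stream.example.com",
                    "Type": "A",
                    "SetIdentifier": "australia",
                    "GeoLocation": {"CountryCode": "AU"},
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
            # No default ('*') geolocation record is created, so Route 53
            # answers queries from every other location with "no answer".
        ]
    },
)
```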
Incorrect options:
Failover - Failover routing lets you route traffic to a resource when the resource is
healthy or to a different resource when the first resource is unhealthy.
Latency - If your application is hosted in multiple AWS Regions, you can improve
performance for your users by serving their requests from the AWS Region that provides
the lowest latency.
Weighted - Use this policy to route traffic to multiple resources in proportions that you
specify.
Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
Question 7:
Skipped
A developer is trying to access an Amazon S3 bucket for storing the images used by the
web application. The S3 bucket has public read access enabled on it. However, when
the developer tries to access the bucket, an error pops up - 403 Access Denied . The
confused developer has connected with you to know why he has no access to the public
S3 bucket.
What will you suggest to troubleshoot this issue?
Use the AWSSupport-TroubleshootS3PublicRead automation document on AWS Systems Manager to diagnose the issue
(Correct)
The resource owner which is the AWS account that created the S3 bucket, has
access to the bucket. This is an error in creation, delete the S3 bucket and re-
create it again
Explanation
Correct option:
Use the AWSSupport-TroubleshootS3PublicRead automation document on AWS Systems Manager to diagnose the issue
via - https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403-public-read/
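A sketch of starting that automation runbook with boto3; the bucket name is a placeholder, and the parameter name used here (S3BucketName) is an assumption that should be checked against the document's parameter schema:

```python
import boto3

ssm = boto3.client("ssm")

# Start the AWS-provided automation runbook that diagnoses public-read issues.
# The parameter name is an assumption; verify it against the document first.
execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-TroubleshootS3PublicRead",
    Parameters={"S3BucketName": ["example-public-assets-bucket"]},
)

print(execution["AutomationExecutionId"])
```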
Incorrect options:
Explicit deny statement in the bucket policy can cause forbidden-access errors. Check
the bucket policy of the S3 bucket
AWS Organizations service control policy doesn't allow access to Amazon S3 bucket
that the developer is trying to access. Service policy needs to be changed using AWS
Organizations
Either of these two options could be true. To know exactly what is causing the error,
AWS provides AWSSupport-TroubleshootS3PublicRead automation document on AWS
Systems Manager. This is the optimal way of troubleshooting the current issue.
The resource owner which is the AWS account that created the S3 bucket, has access
to the bucket. This is an error in creation, delete the S3 bucket and re-create it again -
It is not mentioned in the use-case if it is the resource owner trying to access the S3
bucket.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403-public-read/
Question 8:
Skipped
A multi-national retail company wants to explore a hybrid cloud environment with AWS
so that it can start leveraging AWS services for some of its daily workflows. The
development team at the company wants to establish a dedicated, encrypted, low
latency, and high throughput connection between its data center and AWS Cloud. The
team has set aside sufficient time to account for the operational overhead of
establishing this connection.
As a SysOps Administrator, which solution will you recommend for this requirement?
Use AWS Direct Connect to establish a connection between the data center and
AWS Cloud
Use AWS Direct Connect plus VPN to establish a connection between the data
center and AWS Cloud
(Correct)
Use site-to-site VPN to establish a connection between the data center and
AWS Cloud
Use VPC transit gateway to establish a connection between the data center and
AWS Cloud
Explanation
Correct option:
Use AWS Direct Connect plus VPN to establish a connection between the data center
and AWS Cloud
AWS Direct Connect is a cloud service solution that makes it easy to establish a
dedicated network connection from your premises to AWS. AWS Direct Connect lets
you establish a dedicated network connection between your network and one of the
AWS Direct Connect locations.
With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect
dedicated network connections with the Amazon VPC VPN. This combination provides
an IPsec-encrypted private connection that also reduces network costs, increases
bandwidth throughput, and provides a more consistent network experience than
internet-based VPN connections.
This solution combines the AWS managed benefits of the VPN solution with low
latency, increased bandwidth, more consistent benefits of the AWS Direct Connect
solution, and an end-to-end, secure IPsec connection. Therefore, AWS Direct Connect
plus VPN is the correct solution for this use-case.
via - https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
Incorrect options:
Use site-to-site VPN to establish a connection between the data center and AWS
Cloud - AWS Site-to-Site VPN enables you to securely connect your on-premises
network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). A
VPC VPN Connection utilizes IPSec to establish encrypted network connectivity
between your intranet and Amazon VPC over the Internet. VPN Connections are a good
solution if you have an immediate need, have low to modest bandwidth requirements,
and can tolerate the inherent variability in Internet-based connectivity.
However, Site-to-site VPN cannot provide low latency and high throughput connection,
therefore this option is ruled out.
Use VPC transit gateway to establish a connection between the data center and AWS
Cloud - A transit gateway is a network transit hub that you can use to interconnect your
virtual private clouds (VPC) and on-premises networks. A transit gateway by itself
cannot establish a low latency and high throughput connection between a data center
and AWS Cloud. Hence this option is incorrect.
Use AWS Direct Connect to establish a connection between the data center and AWS
Cloud - AWS Direct Connect by itself cannot provide an encrypted connection between a
data center and AWS Cloud, so this option is ruled out.
References:
https://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-plus-vpn-network-to-amazon.html
Question 9:
Skipped
Your application is hosted by a provider on yourapp.freehosting.com. You would like to
have your users access your application using www.yourdomain.com, which you own
and manage under Route 53.
Which Route 53 record type will you use to achieve this?
Create a CNAME record
(Correct)
Create an A record
Explanation
Correct option:
A CNAME record maps DNS queries for the name of the current record, such as
acme.example.com, to another domain (example.com or example.net) or subdomain
(acme.example.com or zenith.example.org).
CNAME records can be used to map one domain name to another. Although you should
keep in mind that the DNS protocol does not allow you to create a CNAME record for the
top node of a DNS namespace, also known as the zone apex. For example, if you
register the DNS name example.com, the zone apex is example.com. You cannot create
a CNAME record for example.com, but you can create CNAME records for
www.example.com, newproduct.example.com, and so on.
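For illustration, the CNAME record could be created with boto3 as follows; the hosted zone ID is a placeholder:

```python
import boto3

route53 = boto3.client("route53")

# Map www.yourdomain.com to the provider-hosted name with a CNAME record.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000EXAMPLE",   # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.yourdomain.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "yourapp.freehosting.com"}],
                },
            }
        ]
    },
)
```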
Incorrect options:
Create an Alias Record - Alias records let you route traffic to selected AWS resources,
such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic
from one record in a hosted zone to another record. 3rd party websites do not qualify
for these as we have no control over those. 'Alias record' cannot be used to map one
domain name to another.
Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
Question 10:
Skipped
A media company runs its business on Amazon EC2 instances backed by Amazon S3
storage. The company is apprehensive about the consistent increase in costs incurred
from S3 buckets. The company wants to make some decisions regarding data retention,
storage, and deletion based on S3 usage and cost reports. As a SysOps Administrator,
you have been hired to develop a solution to track the costs incurred by each S3 bucket
in the AWS account.
Configure AWS Budgets to see the cost against each S3 bucket in the AWS
account
Use AWS Trusted Advisor's rich set of best practice checks to configure cost
utilization for individual S3 buckets. Trusted Advisor also provides
recommendations based on the findings derived from analyzing your AWS
cloud architecture
Add a common tag to each bucket. Activate the tag as a cost allocation tag. Use
the AWS Cost Explorer to create a cost report for the tag
(Correct)
Use AWS Simple Monthly Calculator to check the cost against each S3 bucket in
your AWS account
Explanation
Correct option:
Add a common tag to each bucket. Activate the tag as a cost allocation tag. Use the
AWS Cost Explorer to create a cost report for the tag
Before you begin, your AWS Identity and Access Management (IAM) policy must have
permission to: Access the Billing and Cost Management console, Perform the actions
s3:GetBucketTagging and s3:PutBucketTagging.
Start by adding a common tag to each bucket. Activate the tag as a cost allocation tag.
Use the AWS Cost Explorer to create a cost report for the tag. After you create the cost
report, you can use it to review the cost of each bucket that has the cost allocation tag
that you created.
You can set up a daily or hourly AWS Cost and Usage report to get more Amazon S3
billing details. However, these reports won't show you who made requests to your
buckets. To get more information on certain Amazon S3 billing items, you must enable
logging ahead of time. Then, you'll have logs that contain Amazon S3 request details.
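A rough boto3 sketch of the tagging and reporting steps; the tag key, bucket name, and date range are placeholders, and the tag still has to be activated as a cost allocation tag in the Billing console before Cost Explorer can group by it:

```python
import boto3

s3 = boto3.client("s3")
ce = boto3.client("ce")   # Cost Explorer

# 1) Apply the common tag to a bucket (repeat for every bucket). The tag must
#    also be activated as a cost allocation tag in the Billing console.
s3.put_bucket_tagging(
    Bucket="example-media-bucket",
    Tagging={"TagSet": [{"Key": "s3-bucket-name", "Value": "example-media-bucket"}]},
)

# 2) Query Cost Explorer grouped by the tag to see cost per bucket.
report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "s3-bucket-name"}],
)
print(report["ResultsByTime"])
```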
Incorrect options:
Configure AWS Budgets to see the cost against each S3 bucket in the AWS account -
AWS Budgets gives you the ability to set custom budgets that alert you when your costs
or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use
AWS Budgets to set reservation utilization or coverage targets and receive alerts when
your metrics drop below the threshold you define. It cannot showcase the cost of each
S3 bucket.
Use AWS Simple Monthly Calculator to check the cost against each S3 bucket in your
AWS account - The AWS Simple Monthly Calculator is an easy-to-use online tool that
enables you to estimate the monthly cost of AWS services for your use case based on
your expected usage. This useful tool helps estimate the cost of resources, but the
current use case is not about estimations but being able to understand which bucket is
incurring the maximum cost.
Use AWS Trusted Advisor's rich set of best practice checks to configure cost
utilization for individual S3 buckets. Trusted Advisor also provides recommendations
based on the findings derived from analyzing your AWS cloud architecture - AWS
Trusted Advisor offers a rich set of best practice checks and recommendations across
five categories. For Amazon S3 buckets, Trusted Advisor offers the following checks:
1) Checks buckets in Amazon S3 that have open access permissions
2) Checks the logging configuration of Amazon S3 buckets - whether it is enabled and for what duration
3) Checks for Amazon S3 buckets that do not have versioning enabled
Trusted Advisor cannot, however, generate reports for costs incurred on S3 buckets.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-find-bucket-cost/
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/
https://aws.amazon.com/getting-started/hands-on/control-your-costs-free-tier-budgets/
Question 11:
Skipped
A systems administrator at a company is trying to create a digital signature for SSH'ing
into the Amazon EC2 instances.
Access keys
Key pairs
(Correct)
Explanation
Correct option:
Key pairs - Key pairs consist of a public key and a private key. You use the private key to
create a digital signature, and then AWS uses the corresponding public key to validate
the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront. AWS
does not provide key pairs for your account; you must create them. You can create
Amazon EC2 key pairs from the Amazon EC2 console, CLI, or API. Key pairs make a
robust combination for accessing an instance securely, a better option than using
passwords.
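A minimal boto3 example of creating a key pair and saving the private key locally; the key name and file path are placeholders:

```python
import os

import boto3

ec2 = boto3.client("ec2")

# Create a key pair; AWS keeps the public key, the private key is returned once.
key_pair = ec2.create_key_pair(KeyName="example-admin-key")   # placeholder name

# Persist the private key locally with restrictive permissions for SSH use.
with open("example-admin-key.pem", "w") as f:
    f.write(key_pair["KeyMaterial"])
os.chmod("example-admin-key.pem", 0o400)
```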
Incorrect options:
Access keys - Access keys consist of two parts: an access key ID and a secret access
key. You use access keys to sign programmatic requests that you make to AWS if you
use the AWS CLI, the AWS SDKs, or AWS API operations directly. These
credentials are for accessing AWS services programmatically and not for accessing the
EC2 instance directly.
Root user credentials - Root user credentials are the Email ID and password used to
create the AWS account. This user has full privileges on the account created and has
access to all services under his account. The root user can create access keys or key
pairs from his account. But, the root account credentials cannot directly be used to
access EC2 instances or create digital signatures.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
Question 12:
Skipped
Security and Compliance is a Shared Responsibility between AWS and the customer. As
part of this Shared Responsibility, the customer is also responsible for securing the
resources that he has procured under his AWS account.
For Amazon EC2 service, managing guest operating system (including updates
and security patches), application software and Security Groups is the
responsibility of the customer
(Correct)
AWS is responsible for patching and fixing flaws within the infrastructure, for
patching the guest Operating Systems and applications of the customers
AWS is responsible for training their customers and their employees as part of
Customer Specific training
Explanation
Correct option:
For Amazon EC2 service, managing guest operating system (including updates and
security patches), application software and Security Groups is the responsibility of the
customer
Customer responsibility will be determined by the AWS Cloud services that a customer
selects. This determines the amount of configuration work the customer must perform
as part of their security responsibilities. For example, a service such as Amazon Elastic
Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and,
as such, requires the customer to perform all of the necessary security configuration
and management tasks. Customers that deploy an Amazon EC2 instance are
responsible for the management of the guest operating system (including updates and
security patches), any application software or utilities installed by the customer on the
instances, and the configuration of the AWS-provided firewall (called a security group)
on each instance.
Incorrect options:
For Amazon S3 service, managing the operating system and platform is customer
responsibility - For abstracted services, such as Amazon S3, AWS operates the
infrastructure layer, the operating system, and platforms, and customers access the
endpoints to store and retrieve data. Customers are responsible for managing their data
(including encryption options), classifying their assets, and using IAM tools to apply the
appropriate permissions.
AWS is responsible for patching and fixing flaws within the infrastructure, for patching
the guest Operating Systems and applications of the customers - As part of Patch
management, AWS is responsible for patching and fixing flaws within the infrastructure,
but customers are responsible for patching their guest OS and applications.
AWS is responsible for training their customers and their employees as part of
Customer Specific training - As part of Awareness & Training, AWS trains AWS
employees, but a customer must train their own employees.
Reference:
https://aws.amazon.com/compliance/shared-responsibility-model/
Question 13:
Skipped
A telecommunications company runs its business on AWS Cloud with Amazon EC2
instances and Amazon S3 buckets. Of late, users are complaining of intermittently
receiving 500 Internal Error response when accessing the S3 bucket. The team is
looking at a way to track the frequency of the error and fix the issue.
Check if the S3 bucket has all the objects users are trying to access
Objects encrypted by AWS KMS are not accessible to users unless permission
for KMS key access is provided. Include logic to provide access to KMS keys
The users accessing the bucket do not have proper permissions to access the
objects present in S3. Check the application logic for the IAM Role being
assigned when accessing the S3 bucket
Enable Amazon CloudWatch metrics that include a metric for 5xx server
errors. Retrying generally fixes this error
(Correct)
Explanation
Correct option:
Enable Amazon CloudWatch metrics that include a metric for 5xx server errors.
Retrying generally fixes this error
If you get intermittent 500 Internal Error responses from Amazon S3, you can retry the
requests. These errors are rare and can occur during the normal use of the service. It's a
best practice to implement retry logic for requests that receive server or throttling errors
(5xx errors). For better flow control, use an exponential backoff algorithm. Each AWS
SDK uses automatic retry logic and an exponential backoff algorithm.
To monitor the number of 500 Internal Error responses that you're getting, you can
enable Amazon CloudWatch metrics. Amazon S3 CloudWatch request metrics include a
metric for 5xx server errors.
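As a sketch, the SDK's built-in retry configuration and the S3 request-metrics configuration could be set up as shown below; the bucket, key, and metrics-configuration names are placeholders:

```python
import boto3
from botocore.config import Config

# Up to 10 attempts using the SDK's standard retry mode, which applies
# exponential backoff with jitter to 5xx and throttling errors.
s3 = boto3.client("s3", config=Config(retries={"max_attempts": 10, "mode": "standard"}))

# Enable CloudWatch request metrics (which include the 5xxErrors metric) for the bucket.
s3.put_bucket_metrics_configuration(
    Bucket="example-bucket",
    Id="EntireBucket",
    MetricsConfiguration={"Id": "EntireBucket"},
)

# A request that hits an intermittent 500 Internal Error is retried automatically.
obj = s3.get_object(Bucket="example-bucket", Key="media/clip-001.mp4")
```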
Incorrect options:
Check if the S3 bucket has all the objects users are trying to access
The users accessing the bucket do not have proper permissions to access the objects
present in S3. Check the application logic for the IAM Role being assigned when
accessing the S3 bucket
Objects encrypted by AWS KMS are not accessible to users unless permission for KMS
key access is provided. Include logic to provide access to KMS keys
All the above options point to access issues, where access to a resource is denied. An Access Denied error returns a 403 status code (4XX series), not a 500 Internal Error.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudwatch-monitoring.html#s3-request-cloudwatch-metrics
https://aws.amazon.com/premiumsupport/knowledge-center/s3-intermittent-500-internal-error/
Question 14:
Skipped
A healthcare company has developed its flagship application on AWS Cloud with data
security requirements such that the encryption key must be stored in a custom
application running on-premises. The company wants to offload the data storage as
well as the encryption process to Amazon S3 but continue to use the existing encryption
keys.
Which of the following S3 encryption options allows the company to leverage Amazon
S3 for storing data with given constraints?
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
Server-Side Encryption with Customer-Provided Keys (SSE-C)
(Correct)
Client-Side Encryption with data encryption is done on the client-side before sending it to Amazon S3
Explanation
Correct option:
You have the following options for protecting data at rest in Amazon S3:
Client-Side Encryption – Encrypt data client-side and upload the encrypted data to
Amazon S3. In this case, you manage the encryption process, the encryption keys, and
related tools.
For the given use-case, the company wants to manage the encryption keys via its
custom application and let S3 manage the encryption, therefore you must use Server-
Side Encryption with Customer-Provided Keys (SSE-C).
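A hedged boto3 sketch of an SSE-C upload and download; the bucket, key, and the randomly generated key used here stand in for the company's real on-premises key material:

```python
import os

import boto3

s3 = boto3.client("s3")

# The 256-bit key would come from the company's on-premises key application;
# it is simulated here with random bytes for the sketch.
customer_key = os.urandom(32)

# Upload with SSE-C: S3 encrypts the object with the supplied key, then discards it.
s3.put_object(
    Bucket="example-health-bucket",
    Key="records/patient-123.json",
    Body=b"example payload",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,   # boto3 handles encoding and the key-MD5 header
)

# The same key must be supplied again to read the object back.
obj = s3.get_object(
    Bucket="example-health-bucket",
    Key="records/patient-123.json",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```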
Incorrect options:
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) - When you use
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is
encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a
master key that it regularly rotates. So this option is incorrect.
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key
Management Service (SSE-KMS) - Server-Side Encryption with Customer Master Keys
(CMKs) stored in AWS Key Management Service (SSE-KMS) is similar to SSE-S3. SSE-
KMS provides you with an audit trail that shows when your CMK was used and by
whom. Additionally, you can create and manage customer-managed CMKs or use AWS
managed CMKs that are unique to you, your service, and your Region.
Client-Side Encryption with data encryption is done on the client-side before sending it
to Amazon S3 - You can encrypt the data client-side and upload the encrypted data to
Amazon S3. In this case, you manage the encryption process, the encryption keys, and
related tools.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
Question 15:
Skipped
A financial services startup is building an interactive tool for personal finance needs.
The users would be required to capture their financial data via this tool. As this is
sensitive information, the backup of the user data must be kept encrypted in S3. The
startup does not want to provide its own encryption keys but still wants to maintain an
audit trail of when an encryption key was used and by whom.
Use SSE-KMS to encrypt the user data on S3
(Correct)
Use SSE-S3 to encrypt the user data on S3
Use SSE-C to encrypt the user data on S3
Use client-side encryption with client provided keys and then upload the encrypted user data to S3
Explanation
Correct option:
AWS Key Management Service (AWS KMS) is a service that combines secure, highly
available hardware and software to provide a key management system scaled for the
cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify
a customer-managed CMK that you have already created.
SSE-KMS provides you with an audit trail that shows when your CMK was used and by
whom. Therefore SSE-KMS is the correct solution for this use-case.
Server Side Encryption in S3:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
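For illustration, an upload using SSE-KMS with a customer-managed CMK might look like this in boto3; the bucket name and key ARN are placeholders, and each use of the CMK is then recorded in CloudTrail:

```python
import boto3

s3 = boto3.client("s3")

# Upload backups encrypted with a customer-managed CMK so every use of the
# key shows up in the CloudTrail audit trail.
s3.put_object(
    Bucket="example-finance-backups",
    Key="backups/user-data-2024-01-01.json",
    Body=b"example payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```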
Incorrect options:
Use SSE-S3 to encrypt the user data on S3 - When you use Server-Side Encryption with
Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key.
However, this option does not provide the ability to audit trail the usage of the
encryption keys.
Use SSE-C to encrypt the user data on S3 - With Server-Side Encryption with Customer-
Provided Keys (SSE-C), you manage the encryption keys and Amazon S3 manages the
encryption, as it writes to disks, and decryption when you access your objects. However,
this option does not provide the ability to audit trail the usage of the encryption keys.
Use client-side encryption with client provided keys and then upload the encrypted
user data to S3 - Using client-side encryption is ruled out as the startup does not want
to provide the encryption keys.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
Question 16:
Skipped
A retail company has a web application that is deployed on 10 EC2 instances running
behind an Application Load Balancer. You have configured your web application to
capture the IP address of the client making requests. When viewing the data captured
you notice that every IP address being captured is the same, which also happens to be
the IP address of the Application Load Balancer.
As a SysOps Administrator, what should you do to identify the true IP address of the
client?
Use the X-Forwarded-For request header to identify the IP address of the client
(Correct)
Modify the front-end of the website so that the users send their IP in the
requests
Explanation
Correct option:
The X-Forwarded-For request header helps you identify the IP address of a client when
you use an HTTP or HTTPS load balancer. Because load balancers intercept traffic
between clients and servers, your server access logs contain only the IP address of the
load balancer. To see the IP address of the client, use the X-Forwarded-For request
header. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-
For request header and passes the header to your server.
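A small illustrative helper (framework-agnostic, with a hypothetical headers dictionary) that extracts the client IP from the X-Forwarded-For header:

```python
def client_ip(headers: dict) -> str:
    """Return the originating client IP for a request received behind an ALB.

    The load balancer appends the client IP to X-Forwarded-For, so the
    left-most entry is the original caller (assuming no untrusted proxies
    prepend their own values).
    """
    forwarded_for = headers.get("X-Forwarded-For", "")
    if forwarded_for:
        return forwarded_for.split(",")[0].strip()
    # Fall back to the peer address, which behind an ALB is the ALB itself.
    return headers.get("Remote-Addr", "unknown")


# Example with headers as a web framework might expose them:
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.3.25"}))  # -> 203.0.113.7
```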
Incorrect options:
Modify the front-end of the website so that the users send their IP in the requests -
When a user makes a request the IP address is sent with the request to the server and
the load balancer intercepts it. There is no need to modify the application.
Look at the client's cookie - For this, we would need to modify the client-side logic and
server-side logic, which would not be efficient.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
Question 17:
Skipped
A healthcare company stores confidential data on an Amazon Simple Storage Service
(S3) bucket. New security compliance guidelines require that files be stored with server-
side encryption. The encryption used must be Advanced Encryption Standard (AES-256)
and the company does not want to manage S3 encryption keys.
Which S3 encryption option meets these requirements?
SSE-KMS
SSE-C
SSE-S3
(Correct)
Explanation
Correct option:
SSE-S3
Using Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is
encrypted with a unique key employing strong multi-factor encryption. As an additional
safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon
S3 server-side encryption uses one of the strongest block ciphers available, 256-bit
Advanced Encryption Standard (AES-256), to encrypt your data.
Incorrect options:
SSE-C - You manage the encryption keys and Amazon S3 manages the encryption as it
writes to disks and decryption when you access your objects.
Client-Side Encryption - You can encrypt data client-side and upload the encrypted data
to Amazon S3. In this case, you manage the encryption process, the encryption keys,
and related tools.
SSE-KMS - Similar to SSE-S3 and also provides you with an audit trail of when your key
was used and by whom. Additionally, you have the option to create and manage
encryption keys yourself.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Question 18:
Skipped
A video streaming solutions company wants to use AWS Cloudfront to distribute its
content only to its service subscribers.
As a SysOps Administrator, which of the following solutions would you suggest in order
to deliver restricted content to the subscribers? (Select two)
Require that your users access your private content by using special CloudFront signed URLs
(Correct)
Require that your users access your private content by using CloudFront signed cookies
(Correct)
Require HTTPS for communication between CloudFront and your S3 origin
Require HTTPS for communication between CloudFront and your custom origin
Forward HTTPS requests to the origin server by using the ECDSA or RSA ciphers
Explanation
Correct options:
Many companies that distribute content over the internet want to restrict access to
documents, business data, media streams, or content that is intended for selected
users, for example, users who have paid a fee.
To securely serve this private content by using CloudFront, you can do the following:
Require that your users access your private content by using special CloudFront signed
URLs or signed cookies.
A signed URL includes additional information, for example, expiration date and time,
that gives you more control over access to your content. So this is the correct option.
CloudFront signed cookies allow you to control who can access your content when you
don't want to change your current URLs or when you want to provide access to multiple
restricted files, for example, all of the files in the subscribers' area of a website. So this
is also a correct option.
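As a rough sketch, a signed URL can be generated with botocore's CloudFrontSigner and the cryptography package; the key-pair ID, private-key path, distribution domain, and object path are placeholders:

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key that matches the public key registered with
    # the CloudFront key group / trusted signer.
    with open("cloudfront-private-key.pem", "rb") as f:   # placeholder path
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


# Placeholder CloudFront public key ID and distribution domain.
signer = CloudFrontSigner("K2EXAMPLEKEYID", rsa_signer)

signed_url = signer.generate_presigned_url(
    "https://d1234example.cloudfront.net/episodes/episode-42.m3u8",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)   # hand this URL only to authenticated subscribers
```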
Incorrect options:
Require HTTPS for communication between CloudFront and your custom origin
Requiring HTTPS for communication between CloudFront and your custom origin (or S3
origin) only enables secure access to the underlying content. You cannot use HTTPS to
restrict access to your private content. So both these options are incorrect.
Forward HTTPS requests to the origin server by using the ECDSA or RSA ciphers - This
option is just added as a distractor. You cannot use HTTPS to restrict access to your
private content.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html
Question 19:
Skipped
A systems administrator at a company is working on a CloudFormation template to set
up resources. Resources will be defined using code and provisioned based on certain
conditions.
Outputs
Parameters
(Correct)
Conditions
Resources
Explanation
Correct option:
Parameters
Parameters enable you to input custom values to your CloudFormation template each
time you create or update a stack. Please see this note to understand how to define a
parameter in a template:
via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
The optional Conditions section contains statements that define the circumstances
under which entities are created or configured. For example, you can create a condition
and then associate it with a resource or output so that AWS CloudFormation only
creates the resource or output if the condition is true.
You might use conditions when you want to reuse a template that can create resources
in different contexts, such as a test environment versus a production environment. In
your template, you can add an EnvironmentType input parameter, which accepts either
prod or test as inputs. For the production environment, you might include Amazon EC2
instances with certain capabilities; however, for the test environment, you want to use
reduced capabilities to save money.
Conditions cannot be used within the Parameters section. After you define all your
conditions, you can associate them with resources and resource properties only in the
Resources and Outputs sections of a template.
via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
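For illustration, a parameter value such as EnvironmentType can be supplied when the stack is created with boto3; the stack name, template URL, and parameter name are placeholders that assume the template declares a matching parameter and conditions:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# The template (not shown) declares an EnvironmentType parameter and conditions
# such as one that checks for the value 'prod'. All names here are placeholders.
cloudformation.create_stack(
    StackName="analytics-stack-test",
    TemplateURL="https://example-bucket.s3.amazonaws.com/templates/analytics.yaml",
    Parameters=[
        {"ParameterKey": "EnvironmentType", "ParameterValue": "test"},
    ],
)
```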
Incorrect options:
Resources - Resources section describes the resources that you want to provision in
your AWS CloudFormation stacks. You can associate conditions with the resources that
you want to conditionally create.
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
Question 20:
Skipped
A social media company uses Amazon S3 to store the images uploaded by the users.
These images are kept encrypted in S3 by using AWS-KMS and the company manages
its own Customer Master Key (CMK) for encryption. A systems administrator
accidentally deleted the CMK a day ago, thereby rendering the user's photo data
unrecoverable. You have been contacted by the company to consult them on possible
solutions to this issue.
As a SysOps Administrator, which of the following steps would you recommend to solve
this issue?
The company should issue a notification on its web application informing the
users about the loss of their data
As the CMK was deleted a day ago, it must be in the 'pending deletion' status
and hence you can just cancel the CMK deletion and recover the key
(Correct)
Explanation
Correct option:
As the CMK was deleted a day ago, it must be in the 'pending deletion' status and
hence you can just cancel the CMK deletion and recover the key
AWS Key Management Service (KMS) makes it easy for you to create and manage
cryptographic keys and control their use across a wide range of AWS services and in
your applications. AWS KMS is a secure and resilient service that uses hardware
security modules that have been validated under FIPS 140-2.
Deleting a customer master key (CMK) in AWS Key Management Service (AWS KMS) is
destructive and potentially dangerous. Therefore, AWS KMS enforces a waiting period.
To delete a CMK in AWS KMS you schedule key deletion. You can set the waiting period
from a minimum of 7 days up to a maximum of 30 days. The default waiting period is
30 days. During the waiting period, the CMK status and key state is Pending deletion. To
recover the CMK, you can cancel key deletion before the waiting period ends. After the
waiting period ends you cannot cancel key deletion, and AWS KMS deletes the CMK.
via - https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
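A minimal boto3 sketch of recovering the key (the key ID is a placeholder); note that after cancellation the key is left in the Disabled state and must be re-enabled:

```python
import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder CMK key ID

# Cancel the scheduled deletion; the key then sits in the 'Disabled' state.
kms.cancel_key_deletion(KeyId=key_id)

# Re-enable the key so S3 can use it again to decrypt the images.
kms.enable_key(KeyId=key_id)
```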
Incorrect options:
The AWS root account user cannot recover CMK and the AWS support does not have
access to CMK via any backups. Both these options just serve as distractors.
The company should issue a notification on its web application informing the users
about the loss of their data - This option is not needed as the data can be recovered via
the cancel key deletion feature.
Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
Question 21:
Skipped
A bug in an application code has resulted in an EC2 instance's CPU utilization touching
almost 100 percent thereby freezing the instance. The instance needs a restart to work
normally once it hits this point. It will take a few weeks for the team to fix the issue. Till
the bug fix is deployed, you have been tasked to automate the instance restart at the
first sign of the instance becoming unresponsive.
As a SysOps Administrator, how will you configure a solution for this requirement?
Create a CloudWatch alarm for CPU Utilization of the Amazon EC2 instance,
with detailed monitoring enabled. Configure an action to restart the instance
when the alarm is triggered
(Correct)
Create a CloudWatch alarm for CPU Utilization of the Amazon EC2 instance,
with basic monitoring enabled. Configure an AWS Lambda function against the
alarm action. The Lambda function will restart the instance, automating the
process
Explanation
Correct option:
Create a CloudWatch alarm for CPU Utilization of the Amazon EC2 instance, with
detailed monitoring enabled. Configure an action to restart the instance when the
alarm is triggered
By default, your instance is enabled for basic monitoring. You can optionally enable
detailed monitoring. After you enable detailed monitoring, the Amazon EC2 console
displays monitoring graphs with a 1-minute period for the instance. In Basic monitoring,
data is available automatically in 5-minute periods at no charge. For Detailed
monitoring, data is available in 1-minute periods for an additional charge.
If you enable detailed monitoring, you are charged per metric that is sent to
CloudWatch. You are not charged for data storage.
You can enable detailed monitoring on an instance as you launch it or after the instance
is running or stopped. Enabling detailed monitoring on an instance does not affect the
monitoring of the EBS volumes attached to the instance.
Once the detailed monitoring is active, you can configure an action to restart the
instance when the alarm is triggered based on the CPUUtilization metric.
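A hedged boto3 sketch of enabling detailed monitoring and creating the alarm with the EC2 reboot action; the instance ID, Region, and thresholds are placeholders chosen for illustration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# Turn on detailed (1-minute) monitoring for the instance.
ec2.monitor_instances(InstanceIds=[instance_id])

# Reboot the instance when average CPU stays above 95% for three 1-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="app-bug-cpu-reboot",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=95,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],  # EC2 reboot action for this Region
)
```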
Incorrect options:
Create a custom code to send CPU utilization of the instance to CloudWatch metrics.
Configure an action to restart the instance when the alarm is triggered - Custom code
is not needed since CPU utilization is a pre-defined metric that CloudWatch can track
for Amazon EC2 instances.
Create a CloudWatch alarm for CPU Utilization of the Amazon EC2 instance, with basic
monitoring enabled. Configure an AWS Lambda function against the alarm action. The
Lambda function will restart the instance, automating the process - Instance restart is
a configurable item for an action when an alarm is triggered. This is a straightforward
way compared to using Lambda functions.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloud
watch.html
Question 22:
Skipped
A Systems Administrator has just configured an internet facing Load Balancer for traffic
distribution across the EC2 instances placed in different Availability Zones. The clients,
however, are unable to connect to the Load Balancer.
What could be causing this issue?
The target returned the error code of 200 indicating an error on the server
side
The target was incorrectly configured as a Lambda function and not an EC2
instance
A security group or network ACL is not allowing traffic from the client
(Correct)
Explanation
Correct option:
A security group or network ACL is not allowing traffic from the client
If the load balancer is not responding to client requests, check whether the internet-facing load balancer is attached to public subnets, whether its security group allows inbound traffic from the clients on the listener ports, and whether the network ACLs associated with its subnets allow both the inbound client traffic and the outbound responses.
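For example, if the load balancer's security group is the culprit, an inbound rule for the listener port could be added as sketched below; the security group ID and port are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow client traffic from the internet to reach the load balancer's listener port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group of the load balancer
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from clients"}],
        }
    ],
)
```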
Incorrect options:
It is an internal server error - HTTP 500 is the error code for internal server error,
generated by Load Balancer and sent back to the requesting client. But, in the given use
case, the client is unable to connect to the Load Balancer itself.
The target returned the error code of 200 indicating an error on the server side - By
default, the success code is 200. So, returning an HTTP 200 indicates a successful
message.
The target was incorrectly configured as a Lambda function and not an EC2 instance -
An ELB can be configured to have a Lambda Function as its target. This should not
result in any access issues or errors.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html
Question 23:
Skipped
A social media company is using AWS CloudFormation to manage its technology
infrastructure. It has created a template to provision a stack with a VPC and a subnet.
The output value of this subnet has to be used in another stack.
As a SysOps Administrator, which of the following options would you suggest to provide
this information to the other stack?
Use Fn::Transform
Use 'Expose' field in the Output section of the stack's template
Use 'Export' field in the Output section of the stack's template
(Correct)
Use Fn::ImportValue
Explanation
Correct option:
To share information between stacks, export a stack's output values. Other stacks that
are in the same AWS account and region can import the exported values.
To export a stack's output value, use the Export field in the Output section of the stack's
template. To import those values, use the Fn::ImportValue function in the template for
the other stacks.
Incorrect options:
Use 'Expose' field in the Output section of the stack's template - 'Expose' is a made-up
option, and only given as a distractor.
Use Fn::ImportValue - To import the values exported by another stack, we use the
Fn::ImportValue function in the template for the other stacks. This function is not useful
for the current scenario.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html
Question 24:
Skipped
A video streaming solutions provider is migrating to AWS Cloud infrastructure for
delivering its content to users across the world. The company wants to make sure that
the solution supports at least a million requests per second for its EC2 server farm.
As a SysOps Administrator, which type of Elastic Load Balancer would you recommend
as part of the solution stack?
Network Load Balancer
(Correct)
Application Load Balancer
Classic Load Balancer
Infrastructure Load Balancer
Explanation
Correct option:
Network Load Balancer is best suited for use-cases involving low latency and high
throughput workloads that involve scaling to millions of requests per second. Network
Load Balancer operates at the connection level (Layer 4), routing connections to targets
- Amazon EC2 instances, microservices, and containers – within Amazon Virtual Private
Cloud (Amazon VPC) based on IP protocol data.
Incorrect options:
Application Load Balancer - Application Load Balancer operates at the request level
(layer 7), routing traffic to targets – EC2 instances, containers, IP addresses, and
Lambda functions based on the content of the request. Ideal for advanced load
balancing of HTTP and HTTPS traffic, Application Load Balancer provides advanced
request routing targeted at the delivery of modern application architectures, including
microservices and container-based applications.
Application Load Balancer is not a good fit for the low latency and high throughput
scenario mentioned in the given use-case.
Classic Load Balancer - Classic Load Balancer provides basic load balancing across
multiple Amazon EC2 instances and operates at both the request level and connection
level. Classic Load Balancer is intended for applications that were built within the EC2-
Classic network. Classic Load Balancer is not a good fit for the low latency and high
throughput scenario mentioned in the given use-case.
Infrastructure Load Balancer - There is no such thing as Infrastructure Load Balancer
and this option just acts as a distractor.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Question 25:
Skipped
A startup is looking at moving their web application to AWS Cloud. The database will be
on Amazon RDS and it should not be accessible to the public. The application needs to
remain connected to the database for the application to work. Also, the RDS instance
will need access to the internet to download patches every month.
As a SysOps Administrator, how will you configure a solution for this requirement?
Host the application servers in the public subnet and database in the private
subnet of the VPC. The public subnet will connect to the internet using an
Internet Gateway configured with the VPC. The private subnet can connect to
the internet if they are configured using IPv6 protocol
Host the application servers in the public subnet and database in the private
subnet of the VPC. The public subnet will connect to the internet using an
Internet Gateway configured with the VPC. Use VPC-peering between the
private and public subnets to open internet access for the database in private
subnet
Host the application servers in the public subnet of the VPC and database in
the private subnet. The public subnet will connect to the internet using an
Internet Gateway configured with the VPC. Database in the private subnet will
use Network Address Translation (NAT) gateway, present in the public subnet,
to connect to internet
(Correct)
Host the application servers in the public subnet and database in the private
subnet of the VPC. Configure Network Address Translation (NAT) gateway to
provide access to the internet for both the subnets. The route table of both the
subnets will have an entry to NAT gateway
Explanation
Correct option:
Host the application servers in the public subnet of the VPC and database in the
private subnet. The public subnet will connect to the internet using an Internet
Gateway configured with the VPC. Database in the private subnet will use Network
Address Translation (NAT) gateway, present in the public subnet, to connect to
internet
For a multi-tier website, with the application servers in a public subnet and the database
servers in a private subnet, you can set up security and routing so that the application
servers can communicate with the database servers.
The instances in the public subnet can send outbound traffic directly to the Internet,
whereas the instances in the private subnet can't. Instead, the instances in the private
subnet can access the Internet by using a network address translation (NAT) gateway
that resides in the public subnet. The database servers can connect to the Internet for
software updates using the NAT gateway, but the Internet cannot establish connections
to the database servers.
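A rough boto3 sketch of wiring this up; the subnet, Elastic IP allocation, and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in the PUBLIC subnet, using an Elastic IP allocation.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111public000",        # placeholder public subnet
    AllocationId="eipalloc-0123456789abcdef0",  # placeholder Elastic IP allocation
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# Point the PRIVATE subnet's route table at the NAT gateway for outbound internet.
ec2.create_route(
    RouteTableId="rtb-0bbb2222private000",      # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```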
Incorrect options:
Host the application servers in the public subnet and database in the private subnet of
the VPC. Configure Network Address Translation (NAT) gateway to provide access to
the internet for both the subnets. The route table of both the subnets will have an entry
to NAT gateway - NAT gateway is needed for instances in the private subnet to connect
to the internet. A NAT gateway is not used for public subnets, which already have a route to the Internet Gateway.
Host the application servers in the public subnet and database in the private subnet of
the VPC. The public subnet will connect to the internet using an Internet Gateway
configured with the VPC. Use VPC-peering between the private and public subnets to
open internet access for the database in private subnet - A VPC peering connection is a
networking connection between two VPCs that enables you to route traffic between
them using private IPv4 addresses or IPv6 addresses. It is a communication channel
between VPCs, not between subnets of a VPC.
Host the application servers in the public subnet and database in the private subnet of
the VPC. The public subnet will connect to the internet using an Internet Gateway
configured with the VPC. The private subnet can connect to the internet if it is
configured using the IPv6 protocol - IPv6, like IPv4, is an internet protocol used for
communication over the internet. IPv6 by itself does not provide internet access; if your
instances use IPv6 for communication, you need to configure an egress-only internet
gateway to connect to the internet.
Reference:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
Question 26:
Skipped
An organization that started as a single AWS account, gradually moved to a multi-
account setup. The organization also has multiple AWS environments in each account,
that were being managed at the account level. Backups are a big part of this
management task. The organization is looking at moving to a centralized backup
management process that consolidates and automates Cross-Region backup tasks
across AWS accounts.
Which of the solutions below is the right choice for this requirement?
Create a backup plan in AWS Backup. Assign tags to resources based on the
environment ( Production, Development, Testing). Create one backup policy
for production environments and one backup policy for non-production
environments. Schedule the backup plan based on the organization's backup
policies
(Correct)
Explanation
Correct option:
Create a backup plan in AWS Backup. Assign tags to resources based on the
environment ( Production, Development, Testing). Create one backup policy for
production environments and one backup policy for non-production environments.
Schedule the backup plan based on the organization's backup policies
AWS Backup is a fully managed and cost-effective backup service that simplifies and
automates data backup across AWS services including Amazon EBS, Amazon EC2,
Amazon RDS, Amazon Aurora, Amazon DynamoDB, Amazon EFS, and AWS Storage
Gateway. In addition, AWS Backup leverages AWS Organizations to implement and
maintain a central view of backup policy across resources in a multi-account AWS
environment. Customers simply tag and associate their AWS resources with backup
policies managed by AWS Backup for Cross-Region data replication.
The blog post linked in the References below shows how to centrally manage backup tasks
across AWS accounts in your organization by deploying backup policies with AWS Backup.
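A minimal boto3 sketch of this approach is shown below; the vault name, IAM role ARN, destination Region, and tag values are assumptions rather than values from the question.

import boto3

backup = boto3.client("backup")

# Backup plan with a daily rule that also copies recovery points to another Region
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "production-plan",
        "Rules": [{
            "RuleName": "daily-with-cross-region-copy",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:Default"
            }],
        }],
    }
)

# Select resources by tag so all production workloads fall under this plan
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "production-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "Environment",
            "ConditionValue": "Production",
        }],
    },
)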
Incorrect options:
AWS Systems Manager Maintenance Windows let you define a schedule for when to
perform potentially disruptive actions on your instances such as patching an operating
system, updating drivers, or installing software or patches. Although a useful service, it
is not suited for the given requirements.
Use Amazon EventBridge to create a workflow for scheduled backup of all AWS
resources under an account. Amazon S3 lifecycle policies, Amazon EC2 instance
backups, and Amazon RDS backups can be used to create the events for the
EventBridge. The same workflow can be scheduled to work on production and non-
production environments, based on the tags created - Amazon EventBridge is a
serverless event bus that makes it easy to connect applications together using data
from your own applications, integrated Software-as-a-Service (SaaS) applications, and
AWS services. It is possible to build a backup solution using EventBridge, but it will not
be an optimized one, since AWS offers services with better features for centrally
managing backups.
Use Amazon Data Lifecycle Manager to manage the creation and deletion of all
the AWS resources under an account. Tag all the resources that need to be backed up
and use lifecycle policies to customize the backup management to cater to the needs
of the organization - DLM provides a simple way to manage the lifecycle of EBS
resources, such as volume snapshots. You should use DLM when you want to automate
the creation, retention, and deletion of EBS snapshots.
References:
https://aws.amazon.com/blogs/storage/centralized-cross-account-management-with-
cross-region-copy-using-aws-backup/
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-
maintenance.html
https://aws.amazon.com/eventbridge/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
Question 27:
Skipped
An e-commerce company has established a Direct Connect connection between AWS
Cloud and their on-premises infrastructure. The development team needs to access the
Amazon S3 bucket present in their AWS account to pull the customer data for an
application hosted on the on-premises infrastructure.
Create a VPC interface endpoint for the S3 bucket you need to access. Then use
the private virtual interface (VIF) using Direct Connect to access the bucket
Create a VPC gateway endpoint for the S3 bucket you need to access. Then use
the private virtual interface (VIF) using Direct Connect to access the bucket
Directly access the S3 bucket through a private virtual interface (VIF) using
Direct Connect
Create a dedicated or hosted connection. Establish a cross-network
connection and then create a public virtual interface for your connection.
Configure an end router for use with the public virtual interface
(Correct)
Explanation
Correct option:
It's not possible to directly access an S3 bucket through a private virtual interface (VIF)
using Direct Connect. This is true even if you have an Amazon Virtual Private Cloud
(Amazon VPC) endpoint for Amazon S3 in your VPC because VPC endpoint connections
can't extend outside of a VPC. Additionally, Amazon S3 resolves to public IP addresses,
even if you enable a VPC endpoint for Amazon S3.
However, you can establish access to Amazon S3 over Direct Connect by creating a public
virtual interface on a dedicated or hosted connection (this configuration doesn't require a
VPC endpoint for Amazon S3, because traffic doesn't traverse the VPC).
After the BGP is up and established, the Direct Connect router advertises all global
public IP prefixes, including Amazon S3 prefixes. Traffic heading to Amazon S3 is
routed through the Direct Connect public virtual interface through a private network
connection between AWS and your data center or corporate network.
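If this were scripted, creating the public virtual interface on an existing connection might look roughly like the boto3 sketch below; the connection ID, VLAN, ASN, and advertised prefix are placeholders, and the exact parameters depend on your Direct Connect setup.

import boto3

dx = boto3.client("directconnect")

# Create a public VIF on an existing dedicated or hosted connection (placeholder values)
dx.create_public_virtual_interface(
    connectionId="dxcon-xxxxxxxx",
    newPublicVirtualInterface={
        "virtualInterfaceName": "public-vif-for-s3",
        "vlan": 101,
        "asn": 65000,
        "routeFilterPrefixes": [{"cidr": "203.0.113.0/30"}],
    },
)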
Incorrect options:
Directly access the S3 bucket through a private virtual interface (VIF) using Direct
Connect - Private virtual interface allows access to an Amazon VPC using private IP
addresses. It's not possible to directly access an S3 bucket through a private virtual
interface (VIF) using Direct Connect.
Create a VPC gateway endpoint for the S3 bucket you need to access. Then use
private virtual interface (VIF) using Direct Connect to access the bucket - VPC endpoint
connections can't extend outside of a VPC. Additionally, Amazon S3 resolves to public
IP addresses, even if you enable a VPC endpoint for Amazon S3.
Create a VPC interface endpoint for the S3 bucket you need to access. Then use
private virtual interface (VIF) using Direct Connect to access the bucket - VPC interface
endpoint is not used for accessing Amazon S3 buckets, we need to use VPC gateway
endpoint. As discussed above, VPC endpoint connections can't extend outside of a
VPC. Additionally, Amazon S3 resolves to public IP addresses, even if you enable a VPC
endpoint for Amazon S3.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-direct-
connect/
Question 28:
Skipped
An analytics company generates reports for various client applications, some of which
have critical data. As per the company's compliance guidelines, data has to be
encrypted during data exchange, for all channels of communication. An Amazon S3
bucket is configured as a website endpoint and this is now being added as a custom
origin for CloudFront.
How will you secure this channel, as per the company's requirements?
Configure CloudFront to mandate viewers to use HTTPS to request objects from
S3. CloudFront and S3 will use HTTP to communicate with each other
(Correct)
Configure CloudFront to mandate viewers to use HTTPS to request objects
from S3. Configure the S3 bucket to support HTTPS communication only. This will
force CloudFront to use HTTPS for communication between CloudFront and S3
Explanation
Correct option:
Configure CloudFront to mandate viewers to use HTTPS to request objects from S3.
CloudFront and S3 will use HTTP to communicate with each other
via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-
https-cloudfront-to-s3-origin.html
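For illustration, the two settings that matter are sketched below as a fragment of a CloudFront DistributionConfig; the origin domain name is a placeholder and this is not a complete distribution configuration.

# Fragment of a CloudFront DistributionConfig illustrating the relevant settings
origin_and_behavior = {
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-website-origin",
        "DomainName": "example-bucket.s3-website-us-east-1.amazonaws.com",
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            # S3 website endpoints only speak HTTP, so CloudFront reaches the origin over HTTP
            "OriginProtocolPolicy": "http-only",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-website-origin",
        # Viewers must use HTTPS (or are redirected to it) when requesting objects
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}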
Incorrect options:
Configure CloudFront to mandate viewers to use HTTPS to request objects from
S3. Configure the S3 bucket to support HTTPS communication only. This will force
CloudFront to use HTTPS for communication between CloudFront and S3 - As
discussed above, HTTPS between CloudFront and Amazon S3 is not supported when
the S3 bucket is configured as a website endpoint.
Communication between CloudFront and Amazon S3 is always on HTTP protocol since
the network used for communication is internal to AWS and is inherently secure -
When your origin is an Amazon S3 bucket, your options for using HTTPS for
communications with CloudFront depend on how you're using the bucket. If your
Amazon S3 bucket is configured as a website endpoint, you can't configure CloudFront
to use HTTPS to communicate with your origin.
CloudFront always forwards requests to S3 by using the protocol that viewers used to
submit the requests. So, we only need to configure CloudFront to mandate the use of
HTTPS for users - This option has been added as a distractor. As mentioned earlier, if
your Amazon S3 bucket is configured as a website endpoint, you can't configure
CloudFront to use HTTPS while communicating with S3.
Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-
cloudfront-to-s3-origin.html
Question 29:
Skipped
A media company uses S3 to aggregate the raw video footage from its reporting teams
across the US. The company has recently expanded into new geographies in Europe and
Australia. The technical teams at the overseas branch offices have reported huge
delays in uploading large video files to the destination S3 bucket.
Which of the following are the MOST cost-effective options to improve the file upload
speed into S3? (Select two)
Use AWS Global Accelerator for faster file uploads into the destination S3
bucket
Use Amazon S3 Transfer Acceleration to enable faster file uploads into the
destination S3 bucket
(Correct)
Use multipart uploads for faster file uploads into the destination S3 bucket
(Correct)
Create multiple AWS direct connect connections between the AWS Cloud and
branch offices in Europe and Australia. Use the direct connect connections for
faster file uploads into S3
Create multiple site-to-site VPN connections between the AWS Cloud and
branch offices in Europe and Australia. Use these VPN connections for faster
file uploads into S3
Explanation
Correct options:
Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination
S3 bucket - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers
of files over long distances between your client and an S3 bucket. Transfer Acceleration
takes advantage of Amazon CloudFront’s globally distributed edge locations. As the
data arrives at an edge location, data is routed to Amazon S3 over an optimized network
path.
Use multipart uploads for faster file uploads into the destination S3 bucket - Multipart
upload allows you to upload a single object as a set of parts. Each part is a contiguous
portion of the object's data. You can upload these object parts independently and in any
order. If transmission of any part fails, you can retransmit that part without affecting
other parts. After all parts of your object are uploaded, Amazon S3 assembles these
parts and creates the object. In general, when your object size reaches 100 MB, you
should consider using multipart uploads instead of uploading the object in a single
operation. Multipart upload provides improved throughput, therefore it facilitates faster
file uploads.
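A minimal boto3 sketch of both options, assuming a placeholder bucket name and file path:

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Enable Transfer Acceleration on the destination bucket (one-time setup)
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="example-video-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint, splitting files over 100 MB into multipart uploads
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file(
    Filename="raw-footage.mp4",
    Bucket="example-video-bucket",
    Key="raw/raw-footage.mp4",
    Config=TransferConfig(multipart_threshold=100 * 1024 * 1024, max_concurrency=8),
)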
Incorrect options:
Create multiple AWS direct connect connections between the AWS Cloud and branch
offices in Europe and Australia. Use the direct connect connections for faster file
uploads into S3 - AWS Direct Connect is a cloud service solution that makes it easy to
establish a dedicated network connection from your premises to AWS. AWS Direct
Connect lets you establish a dedicated network connection between your network and
one of the AWS Direct Connect locations.
Direct connect takes significant time (several months) to be provisioned and is an
overkill for the given use-case.
Create multiple site-to-site VPN connections between the AWS Cloud and branch
offices in Europe and Australia. Use these VPN connections for faster file uploads into
S3 - AWS Site-to-Site VPN enables you to securely connect your on-premises network or
branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can
securely extend your data center or branch office network to the cloud with an AWS
Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish
encrypted network connectivity between your intranet and Amazon VPC over the
Internet.
VPN Connections are a good solution if you have low to modest bandwidth
requirements and can tolerate the inherent variability in Internet-based connectivity.
Site-to-site VPN will not help in accelerating the file transfer speeds into S3 for the given
use-case.
Use AWS Global Accelerator for faster file uploads into the destination S3 bucket -
AWS Global Accelerator is a service that improves the availability and performance of
your applications with local or global users. It provides static IP addresses that act as a
fixed entry point to your application endpoints in a single or multiple AWS Regions, such
as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances.
AWS Global Accelerator will not help in accelerating the file transfer speeds into S3 for
the given use-case.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Question 30:
Skipped
An e-commerce company has used Aurora Serverless MySQL compatible DB clusters
for deploying a new application to understand its capacity needs. Based on the scaling
actions of Aurora, the company will decide on the database requirements for deploying
the new application. In this context, the company wants to audit the database activity,
collect and publish the logs generated by Aurora Serverless to Amazon CloudWatch.
For MySQL-compatible DB clusters, you can enable the slow query log, general
log, or audit logs to get a view of the database activity
(Correct)
You can view the logs directly from the Amazon Relational Database Service
(Amazon RDS) console
Aurora Serverless cluster is integrated with Amazon CloudWatch and logs are
sent automatically
Explanation
Correct option:
For MySQL-compatible DB clusters, you can enable the slow query log, general log, or
audit logs to get a view of the database activity
To enable logs, first modify the cluster parameter groups for an Aurora serverless
cluster. For MySQL-compatible DB clusters, you can enable the slow query log, general
log, or audit logs.
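For illustration, the parameter group change could be scripted as below; the parameter group name is a placeholder, and the audit-related parameter names vary with the Aurora MySQL version, so treat them as assumptions.

import boto3

rds = boto3.client("rds")

# Enable slow query, general, and audit logs in the cluster parameter group
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-serverless-custom-params",
    Parameters=[
        {"ParameterName": "slow_query_log", "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "general_log", "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_logging", "ParameterValue": "1", "ApplyMethod": "immediate"},
    ],
)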
Incorrect options:
You can view the logs directly from the Amazon Relational Database Service (Amazon
RDS) console - Because there isn't a direct DB instance to access and host the log files,
you can't view the logs directly from the Amazon Relational Database Service (Amazon
RDS) console.
Aurora Serverless cluster is integrated with Amazon CloudWatch and logs are sent
automatically - Aurora Serverless will automatically publish the logs to CloudWatch if
Aurora cluster is configured as mentioned above. They are not enabled by default.
Aurora Serverless connects to a proxy fleet of DB instances and hence you cannot see
the log files. You can connect with your AWS support contact to get help on this
requirement - As discussed above, there isn't a direct DB instance to access and host
the log files. However, Aurora can be configured as above, for MYSQL to push logs to
CloudWatch from where they can be accessed and analyzed.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-serverless-logs-
enable-view/
Question 31:
Skipped
A company is moving their on-premises technology infrastructure to AWS Cloud.
Compliance rules and regulatory guidelines mandate the company to use its own
software that needs socket level configurations. As the company is new to AWS Cloud,
they have reached out to you for guidance on this requirement.
As an AWS Certified SysOps Administrator, which option will you suggest for the given
requirement?
Opt for On-Demand instances that are highly available and require no prior
planning
Opt for Amazon EC2 Dedicated Host
(Correct)
Opt for Reserved Instances that allow you to plan and help install the
necessary software
Explanation
Correct option:
Opt for Amazon EC2 Dedicated Host - An Amazon EC2 Dedicated Host is a physical server
with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use
your existing per-socket, per-core, or per-VM software licenses, including Windows Server,
Microsoft SQL Server, and SUSE Linux Enterprise Server. Hence, it is the right choice for
the current requirement.
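A minimal boto3 sketch of allocating a Dedicated Host and launching an instance onto it; the AMI ID, Availability Zone, and instance type are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Allocate a Dedicated Host for the required instance family
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="c5.large",
    Quantity=1,
)
host_id = host["HostIds"][0]

# Launch the licensed workload onto that specific host
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)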
Incorrect options:
Opt for Amazon EC2 Dedicated Instance - Dedicated Instances are Amazon EC2
instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a
single customer. Dedicated Instances that belong to different AWS accounts are
physically isolated at a hardware level, even if those accounts are linked to a single
payer account. However, Dedicated Instances may share hardware with other instances
from the same AWS account that are not Dedicated Instances.
Opt for On-Demand instances that are highly available and require no prior planning
Opt for Reserved Instances that allow you to plan and help install the necessary
software
You cannot install your own software that needs socket level programming on On-
Demand or Reserved Instances.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-
overview.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html
Question 32:
Skipped
An e-commerce company runs their database workloads on Provisioned IOPS SSD (io1)
volumes. Which of the following is NOT a valid configuration for these volumes?
100 GiB size volume with 7500 IOPS
(Correct)
Explanation
Correct option:
100 GiB size volume with 7500 IOPS - This is an invalid configuration. Provisioned IOPS
SSD (io1) volumes allow you to specify a consistent IOPS rate when you create the volume,
and Amazon EBS delivers the provisioned performance 99.9 percent of the time. An io1
volume can range in size from 4 GiB to 16 TiB, and the maximum ratio of provisioned IOPS
to the requested volume size (in GiB) is 50:1. So, for a 100 GiB volume, the maximum
possible value is 100*50 = 5,000 IOPS, which makes 7,500 IOPS an invalid request (see the
short sketch after the incorrect options below).
Incorrect options:
100 GiB size volume with 1000 IOPS - As explained above, up to 5000 IOPS is a valid
configuration for the given use-case.
100 GiB size volume with 5000 IOPS - As explained above, up to 5000 IOPS is a valid
configuration for the given use-case.
100 GiB size volume with 3000 IOPS - As explained above, up to 5000 IOPS is a valid
configuration for the given use-case.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
Question 33:
Skipped
A junior developer is tasked with creating necessary configurations for AWS
CloudFormation that is extensively used in a project. After declaring the necessary
stack policy, the developer realized that the users still do not have access to stack
resources. The stack policy created by the developer looks like so:
{
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "*"
    },
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "LogicalResourceId/ProductionDatabase"
    }
  ]
}
Why are the users unable to access the stack resources even after giving access
permissions to all?
Stack policies are associated with a particular IAM role or an IAM user. Hence,
they only work for the users you have explicitly attached the policy to
The stack policy is invalid and hence the users are not granted any
permissions. The developer needs to fix the syntactical errors in the policy
A stack policy applies only during stack updates, it doesn't provide access
controls. The developer needs to provide access through IAM policies
(Correct)
Explanation
Correct option:
A stack policy applies only during stack updates, it doesn't provide access controls.
The developer needs to provide access through IAM policies - When you create a stack,
all update actions are allowed on all resources. By default, anyone with stack update
permissions can update all of the resources in the stack. You can prevent stack
resources from being unintentionally updated or deleted during a stack update by using
a stack policy. A stack policy is a JSON document that defines the update actions that
can be performed on designated resources.
After you set a stack policy, all of the resources in the stack are protected by default. To
allow updates on specific resources, you specify an explicit Allow statement for those
resources in your stack policy. You can define only one stack policy per stack, but you
can protect multiple resources within a single policy.
A stack policy applies only during stack updates. It doesn't provide access controls like
an AWS Identity and Access Management (IAM) policy. Use a stack policy only as a fail-
safe mechanism to prevent accidental updates to specific stack resources. To control
access to AWS resources or actions, use IAM.
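To make the separation of concerns concrete, here is a sketch that applies the stack policy shown in the question and leaves user access to IAM; the stack name is a placeholder.

import boto3, json

cfn = boto3.client("cloudformation")

# The stack policy only governs which resources a stack UPDATE may touch
cfn.set_stack_policy(
    StackName="my-app-stack",
    StackPolicyBody=json.dumps({
        "Statement": [
            {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
            {"Effect": "Deny", "Action": "Update:*", "Principal": "*",
             "Resource": "LogicalResourceId/ProductionDatabase"},
        ]
    }),
)
# Access for users to view or update the stack itself is granted separately via IAM,
# for example an IAM policy allowing cloudformation:DescribeStacks and cloudformation:UpdateStack.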
Incorrect options:
The stack policy is invalid and hence the users are not granted any permissions. The
developer needs to fix the syntactical errors in the policy - This statement is incorrect
and given only as a distractor.
Stack policies do not allow wildcard character value ( * ) for the Principal element of
the policy - The Principal element specifies the entity that the policy applies to. This
element is required while creating a policy but supports only the wildcard (*), which
means that the policy applies to all principals.
Stack policies are associated with a particular IAM role or an IAM user. Hence, they
only work for the users you have explicitly attached the policy to - A stack policy
applies to all AWS CloudFormation users who attempt to update the stack. You can't
associate different stack policies with different users.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-
resources.html
Question 34:
Skipped
A company wants to migrate a part of its on-premises infrastructure to AWS Cloud. As a
starting point, the company is looking at moving their daily workflow files to AWS Cloud,
such that the files are accessible from the on-premises systems as well as AWS Cloud.
To reduce the management overhead, the company wants a fully managed service.
File Gateway of AWS Storage Gateway
(Correct)
Explanation
Correct option:
File Gateway of AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud
storage service that gives you on-premises access to virtually unlimited cloud storage.
Storage Gateway provides a standard set of storage protocols such as iSCSI, SMB, and
NFS, which allow you to use AWS storage without rewriting your existing applications. It
provides low-latency performance by caching frequently accessed data on-premises,
while storing data securely and durably in Amazon cloud storage services. Storage
Gateway optimizes data transfer to AWS by sending only changed data and
compressing data.
File Gateway presents a file-based interface to Amazon S3, which appears as a network
file share. It enables you to store and retrieve Amazon S3 objects through standard file
storage protocols. File Gateway allows your existing file-based applications or devices
to use secure and durable cloud storage without needing to be modified. With S3 File
Gateway, your configured S3 buckets will be available as Network File System (NFS)
mount points or Server Message Block (SMB) file shares. Your applications read and
write files and directories over NFS or SMB, interfacing to the gateway as a file server. In
turn, the gateway translates these file operations into object requests on your S3
buckets.
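Once a File Gateway is activated, exposing an S3 bucket as an NFS share can be scripted roughly as below; the gateway ARN, IAM role ARN, and bucket name are placeholders.

import boto3, uuid

sgw = boto3.client("storagegateway")

# Create an NFS file share backed by an S3 bucket on an activated File Gateway
sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::example-workflow-files",
)
# On-premises servers then mount the share over NFS while the data lands in S3,
# where applications on AWS can also read it.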
Incorrect options:
Volume Gateway of AWS Storage Gateway - Volume Gateway provides an iSCSI target,
which enables you to create block storage volumes and mount them as iSCSI devices
from your on-premises or EC2 application servers. The Volume Gateway runs in either a
cached or stored mode. Volume Gateway cannot be used for file storage.
Amazon Simple Storage Service (Amazon S3) - Amazon S3 is object storage built to
store and retrieve any amount of data from anywhere. Amazon S3 provides a simple
web service interface that you can use to store and retrieve any amount of data, at any
time, from anywhere. Using this service, you can easily build applications that make use
of cloud-native storage. The given use case needs a hybrid storage facility since the
data will be accessed from the on-premises servers and applications on AWS Cloud.
Hence, S3 is not the right choice.
Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an
easy-to-use, high-performance, block-storage service designed for use with Amazon
Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads
at any scale. A broad range of workloads, such as relational and non-relational
databases, enterprise applications, containerized applications, big data analytics
engines, file systems, and media workflows are widely deployed on Amazon EBS. The
given use case needs a hybrid storage facility since the data will be accessed from the
on-premises servers and applications on AWS Cloud. Hence, EBS is not the right choice.
Reference:
https://aws.amazon.com/storagegateway/
Question 35:
Skipped
A data analytics company runs its technology operations on AWS Cloud using different
VPC configurations for each of its applications. A systems administrator wants to
configure the Network Access Control List (ACL) and Security Group (SG) of VPC1 to
allow access for AWS resources in VPC2.
Based on the inbound and outbound traffic configurations on Network ACL of
VPC1, you can create a similar deny rules on Security Groups of the instances
in VPC1 to deny all traffic, other than the one originating from resources in
VPC2
By default, Security Groups allow outbound traffic. Hence, only the inbound
traffic configuration of the security groups have to be changed to allow
requests from resources in VPC2 to access instances in VPC1. If the subnet is
not associated with any Network ACL, you will not need any configuration
changes
(Correct)
A network access control list (ACL) is an optional layer of security for your VPC that
acts as a firewall for controlling traffic in and out of one or more subnets. You might set
up network ACLs with rules similar to your security groups in order to add an additional
layer of security to your VPC.
Security groups are stateful — if you send a request from your instance, the response
traffic for that request is allowed to flow in regardless of inbound security group rules.
Responses to allowed inbound traffic are allowed to flow out, regardless of outbound
rules.
Network ACLs are stateless, which means that responses to allowed inbound traffic are
subject to the rules for outbound traffic (and vice versa).
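The difference shows up in how the rules are written. In the sketch below (IDs and the VPC2 CIDR are placeholders), the security group needs only an inbound allow rule, while the network ACL needs both inbound and outbound entries because return traffic is evaluated separately.

import boto3

ec2 = boto3.client("ec2")
VPC2_CIDR = "10.20.0.0/16"  # placeholder CIDR of the peer VPC

# Security group (stateful): one inbound rule is enough; responses flow back automatically
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "IpRanges": [{"CidrIp": VPC2_CIDR}]}],
)

# Network ACL (stateless): inbound AND outbound entries are both required
ec2.create_network_acl_entry(NetworkAclId="acl-0123456789abcdef0", RuleNumber=100,
                             Protocol="6", RuleAction="allow", Egress=False,
                             CidrBlock=VPC2_CIDR, PortRange={"From": 3306, "To": 3306})
ec2.create_network_acl_entry(NetworkAclId="acl-0123456789abcdef0", RuleNumber=100,
                             Protocol="6", RuleAction="allow", Egress=True,
                             CidrBlock=VPC2_CIDR, PortRange={"From": 1024, "To": 65535})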
Incorrect options:
By default, Security Groups allow outbound traffic. Hence, only the inbound traffic
configuration of the security groups have to be changed to allow requests from
resources in VPC2 to access instances in VPC1. If the subnet is not associated with
any Network ACL, you will not need any configuration changes - Each subnet in your
VPC must be associated with a network ACL. If you don't explicitly associate a subnet
with a network ACL, the subnet is automatically associated with the default network
ACL. Hence, a subnet will always have a network ACL associated with it.
Based on the inbound and outbound traffic configurations on Network ACL of VPC1,
you can create similar deny rules on Security Groups of the instances in VPC1 to deny
all traffic, other than the one originating from resources in VPC2 - Security Groups and
Network ACLs are mutually exclusive and do not share permissions. Also, Security
Groups can only be used to specify allow rules, and not deny rules.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
Question 36:
Skipped
An e-commerce company manages its IT infrastructure on AWS Cloud via Elastic
Beanstalk. The development team at the company is planning to deploy the next version
with MINIMUM application downtime and the ability to rollback quickly in case the
deployment goes wrong.
As a SysOps Administrator, which of the following options would you recommend to
address the given use-case?
(Correct)
Deploy the new application version using 'All at once' deployment policy
Deploy the new application version using 'Rolling with additional batch'
deployment policy
Explanation
Correct option:
Deploy the new version to a separate environment via Blue/Green Deployment, and
then swap Route 53 records of the two environments to redirect traffic to the new
version
With deployment policies such as 'All at once', AWS Elastic Beanstalk performs an in-
place update when you update your application versions and your application can
become unavailable to users for a short period of time. You can avoid this downtime by
performing a blue/green deployment, where you deploy the new version to a separate
environment, and then swap CNAMEs (via Route 53) of the two environments to redirect
traffic to the new version instantly. In case of any deployment issues, the rollback
process is very quick via swapping the URLs for the two environments.
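The switch itself (and the rollback) is a single CNAME swap between the two environments, for example as in the sketch below; the environment names are placeholders.

import boto3

eb = boto3.client("elasticbeanstalk")

# Swap the CNAMEs of the blue (live) and green (new version) environments;
# running it again swaps them back, which is what makes rollback so quick.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)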
Incorrect options:
Deploy the new application version using 'All at once' deployment policy - Although 'All
at once' is the quickest deployment method, the application may become unavailable to
users (or have low availability) for a short time. So this option is not correct.
Deploy the new application version using 'Rolling' deployment policy - This policy
avoids downtime and minimizes reduced availability, at a cost of a longer deployment
time. However, the rollback process requires a manual redeploy, so it's not as quick as the
Blue/Green deployment.
Deploy the new application version using 'Rolling with additional batch' deployment
policy - This policy avoids any reduced availability, at a cost of an even longer
deployment time compared to the Rolling method. It is suitable if you must maintain full
capacity throughout the deployment. However, the rollback process requires a manual
redeploy, so it's not as quick as the Blue/Green deployment.
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-
existing-version.html
Question 37:
Skipped
The development team at an e-commerce company uses Amazon MySQL RDS because
it simplifies much of the time-consuming administrative tasks typically associated with
databases. A new systems administrator has joined the team and wants to understand
the replication capabilities for Multi-AZ as well as Read-replicas.
Which of the following correctly summarizes these capabilities for the given database?
(Correct)
Multi-AZ follows synchronous replication and spans at least two Availability Zones
within a single region. Read replicas follow asynchronous replication and can be within
an Availability Zone, Cross-AZ, or Cross-Region
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS
database (DB) instances, making them a natural fit for production database workloads.
When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a
primary DB Instance and synchronously replicates the data to a standby instance in a
different Availability Zone (AZ). Multi-AZ spans at least two Availability Zones within a
single region.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS
database (DB) instances. They make it easy to elastically scale out beyond the capacity
constraints of a single DB instance for read-heavy database workloads. For the MySQL,
MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a
second DB instance using a snapshot of the source DB instance. It then uses the
engines' native asynchronous replication to update the read replica whenever there is a
change to the source DB instance.
Amazon RDS replicates all databases in the source DB instance. Read replicas can be
within an Availability Zone, Cross-AZ, or Cross-Region.
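For reference, both capabilities map to simple RDS API calls; a sketch with placeholder identifiers and Regions follows.

import boto3

rds = boto3.client("rds")

# Synchronous standby in another AZ of the same Region
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Asynchronous read replica, which may even live in another Region
boto3.client("rds", region_name="eu-west-1").create_db_instance_read_replica(
    DBInstanceIdentifier="orders-mysql-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-mysql",
)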
Incorrect options:
Multi-AZ follows asynchronous replication and spans one Availability Zone within a
single region. Read replicas follow synchronous replication and can be within an
Availability Zone, Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones
within a single region. Read replicas follow synchronous replication and can be within
an Availability Zone, Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones
within a single region. Read replicas follow asynchronous replication and can be within
an Availability Zone, Cross-AZ, or Cross-Region
These three options contradict the explanation above, so these options are incorrect.
References:
https://aws.amazon.com/rds/features/multi-az/
https://aws.amazon.com/rds/features/read-replicas/
Question 38:
Skipped
An application hosted on Amazon EC2 instances polls messages from Amazon SQS
queue for downstream processing. The team is now looking at configuring an Auto
Scaling group to scale using the CloudWatch metrics for Amazon SQS queue to process
messages without delays.
As a Systems Administrator, which feature or dimension of an SQS queue will you pick
to collect SQS data from CloudWatch metrics?
A composite key Queue Name - Queue ID is used to fetch SQS queue data from
CloudWatch metrics
Queue name of the SQS queue should be used to fetch the necessary data from
CloudWatch metrics
(Correct)
Queue ID should be used to fetch the SQS queue data from the CloudWatch
metrics
Explanation
Correct option:
Queue name of the SQS queue should be used to fetch the necessary data from
CloudWatch metrics
The only dimension that Amazon SQS sends to CloudWatch is QueueName. This means
that all available statistics are filtered by QueueName.
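For example, the scaling logic could read the queue-depth metric like this; the queue name is a placeholder.

import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

# All SQS statistics are filtered by the QueueName dimension
stats = cw.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "orders-processing-queue"}],
    StartTime=datetime.utcnow() - timedelta(minutes=15),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])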
Incorrect options:
Queue ID should be used to fetch the SQS queue data from the CloudWatch metrics
A composite key Queue Name - Queue ID is used to fetch SQS queue data from
CloudWatch metrics
Both these options contradict the explanation above, so they are incorrect.
Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/s
qs-available-cloudwatch-metrics.html
Question 39:
Skipped
A retail company has complex AWS VPC architecture that is getting difficult to
maintain. The company has decided to configure VPC flow logs to track the network
traffic to analyze various traffic flow scenarios. The systems administration team has
configured VPC flow logs for one of the VPCs, but it's not able to see any logs. After
initial analysis, the team has been able to track the error. It says Access error and the
administrator of the team wants to change the IAM Role defined in the flow log
definition.
What is the correct way of configuring a solution for this issue so that the VPC flow
logs can be operational?
The error indicates IAM role is not correctly configured. After you've created a
flow log, you cannot change its configuration. Instead, you need to delete the
flow log and create a new one with the required configuration
(Correct)
The error indicates an internal error has occurred in the flow logs service.
Raise a service request with AWS
The error indicates that the IAM role does not have a trust relationship with
the flow logs service. Change the trust relationship from flow log configuration
The flow log is still in the process of being created. It sometimes takes almost
10 minutes to start the logs
Explanation
Correct option:
The error indicates the IAM role is not correctly configured. After you've created a flow
log, you cannot change its configuration. Instead, you need to delete the flow log and
create a new one with the required configuration
An "Access error" indicates that one of the following is true:
1. The IAM role for your flow log does not have sufficient permissions to publish
flow log records to the CloudWatch log group
2. The IAM role does not have a trust relationship with the flow logs service
3. The trust relationship does not specify the flow logs service as the principal
After you've created a flow log, you cannot change its configuration or the flow log
record format. For example, you can't associate a different IAM role with the flow log or
add or remove fields in the flow log record. Instead, you can delete the flow log and
create a new one with the required configuration.
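In practice the fix looks like the sketch below; the flow log ID, VPC ID, log group name, and role ARN are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Delete the misconfigured flow log...
ec2.delete_flow_logs(FlowLogIds=["fl-0123456789abcdef0"])

# ...and create a new one that references the corrected IAM role
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/vpc-flow-logs-role",
)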
Incorrect options:
The error indicates that the IAM role does not have a trust relationship with the flow
logs service. Change the trust relationship from flow log configuration - As discussed
above, the VPC flow log configuration cannot be changed once created.
The flow log is still in the process of being created. It sometimes takes almost 10
minutes to start the logs - This scenario is possible when you have just configured the
flow logs. However, the status of the flow logs will not be in an error state.
The error indicates an internal error has occurred in the flow logs service. Raise a
service request with AWS - This is a made-up option, given only as a distractor.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-troubleshooting.html
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html#flow-log-records
Question 40:
Skipped
You are working as an AWS Certified SysOps Administrator at an e-commerce company
and you want to build a fleet of EBS-optimized EC2 instances to handle the load of your
new application. To meet the compliance guidelines, your organization wants any secret
strings used in the application to be encrypted to prevent exposing values as clear text.
The solution requires that decryption events be audited and API calls to be simple. How
can this be achieved? (Select two)
(Correct)
(Correct)
Explanation
Correct options:
With AWS Systems Manager Parameter Store, you can create SecureString parameters,
which are parameters that have a plaintext parameter name and an encrypted
parameter value. Parameter Store uses AWS KMS to encrypt and decrypt the parameter
values of Secure String parameters. Also, if you are using customer-managed CMKs,
you can use IAM policies and key policies to manage encrypt and decrypt
permissions. To retrieve the decrypted value, you only need to make one API call.
via - https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-
store.html
CloudTrail will allow you to see all API calls made to SSM and KMS.
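A minimal sketch of storing and reading the secret follows; the parameter name and KMS key alias are placeholders. The single get_parameter call returns the decrypted value, and both the SSM and KMS calls show up in CloudTrail.

import boto3

ssm = boto3.client("ssm")

# Store the secret as a SecureString, encrypted with a customer-managed CMK
ssm.put_parameter(
    Name="/app/prod/db-password",
    Value="s3cr3t-value",
    Type="SecureString",
    KeyId="alias/app-secrets",
    Overwrite=True,
)

# One API call retrieves and decrypts the value
value = ssm.get_parameter(Name="/app/prod/db-password", WithDecryption=True)
print(value["Parameter"]["Value"])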
Incorrect options:
Encrypt first with KMS then store in SSM Parameter store - This could work but will
require two API calls to get the decrypted value instead of one. So this is not the right
option.
Store the secret as PlainText in SSM Parameter Store - Plaintext parameters are not
secure and shouldn't be used to store secrets.
Audit using SSM Audit Trail - This is a made-up option and has been added as a
distractor.
Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-
store.html
Question 41:
Skipped
A new systems administrator has joined a large healthcare services company recently.
As part of his onboarding, the IT department is conducting a review of the checklist for
tasks related to AWS Identity and Access Management.
Use user credentials to provide specific access permissions for Amazon EC2
instances
Enable MFA for privileged users
(Correct)
Configure AWS CloudTrail to log all IAM actions
(Correct)
Explanation
Correct options:
Enable MFA for privileged users - As per the AWS best practices, it is better to enable
Multi Factor Authentication (MFA) for privileged users via an MFA-enabled mobile
device or hardware MFA token.
Configure AWS CloudTrail to record all account activity - AWS recommends turning on
CloudTrail to log all IAM actions for monitoring and audit purposes.
Incorrect options:
Create a minimum number of accounts and share these account credentials among
employees - AWS recommends that user account credentials should not be shared
between users. So, this option is incorrect.
Use user credentials to provide specific access permissions for Amazon EC2
instances - It is highly recommended to use roles to grant access permissions for EC2
instances working on different AWS services. So, this option is incorrect.
References:
https://aws.amazon.com/iam/
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://aws.amazon.com/cloudtrail/faqs/
Question 42:
Skipped
The technology team at a retail company uses CloudFormation to manage its AWS
infrastructure. The team has created a network stack containing a VPC with subnets
and a web application stack with EC2 instances and an RDS instance. The team wants
to reference the VPC created in the network stack into its web application stack.
As a SysOps Administrator, which of the following solutions would you recommend for
the given use-case?
Create a cross-stack reference and use the Export output field to flag the value
of VPC from the network stack. Then use Ref intrinsic function to reference the
value of VPC into the web application stack
Create a cross-stack reference and use the Outputs output field to flag the
value of VPC from the network stack. Then use Fn::ImportValue intrinsic
function to import the value of VPC into the web application stack
Create a cross-stack reference and use the Export output field to flag the value
of VPC from the network stack. Then use Fn::ImportValue intrinsic function to
import the value of VPC into the web application stack
(Correct)
Create a cross-stack reference and use the Outputs output field to flag the
value of VPC from the network stack. Then use Ref intrinsic function to
reference the value of VPC into the web application stack
Explanation
Correct option:
Create a cross-stack reference and use the Export output field to flag the value of VPC
from the network stack. Then use Fn::ImportValue intrinsic function to import the
value of VPC into the web application stack
How CloudFormation
Works:
via - https://aws.amazon.com/cloudformation/
You can create a cross-stack reference to export resources from one AWS
CloudFormation stack to another. For example, you might have a network stack with a
VPC and subnets and a separate public web application stack. To use the security
group and subnet from the network stack, you can create a cross-stack reference that
allows the web application stack to reference resource outputs from the network stack.
With a cross-stack reference, owners of the web application stacks don't need to create
or maintain networking rules or assets.
To create a cross-stack reference, use the Export output field to flag the value of a
resource output for export. Then, use the Fn::ImportValue intrinsic function to import the
value.
You cannot use the Ref intrinsic function to import the value.
via
- https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-
crossstackref.html
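A stripped-down pair of templates illustrating the pattern, deployed here with boto3; the stack, resource, and export names are placeholders. The network stack exports the VPC ID and the application stack imports it.

import boto3

cfn = boto3.client("cloudformation")

network_template = """
Resources:
  AppVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref AppVPC
    Export:
      Name: network-stack-VpcId      # Export flags the value for cross-stack use
"""

app_template = """
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier SG
      VpcId: !ImportValue network-stack-VpcId   # Fn::ImportValue pulls in the export
"""

cfn.create_stack(StackName="network-stack", TemplateBody=network_template)
cfn.get_waiter("stack_create_complete").wait(StackName="network-stack")
cfn.create_stack(StackName="web-app-stack", TemplateBody=app_template)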
Incorrect options:
Create a cross-stack reference and use the Outputs output field to flag the value of
VPC from the network stack. Then use Fn::ImportValue intrinsic function to import the
value of VPC into the web application stack
Create a cross-stack reference and use the Outputs output field to flag the value of
VPC from the network stack. Then use Ref intrinsic function to reference the value of
VPC into the web application stack
Create a cross-stack reference and use the Export output field to flag the value of VPC
from the network stack. Then use Ref intrinsic function to reference the value of VPC
into the web application stack
These three options contradict the explanation above, so these options are not correct.
References:
https://aws.amazon.com/cloudformation/
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-
crossstackref.html
Question 43:
Skipped
A retail company runs its server infrastructure on a fleet of Amazon EC2 instances with
Amazon RDS as the database service. For the high availability of the entire architecture,
multi-AZ deployments have been chosen for the RDS instance. A new version of the
database engine has been released by the vendor and the company wants to test the
release with production data and configurations before upgrading the production
instance.
Procure an instance which has the new version of the database engine. Take
the snapshot of your existing database and restore the snapshot to this
instance. Test on this instance
Create a read replica of the RDS instance in production. Upgrade the read
replica to the latest version and experiment with this instance
(Correct)
Explanation
Correct option:
Create a DB snapshot of your existing DB instance and create a new instance from the
restored snapshot. Initiate a version upgrade on this new instance and safely
experiment with the instance
You can test the new version before adopting it for production systems. To do so,
create a DB snapshot of your existing DB instance, restore from the DB snapshot to
create a new DB instance, and then initiate a version upgrade for the new DB instance.
You can then experiment safely on the upgraded copy of your DB instance before
deciding whether or not to upgrade your original DB instance.
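Scripted, the trial upgrade could look like the sketch below; the instance identifiers and target engine version are placeholders.

import boto3

rds = boto3.client("rds")

# Snapshot the production instance and restore it as a throwaway test instance
rds.create_db_snapshot(DBInstanceIdentifier="prod-db",
                       DBSnapshotIdentifier="prod-db-pre-upgrade")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="prod-db-pre-upgrade")
rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier="prod-db-upgrade-test",
                                         DBSnapshotIdentifier="prod-db-pre-upgrade")
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-db-upgrade-test")

# Upgrade only the test copy and experiment on it safely
rds.modify_db_instance(DBInstanceIdentifier="prod-db-upgrade-test",
                       EngineVersion="8.0.35",
                       AllowMajorVersionUpgrade=True,
                       ApplyImmediately=True)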
Incorrect options:
Create a read replica of the RDS instance in production. Upgrade the read replica to
the latest version and experiment with this instance
Procure an instance that has the new version of the database engine. Take the
snapshot of your existing database and restore the snapshot to this instance. Test on
this instance
These two options are incorrect because to trial test a production database, we need a
running database, in the same status as the existing one. This running database
instance will be upgraded and then tested thoroughly to know its viability for production.
The read replica operates as a DB instance that allows just read-only connections.
Applications can connect to a read replica just as they would to any DB instance. The
order of activities should be exactly the way it will be in production, so the upgrade goes
smoothly without any glitches when done on the live system.
Reference:
https://aws.amazon.com/rds/faqs/
Question 44:
Skipped
A social media company manages over 100 c4.large instances in the us-west-1 region.
The EC2 instances run complex algorithms. The systems administrator would like to
track CPU utilization of the EC2 instances as frequently as every 10 seconds.
Which of the following represents the BEST solution for the given use-case?
Create a high-resolution custom metric and push the data using a script
triggered every 10 seconds
(Correct)
Explanation
Correct option:
Create a high-resolution custom metric and push the data using a script triggered
every 10 seconds - High-resolution custom metrics support a granularity of down to 1
second, so a script publishing CPU utilization every 10 seconds meets the requirement.
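A sketch of such a script follows; the namespace and instance ID are placeholders, and psutil is an assumed helper for reading local CPU utilization. StorageResolution=1 marks the metric as high resolution, so 10-second data points are retained as such.

import time
import boto3
import psutil  # assumed helper for reading local CPU utilization

cw = boto3.client("cloudwatch")

while True:
    cw.put_metric_data(
        Namespace="Custom/EC2",
        MetricData=[{
            "MetricName": "CPUUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": psutil.cpu_percent(interval=None),
            "Unit": "Percent",
            "StorageResolution": 1,  # high-resolution metric (sub-minute granularity)
        }],
    )
    time.sleep(10)  # push a data point every 10 seconds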
Incorrect options:
Enable EC2 detailed monitoring - As part of basic monitoring, Amazon EC2 sends
metric data to CloudWatch in 5-minute periods. To send metric data for your instance to
CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance,
however, this comes at an additional cost.
Simply get it from the CloudWatch Metrics - You can get data from metrics. The basic
monitoring data is available automatically in a 5-minute interval and detailed monitoring
data is available in a 1-minute interval; neither meets the 10-second requirement.
Open a support ticket with AWS - This option has been added as a distractor.
Reference:
https://aws.amazon.com/blogs/aws/new-high-resolution-custom-metrics-and-alarms-
for-amazon-cloudwatch/
Question 45:
Skipped
An e-commerce company is running its server infrastructure on Amazon EC2 instance
store-backed instances. For better performance, the company has decided to move
their applications to another Amazon EC2 instance store-backed instance with a
different instance type.
You can't resize an instance store-backed instance. Instead, you choose a new
compatible instance and move your application to the new instance
Create an image of your instance, and then launch a new instance from this
image with the instance type that you need. Take any Elastic IP address that
you've associated with your original instance and associate it with the new
instance for uninterrupted service to your application
(Correct)
Create an image of your instance, and then launch a new instance from this
image with the instance type that you need. Any public IP address associated
with the instance can be moved with the instance for uninterrupted access of
services
You can't resize an instance store-backed instance. Instead, configure an EBS
volume to be the root device for the instance and migrate using the EBS
volume
Explanation
Correct option:
Create an image of your instance, and then launch a new instance from this image with
the instance type that you need. Take any Elastic IP address that you've associated
with your original instance and associate it with the new instance for uninterrupted
service to your application
When you want to move your application from one instance store-backed instance to an
instance store-backed instance with a different instance type, you must migrate it by
creating an image from your instance, and then launching a new instance from this
image with the instance type that you need. To ensure that your users can continue to
use the applications that you're hosting on your instance uninterrupted, you must take
any Elastic IP address that you've associated with your original instance and associate
it with the new instance. Then you can terminate the original instance.
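Assuming the new instance has already been launched from an image of the original (building an image of an instance store-backed instance involves the EC2 AMI tools, which are outside this sketch), moving the Elastic IP and retiring the old instance could look like this; all IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

OLD_INSTANCE = "i-0original000000000"   # original instance (placeholder)
NEW_INSTANCE = "i-0newinstance0000000"  # instance launched from the new image (placeholder)

# Re-associate the Elastic IP so users keep hitting the same address...
ec2.associate_address(AllocationId="eipalloc-0123456789abcdef0", InstanceId=NEW_INSTANCE)

# ...then terminate the original instance
ec2.terminate_instances(InstanceIds=[OLD_INSTANCE])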
Incorrect options:
You can't resize an instance store-backed instance. Instead, you choose a new
compatible instance and move your application to the new instance - An instance
store-backed EC2 instance can be resized, as explained above.
You can't resize an instance store-backed instance. Instead, configure an EBS volume
to be the root device for the instance and migrate using the EBS volume - This
statement is incorrect.
Create an image of your instance, and then launch a new instance from this image with
the instance type that you need. Any public IP address associated with the instance
can be moved with the instance for uninterrupted access of services - A public IP
address is released when its instance is stopped or terminated, and the new instance
receives a different public IP address. You need an Elastic IP to keep the service
uninterrupted for users, since Elastic IP addresses can be moved across instances.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html
Question 46:
Skipped
The development team at an IT company is looking at moving its web applications to
Amazon EC2 instances. The team is weighing its options for EBS volumes and instance
store-backed instances for these applications with varied workloads.
Which of the following would you identify as correct regarding instance store and EBS
volumes? (Select three)
Use separate Amazon EBS volumes for the operating system and your data,
even though root volume persistence feature is available
(Correct)
(Correct)
Data stored in the instance store is preserved when you stop or terminate
your instance. However, data is lost when you hibernate the instance.
Configure EBS volumes or have a backup plan to avoid losing critical data due to
this behavior
EBS snapshots only capture data that has been written to your Amazon EBS
volume, which might exclude any data that has been locally cached by your
application or operating system
(Correct)
Explanation
Correct options:
Use separate Amazon EBS volumes for the operating system and your data, even
though root volume persistence feature is available
As a best practice, AWS recommends the use of separate Amazon EBS volumes for the
operating system and your data. This ensures that the volume with your data persists
even after instance termination or any issues to the operating system.
EBS snapshots only capture data that has been written to your Amazon EBS volume,
which might exclude any data that has been locally cached by your application or
operating system
Snapshots only capture data that has been written to your Amazon EBS volume, which
might exclude any data that has been locally cached by your application or OS. To
ensure consistent snapshots on volumes attached to an instance, AWS recommends
detaching the volume cleanly, issuing the snapshot command, and then reattaching the
volume. For Amazon EBS volumes that serve as root devices, AWS recommends
shutting down the machine to take a clean snapshot.
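A sketch of the detach, snapshot, and reattach sequence recommended above for a data volume; the volume ID, instance ID, and device name are placeholders.

import boto3

ec2 = boto3.client("ec2")
VOLUME = "vol-0123456789abcdef0"  # data volume (placeholder)

# Detach cleanly so no cached writes are missed, snapshot, then reattach
ec2.detach_volume(VolumeId=VOLUME)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])

snap = ec2.create_snapshot(VolumeId=VOLUME, Description="consistent data-volume snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

ec2.attach_volume(VolumeId=VOLUME, InstanceId="i-0123456789abcdef0", Device="/dev/sdf")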
Incorrect options:
Data stored in the instance store is preserved when you stop or terminate your
instance. However, data is lost when you hibernate the instance. Configure EBS
volumes or have a backup plan to avoid losing critical data due to this behavior - Data
stored in the instance store is lost when you stop, hibernate, or terminate the instance.
EBS encryption does not support boot volumes - EBS volumes used as root devices can
be encrypted without any issue.
Snapshots of EBS volumes, stored on Amazon S3, can be accessed using Amazon S3
APIs - This is incorrect. Snapshots are only available through the Amazon EC2 API.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-best-practices.html
https://aws.amazon.com/ebs/faqs/
Question 47:
Skipped
An IT company extensively uses Amazon S3 buckets for storage, hosting, backup and
compliance specific replication. A Team Lead has reached out to you for creating a
report that lists all the objects that have failed replication in the S3 buckets that the
project manages. This process needs to be automated as the Team Lead needs this list
daily.
As a SysOps Administrator, how will you configure a solution for this request?
Use Amazon S3 Storage Lens to report all the objects that failed replication
process in the S3 buckets
Configure Amazon Simple Queue Service (Amazon SQS) queue against the
CloudWatch metrics for S3 replication. You can use custom code to aggregate
these messages to get the final list of objects that failed replication
Use Amazon S3 Select to list the objects that have failed replication in the S3
buckets
Use Amazon S3 Inventory reports to list the objects that have failed
replication in the S3 buckets
(Correct)
Explanation
Correct option:
Use Amazon S3 Inventory reports to list the objects that have failed replication in the
S3 buckets
Amazon S3 Inventory is one of the tools Amazon S3 provides to help manage your
storage. You can use it to audit and report on the replication and encryption status of
your objects for business, compliance, and regulatory needs. You can also simplify and
speed up business workflows and big data jobs using Amazon S3 inventory, which
provides a scheduled alternative to the Amazon S3 synchronous List API operation.
You can configure multiple inventory lists for a bucket. You can configure what object
metadata to include in the inventory, whether to list all object versions or only current
versions, where to store the inventory list file output, and whether to generate the
inventory on a daily or weekly basis. You can also specify that the inventory list file is
encrypted.
The inventory list contains a list of the objects in an S3 bucket and the metadata for
each listed object.
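A sketch of the daily inventory configuration, including the replication status field; the bucket names and account ID are placeholders.

import boto3

s3 = boto3.client("s3")

# Daily inventory of the source bucket, including each object's replication status
s3.put_bucket_inventory_configuration(
    Bucket="example-source-bucket",
    Id="daily-replication-report",
    InventoryConfiguration={
        "Id": "daily-replication-report",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "OptionalFields": ["ReplicationStatus"],
        "Destination": {"S3BucketDestination": {
            "AccountId": "111122223333",
            "Bucket": "arn:aws:s3:::example-inventory-reports",
            "Format": "CSV",
        }},
    },
)
# Objects whose replication status is FAILED can then be filtered out of the daily report.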
Incorrect options:
Use Amazon S3 Storage Lens to report all the objects that failed replication process in
the S3 buckets - Amazon S3 Storage Lens aggregates your usage and activity metrics
and displays the information in an interactive dashboard on the Amazon S3 console or
through a metrics data export that can be downloaded in CSV or Parquet format. This is
the wrong choice because we are not looking for usage metrics.
Use Amazon S3 Select to list the objects that have failed replication in the S3 buckets -
With S3 Select, you can use a simple SQL expression to return only the subset of data
you are interested in from within an object, instead of retrieving the entire object. You
cannot use S3 Select to list the objects that have failed replication in the S3 buckets.
Configure Amazon Simple Queue Service (Amazon SQS) queue against the
CloudWatch metrics for S3 replication. You can use custom code to aggregate these
messages to get the final list of objects that failed replication - CloudWatch metrics for
replication are only available if S3 Replication Time Control (S3 RTC) is enabled, and the
metrics are generated per object action in the S3 bucket. Since we need an aggregated
daily list, Amazon S3 Inventory is tailor-made for such requirements.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/storage_lens.html
Question 48:
Skipped
A developer is tasked with cleaning up obsolete resources. When he tried to delete an
AWS CloudFormation stack, the stack deletion process returned without any error or a
success message. The stack was not deleted either.
What is the reason for this behavior and how will you fix it?
If you attempt to delete a stack with termination protection enabled, the deletion fails
and the stack - including its status - remains unchanged
(Correct)
Dependent resources should be deleted first, before deleting the rest of the
resources in the stack. If this order is not followed, then stack deletion fails
without an error
Some resources must be empty before they can be deleted. Such resources will
not be deleted if they are not empty and stack deletion fails without any error
The AWS user who initiated the stack deletion does not have enough
permissions
Explanation
Correct option:
If you attempt to delete a stack with termination protection enabled, the deletion fails
and the stack - including its status - remains unchanged
You cannot delete stacks that have termination protection enabled. If you attempt to
delete a stack with termination protection enabled, the deletion fails and the stack -
including its status - remains unchanged. Disable termination protection on the stack,
then perform the delete operation again.
This includes nested stacks whose root stacks have termination protection enabled.
Disable termination protection on the root stack, then perform the delete operation
again. It is strongly recommended that you do not delete nested stacks directly, but only
delete them as part of deleting the root stack and all its resources.
via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-
console-delete-stack.html
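For illustration, a minimal boto3 sketch of the fix (the stack name is a placeholder): disable termination protection, then delete the stack.

```python
import boto3

cfn = boto3.client("cloudformation")

STACK_NAME = "obsolete-stack"  # placeholder name

# Disable termination protection first, then delete the stack
cfn.update_termination_protection(
    StackName=STACK_NAME,
    EnableTerminationProtection=False,
)
cfn.delete_stack(StackName=STACK_NAME)
cfn.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)
```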
Incorrect options:
The AWS user who initiated the stack deletion does not have enough permissions - If
the user does not have enough permissions to delete the stack, an error explaining the
same is displayed and the stack will be in the DELETE_FAILED state.
Some resources must be empty before they can be deleted. Such resources will not be
deleted if they are not empty and stack deletion fails without any error - Some
resources must be empty before they can be deleted. For example, you must delete all
objects in an Amazon S3 bucket or remove all instances in an Amazon EC2 security
group before you can delete the bucket or security group. Otherwise, stack deletion fails
and the stack will be in the DELETE_FAILED state.
Dependent resources should be deleted first, before deleting the rest of the resources
in the stack. If this order is not followed, then stack deletion fails without an error - Any
error during stack deletion will result in the stack being in the DELETE_FAILED state.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.
html
Question 49:
Skipped
A junior administrator at a retail company is documenting the process flow to provision
EC2 instances via the Amazon EC2 API. These instances are to be used for an internal
application that processes HR payroll data. He wants to highlight those volume types
that cannot be used as a boot volume.
Can you help the administrator by identifying the storage volume types that CANNOT be
used as boot volumes while creating the instances? (Select two)
Throughput Optimized HDD (st1)
(Correct)
Cold HDD (sc1)
(Correct)
Instance Store
Explanation
Correct options:
Throughput Optimized HDD (st1)
Cold HDD (sc1)
Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be used
as a boot volume, so these two options are correct.
Please see this detailed overview of the volume types for EBS
volumes.
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
Incorrect options:
Instance Store
General Purpose SSD (gp2), Provisioned IOPS SSD (io1), and Instance Store can be used
as a boot volume.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
Question 50:
Skipped
A Systems Administrator is configuring an Application Load Balancer (ALB) that fronts
Amazon EC2 instances.
Which of the following options would you identify as correct for configuring the ALB?
(Select two)
You configure target groups of an ALB by attaching them to the listeners
(Correct)
When you create a listener, you define actions and conditions for the default
rule
Before you start using your Application Load Balancer, you must add one or
more listeners
(Correct)
A target can be registered with only one target group at any given time
The targets of a target group in an ALB should all belong to the same
Availability Zone
Explanation
Correct options:
Before you start using your Application Load Balancer, you must add one or more
listeners - A listener checks for connection requests from clients, using the protocol
and port that you configure. The rules that you define for a listener determine how the
load balancer routes requests to its registered targets. Each rule consists of a priority,
one or more actions, and one or more conditions. When the conditions for a rule are
met, then its actions are performed. You must define a default rule for each listener, and
you can optionally define additional rules.
You configure target groups of an ALB by attaching them to the listeners - Each target
group is used to route requests to one or more registered targets. When you create
each listener rule, you specify a target group and conditions. When a rule condition is
met, traffic is forwarded to the corresponding target group. You can create different
target groups for different types of requests.
via
- https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.ht
ml
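A minimal, illustrative boto3 sketch of wiring a target group to an ALB through a listener's default forward action; the load balancer ARN, VPC ID, and names below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN and VPC ID (assumptions)
ALB_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/my-alb/1234567890abcdef")
VPC_ID = "vpc-0123456789abcdef0"

# Create a target group; it routes to registered targets on the given port
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId=VPC_ID,
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The listener's default rule has an action (forward) but no conditions
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```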
Incorrect options:
The targets of a target group in an ALB should all belong to the same Availability
Zone - A load balancer serves as the single point of contact for clients. The load
balancer distributes incoming application traffic across multiple targets, such as EC2
instances, in multiple Availability Zones. This increases the availability of your
application.
A target can be registered with only one target group at any given time - Each target
group routes requests to one or more registered targets, such as EC2 instances, using
the protocol and port number that you specify. You can register a target with multiple
target groups.
When you create a listener, you define actions and conditions for the default rule -
When you create a listener, you define actions for the default rule. Default rules can't
have conditions. If the conditions for none of a listener's rules are met, then the action
for the default rule is performed.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
listeners.html
Question 51:
Skipped
A systems administrator has attached two policies to an IAM user. The first policy
states that the user has explicitly been denied all access to EC2 instances. The second
policy states that the user has been allowed permission for EC2:Describe action.
When the user tries to use 'Describe' action on an EC2 instance using the CLI, what will
be the output?
The order of the policy matters. If policy 1 is before 2, then the user is denied
access. If policy 2 is before 1, then the user is allowed access
The user will be denied access because one of the policies has an explicit deny
on it
(Correct)
Explanation
Correct option:
The user will be denied access because one of the policies has an explicit deny on it -
The user will be denied access because an explicit deny overrides any allow.
Policy Evaluation
explained:
via
- https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-
logic.html
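For illustration (the user ARN below is a placeholder), the IAM policy simulator API can confirm this evaluation outcome without actually making the EC2 call:

```python
import boto3

iam = boto3.client("iam")

USER_ARN = "arn:aws:iam::123456789012:user/example-user"  # placeholder

# Simulate ec2:DescribeInstances against all policies attached to the user
result = iam.simulate_principal_policy(
    PolicySourceArn=USER_ARN,
    ActionNames=["ec2:DescribeInstances"],
)

for evaluation in result["EvaluationResults"]:
    # EvalDecision is 'allowed', 'explicitDeny' or 'implicitDeny';
    # here it comes back as 'explicitDeny' because the deny policy wins
    print(evaluation["EvalActionName"], evaluation["EvalDecision"])
```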
Incorrect options:
The IAM user stands in an invalid state, because of conflicting policies - This is an
incorrect statement. Access policies can contain both allow and deny permissions, and
they are evaluated according to the policy evaluation logic. A user account does not
become invalid because of conflicting policies.
The user will get access because it has an explicit allow - As discussed above, explicit
deny overrides all other permissions and hence the user will be denied access.
The order of the policy matters. If policy 1 is before 2, then the user is denied access.
If policy 2 is before 1, then the user is allowed access - If policies that apply to a
request include an Allow statement and a Deny statement, the Deny statement trumps
the Allow statement. The request is explicitly denied.
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-
logic.html
Question 52:
Skipped
As part of the systems administration work, an AWS Certified SysOps Administrator is
creating policies and attaching them to IAM identities. After creating necessary Identity-
based policies, he is now creating Resource-based policies.
Which is the only resource-based policy that the IAM service supports?
Permissions boundary
Trust policy
(Correct)
Explanation
Correct option:
You manage access in AWS by creating policies and attaching them to IAM identities
(users, groups of users, or roles) or AWS resources. A policy is an object in AWS that,
when associated with an identity or resource, defines their permissions. Resource-
based policies are JSON policy documents that you attach to a resource such as an
Amazon S3 bucket. These policies grant the specified principal permission to perform
specific actions on that resource and define under what conditions this applies.
Trust policy - Trust policies define which principal entities (accounts, users, roles, and
federated users) can assume the role. An IAM role is both an identity and a resource
that supports resource-based policies. For this reason, you must attach both a trust
policy and an identity-based policy to an IAM role. The IAM service supports only one
type of resource-based policy called a role trust policy, which is attached to an IAM role.
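As an illustrative sketch (the role name, trusted principal, and attached managed policy are assumptions), creating a role shows both policy types in play: the trust policy is passed as the role's resource-based policy, and the identity-based permissions are attached separately.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: the resource-based policy attached to the role itself
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},  # assumed principal
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="example-ec2-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Identity-based permissions are granted separately, e.g. via a managed policy
iam.attach_role_policy(
    RoleName="example-ec2-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```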
Incorrect options:
AWS Organizations Service Control Policies (SCP) - If you enable all features of AWS
organization, then you can apply service control policies (SCPs) to any or all of your
accounts. SCPs are JSON policies that specify the maximum permissions for an
organization or organizational unit (OU). The SCP limits permissions for entities in
member accounts, including each AWS account root user. An explicit deny in any of
these policies overrides the allow.
Access control list (ACL) - Access control lists (ACLs) are service policies that allow
you to control which principals in another account can access a resource. ACLs cannot
be used to control access for a principal within the same account. Amazon S3, AWS
WAF, and Amazon VPC are examples of services that support ACLs.
Permissions boundary - AWS supports permissions boundaries for IAM entities (users
or roles). A permissions boundary is an advanced feature for using a managed policy to
set the maximum permissions that an identity-based policy can grant to an IAM entity.
An entity's permissions boundary allows it to perform only the actions that are allowed
by both its identity-based policies and its permissions boundaries.
References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_re
source-based
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
Question 53:
Skipped
A systems administration intern is trying to configure what Amazon EC2 should do
when it interrupts a Spot Instance.
A Spot Instance is an unused EC2 instance that is available for less than the On-
Demand price. Your Spot Instance runs whenever capacity is available and the
maximum price per hour for your request exceeds the Spot price. Spot Instances are
allocated from spare, unused EC2 capacity.
You can specify that Amazon EC2 should do one of the following when it interrupts a
Spot Instance:
Incorrect options:
It is always possible that Spot Instances might be interrupted. Therefore, you must
ensure that your application is prepared for a Spot Instance interruption.
Stop the Spot Instance - This is a valid option. Amazon EC2 can be configured to stop
the instance when an interruption occurs on Spot instances.
Hibernate the Spot Instance - This is a valid option. Amazon EC2 can be configured to
hibernate the instance when an interruption occurs on Spot instances.
Terminate the Spot Instance - This is a valid option. Amazon EC2 can be configured to
terminate the instance when an interruption occurs on Spot instances. The default
behavior is to terminate Spot Instances when they are interrupted.
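For illustration, a minimal boto3 sketch (the AMI ID and instance type are placeholders) of how the interruption behavior can be specified when launching a Spot Instance:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # 'stop' and 'hibernate' require a persistent Spot request;
            # the default interruption behavior is 'terminate'
            "SpotInstanceType": "persistent",
            "InstanceInterruptionBehavior": "stop",
        },
    },
)
```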
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html
Question 54:
Skipped
A healthcare solutions company is undergoing a compliance audit by the regulator. The
company has hundreds of IAM users that make API calls, and it specifically needs to
determine who is making the KMS API calls.
Which AWS service will help identify the users making these KMS API calls?
Config
X-Ray
CloudWatch Metrics
CloudTrail
(Correct)
Explanation
Correct option:
CloudTrail
With CloudTrail, you can log, continuously monitor, and retain account activity related to
actions across your AWS infrastructure. You can use AWS CloudTrail to answer
questions such as - “Who made an API call to modify this resource?”. CloudTrail
provides event history of your AWS account activity thereby enabling governance,
compliance, operational auditing, and risk auditing of your AWS account. You cannot
use CloudTrail to maintain a history of resource configuration changes.
How CloudTrail
Works:
via - https://aws.amazon.com/cloudtrail/
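For illustration, a short boto3 sketch that filters the CloudTrail event history down to KMS API calls and prints who made them (region and credentials are whatever the default session resolves to):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# List recent KMS API calls and the user who made each one
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "kms.amazonaws.com"}
    ],
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventName"], event.get("Username"))
```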
Exam Alert:
You may see scenario-based questions asking you to select one of CloudWatch vs
CloudTrail vs Config. Just remember this rule of thumb - CloudWatch is for performance
monitoring, CloudTrail is for auditing API activity (who did what, and when), and Config
is for tracking resource configuration changes over time.
Incorrect options:
CloudWatch Metrics - CloudWatch provides you with data and actionable insights to
monitor your applications, respond to system-wide performance changes, optimize
resource utilization, and get a unified view of operational health.
Amazon CloudWatch allows you to monitor AWS cloud resources and the applications
you run on AWS. Metrics are provided automatically for several AWS products and
services. CloudWatch cannot help determine the source for KMS API calls.
X-Ray - AWS X-Ray helps developers analyze and debug distributed applications. With
X-Ray, you can understand how your application and its underlying services are
performing to identify and troubleshoot the root cause of performance issues and
errors. X-Ray cannot help determine the source for KMS API calls.
Config - AWS Config is a service that enables you to assess, audit, and evaluate the
configurations of your AWS resources. With Config, you can review changes in
configurations and relationships between AWS resources, dive into detailed resource
configuration histories, and determine your overall compliance against the
configurations specified in your internal guidelines. You can use Config to answer
questions such as - “What did my AWS resource look like at xyz point in time?”.
AWS Config cannot help determine the source for KMS API calls.
References:
https://aws.amazon.com/config/
https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/cloudtrail/
Question 55:
Skipped
A retail company has branch offices in multiple locations and the development team
has configured an Application Load Balancer across targets in multiple Availability
Zones. The team wants to analyze the incoming requests for latencies and the client's
IP address patterns.
Which feature of the Load Balancer can be used to collect the required information?
CloudWatch metrics
ALB access logs
(Correct)
CloudTrail logs
Explanation
Correct option:
Elastic Load Balancing provides access logs that capture detailed information about
requests sent to your load balancer. Each log contains information such as the time the
request was received, the client's IP address, latencies, request paths, and server
responses. You can use these access logs to analyze traffic patterns and troubleshoot
issues. Access logging is an optional feature of Elastic Load Balancing that is disabled
by default.
Access logs for your Application Load
Balancer:
via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-
balancer-access-logs.html
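As an illustrative sketch (the load balancer ARN, bucket, and prefix are placeholders, and the bucket also needs a policy that allows the regional ELB log-delivery account to write to it), access logging is switched on through the load balancer attributes:

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                     "loadbalancer/app/my-alb/1234567890abcdef"),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "branch-offices"},
    ],
)
```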
Incorrect options:
CloudTrail logs - Elastic Load Balancing is integrated with AWS CloudTrail, a service
that provides a record of actions taken by a user, role, or an AWS service in Elastic Load
Balancing. CloudTrail captures all API calls for Elastic Load Balancing as events. You
can use AWS CloudTrail to capture detailed information about the calls made to the
Elastic Load Balancing API and store them as log files in Amazon S3. You can use these
CloudTrail logs to determine which API calls were made, the source IP address where
the API call came from, who made the call, when the call was made, and so on.
ALB request tracing - You can use request tracing to track HTTP requests. The load
balancer adds a header with a trace identifier to each request it receives. Request
tracing will not help you to analyze latency specific data.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
monitoring.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
access-logs.html
Question 56:
Skipped
Which of the following security credentials can only be generated by the AWS Account
root user?
CloudFront Key Pairs
(Correct)
Explanation
Correct option:
For Amazon CloudFront, you use key pairs to create signed URLs for private content,
such as when you want to distribute restricted content that someone paid for.
CloudFront Key Pairs - IAM users can't create CloudFront key pairs. You must log in
using root credentials to create key pairs.
Incorrect options:
EC2 Instance Key Pairs - You use key pairs to access Amazon EC2 instances, such as
when you use SSH to log in to a Linux instance. These key pairs can be created from the
IAM user login and do not need root user access.
IAM User Access Keys - Access keys consist of two parts: an access key ID and a
secret access key. You use access keys to sign programmatic requests that you make
to AWS, whether through the AWS CLI, the SDKs, or direct AWS API operations. IAM
users can create their own access keys, so this process does not need root access.
IAM User passwords - Every IAM user has access to their own credentials and can
reset the password whenever needed.
References:
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-
keys-and-secret-access-keys
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-
content-trusted-signers.html
Question 57:
Skipped
A Silicon Valley based startup uses Elastic Beanstalk to manage its IT infrastructure on
AWS Cloud and it would like to deploy the new application version to the EC2 instances.
When the deployment is executed, some instances should serve requests with the old
application version, while other instances should serve requests using the new
application version until the deployment is completed. Which deployment policy meets
this requirement?
All at once
Rolling
(Correct)
Immutable
Explanation
Correct option:
Rolling
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS
Cloud without having to learn about the infrastructure that runs those applications.
Elastic Beanstalk reduces management complexity without restricting choice or control.
You simply upload your application, and Elastic Beanstalk automatically handles the
details of capacity provisioning, load balancing, scaling, and application health
monitoring.
The rolling deployment policy deploys the new version in batches. Each batch is taken
out of service during the deployment phase, reducing your environment's capacity by
the number of instances in a batch. The cost remains the same as the number of EC2
instances does not increase. This policy avoids downtime and minimizes reduced
availability, at a cost of a longer deployment time.
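For illustration, a minimal boto3 sketch (the environment name, version label, and batch size are assumptions) that deploys a new application version using the Rolling policy:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-app-prod",   # placeholder environment
    VersionLabel="v2",               # the new application version to roll out
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Rolling",
        },
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "BatchSizeType",
            "Value": "Percentage",
        },
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "BatchSize",
            "Value": "25",  # update 25% of instances at a time
        },
    ],
)
```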
Incorrect options:
Immutable - The 'Immutable' deployment policy ensures that your new application
version is always deployed to new instances, instead of updating existing instances. It
also has the additional advantage of a quick and safe rollback in case the deployment
fails.
All at once - This policy deploys the new version to all instances simultaneously.
Although 'All at once' is the quickest deployment method, the application may become
unavailable to users (or have low availability) for a short time.
Rolling with additional batches - This policy deploys the new version in batches, but
first launches a new batch of instances to ensure full capacity during the deployment
process. This policy avoids any reduced availability, at the cost of an even longer
deployment time compared to the Rolling method. It is suitable if you must maintain
full capacity throughout the deployment, but it increases costs because extra instances
are added during the deployment.
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-
existing-version.html
Question 58:
Skipped
A financial services firm wants to run its applications on single-tenant hardware to meet
security guidelines.
Which of the following is the MOST cost-effective way of isolating the Amazon EC2
instances to a single tenant?
Dedicated Instances
(Correct)
Dedicated Hosts
Spot Instances
On-Demand Instances
Explanation
Correct option:
Dedicated Instances - Dedicated Instances are Amazon EC2 instances that run in a
virtual private cloud (VPC) on hardware that's dedicated to a single customer. Dedicated
Instances that belong to different AWS accounts are physically isolated at a hardware
level, even if those accounts are linked to a single-payer account. However, Dedicated
Instances may share hardware with other instances from the same AWS account that
are not Dedicated Instances.
A Dedicated Host is also a physical server that's dedicated for your use. With a
Dedicated Host, you have visibility and control over how instances are placed on the
server.
Differences between Dedicated Hosts and Dedicated
Instances:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-
overview.html#dedicated-hosts-dedicated-instances
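For illustration, a minimal boto3 sketch (the AMI, subnet, and instance type are placeholders) showing how dedicated tenancy is requested at launch time:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet
    # 'dedicated' launches a Dedicated Instance; 'host' would require a
    # Dedicated Host allocation, which costs more
    Placement={"Tenancy": "dedicated"},
)
```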
Incorrect options:
Spot Instances - A Spot Instance is an unused EC2 instance that is available for less
than the On-Demand price. Your Spot Instance runs whenever capacity is available and
the maximum price per hour for your request exceeds the Spot price. Spot Instances
are allocated from spare, unused EC2 capacity. Even though this is cost-effective, it
does not fulfill the single-tenant hardware requirement of the client and hence is not the
correct option.
Dedicated Hosts - An Amazon EC2 Dedicated Host is a physical server with EC2
instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your
existing software licenses on EC2 instances. With a Dedicated Host, you have visibility
and control over how instances are placed on the server. This option is costlier than the
Dedicated Instance and hence is not the right choice for the current requirement.
On-Demand Instances - With On-Demand Instances, you pay for the compute capacity
by the second with no long-term commitments. You have full control over the instance
lifecycle: you decide when to launch, stop, hibernate, start, reboot, or terminate it.
On-Demand Instances do not provide single-tenant hardware isolation by default and
are among the costlier purchasing options, so this is not the correct answer for the
given requirement.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-
options.html
Question 59:
Skipped
As a SysOps Administrator, you have been tasked to generate a report on all API calls
made for Elastic Load Balancer from the AWS Management Console. Which of the
following options will you use?
Load Balancer Request tracing
CloudTrail logs
(Correct)
CloudWatch metrics
Explanation
Correct option:
CloudTrail logs - Elastic Load Balancing is integrated with AWS CloudTrail, a service
that provides a record of actions taken by a user, role, or an AWS service in Elastic Load
Balancing. CloudTrail captures all API calls for Elastic Load Balancing as events. The
calls captured include calls from the AWS Management Console and code calls to the
Elastic Load Balancing API operations. If you create a trail, you can enable continuous
delivery of CloudTrail events to an Amazon S3 bucket, including events for Elastic Load
Balancing. If you don't configure a trail, you can still view the most recent events in the
CloudTrail console in Event history. Using the information collected by CloudTrail, you
can determine the request that was made to Elastic Load Balancing, the IP address
from which the request was made, who made the request, when it was made, and
additional details.
Incorrect options:
CloudWatch metrics - You can use Amazon CloudWatch to retrieve statistics about data
points for your load balancers and targets as an ordered set of time-series data, known
as metrics. You can use these metrics to verify that your system is performing as
expected.
Load Balancer Access logs - You can use access logs to capture detailed information
about the requests made to your load balancer and store them as log files in Amazon
S3. You can use these access logs to analyze traffic patterns and to troubleshoot
issues with your targets.
Load Balancer Request tracing - You can use request tracing to track HTTP requests.
The load balancer adds a header with a trace identifier to each request it receives.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
cloudtrail-logs.html
Question 60:
Skipped
A data analytics company uses AWS CloudFormation templates to provision their AWS
infrastructure for Amazon EC2, Amazon VPC, and Amazon S3 resources. Using cross-
stack referencing, a systems administrator creates a stack called NetworkStack which
will export the subnetId that can be used when creating EC2 instances in another stack.
To use the exported value in another stack, which of the following functions must be
used?
!Ref
!GetAtt
!Sub
!ImportValue
(Correct)
Explanation
Correct option:
!ImportValue - The intrinsic function Fn::ImportValue returns the value of an output
exported by another stack. You typically use this function to create cross-stack
references, which is exactly what is needed to consume the subnetId exported by
NetworkStack.
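For illustration, the two template fragments below (expressed in JSON template syntax as Python dicts; the stack, export, and resource names are assumptions) show the export in NetworkStack and the matching Fn::ImportValue in the stack that creates the EC2 instance:

```python
# NetworkStack: export the subnet ID so other stacks can reference it
network_stack_outputs = {
    "Outputs": {
        "SubnetId": {
            "Value": {"Ref": "PublicSubnet"},          # assumed resource name
            "Export": {"Name": "NetworkStack-SubnetId"}
        }
    }
}

# Consumer stack: import the exported value when creating the EC2 instance
ec2_stack_resources = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",    # placeholder AMI
                "InstanceType": "t3.micro",
                # Equivalent to !ImportValue NetworkStack-SubnetId in YAML
                "SubnetId": {"Fn::ImportValue": "NetworkStack-SubnetId"}
            }
        }
    }
}
```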
Incorrect options:
!Ref - Returns the value of the specified parameter or resource in the same template; it
cannot reference an output exported by another stack.
!GetAtt - Returns the value of an attribute from a resource in the same template.
!Sub - Substitutes variables in an input string with values that you specify.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-
function-reference-importvalue.html
Question 61:
Skipped
A company uses Amazon S3 bucket replication to copy data from one S3 bucket into
the other, for compliance purposes. The Technical Lead of the development team wants
to be notified if replication of an object across S3 buckets fails.
Use Amazon Simple Queue Service (Amazon SQS) queue to copy objects from
one S3 bucket to the other. If replication fails, messages in the queue can be
configured to send notification using SNS
Enable S3 Replication Time Control (S3 RTC), which allows you to set up
notifications for eligible objects that failed replication
(Correct)
Explanation
Correct option:
Enable S3 Replication Time Control (S3 RTC), which allows you to set up notifications
for eligible objects that failed replication
S3 Replication Time Control (S3 RTC) helps you meet compliance or business
requirements for data replication and provides visibility into Amazon S3 replication
times. S3 RTC replicates most objects that you upload to Amazon S3 in seconds, and
99.99 percent of those objects within 15 minutes.
S3 RTC by default includes S3 replication metrics and S3 event notifications, with which
you can monitor the total number of S3 API operations that are pending replication, the
total size of objects pending replication, and the maximum replication time.
You can track replication time for objects that did not replicate within 15 minutes by
monitoring specific event notifications that S3 Replication Time Control (S3 RTC)
publishes. These events are published when an object that was eligible for replication
using S3 RTC didn't replicate within 15 minutes, and when that object replicates to the
destination Region.
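For illustration, assuming an RTC-enabled replication rule is already in place and that the bucket name and SNS topic ARN below are placeholders, the failure-related replication events can be routed to SNS like this:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="source-compliance-bucket",  # placeholder source bucket
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:replication-alerts",
                "Events": [
                    # Published when replication fails outright
                    "s3:Replication:OperationFailedReplication",
                    # Published when an RTC-eligible object misses the 15-minute threshold
                    "s3:Replication:OperationMissedThreshold",
                ],
            }
        ]
    },
)
```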
Incorrect options:
Enable S3 Replication with Notification, which allows you to set up notifications for
objects that failed replication - This is a made-up option and given only as a distractor.
Use Amazon Simple Queue Service (Amazon SQS) queue to copy objects from one S3
bucket to the other. If replication fails, messages in the queue can be configured to
send notifications using SNS - Amazon S3 offers a native replication feature, so building
custom replication logic around SQS does not make sense.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-time-
control.html#using-s3-events-to-track-rtc
Question 62:
Skipped
A SysOps Administrator was asked to enable versioning on an Amazon S3 bucket after
a few objects were accidentally deleted by the development team.
Which of the following represent valid scenarios when a developer deletes an object in
the versioning-enabled bucket? (Select two)
A delete marker is set on the deleted object, but the actual object is not
deleted
(Correct)
GET requests do not retrieve delete marker objects
(Correct)
The delete marker has the same data associated with it, as the actual object
A delete marker has a key, version ID and Access Control List (ACL) associated
with it
Explanation
Correct options:
A delete marker is set on the deleted object, but the actual object is not deleted - A
delete marker in Amazon S3 is a placeholder (or marker) for a versioned object that was
named in a simple DELETE request. Because the object is in a versioning-enabled
bucket, the object is not deleted. But the delete marker makes Amazon S3 behave as if
it is deleted. A delete marker has a key name (or key) and version ID like any other
object. It does not have data associated with it. It is not associated with an access
control list (ACL) value.
GET requests do not retrieve delete marker objects - The only way to list delete
markers (and other versions of an object) is by using the versions subresource in a GET
Bucket versions request. A simple GET does not retrieve delete marker objects.
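For illustration (the bucket name and prefix are placeholders), listing object versions with boto3 surfaces the delete markers separately from the still-recoverable object versions:

```python
import boto3

s3 = boto3.client("s3")

response = s3.list_object_versions(Bucket="versioned-bucket", Prefix="reports/")

# Delete markers appear here, not in a plain list_objects_v2 call
for marker in response.get("DeleteMarkers", []):
    print("delete marker:", marker["Key"], marker["VersionId"], marker["IsLatest"])

# The previous object versions are still present and recoverable
for version in response.get("Versions", []):
    print("version:", version["Key"], version["VersionId"])
```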
Incorrect options:
A delete marker has a key, version ID and Access Control List (ACL) associated with it
The delete marker has the same data associated with it, as the actual object
Both of these options contradict the explanation provided above, so they are incorrect.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html
Question 63:
Skipped
A startup uses Amazon S3 buckets for storing their customer data. The company has
defined different retention periods for different objects present in their Amazon S3
buckets, based on the compliance requirements. But, the retention rules do not seem to
work as expected.
Which of the following points are important to remember when configuring retention
periods for objects in Amazon S3 buckets (Select two)?
Different versions of a single object can have different retention modes and
periods
(Correct)
You cannot place a retention period on an object version through a bucket
default setting
The bucket default settings will override any explicit retention mode or
period you request on an object version
When you apply a retention period to an object version explicitly, you specify
a Retain Until Date for the object version
(Correct)
When you use bucket default settings, you specify a Retain Until Date for the
object version
Explanation
Correct options:
When you apply a retention period to an object version explicitly, you specify a Retain
Until Date for the object version - You can place a retention period on an object
version either explicitly or through a bucket default setting. When you apply a retention
period to an object version explicitly, you specify a Retain Until Date for the object
version. Amazon S3 stores the Retain Until Date setting in the object version's metadata
and protects the object version until the retention period expires.
Different versions of a single object can have different retention modes and periods -
Like all other Object Lock settings, retention periods apply to individual object versions.
Different versions of a single object can have different retention modes and periods.
For example, suppose that you have an object that is 15 days into a 30-day retention
period, and you PUT an object into Amazon S3 with the same name and a 60-day
retention period. In this case, your PUT succeeds, and Amazon S3 creates a new version
of the object with a 60-day retention period. The older version maintains its original
retention period and becomes deletable in 15 days.
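For illustration (the bucket, key, retention mode, and date are assumptions, and Object Lock must have been enabled when the bucket was created), applying an explicit retention period means specifying a Retain Until Date:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

s3.put_object_retention(
    Bucket="compliance-bucket",              # placeholder bucket
    Key="records/customer-data.csv",         # placeholder key
    Retention={
        "Mode": "GOVERNANCE",
        # Explicit retention: the object version is protected until this date
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)
```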
Incorrect options:
You cannot place a retention period on an object version through a bucket default
setting - You can place a retention period on an object version either explicitly or
through a bucket default setting.
When you use bucket default settings, you specify a Retain Until Date for the object
version - When you use bucket default settings, you don't specify a Retain Until Date.
Instead, you specify a duration, in either days or years, for which every object version
placed in the bucket should be protected.
The bucket default settings will override any explicit retention mode or period you
request on an object version - If your request to place an object version in a bucket
contains an explicit retention mode and period, those settings override any bucket
default settings for that object version.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
Question 64:
Skipped
A large IT company uses several AWS accounts for the different lines of business. Quite
often, the systems administrator is faced with the problem of sharing Customer Master
Keys (CMKs) across multiple AWS accounts for accessing AWS resources spread
across these accounts.
How can the CMKs be shared across AWS accounts?
Use AWS KMS service-linked roles to share access across AWS accounts
AWS Owned CMK can be used across AWS accounts. Configure an AWS Owned
CMK and use it across accounts that need to share the key material
Declare a key policy for the CMK to give the external account permission to
use the CMK. This key policy should be embedded with the first request of
every transaction
The key policy for the CMK must give the external account (or users and roles
in the external account) permission to use the CMK. IAM policies in the
external account must delegate the key policy permissions to its users and
roles
(Correct)
Explanation
Correct option:
The key policy for the CMK must give the external account (or users and roles in the
external account) permission to use the CMK. IAM policies in the external account
must delegate the key policy permissions to its users and roles
You can allow IAM users or roles in one AWS account to use a customer master key
(CMK) in a different AWS account. You can add these permissions when you create the
CMK or change the permissions for an existing CMK.
To permit the usage of a CMK to users and roles in another account, you must use two
different types of policies:
1. The key policy for the CMK must give the external account (or users and roles in
the external account) permission to use the CMK. The key policy is in the account
that owns the CMK.
2. IAM policies in the external account must delegate the key policy permissions to
its users and roles. These policies are set in the external account and give
permissions to users and roles in that account.
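For illustration, a sketch of the key-policy side with placeholder account IDs and key ID (111111111111 owns the CMK, 222222222222 is the external account); the external account must still attach IAM policies to its users and roles that delegate these permissions to them.

```python
import json

import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Keep the owning account in control of the key
            "Sid": "Allow administration by the key-owning account",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Grant the external account permission to use the key
            "Sid": "Allow use of the key by the external account",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
    PolicyName="default",
    Policy=json.dumps(key_policy),
)
```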
Incorrect options:
AWS Owned CMK can be used across AWS accounts. Configure an AWS Owned CMK
and use it across accounts that need to share the key material - AWS owned CMKs are
a collection of CMKs that an AWS service owns and manages for use in multiple AWS
accounts. However, you cannot view, use, track, or audit them, so they do not solve this
requirement.
Use AWS KMS service-linked roles to share access across AWS accounts - AWS Key
Management Service uses AWS Identity and Access Management (IAM) service-linked
roles. A service-linked role is a unique type of IAM role that is linked directly to AWS
KMS. The service-linked roles are defined by AWS KMS and include all the permissions
that the service requires to call other AWS services on your behalf. You cannot use AWS
KMS service-linked roles to share access across AWS accounts.
Declare a key policy for the CMK to give the external account permission to use the
CMK. This key policy should be embedded with the first request of every transaction -
A key policy is attached to the CMK itself and resides in the account that owns the key;
it cannot be embedded with individual requests or shared directly across accounts.
References:
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-
external-accounts.html
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-
cmk
Question 65:
Skipped
After a developer mistakenly shut down a test instance, the Team Lead has decided
to configure termination protection on all the instances. As a systems administrator,
you have been tasked to review termination protection and check its viability for the
given requirements.
Which of the following choices are correct about Amazon EC2 instance termination
protection? (Select two)
You can't enable termination protection for Spot Instances
(Correct)
To prevent instances that are part of an Auto Scaling group from terminating
on scale in, use instance protection
(Correct)
Explanation
Correct options:
You can't enable termination protection for Spot Instances - You can't enable
termination protection for Spot Instances—a Spot Instance is terminated when the Spot
price exceeds the amount you're willing to pay for Spot Instances. However, you can
prepare your application to handle Spot Instance interruptions.
To prevent instances that are part of an Auto Scaling group from terminating on scale
in, use instance protection - The DisableApiTermination attribute does not prevent
Amazon EC2 Auto Scaling from terminating an instance. For instances in an Auto
Scaling group, use the following Amazon EC2 Auto Scaling features instead of Amazon
EC2 termination protection:
1. To prevent instances that are part of an Auto Scaling group from terminating on
scale in, use instance protection.
2. To prevent Amazon EC2 Auto Scaling from terminating unhealthy instances,
suspend the ReplaceUnhealthy process.
3. To specify which instances Amazon EC2 Auto Scaling should terminate first,
choose a termination policy.
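For illustration (the instance IDs and Auto Scaling group name are placeholders), the two protections are configured through different APIs:

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# API/console termination protection for a standalone instance
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",          # placeholder instance
    DisableApiTermination={"Value": True},
)

# Scale-in protection for instances that belong to an Auto Scaling group
autoscaling.set_instance_protection(
    AutoScalingGroupName="payroll-asg",        # placeholder group name
    InstanceIds=["i-0fedcba9876543210"],       # placeholder instance
    ProtectedFromScaleIn=True,
)
```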
Incorrect options:
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/terminating-
instances.html#Using_ChangingDisableAPITermination