AWS-DevOps-Engineer-Professional
Exam    : AWS-DevOps-Engineer-Professional
Vendor : Amazon
Version : DEMO
NO.2 Which of these techniques enables the fastest possible rollback times in the event of a failed
deployment?
A. Rolling; Immutable
B. Rolling; Mutable
C. Canary or A/B
D. Blue-Green
Answer: D
Explanation:
AWS specifically recommends Blue-Green for fast, zero-downtime deployments, and therefore for the fastest rollbacks, since rolling back is simply redeploying the old version. You use various strategies to migrate traffic from your current application stack (blue) to a new version of the application (green). This is a popular technique for deploying applications with zero downtime.
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
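To make the speed of a Blue-Green rollback concrete, here is a minimal boto3 sketch of the cutover step using Route53 weighted records; the hosted zone ID, record name, and ELB DNS names are hypothetical placeholders. Rollback is the same call with green_weight=0.

```python
# Blue-Green cutover sketch using boto3 and Route53 weighted records.
# Zone ID, record name, and ELB DNS names are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

def shift_traffic(zone_id, record_name, blue_dns, green_dns, green_weight):
    """Move weighted traffic between the blue and green ELBs.

    green_weight is 0-255; blue gets the remainder, so a failed deploy is
    rolled back instantly by calling this again with green_weight=0.
    """
    changes = []
    for set_id, dns, weight in (
        ("blue", blue_dns, 255 - green_weight),
        ("green", green_dns, green_weight),
    ):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,  # short TTL so clients pick up the switch quickly
                "ResourceRecords": [{"Value": dns}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )

# Cut over fully to green; pass green_weight=0 to roll back.
shift_traffic("Z123EXAMPLE", "api.example.com.",
              "blue-elb.us-east-1.elb.amazonaws.com",
              "green-elb.us-east-1.elb.amazonaws.com", 255)
```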
NO.3 Your system automatically provisions Elastic IPs (EIPs) for EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once, and each VPC uses two EIPs. On your new AWS account, your attempt to create a Development environment failed after you successfully created Staging and Production environments in the same region. What happened?
A. You didn't choose the Development version of the AMI you are using.
B. You didn't set the Development flag to true when deploying EC2 instances.
C. You hit the soft limit of 5 EIPs per region and requested a 6th.
D. You hit the soft limit of 2 VPCs per region and requested a 3rd.
Answer: C
Explanation:
There is a soft limit of 5 EIPs per region for VPC on new accounts. Staging and Production already consume four EIPs (two per VPC), so creating the Development environment requests a fifth and a sixth; the sixth allocation exceeds the limit and fails.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_vpc
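For illustration, a short boto3 sketch of how this failure surfaces at provisioning time; the region and the handling are assumptions, but AddressLimitExceeded is the error code EC2 returns when the EIP limit is reached.

```python
# Sketch: allocating a VPC Elastic IP and surfacing the per-region soft limit.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

try:
    address = ec2.allocate_address(Domain="vpc")
    print("Allocated", address["PublicIp"])
except ClientError as err:
    if err.response["Error"]["Code"] == "AddressLimitExceeded":
        # The failure in the question: the 6th allocation against a
        # default limit of 5 EIPs per region on a new account.
        print("EIP soft limit reached; request a limit increase.")
    else:
        raise
```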
NO.5 You need to grant a vendor access to your AWS account. They need to be able to read protected messages in a private S3 bucket at their leisure. They also use AWS. What is the best way to accomplish this?
A. Create an IAM User with API Access Keys. Grant the User permissions to access the bucket. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the User.
B. Create an EC2 Instance Profile on your account. Grant the associated IAM role full access to the
bucket. Start an EC2 instance with this Profile and give SSH access to the instance to the vendor.
C. Create a cross-account IAM Role with permission to access the bucket, and grant permission to use
the Role to the vendor AWS account.
D. Generate a signed S3 GET URL and a signed S3 PUT URL, both with wildcard values and 2-year durations. Pass the URLs to the vendor.
Answer: C
Explanation:
When third parties require access to your organization's AWS resources, you can use roles to delegate access to them. For example, a third party might provide a service for managing your AWS resources. With IAM roles, you can grant these third parties access to your AWS resources without sharing your AWS security credentials. Instead, the third party can access your AWS resources by assuming a role that you create in your AWS account.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
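A minimal boto3 sketch of the cross-account pattern; the account IDs, role and bucket names, and external ID are hypothetical, and the external ID is an optional extra check commonly recommended for third-party access.

```python
# Cross-account role sketch: your account creates a role that trusts the
# vendor's account; the vendor assumes it to read the private bucket.
# Account IDs, names, and the external ID are hypothetical placeholders.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # vendor account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-xyz"}},
    }],
}
iam.create_role(RoleName="VendorS3ReadRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(
    RoleName="VendorS3ReadRole",
    PolicyName="ReadPrivateBucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-private-bucket",
                         "arn:aws:s3:::my-private-bucket/*"],
        }],
    }),
)

# Vendor side (run under the vendor's own credentials): exchange the
# role for temporary credentials, then read the bucket.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/VendorS3ReadRole",
    RoleSessionName="vendor-read",
    ExternalId="vendor-xyz",
)["Credentials"]
s3 = boto3.client("s3",
                  aws_access_key_id=creds["AccessKeyId"],
                  aws_secret_access_key=creds["SecretAccessKey"],
                  aws_session_token=creds["SessionToken"])
```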
NO.6 You were just hired as a DevOps Engineer for a startup. Your startup uses AWS for 100% of their infrastructure. They currently have no deployment automation at all, and they have had many failures while trying to deploy to production. The company has told you that deployment process risk mitigation is the most important thing now, and you have a large budget for tools and AWS resources.
Their stack:
2-tier API
Data stored in DynamoDB or S3, depending on type
Compute layer is EC2 in Auto Scaling Groups
They use Route53 for DNS pointing to an ELB
An ELB balances load across the EC2 instances
The scaling group properly varies between 4 and 12 EC2 servers.
Which of the following approaches, given this company's stack and their priorities, best meets the
company's needs?
A. Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments. Use
Elastic Beanstalk's Rolling Deploy option to progressively roll out application code changes when
promoting across environments.
B. Model the stack in 3 CloudFormation templates: data layer, compute layer, and networking layer. Write stack deployment and integration testing automation following Blue-Green methodologies.
C. Model the stack in AWS OpsWorks as a single Stack, with 1 compute layer and its associated ELB.
Use Chef and App Deployments to automate Rolling Deployment.
D. Model the stack in 1 CloudFormation template, to ensure consistency and dependency graph
resolution. Write deployment and integration testing automation following Rolling Deployment
methodologies.
Answer: B
Explanation:
AWS recommends Blue-Green for zero-downtime deploys. Since the stack uses DynamoDB, and neither AWS OpsWorks nor AWS Elastic Beanstalk directly supports DynamoDB, the option selecting CloudFormation and Blue-Green is correct.
You use various strategies to migrate traffic from your current application stack (blue) to a new version of the application (green). This is a popular technique for deploying applications with zero downtime. Deployment services like AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks are particularly useful, as they provide a simple way to clone your running application stack. You can set up a new version of your application (green) by simply cloning the current version of the application (blue).
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
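To make the Blue-Green-with-CloudFormation idea concrete, here is a hedged boto3 sketch in which the compute layer lives in its own template, so the green fleet is just a second stack created from it; the template URL and parameter names are hypothetical.

```python
# Blue-Green via CloudFormation sketch: "green" is a second compute-layer
# stack created from the same template. Template URL and parameter names
# are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation")

def launch_green(version):
    """Clone the compute layer as a new (green) stack for `version`."""
    stack_name = f"api-compute-green-{version}"
    cfn.create_stack(
        StackName=stack_name,
        TemplateURL="https://s3.amazonaws.com/my-templates/compute-layer.yaml",
        Parameters=[
            {"ParameterKey": "AppVersion", "ParameterValue": version},
            {"ParameterKey": "MinSize", "ParameterValue": "4"},
            {"ParameterKey": "MaxSize", "ParameterValue": "12"},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )
    # Block until the green stack is healthy; integration tests would run
    # against the green ELB here, before any traffic is shifted.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return stack_name

# After tests pass, Route53 is repointed from the blue ELB to the green
# ELB (see NO.2); rollback is repointing back and deleting the green stack.
```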
NO.7 Your system uses a multi-master, multi-region DynamoDB configuration spanning two regions to achieve high availability. For the first time since launching your system, one of the AWS Regions in which you operate went down for 3 hours, and the failover worked correctly. However, after recovery, your users are experiencing strange bugs in which users on different sides of the globe see different data. What is a likely design issue that was not accounted for when launching?
A. The system does not have Lambda Functor Repair Automatons to perform table scans and check for corrupted partition blocks inside the Table in the recovered Region.
B. The system did not implement DynamoDB Table Defragmentation for restoring partition performance in the Region that experienced an outage, so data is served stale.
C. The system did not include repair logic and request replay buffering logic for post-failure, to
re-synchronize data to the Region that was unavailable for a number of hours.
D. The system did not use DynamoDB Consistent Read requests, so the requests in different areas are
not utilizing consensus across Regions at runtime.
Answer: C
Explanation:
When using multi-region DynamoDB systems, it is of paramount importance to make sure that all requests made to one Region are replicated to the other. Under normal operation, the system in question would correctly perform write replays into the other Region. If a whole Region went down, the system would be unable to perform these writes for the period of downtime. Without buffering write requests somehow, there would be no way for the system to replay dropped cross-region writes, and the requests would be serviced differently depending on the Region from which they were served after recovery.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
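One plausible shape for the missing buffering-and-replay logic, sketched with boto3 and SQS; the queue URL, table name, and wrapper functions are hypothetical assumptions, not part of the question's system.

```python
# Sketch: buffer cross-region DynamoDB writes during a Region outage and
# replay them after recovery. Queue URL and helpers are hypothetical.
import json
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

sqs = boto3.client("sqs", region_name="us-east-1")
REPLAY_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/ddb-replay"

def replicate_write(remote_ddb, table, item):
    """Apply a write to the remote Region; buffer it if the Region is down."""
    try:
        remote_ddb.put_item(TableName=table, Item=item)
    except (EndpointConnectionError, ClientError):
        # Remote Region unreachable: park the write for later replay
        # instead of dropping it, which is what caused the divergent reads.
        sqs.send_message(QueueUrl=REPLAY_QUEUE,
                         MessageBody=json.dumps({"table": table, "item": item}))

def replay_buffered_writes(remote_ddb):
    """Drain the buffer once the failed Region has recovered."""
    while True:
        resp = sqs.receive_message(QueueUrl=REPLAY_QUEUE,
                                   MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            body = json.loads(msg["Body"])
            remote_ddb.put_item(TableName=body["table"], Item=body["item"])
            sqs.delete_message(QueueUrl=REPLAY_QUEUE,
                               ReceiptHandle=msg["ReceiptHandle"])
```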
NO.8 You run a clustered NoSQL database on AWS EC2 using AWS EBS. You need to reduce latency for database response times. Performance is the most important concern, not availability. You did not perform the initial setup; someone without much AWS knowledge did, so you are not sure whether they configured everything optimally. Which of the following is NOT likely to be an issue contributing to increased latency?
A. The EC2 instances are not EBS Optimized.
B. The database and requesting system are both in the wrong Availability Zone.
C. The EBS Volumes are not using PIOPS.
D. The database is not running in a placement group.
Answer: B
Explanation:
For the highest possible performance, all instances in a clustered database like this one should be in a single Availability Zone, in a placement group, using EBS-optimized instances and PIOPS SSD EBS Volumes. Which particular Availability Zone the system runs in should not matter, as long as it is the same one the requesting resources are in.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
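Putting the explanation's checklist into code, a boto3 sketch that launches an EBS-optimized instance in a cluster placement group with a Provisioned IOPS (io1) volume; the AMI ID, instance type, and sizes are hypothetical placeholders.

```python
# Latency-oriented setup sketch: cluster placement group, EBS-optimized
# instance, and a Provisioned IOPS SSD volume. Identifiers are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster strategy packs instances close together in one AZ for
# low-latency, high-throughput networking between cluster nodes.
ec2.create_placement_group(GroupName="nosql-cluster", Strategy="cluster")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,                         # dedicated EBS bandwidth
    Placement={"GroupName": "nosql-cluster"},  # same AZ as the cluster
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvdb",
        "Ebs": {
            "VolumeType": "io1",               # Provisioned IOPS SSD
            "Iops": 4000,
            "VolumeSize": 500,
            "DeleteOnTermination": True,
        },
    }],
)
print(resp["Instances"][0]["InstanceId"])
```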