AWS Solution Architect Associate Dump4
NEW QUESTION 1
You are trying to launch an EC2 instance, however the instance goes into a terminated status immediately. Which of the following would probably NOT be a reason that this is happening?

A. The AMI is missing a required part.
B. The snapshot is corrupt.
C. You need to create storage in EBS first.
D. You've reached your volume limit.

Answer: C
NEW QUESTION 3
To specify a resource in a policy statement, in Amazon EC2, can you use its Amazon Resource Name (ARN)?

A. Yes, you can.
B. No, you can't because EC2 is not related to ARN.
C. No, you can't because you can't specify a particular Amazon EC2 resource in an IAM policy.
D. Yes, you can but only for the resources that are not affected by the action.

Answer: A

Explanation:
Some Amazon EC2 API actions allow you to include specific resources in your policy that can be created or modified by the action. To specify a resource in the statement, you need to use its Amazon Resource Name (ARN).
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-ug.pdf
NEW QUESTION 4
After you recommend Amazon Redshift to a client as an alternative to paying for a data warehouse to analyze his data, your client asks you to explain why you are recommending Redshift. Which of the following would be a reasonable response to his request?

A. It has high performance at scale as data and query complexity grows.
B. It prevents reporting and analytic processing from interfering with the performance of OLTP workloads.
C. You don't have the administrative burden of running your own data warehouse and dealing with setup, durability, monitoring, scaling, and patching.
D. All answers listed are a reasonable response to his question.

Answer: D

Explanation:
Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and by parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis, or any SSH-enabled host.
AWS recommends Amazon Redshift for customers who have a combination of needs, such as:
- High performance at scale as data and query complexity grows
- Desire to prevent reporting and analytic processing from interfering with the performance of OLTP workloads
- Large volumes of structured data to persist and query using standard SQL and existing BI tools
- Desire to avoid the administrative burden of running one's own data warehouse and dealing with setup, durability, monitoring, scaling and patching
Reference: https://aws.amazon.com/running_databases/#redshift_anchor
NEW QUESTION 5
Do Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?

A. Yes, they do but only if they are detached from the instance.
B. No, you cannot attach EBS volumes to an instance.
C. No, they are dependent.
D. Yes, they do.

Answer: D

Explanation:
An Amazon EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance. The volume persists independently from the running life of an Amazon EC2 instance.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html

Answer: D

Explanation:
If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html

NEW QUESTION 8
Amazon EC2 provides a ____. It is an HTTP or HTTPS request that uses the HTTP verbs GET or POST.

A. web database
B. .net framework
C. Query API
D. C library

Answer: C

Explanation:
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html

NEW QUESTION 9
You need to import several hundred megabytes of data from a local Oracle database to an Amazon RDS DB instance. What does AWS recommend you use to accomplish this?

A. Oracle export/import utilities
B. Oracle SQL Developer
C. Oracle Data Pump
D. DBMS_FILE_TRANSFER

Answer: C

Explanation:
How you import data into an Amazon RDS DB instance depends on the amount of data you have and the number and variety of database objects in your database. For example, you can use Oracle SQL Developer to import a simple, 20 MB database; you would want to use Oracle Data Pump to import complex databases, or databases that are several hundred megabytes or several terabytes in size.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
NEW QUESTION 10
You need to migrate a large amount of data into the cloud that you have stored on a hard disk, and you decide that the best way to accomplish this is with AWS Import/Export, so you mail the hard disk to AWS. Which of the following statements is incorrect in regards to AWS Import/Export?

A. It can export from Amazon S3.
B. It can import to Amazon Glacier.
C. It can export from Amazon Glacier.
D. It can import to Amazon EBS.

Answer: C

D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance only.

Answer: B

Explanation:
If you're using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you must specify a security group in the same region as the instance. You can't specify a security group that you created for a VPC when you launch an instance in EC2-Classic.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-classic-security-groups
Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as part of EC2-Classic, it will have a public IP and DNS as well as a private IP and DNS.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html

NEW QUESTION 30
You are building infrastructure for a data warehousing solution, and an extra request has come through that there will be a lot of business reporting queries running all the time and you are not sure if your current DB instance will be able to handle it. What would be the best solution for this?

Explanation:
Amazon DynamoDB integrates with AWS Identity and Access Management (IAM). You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/UsingIAMWithDDB.html

In Amazon EC2, while sharing an Amazon EBS snapshot, can snapshots with AWS Marketplace product codes be made public?

A. Yes, but only for US-based providers.
B. Yes, they can be public.
C. No, they cannot be made public.
D. Yes, they are automatically made public by the system.

Answer: C

Answer: A

Explanation:
For Amazon Web Services, web identity federation allows you to create cloud-backed mobile apps that use public identity providers, such as login with Facebook, Google, or Amazon. It will create temporary security credentials for each user, which will be authenticated by the AWS services, such as S3.
Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html
NEW QUESTION 35
You've created your first load balancer and have registered your EC2 instances with it. Elastic Load Balancing routinely performs health checks on all the registered EC2 instances and automatically distributes all incoming requests to the DNS name of your load balancer across your registered, healthy EC2 instances. By default, the load balancer uses the ____ protocol for checking the health of your instances.

A. HTTPS
B. HTTP
C. ICMP
D. IPv6

Answer: B

Explanation:
In Elastic Load Balancing, a health configuration uses information such as protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer. Currently, HTTP on port 80 is the default health check.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html
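As a hedged illustration of the health-check settings described above, the following boto3 sketch reconfigures a Classic Load Balancer health check; the load balancer name and ping path are invented, not taken from the question.

# Sketch: override the default HTTP:80 health check on a Classic Load Balancer.
import boto3

elb = boto3.client("elb")

elb.configure_health_check(
    LoadBalancerName="my-classic-elb",         # placeholder name
    HealthCheck={
        "Target": "HTTP:80/healthcheck.html",  # protocol, ping port and ping path
        "Interval": 30,                        # health check interval (seconds)
        "Timeout": 5,                          # response timeout period (seconds)
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)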
NEW QUESTION 45
You are setting up a very complex financial services grid, and so far it has 5 Elastic IP (EIP) addresses. You go to assign another EIP address, but all accounts are limited to 5 Elastic IP addresses per region by default, so you aren't able to. What is the reason for this?

A. For security reasons.
B. Hardware restrictions.
C. Public (IPv4) internet addresses are a scarce resource.
D. There are only 5 network interfaces per instance.

Answer: C

Explanation:
Public (IPv4) internet addresses are a scarce resource. There is only a limited amount of public IP space available, and Amazon EC2 is committed to helping use that space efficiently. By default, all accounts are limited to 5 Elastic IP addresses per region. If you need more than 5 Elastic IP addresses, AWS asks that you apply for your limit to be raised. They will ask you to think through your use case and help them understand your need for additional addresses.
Reference: http://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2
NEW QUESTION 36
____ is a fast, flexible, fully managed push messaging service.

A. Amazon SNS
B. Amazon SES
C. Amazon SQS
D. Amazon FPS

Answer: A

Explanation:
Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push messaging service. Amazon SNS makes it simple and cost-effective to push to mobile devices such as iPhone, iPad, Android, Kindle Fire, and internet-connected smart devices, as well as pushing to other distributed services.
Reference: http://aws.amazon.com/sns/?nc1=h_l2_as
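A minimal, illustrative boto3 sketch of pushing a message through SNS; the topic ARN and message text are placeholders, not values from the question.

# Sketch: publish a notification to an SNS topic that endpoints are subscribed to.
import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:example-topic",  # placeholder ARN
    Subject="Deployment finished",
    Message="The nightly build has been deployed.",
)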
NEW QUESTION 48
Amazon RDS provides high availability and failover support for DB instances using ____.

A. customized deployments
B. Appstream customizations
C. log events
D. Multi-AZ deployments

Answer: D

Explanation:
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
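For illustration, a boto3 sketch that creates a Multi-AZ DB instance; the identifier, instance class, and credentials are placeholders, not values from the question.

# Sketch: create a MySQL DB instance with Multi-AZ failover enabled.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-db",    # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,                         # synchronous standby in another Availability Zone
)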
NEW QUESTION 37
NEW QUESTION 50
What does Amazon DynamoDB provide?

A. A predictable and scalable MySQL database
B. A fast and reliable PL/SQL database cluster
C. A standalone Cassandra database, managed by Amazon Web Services
D. A fast, highly scalable managed NoSQL database service

Answer: D

Explanation:
Amazon DynamoDB is a managed NoSQL database service offered by Amazon. It automatically manages tasks like scalability for you while it provides high availability and durability for your data, allowing you to concentrate on other aspects of your application.
Reference: https://aws.amazon.com/running_databases/
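A small, illustrative boto3 sketch of the managed NoSQL service described above; the table name and attributes are made up for the example.

# Sketch: create a small DynamoDB table and write/read one item.
import boto3

dynamodb = boto3.resource("dynamodb")

table = dynamodb.create_table(
    TableName="PlayerProfiles",                               # placeholder table name
    KeySchema=[{"AttributeName": "PlayerId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "PlayerId", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

table.put_item(Item={"PlayerId": "p-1", "HighScore": 9001})
print(table.get_item(Key={"PlayerId": "p-1"})["Item"])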
NEW QUESTION 59
Having set up a website to automatically be redirected to a backup website if it fails, you realize that there are different types of failover possible. You need all your resources to be available the majority of the time. Using Amazon Route 53, which configuration would best suit this requirement?

A. Active-active failover.
B. None.
C. Route 53 can't failover.
D. Active-passive failover.
E. Active-active-passive and other mixed configurations.

Answer: A

Explanation:
You can set up a variety of failover configurations using Amazon Route 53 alias, weighted, latency, geolocation routing, and failover resource record sets.
Active-active failover: Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Amazon Route 53 can detect that it's unhealthy and stop including it when responding to queries.
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in response to DNS queries.
Active-active-passive and other mixed configurations: You can combine alias and non-alias resource record sets to produce a variety of Amazon Route 53 behaviors.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
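One way (among several) to express an active-active style setup is weighted records that each carry a health check, so every healthy endpoint keeps answering queries. The sketch below is illustrative only; the zone ID, IPs, and health-check IDs are placeholders.

# Sketch: two weighted records for the same name, each tied to a health check.
import boto3

route53 = boto3.client("route53")

def weighted_record(set_id, ip, health_check_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": 50,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id,   # unhealthy records stop being returned
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",                 # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        weighted_record("site-a", "203.0.113.10", "hc-placeholder-1"),
        weighted_record("site-b", "203.0.113.20", "hc-placeholder-2"),
    ]},
)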
NEW QUESTION 60
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. What formatting is required for this template?

A. JSON-formatted document
B. CSS-formatted document
C. XML-formatted document
D. HTML-formatted document

Answer: A

Explanation:
You can write an AWS CloudFormation template (a JSON-formatted document) in a text editor or pick an existing template. The template describes the resources you want and their settings. For example, suppose you want to create an Amazon EC2 instance. Your template can declare an Amazon EC2 instance and describe its properties, as shown in the following example:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A simple Amazon EC2 instance",
  "Resources" : {
    "MyEC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-2f726546",
        "InstanceType" : "t1.micro"
      }
    }
  }
}
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html
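A minimal boto3 sketch of launching a JSON template like the one above as a stack; the file name and stack name are placeholders.

# Sketch: create a CloudFormation stack from a local JSON template file.
import boto3

cfn = boto3.client("cloudformation")

with open("simple-ec2-instance.json") as f:    # placeholder file name
    template_body = f.read()

cfn.create_stack(
    StackName="simple-ec2-instance",           # placeholder stack name
    TemplateBody=template_body,
)
cfn.get_waiter("stack_create_complete").wait(StackName="simple-ec2-instance")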
NEW QUESTION 62
True or False: In Amazon Route 53, you can create a hosted zone for a top-level domain (TLD).

A. FALSE
B. False, Amazon Route 53 automatically creates it for you.
C. True, only if you send an XML document with a CreateHostedZoneRequest element for TLD.
D. TRUE

Answer: A

Explanation:
Instances that you launch into a default subnet receive both a public IP address and a private IP address. Instances in a default subnet also receive both public and private DNS hostnames. Instances that you launch into a nondefault subnet in a default VPC don't receive a public IP address or a DNS hostname. You can change your subnet's default public IP addressing behavior.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
NEW QUESTION 67
An existing client comes to you and says that he has heard that launching instances into a VPC (virtual private cloud) is a better strategy than launching instances into EC2-Classic, which he knows is what you currently do. You suspect that he is correct, and he has asked you to do some research about this and get back to him. Which of the following statements is true in regards to what ability launching your instances into a VPC instead of EC2-Classic gives you?

A. All of the things listed here.
B. Change security group membership for your instances while they're running
C. Assign static private IP addresses to your instances that persist across starts and stops
D. Define network interfaces, and attach one or more network interfaces to your instances

Answer: A

Explanation:
By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:
- Assign static private IP addresses to your instances that persist across starts and stops
- Assign multiple IP addresses to your instances
- Define network interfaces, and attach one or more network interfaces to your instances
- Change security group membership for your instances while they're running
- Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
- Add an additional layer of access control to your instances in the form of network access control lists (ACLs)
- Run your instances on single-tenant hardware
Reference: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf
"Description" : "A simple Amazon EC2 instance", "Resources" : {
"MyEC2Instance" : {
"Type" : "AWS::EC2::Instance", "Properties" : { NEW QUESTION 72
"Image|d" : "ami-2f726546", "|nstanceType" : "t1.micro" You need to set up a high level of security for an Amazon Relational Database Service (RDS) you have just built in order to protect the confidential information
} stored in it. What are all the possible security groups that RDS uses?
A. DB security groups, VPC security groups, and EC2 security groups.
B. DB security groups only.
C. EC2 security groups only.
D. VPC security groups, and EC2 security groups.

Answer: A

Explanation:
A security group controls the access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify. Amazon RDS uses DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance inside a VPC, and an Amazon EC2 security group controls access to an EC2 instance and can be used with a DB instance.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

A. Attach one more volume with RAID 1 configuration.
B. Attach one more volume with RAID 0 configuration.
C. Connect multiple volumes and stripe them with RAID 6 configuration.
D. Use the EBS volume as a root device.

Answer: A

Explanation:
The user can join multiple provisioned IOPS volumes together in a RAID 1 configuration to achieve better fault tolerance. RAID 1 does not provide a write performance improvement; it requires more bandwidth than non-RAID configurations since the data is written simultaneously to multiple volumes.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
NEW QUESTION 76
A user has created an application which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data. The application uses the DynamoDB SDK to connect from the EC2 instance. Which of the below mentioned statements is true with respect to the best practice for security in this scenario?

A. The user should create an IAM user with DynamoDB access and use its credentials within the application to connect with DynamoDB.
B. The user should attach an IAM role with DynamoDB access to the EC2 instance.
C. The user should create an IAM role which has EC2 access so that it will allow deploying the application.
D. The user should create an IAM user with DynamoDB and EC2 access.
E. Attach the user with the application so that it does not use the root account credentials.

Answer: B

Explanation:
With AWS IAM, a user is creating an application which runs on an EC2 instance and makes requests to AWS, such as DynamoDB or S3 calls. Here it is recommended that the user should not create an IAM user and pass the user's credentials to the application or embed those credentials inside the application. Instead, the user should use roles for EC2 and give that role access to DynamoDB/S3. When the roles are attached to EC2, it will give temporary security credentials to the application hosted on that EC2 to connect with DynamoDB/S3.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html
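To illustrate the recommended pattern, here is a hedged sketch of application code running on an instance that has a role attached; no access keys appear in the code because boto3 obtains the role's temporary credentials from the instance metadata. The table name is a placeholder.

# Sketch: code on an EC2 instance with an IAM role; no embedded credentials.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("app-data")             # placeholder table name
item = table.get_item(Key={"pk": "user#42"}).get("Item")
print(item)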
NEW QUESTION 86
A user has created a subnet in a VPC and launched an EC2 instance within it. The user has not selected the option to assign the IP address while launching the instance. The user has 3 Elastic IPs and is trying to assign one of the Elastic IPs to the VPC instance from the console. The console does not show any instance in the IP assignment screen. What is a possible reason that the instance is unavailable in the assigned IP console?

A. The IP address may be attached to one of the instances
B. The IP address belongs to a different zone than the subnet zone
C. The user has not created an internet gateway
D. The IP addresses belong to EC2-Classic, so they cannot be assigned to VPC

Answer: D

Explanation:
A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. When the user is launching an instance, he needs to select an option which attaches a public IP to the instance. If the user has not selected the option to attach the public IP, then it will only have a private IP when launched. If the user wants to connect to an instance from the internet, he should create an Elastic IP with VPC. If the Elastic IP is a part of EC2-Classic, it cannot be assigned to a VPC instance.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/LaunchInstance.html
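For illustration, a boto3 sketch that allocates a VPC Elastic IP and associates it by allocation ID (EC2-Classic addresses have no allocation ID); the instance ID is a placeholder.

# Sketch: allocate a VPC Elastic IP and attach it to a VPC instance.
import boto3

ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",          # placeholder VPC instance
    AllocationId=allocation["AllocationId"],
)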
NEW QUESTION 79
After setting up several database instances in Amazon Relational Database Service (Amazon RDS), you decide that you need to track the performance and health of your databases. How can you do this?

A. Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group.
B. Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance.
C. All of the items listed will track the performance and health of a database.
D. View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs.
E. You can also query some database log files that are loaded into database tables.

Answer: C

Explanation:
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks.
There are several ways you can track the performance and health of a database or a DB instance. You can:
- Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance.
- Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group.
- View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded into database tables.
- Use the AWS CloudTrail service to record AWS calls made by your AWS account. The calls are recorded in log files and stored in an Amazon S3 bucket.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html
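As one illustrative way to monitor a DB instance programmatically, the sketch below reads RDS CPU metrics from CloudWatch; the DB instance identifier is a placeholder.

# Sketch: pull one day of average CPU utilization for a DB instance.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "example-db"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))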
NEW QUESTION 91
Select a true statement about Amazon EC2 Security Groups (EC2-Classic).

A. After you launch an instance in EC2-Classic, you can't change its security groups.
B. After you launch an instance in EC2-Classic, you can change its security groups only once.
C. After you launch an instance in EC2-Classic, you can only add rules to a security group.
D. After you launch an instance in EC2-Classic, you cannot add or remove rules from a security group.

Answer: A

Explanation:
After you launch an instance in EC2-Classic, you can't change its security groups. However, you can add rules to or remove rules from a security group, and those changes are automatically applied to all instances that are associated with the security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html
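An illustrative boto3 sketch of editing rules on an existing security group, which is permitted even though group membership can't change in EC2-Classic; the group name, ports, and CIDR ranges are placeholders.

# Sketch: add an HTTPS rule and remove an HTTP rule on an existing group.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupName="web-servers",                   # placeholder group name
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
ec2.revoke_security_group_ingress(
    GroupName="web-servers",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)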
NEW QUESTION 94
A user has created photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly. Which of the below mentioned AWS services will help make scalable software with the AWS infrastructure in this scenario?

A. AWS Simple Notification Service
B. AWS Simple Queue Service
C. AWS Elastic Transcoder
D. AWS Glacier

Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/AlarmThatSendsEmail.html

NEW QUESTION 100
Which of the following strategies can be used to control access to your Amazon EC2 instances?

A. DB security groups
B. IAM policies
C. None of these
D. EC2 security groups

Answer: D

Explanation:
IAM policies allow you to specify what actions your IAM users are allowed to perform against your EC2 instances. However, when it comes to access control, security groups are what you need in order to define and control the way you want your instances to be accessed, and whether or not certain kinds of communication are allowed.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/UsingIAM.html

NEW QUESTION 111
Select the correct statement: Within Amazon EC2, when using Linux instances, the device name /dev/sda1 is ____.

Explanation:
With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare-metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together. RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your volumes.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
NEW QUESTION 106
A user is observing the EC2 CPU utilization metric on CloudWatch. The user has observed some interesting patterns while filtering over a one-week period for a particular hour. The user wants to zoom in on that data point to a more granular period. How can the user do that easily with CloudWatch?

A. The user can zoom in on a particular period by selecting that period with the mouse and then releasing the mouse.
B. The user can zoom in on a particular period by specifying the aggregation data for that period.
C. The user can zoom in on a particular period by double-clicking on that period with the mouse.
D. The user can zoom in on a particular period by specifying the period in the Time Range.

Answer: A

Explanation:
Amazon CloudWatch provides the functionality to graph the metric data generated either by AWS services or by custom metrics to make it easier for the user to analyse. The AWS CloudWatch console provides the option to change the granularity of a graph and zoom in to see data over a shorter time period. To zoom, the user has to click in the graph details pane, drag on the graph area for selection, and then release the mouse button.
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/zoom_in_on_graph.html
NEW QUESTION 108
A scope has been handed to you to set up a super fast gaming server, and you decide that you will use Amazon DynamoDB as your database. For efficient access to data in a table, Amazon DynamoDB creates and maintains indexes for the primary key attributes. A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. How many types of secondary indexes does DynamoDB support?

A. 2
B. 16
C. 4
D. As many as you need.

Answer: A

Explanation:
DynamoDB supports two types of secondary indexes:
Local secondary index: an index that has the same hash key as the table, but a different range key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Global secondary index: an index with a hash and range key that can be different from those on the table. A global secondary index is considered "global" because queries on the index can span all of the data in a table, across all partitions.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
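A hedged boto3 sketch showing both index types on one table; all names are invented for illustration only.

# Sketch: a table with one local secondary index (same hash key, different range key)
# and one global secondary index (different hash key).
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="GameScores",                                   # placeholder
    AttributeDefinitions=[
        {"AttributeName": "PlayerId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "PlayerId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "PlayerTopScores",
        "KeySchema": [
            {"AttributeName": "PlayerId", "KeyType": "HASH"},   # same hash key as the table
            {"AttributeName": "TopScore", "KeyType": "RANGE"},  # different range key
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    GlobalSecondaryIndexes=[{
        "IndexName": "ScoresByGame",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},  # hash key differs from the table's
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)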
NEW QUESTION 123
Doug has created a VPC with CIDR 10.201.0.0/16 in his AWS account. In this VPC he has created a public subnet with CIDR block 10.201.31.0/24. While launching a new EC2 instance from the console, he is not able to assign the private IP address 10.201.31.6 to this instance. Which is the most likely reason for this issue?

A. Private IP address 10.201.31.6 is blocked via ACLs in Amazon infrastructure as a part of platform security.
B. Private IP address 10.201.31.6 is currently assigned to another interface.
C. Private IP address 10.201.31.6 is not part of the associated subnet's IP address range.
D. Private IP address 10.201.31.6 is reserved by Amazon for IP networking purposes.

Answer: B

Explanation:
In Amazon VPC, you can assign any private IP address to your instance as long as it is:
- Part of the associated subnet's IP address range
- Not reserved by Amazon for IP networking purposes
- Not currently assigned to another interface
Reference: http://aws.amazon.com/vpc/faqs/
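For illustration, a boto3 sketch that requests a specific private IP at launch; the AMI and subnet IDs are placeholders, while the address reuses the value from the question.

# Sketch: ask for a specific, unreserved, unassigned private IP inside the subnet range.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-00000000",                    # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",       # placeholder subnet (10.201.31.0/24 in the question)
    PrivateIpAddress="10.201.31.6",
)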
A. Set a target value and choose whether the alarm will trigger when the value is greater than (>), greater than or equal to (>=), less than (<), or less than or equal to (<=) that value.
B. Thresholds need to be set in IAM, not CloudWatch.
C. Only default thresholds can be set; you can't choose your own thresholds.
D. Set a target value and choose whether the alarm will trigger when the value hits this threshold.

Answer: A

Explanation:
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. When you create an alarm, you first choose the Amazon CloudWatch metric you want it to monitor. Next, you choose the evaluation period (e.g., five minutes or one hour) and a statistical value to measure (e.g., Average or Maximum). To set a threshold, set a target value and choose whether the alarm will trigger when the value is greater than (>), greater than or equal to (>=), less than (<), or less than or equal to (<=) that value.
Reference: http://aws.amazon.com/cloudwatch/faqs/

NEW QUESTION 125
Can a single EBS volume be attached to multiple EC2 instances at the same time?

A. Yes
B. No
C. Only for high-performance EBS volumes.
D. Only when the instances are located in the US region.

Answer: B

Explanation:
You can't attach an EBS volume to multiple EC2 instances. This is because it is equivalent to using a single hard drive with many computers at the same time.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
A. Japan
B. Singapore
C. US East
D. US West-1

Answer: D

Explanation:
Access to Amazon S3 from within Amazon EC2 in the same region is fast. In this aspect, though the client base is Singapore, the application is being hosted in the US West-1 region. Thus, it is recommended that S3 objects be stored in the US West-1 region.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf

A. It starts when the Status column for your distribution changes from Creating to Deployed.
B. It starts as soon as you click the create instance option on the main EC2 console.
C. It starts when your instance reaches 720 instance hours.
D. It starts when Amazon EC2 initiates the boot sequence of an AMI instance.

Answer: D

Explanation:
Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running "shutdown -h", or through instance failure. When you stop an instance, Amazon shuts it down but doesn't charge hourly usage for a stopped instance, or data transfer fees, but charges for the storage of any Amazon EBS volumes.
Reference: http://aws.amazon.com/ec2/faqs/
A. Accelerates transferring large amounts of data between the AWS cloud and portable storage devices.
B. A web service that speeds up distribution of your static and dynamic web content.
C. Connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between your on-premises IT environment and AWS's storage infrastructure.
D. Is a storage service optimized for infrequently used data, or "cold data."

Answer: C

Explanation:
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure. You can use the service to store data in the AWS cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers both volume-based and tape-based storage solutions:
- Volume gateways: Gateway-cached volumes and Gateway-stored volumes
- Gateway-virtual tape library (VTL)
Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_disasterrecovery_07.pdf
NEW QUESTION 148
An organization has a statutory requirement to protect data at rest for its S3 objects. Which of the below mentioned options need not be enabled by the organization to achieve data security?

A. MFA delete for S3 objects
B. Client side encryption
C. Bucket versioning
D. Data replication

Answer: D

Explanation:
AWS S3 provides multiple options to achieve the protection of data at rest. The options include Permission (Policy), Encryption (Client and Server Side), Bucket Versioning, and MFA-based delete. The user can enable any of these options to achieve data protection. Data replication is an internal facility by AWS where S3 replicates each object across all the Availability Zones, and the organization need not enable it in this case.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
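As an illustration of one of the encryption options above, a boto3 sketch that uploads an object with server-side encryption; the bucket, key, and file names are placeholders. Client-side encryption would instead encrypt the bytes before calling put_object.

# Sketch: upload with S3-managed server-side encryption (SSE-S3).
import boto3

s3 = boto3.client("s3")

with open("report.csv", "rb") as body:         # placeholder local file
    s3.put_object(
        Bucket="example-confidential-bucket",  # placeholder bucket
        Key="reports/report.csv",
        Body=body,
        ServerSideEncryption="AES256",
    )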
Explanation:
A private IP address is an IP address that's not reachable over the Internet. You can use private IP addresses for communication between instances in the same network (EC2-Classic or a VPC).
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
NEW QUESTION 155
A friend tells you he is being charged $100 a month to host his WordPress website, and you tell him you can move it to AWS for him and he will only pay a fraction of that, which makes him very happy. He then tells you he is being charged $50 a month for the domain, which is registered with the same people that set it up, and he asks if it's possible to move that to AWS as well. You tell him you aren't sure, but will look into it. Which of the following statements is true in regards to transferring domain names to AWS?

A. You can't transfer existing domains to AWS.
B. You can transfer existing domains into Amazon Route 53's management.
C. You can transfer existing domains via AWS Direct Connect.
D. You can transfer existing domains via AWS Import/Export.

Explanation:
Penetration tests are allowed after obtaining permission from AWS to perform them.
Reference: http://aws.amazon.com/security/penetration-testing/

NEW QUESTION 165
What happens to data on an ephemeral volume of an EBS-backed EC2 instance if it is terminated or if it fails?

A. Data is automatically copied to another volume.
B. The volume snapshot is saved in S3.
C. Data persists.
D. Data is deleted.

Answer: D

Explanation:
Any data on the instance store volumes persists as long as the instance is running, but this data is deleted when the instance is terminated or if it fails (such as if an underlying drive has issues). After an instance store-backed instance fails or terminates, it cannot be restored.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html

NEW QUESTION 167
A user is sending bulk emails using AWS SES. The emails are not reaching some of the targeted audience because they are not authorized by the ISPs. How can the user ensure that the emails are all delivered?

A. Send an email using DKIM with SES.
B. Send an email using SMTP with SES.
C. Open a ticket with AWS support to get it authorized with the ISP.
D. Authorize the ISP by sending emails from the development account.

Answer: A

Explanation:
DomainKeys Identified Mail (DKIM) is a standard that allows senders to sign their email messages, and ISPs use those signatures to verify that those messages are legitimate and have not been modified by a third party in transit.
Reference: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/dkim.html

Answer: C

Explanation:
In addition to supporting IAM user policies, some services support resource-based permissions, which let you attach policies to the service's resources instead of to IAM users or groups. Resource-based permissions are supported by Amazon S3, Amazon SNS, Amazon SQS, Amazon Glacier and Amazon EBS.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SpecificProducts.html

NEW QUESTION 170
You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) network, so you decide you should probably use the AWS Management Console and the VPC Wizard. Which of the following is not an option for network architectures after launching the "Start VPC Wizard" in the Amazon VPC page on the AWS Management Console?

A. VPC with a Single Public Subnet Only
B. VPC with a Public Subnet Only and Hardware VPN Access
C. VPC with Public and Private Subnets and Hardware VPN Access
D. VPC with a Private Subnet Only and Hardware VPN Access

Answer: B

Explanation:
Amazon VPC enables you to build a virtual network in the AWS cloud - no VPNs, hardware, or physical datacenters required. Your AWS resources are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs by going to the Amazon VPC page on the AWS Management Console and clicking the "Start VPC Wizard" button.
You'll be presented with four basic options for network architectures. After selecting an option, you can modify the size and IP address range of the VPC and its subnets. If you select an option with Hardware VPN Access, you will need to specify the IP address of the VPN hardware on your network. You can modify the VPC to add more subnets or add or remove gateways at any time after the VPC has been created.
The four options are:
- VPC with a Single Public Subnet Only
- VPC with Public and Private Subnets
- VPC with Public and Private Subnets and Hardware VPN Access
- VPC with a Private Subnet Only and Hardware VPN Access
Reference: https://aws.amazon.com/vpc/faqs/
Explanation:
To create a VPC peering connection with another VPC, you need to be aware of the following limitations and rules:
- You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.
- You cannot create a VPC peering connection between VPCs in different regions.
- You have a limit on the number of active and pending VPC peering connections that you can have per VPC.
- VPC peering does not support transitive peering relationships; in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.
- You cannot have more than one VPC peering connection between the same two VPCs at the same time.
- The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
- A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs.
- Unicast reverse path forwarding in VPC peering connections is not supported.
- You cannot reference a security group from the peer VPC as a source or destination for ingress or egress rules in your security group. Instead, reference CIDR blocks of the peer VPC as the source or destination of your security group's ingress or egress rules.
- Private DNS values cannot be resolved between instances in peered VPCs.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-overview.html#vpc-peering-limitations
NEW QUESTION 183
You are architecting a highly scalable and reliable web application which will have a huge amount of content. You have decided to use CloudFront as you know it will speed up distribution of your static and dynamic web content, and you know that Amazon CloudFront integrates with Amazon CloudWatch metrics so that you can monitor your web application. Because you live in Sydney you have chosen the Asia Pacific (Sydney) region in the AWS console. However, you have set this up but no CloudFront metrics seem to be appearing in the CloudWatch console. What is the most likely reason from the possible choices below for this?

A. Metrics for CloudWatch are available only when you choose the same region as the application you are monitoring.
B. You need to pay for CloudWatch for it to become active.
C. Metrics for CloudWatch are available only when you choose the US East (N. Virginia) region.
D. Metrics for CloudWatch are not available for the Asia Pacific region as yet.

Answer: C

Explanation:
CloudFront is a global service, and metrics are available only when you choose the US East (N. Virginia) region in the AWS console. If you choose another region, no CloudFront metrics will appear in the CloudWatch console.
Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/monitoring-using-cloudwatch.html

NEW QUESTION 186
You have been given a scope to set up an AWS Media Sharing Framework for a new start-up photo sharing company similar to Flickr. The first thing that comes to mind about this is that it will obviously need a huge amount of persistent data storage for this framework. Which of the following storage options would be appropriate for persistent storage?

A. Amazon Glacier or Amazon S3
B. Amazon Glacier or AWS Import/Export
C. AWS Import/Export or Amazon CloudFront
D. Amazon EBS volumes or Amazon S3

Answer: D

Explanation:
Persistent storage: If you need persistent virtual disk storage similar to a physical disk drive for files or other data that must persist longer than the lifetime of a single Amazon EC2 instance, Amazon EBS volumes or Amazon S3 are more appropriate.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf

Answer: C

Explanation:
You can use a NAT device to enable instances in a private subnet to connect to the Internet (for example, for software updates) or other AWS services, but prevent the Internet from initiating connections with the instances. AWS offers two kinds of NAT devices: a NAT gateway or a NAT instance. We recommend NAT gateways, as they provide better availability and bandwidth over NAT instances. The NAT Gateway service is also a managed service that does not require your administration efforts. A NAT instance is launched from a NAT AMI. You can choose to use a NAT instance for special purposes.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat.html

NEW QUESTION 195
All Amazon EC2 instances are assigned two IP addresses at launch. Which are those?

A. 2 Elastic IP addresses
B. A private IP address and an Elastic IP address
C. A public IP address and an Elastic IP address
D. A private IP address and a public IP address

Answer: D

Explanation:
In Amazon EC2-Classic every instance is given two IP addresses: a private IP address and a public IP address.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#differences

NEW QUESTION 196
A user has launched an EC2 instance. The instance got terminated as soon as it was launched. Which of the below mentioned options is not a possible reason for this?

A. The user account has reached the maximum volume limit
B. The AMI is missing
C. It is the required part
D. The snapshot is corrupt
E. The user account has reached the maximum EC2 instance limit
Answer: D

Explanation:
In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 100 rules to a security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html
NEW QUESTION 200
A user has created an ELB with Auto Scaling. Which of the below mentioned offerings from ELB helps the user to stop sending new request traffic from the load balancer to the EC2 instance when the instance is being deregistered, while continuing in-flight requests?

A. ELB sticky session
B. ELB deregistration check
C. ELB auto registration off
D. ELB connection draining

Answer: D

Explanation:
The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-conn-drain.html
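An illustrative boto3 sketch that enables connection draining on a Classic Load Balancer; the load balancer name and timeout are placeholders.

# Sketch: let in-flight requests finish while an instance deregisters.
import boto3

elb = boto3.client("elb")

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",         # placeholder name
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 300}  # seconds to keep serving in-flight requests
    },
)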
NEW QUESTION 203
A user is running a webserver on EC2. The user wants to receive an SMS when the EC2 instance utilization is above the threshold limit. Which AWS services should the user configure in this case?

A. AWS CloudWatch + AWS SQS.
B. AWS CloudWatch + AWS SNS.
C. AWS CloudWatch + AWS SES.
D. AWS EC2 + AWS CloudWatch.

Answer: B

Explanation:
Amazon SNS makes it simple and cost-effective to push to mobile devices, such as iPhone, iPad, Android, Kindle Fire, and internet-connected smart devices, as well as pushing to other distributed services. In this case, the user can configure CloudWatch to send an alarm to SNS when the threshold is crossed, which will trigger an SMS.
Reference: http://aws.amazon.com/sns/
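For illustration, a boto3 sketch wiring a CloudWatch alarm to an SNS topic with an SMS subscription; the topic name, phone number, instance ID, and threshold are all placeholders.

# Sketch: alarm on CPU utilization that publishes to an SNS topic with an SMS subscriber.
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

topic_arn = sns.create_topic(Name="cpu-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")  # placeholder number

cloudwatch.put_metric_alarm(
    AlarmName="webserver-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],    # placeholder instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)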
NEW QUESTION 204
A user is making a scalable web application with compartmentalization. The user wants the log module to be accessible by all the application functionalities in an asynchronous way. Each module of the application sends data to the log module, and based on resource availability it will process the logs. Which AWS service helps with this functionality?

A. AWS Simple Queue Service.
B. AWS Simple Notification Service.
C. AWS Simple Workflow Service.
D. AWS Simple Email Service.

Answer: A

Explanation:
Amazon Simple Queue Service (SQS) is a highly reliable distributed messaging system for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components. It is used to achieve compartmentalization or loose coupling. In this case all the modules will send a message to the logger queue, and the data will be processed from the queue as resources become available.
Reference: http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
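A hedged illustration of the loose coupling described above, assuming a queue named app-log-queue; each module enqueues its log entry and the log module drains the queue at its own pace:

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="app-log-queue")["QueueUrl"]

# Producer: any application module drops its log entry on the queue and moves on.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"module": "checkout", "level": "INFO", "event": "order placed"}),
)

# Consumer: the log module polls and processes messages when it has capacity.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    print(json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])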
NEW QUESTION 207
You have been asked to set up monitoring of your network and you have decided that CloudWatch would be the best service to use. Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. Which of the following items listed can AWS CloudWatch monitor?

A. Log files your applications generate.
B. All of the items listed on this page.
C. System-wide visibility into resource utilization, application performance, and operational health.
D. Custom metrics generated by your applications and services.

Answer: B

Explanation:
Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.
Reference: http://aws.amazon.com/cloudwatch/
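As a sketch of the "custom metrics" item, an application can publish its own metric with put_metric_data; the namespace, metric name, and dimension below are made-up examples:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish an application-level metric; CloudWatch stores and graphs it like any built-in metric.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "ActiveSessions",
        "Value": 42,
        "Unit": "Count",
        "Dimensions": [{"Name": "Environment", "Value": "production"}],
    }],
)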
D. Make a high, one-time, all-upfront payment for an instance, reserve it for a one- or three-year term, and pay a significantly higher hourly rate for these instances.

Answer: A

Explanation:
On-Demand instances allow you to pay for the instances that you use by the hour, with no long-term commitments or up-front payments.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-offerings.html

NEW QUESTION 220
You have a Business support plan with AWS. One of your EC2 instances is running Microsoft Windows Server 2008 R2 and you are having problems with the software. Can you receive support from AWS for this software?

A. Yes
B. No, AWS does not support any third-party software.
C. No, Microsoft Windows Server 2008 R2 is not supported.
D. No, you need to be on the Enterprise support plan.

Answer: A

Explanation:
Third-party software support is available only to AWS Support customers enrolled for Business or Enterprise Support. Third-party support applies only to software running on Amazon EC2 and does not extend to assisting with on-premises software. An exception to this is a VPN tunnel configuration running supported devices for Amazon VPC.
Reference: https://aws.amazon.com/premiumsupport/features/

NEW QUESTION 225
A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?

A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time.
B. App servers: share state using a combination of DynamoDB and IP unicast.
C. Database: use RDS with Multi-AZ deployment and one or more read replicas.
D. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
E. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time.
F. App servers: share state using a combination of DynamoDB and IP multicast.
G. Database: use RDS with Multi-AZ deployment and one or more read replicas.
H. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
I. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time.
J. App servers: share state using a combination of DynamoDB and IP unicast.
K. Database: use RDS with Multi-AZ deployment and one or more read replicas.
L. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
M. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time.
N. App servers: share state using a combination of DynamoDB and IP unicast.
O. Database: use RDS with Multi-AZ deployment.
P. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

Answer: C

Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Benefits
Enhanced Durability
Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.
Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability
You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
No Administrative Intervention
DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
Failover conditions
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following: loss of availability in the primary Availability Zone, loss of network connectivity to the primary, compute unit failure on the primary, or storage failure on the primary.
Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
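A minimal boto3 sketch of provisioning the kind of Multi-AZ MySQL instance discussed above; the identifier, instance class, storage size, and credentials are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# MultiAZ=True asks RDS to maintain a synchronous standby in another AZ
# and to fail over to it automatically if the primary becomes unavailable.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
    BackupRetentionPeriod=7,
)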
NEW QUESTION 228
Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe, and the USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database.
In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?

A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process

Answer: A
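Answer A relies on RDS cross-region read replicas. A hedged sketch, assuming an existing source instance in eu-west-1 referenced by its ARN; the identifiers and account number are placeholders:

import boto3

# Create the replica in the HQ region (Tokyo); the source is referenced by ARN.
rds_hq = boto3.client("rds", region_name="ap-northeast-1")

rds_hq.create_db_instance_read_replica(
    DBInstanceIdentifier="logistics-replica-hq",
    SourceDBInstanceIdentifier="arn:aws:rds:eu-west-1:123456789012:db:logistics-eu",
    DBInstanceClass="db.m4.large",
)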
NEW QUESTION 231
A customer has a 10 GB AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model.
The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?

A. Use an Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.
B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.

Answer: A

Explanation:
Reference: https://aws.amazon.com/blogs/aws/category/amazon-elastic-map-reduce/
NEW QUESTION 235
You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)

A. Route 53 Record Sets
B. IAM Roles
C. Elastic IP Addresses (EIP)
D. EC2 Key Pairs
E. Launch configurations
F. Security Groups

Answer: AC

Explanation:
Reference: http://tech.com/wp-content/themes/optimize/download/AWSDisaster_Recovery.pdf (page 6)
NEW QUESTION 237
An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation.
The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements.
Which design would you choose to meet these requirements?

A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day;
B. create a 'Lastupdated' attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
C. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
D. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region.
E. Also send each write into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region.

Answer: A
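The 'Lastupdated' filter in answer A can be sketched with a filtered scan that replays only recently modified items into the table in the second region. Table names, regions, and the attribute name are assumptions; pagination is omitted for brevity:

import boto3
from boto3.dynamodb.conditions import Attr

src = boto3.resource("dynamodb", region_name="us-east-1").Table("app-data")
dst = boto3.resource("dynamodb", region_name="us-west-2").Table("app-data-dr")

cutoff = "2015-06-01T00:00:00Z"  # timestamp of the previous synchronization run

# Scan only items modified since the last run, then replay them into the DR table.
resp = src.scan(FilterExpression=Attr("last_updated").gte(cutoff))
with dst.batch_writer() as batch:
    for item in resp["Items"]:
        batch.put_item(Item=item)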
NEW QUESTION 240
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months.
Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the Elastic Beanstalk instances to send emails to customers.
B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1; use SES to send emails to customers.
D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them.
E. Use SES to send emails to customers.

Answer: C

NEW QUESTION 245
A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API.
How should they architect their solution?

A. Route payment requests through two NAT instances set up for High Availability and whitelist the Elastic IP addresses attached to the NAT instances.
B. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway.
C. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB.
D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.

Answer: D
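Answer D, as given, depends on each instance registering its own public IP at boot. A rough sketch of such a boot script follows; the payment provider's whitelist endpoint and authentication are hypothetical, and the metadata call uses the simple IMDSv1 form:

import json
import urllib.request

# The instance metadata service exposes the public IP assigned at launch.
METADATA_URL = "http://169.254.169.254/latest/meta-data/public-ipv4"
public_ip = urllib.request.urlopen(METADATA_URL, timeout=2).read().decode()

# Hypothetical whitelist API exposed by the payment provider.
req = urllib.request.Request(
    "https://payments.example.com/v1/whitelist",
    data=json.dumps({"ip": public_ip}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)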
NEW QUESTION 249
You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider.
What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?

A. Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.
B. Create a private interface on your AWS Direct Connect link.
C. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.
D. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS.
E. Create a private interface on your AWS Direct Connect link.
F. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.

Answer: C

NEW QUESTION 253
You've been brought in as solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. The configuration is as follows:
VPC: vpc-2f8bc447 IGW: igw-2d8bc445 NACL: acl-208bc448
Subnets and Route Tables: Web servers: subnet-258bc44d Application servers: subnet-248bc44c Database servers: subnet-9189c6f9
Route Tables: rtb-218bc449 rtb-238bc44b
Associations: subnet-258bc44d : rtb-218bc449 subnet-248bc44c : rtb-238bc44b subnet-9189c6f9 : rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet. Application and database servers cannot have direct access to the internet.
Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance.
B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
C. Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d.
D. Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.

Answer: A
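The routing change in answer A can be sketched as follows, assuming the NAT instance has already been launched in subnet-258bc44d; the instance ID is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Point the private route table's default route at the NAT instance so the
# application and database subnets reach the Internet through it.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0abc1234def567890",  # NAT instance in subnet-258bc44d
)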
NEW QUESTION 255
Your company has an on-premises multi-tier PHP web application which recently experienced downtime due to a large burst in web traffic following a company announcement. Over the coming days you are expecting similar announcements to drive similar unpredictable bursts, and you are looking for ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic.
The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server hosting a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?

A. Failover environment: Create an S3 bucket and configure it for website hosting.
B. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to fail over to the S3-hosted website.
C. Hybrid environment: Create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on incoming traffic.
D. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
E. Offload traffic from the on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin.
F. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.
G. Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI.
H. Create an Auto Scaling group which uses the imported AMI to scale the web tier based on incoming traffic.
I. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.

Answer: C

NEW QUESTION 257
Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS rear-view cams, and any other technical innovation they wish to embed in the helmet.
The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA, across a cluster of servers with low-latency networking.
What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?

A. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group.
B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an auto-scaling group of G2 instances in a placement group.
C. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
D. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

Answer: B

NEW QUESTION 260
Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant.
How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

A. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue.
B. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days.
C. CloudFront to serve HLS transcoded videos from EC2.
D. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS.
E. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days.
F. CloudFront to serve HLS transcoded videos from EC2.
G. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS.
H. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days.
I. CloudFront to serve HLS transcoded videos from S3.
J. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue.
K. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days.
L. CloudFront to serve HLS transcoded videos from Glacier.

Answer: C
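For the Elastic Transcoder piece of the chosen answer, a job submission might look roughly like this; the pipeline ID and HLS preset ID are placeholders, and a pipeline wired to the source and output S3 buckets is assumed to exist already:

import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# Transcode one uploaded MP4 into HLS segments under an output prefix.
transcoder.create_job(
    PipelineId="1111111111111-abcde1",          # placeholder pipeline ID
    Input={"Key": "uploads/training-video.mp4"},
    OutputKeyPrefix="hls/training-video/",
    Outputs=[{
        "Key": "video",
        "PresetId": "1351620000001-200010",     # placeholder HLS system preset
        "SegmentDuration": "10",
    }],
)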
You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?

A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.

Answer: A
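A minimal sketch of answer A's trail, assuming the target bucket already carries the CloudTrail bucket policy; the names are placeholders:

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# New trail in a dedicated bucket; global service events capture IAM activity
# alongside EC2 and RDS API calls.
cloudtrail.create_trail(
    Name="compliance-trail",
    S3BucketName="my-cloudtrail-logs-123456789012",
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name="compliance-trail")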
NEW QUESTION 266
An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require any outside access to their environment to conform to the principle of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party.
Which of the following would meet all of these conditions?

A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
B. Create an IAM user within the enterprise account; assign a user policy to the IAM user that allows only the actions required by the SaaS application; create a new access and secret key for the user and provide these credentials to the SaaS provider.
C. Create an IAM role for cross-account access; allow the SaaS provider's account to assume the role and assign it a policy that allows only the actions required by the SaaS application.
D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.

Answer: C

Explanation:
Granting Cross-account Permission to Objects It Does Not Own
In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects. That is, your bucket can have objects that other AWS accounts own.
Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:
The bucket owner has no permissions on those objects created by other AWS accounts. So for the bucket owner to grant permissions on objects it does not own, the object owner, the AWS account that created the objects, must first grant permission to the bucket owner. The bucket owner can then delegate those permissions.
The bucket owner account can delegate permissions to users in its own account, but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported.
In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects, and grant another AWS account permission to assume the role temporarily, enabling it to access objects in the bucket.
Background: Cross-Account Permissions and Using IAM Roles
IAM roles enable several scenarios to delegate access to your resources, and cross-account access is one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role you create has two policies attached to it:
A trust policy identifying another AWS account that can assume the role.
An access policy defining what permissions (for example, s3:GetObject) are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a Policy.
The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects: assume the role and, in response, get temporary security credentials; then, using the temporary security credentials, access the objects in the bucket.
For more information about IAM roles, go to Roles (Delegation and Federation) in the IAM User Guide. The following is a summary of the walkthrough steps:
Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.
Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what a user in Account C can do when the user accesses Account A.
Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.
Account C administrator creates a user and attaches a user policy that allows the user to assume the role.
User in Account C first assumes the role, which returns the user temporary security credentials. Using those temporary credentials, the user then accesses objects in the bucket.
For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. Per IAM guidelines (see About Using an Administrator User to Create Resources and Grant Permissions) we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials in creating resources and granting them permissions.
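From the SaaS provider's side, assuming the role in answer C might look like the sketch below; the role ARN and external ID are placeholders:

import boto3

sts = boto3.client("sts")

# Assume the customer-created role; an ExternalId guards against the
# confused-deputy problem when a third party uses the role.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/SaaSDiscoveryRole",
    RoleSessionName="saas-discovery",
    ExternalId="example-external-id",
)["Credentials"]

ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances())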
NEW QUESTION 271
You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be resilient.
Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)

A. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it.
B. Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set and configure health checks against all Web servers.
C. Configure ELB with HTTPS listeners, and place the Web servers behind it.
D. Configure your web servers as the origins for a CloudFront distribution.
E. Use custom SSL certificates on your CloudFront distribution.

Answer: AB
You are designing an intrusion detection and prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet.
Which of the following options would you consider? (Choose 2 answers)

A. Implement IDS/IPS agents on each instance running in the VPC.
B. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic.
C. Implement Elastic Load Balancing with SSL listeners in front of the web applications.
D. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server.

Answer: BD

NEW QUESTION 276
You are designing a photo-sharing mobile app; the application will store all pictures in a single Amazon S3 bucket.
Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3.
You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application?

A. Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
B. Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions.
C. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
D. Record the user's information in Amazon DynamoDB.
E. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
F. Create an IAM user.
G. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
H. Create an IAM user.
I. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.

Answer: B
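The temporary-credential pattern described in the options can be sketched server-side as below; the role ARN, bucket name, and per-user prefix convention are assumptions:

import json
import boto3

sts = boto3.client("sts")

def credentials_for(user_id):
    # Session policy scopes the temporary credentials down to the user's own prefix.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::photo-app-bucket/%s/*" % user_id,
        }],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/PhotoAppUserRole",
        RoleSessionName="user-%s" % user_id,
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,
    )
    return resp["Credentials"]  # handed to the mobile app; never stored long-term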
NEW QUESTION 290
You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements?

A. Set up an RDS MySQL instance in 2 availability zones to store the user preference data.
B. Deploy a public facing application on a server in front of the database to manage security and access credentials.
C. Set up a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences.
D. The mobile application will query the user preferences directly from the DynamoDB table.
E. Utilize STS,
F. Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and authorize access.
G. Set up an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas.
H. Leverage the MySQL user management and access privilege system to manage security and access credentials.
I. Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object.
J. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly; utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

Answer: B
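The "Fine Grained Access Control" mentioned in the options is expressed as an IAM policy condition on the table's partition key. A hedged sketch follows; the role name, table ARN, and the Login with Amazon identity variable are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Each federated user may only read or write items whose partition key equals
# their own provider-issued identity (Login with Amazon shown as an example).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserPreferences",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        },
    }],
}

iam.put_role_policy(
    RoleName="MobilePreferencesRole",
    PolicyName="per-user-preferences-access",
    PolicyDocument=json.dumps(policy),
)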
NEW QUESTION 294
......

A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data.
C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket.
D. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.
E. Create an IAM user for the application with permissions that allow list access to the S3 bucket.
F. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.

Answer: C

Answer: ABD

A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce.
B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers.
C. Write click events directly to Amazon Redshift and then analyze with SQL.
D. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL.

Answer: B

Explanation:
Reference: http://www.slideshare.net/AmazonWebServices/aws-webcast-introduction-to-amazon-kinesis
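For the Kinesis option chosen above, producing click events keyed by session might look like this; the stream name and record fields are placeholders:

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Clicks from one session share a partition key, so they land on the same shard
# and can be analyzed in order by a Kinesis worker.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"session": "abc123", "url": "/product/42", "ts": 1435708800}).encode("utf-8"),
    PartitionKey="abc123",
)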