Question 047 Merged

The document presents a series of questions and answers related to AWS solutions for different scenarios, including a gaming application with VoIP capabilities, a weather forecasting company's data processing needs, and an ecommerce company's database migration. Each question outlines specific requirements and options, with community votes indicating the preferred solutions. The correct answers highlight the most effective AWS services for achieving high availability, low latency, and cost efficiency in various use cases.


Question #647 Topic 1

A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to users across the world. The application needs to be highly available with an automated failover across AWS Regions. The company wants to minimize the latency of users without relying on IP address caching on user devices.

What should a solutions architect do to meet these requirements?

A. Use AWS Global Accelerator with health checks.

B. Use Amazon Route 53 with a geolocation routing policy.

C. Create an Amazon CloudFront distribution that includes multiple origins.

D. Create an Application Load Balancer that uses path-based routing.

Correct Answer: A

Community vote distribution


A (96%) 4%

  potomac Highly Voted  11 months ago

Selected Answer: A

Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that
specifically require static IP addresses or deterministic, fast regional failover.
upvoted 9 times

  pentium75 Highly Voted  9 months ago

Selected Answer: A

A - does exactly what is required


Not B - Route 53 answers would rely on DNS/IP caching on user devices, which the requirements rule out
Not C - CloudFront is not designed for VoIP traffic
Not D - An ALB does not address cross-Region failover and does not support VoIP (UDP)
upvoted 6 times

  KennethNg923 Most Recent  3 months, 2 weeks ago

Selected Answer: A

Automated failover across AWS Regions + minimized latency -> Global Accelerator
upvoted 1 times

  Murtadhaceit 9 months, 3 weeks ago

Selected Answer: A

VoIP ==> UDP ==> Global Accelerator.


upvoted 2 times

  kaleemanjum 9 months, 4 weeks ago

Selected Answer: A

AWS Global Accelerator: AWS Global Accelerator is a service that uses static IP addresses (Anycast IPs) to provide a global entry point for your
applications. It routes traffic over the AWS global network to the optimal AWS endpoint based on health, geography, and routing policies.

Health Checks: AWS Global Accelerator supports health checks, allowing it to route traffic only to healthy endpoints. This helps in achieving high
availability and automated failover across AWS Regions.
upvoted 1 times
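As a rough illustration of the accepted answer, the sketch below uses boto3 to create an accelerator, a UDP listener for VoIP traffic, and a health-checked endpoint group in each Region; the names, ports, and NLB ARNs are placeholder assumptions, not part of the original question.

```python
import boto3

# Global Accelerator API calls are served from the us-west-2 Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# The accelerator provides two static anycast IP addresses, so clients
# never depend on cached DNS/IP answers for failover.
accelerator = ga.create_accelerator(
    Name="voip-app",          # placeholder name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

# UDP listener for the VoIP media ports (example port range).
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5004, "ToPort": 5060}],
)["Listener"]

# One endpoint group per Region; health checks drive automatic failover
# to the other Region when an endpoint becomes unhealthy.
for region, endpoint_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/voip-a/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/voip-b/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": endpoint_arn, "Weight": 100}],
        HealthCheckProtocol="TCP",        # UDP itself is not health-checked
        HealthCheckPort=5060,
        HealthCheckIntervalSeconds=10,
        ThresholdCount=3,
    )
```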

  SHAAHIBHUSHANAWS 10 months ago


A
https://aws.amazon.com/global-accelerator/faqs/#:~:text=Global%20Accelerator%20is%20a%20good,AWS%20Shield%20for%20DDoS%20protection.
upvoted 1 times

  ekisako 10 months, 4 weeks ago

Selected Answer: A

https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-benefits-of-migrating.html
upvoted 2 times

  cciesam 10 months, 4 weeks ago

Selected Answer: A

Global Accelerator is the answer as it can handle both TCP and UDP
upvoted 2 times

  Sugarbear_01 11 months ago

Selected Answer: C

This answer should be C


upvoted 1 times

  pentium75 9 months ago


CloudFront is not for VoIP (which usually uses UDP).
upvoted 1 times

Question #648 Topic 1

A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities.

A solutions architect must identify a highly available cloud storage solution that can handle large amounts of sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances that will simultaneously access and process the entire dataset.

What should the solutions architect do to meet these requirements?

A. Use Amazon FSx for Lustre scratch file systems.

B. Use Amazon FSx for Lustre persistent file systems.

C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.

D. Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

Correct Answer: B

Community vote distribution


B (100%)

  potomac Highly Voted  11 months ago

Selected Answer: B

Option A (Amazon FSx for Lustre scratch file systems) is designed for temporary data storage and does not provide the data persistence required
for this scenario.
upvoted 9 times

  KennethNg923 Most Recent  3 months, 2 weeks ago

Selected Answer: B

HPC + durable access to the entire dataset -> FSx for Lustre persistent file system


upvoted 1 times

  awsgeek75 8 months, 3 weeks ago

Selected Answer: B

https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html

Both A and B can handle the processing requirements, but B is highly available, which is also a requirement and is not met by A.

C and D won't meet the performance requirements.


upvoted 4 times
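For illustration, a persistent FSx for Lustre file system along the lines of answer B could be provisioned with boto3 as sketched below; the Region, subnet ID, storage capacity, and throughput tier are placeholder assumptions.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # Region is an assumption

# Persistent deployment types are backed by durable, replicated storage,
# unlike scratch file systems, which are intended for temporary data.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=12000,                        # GiB; illustrative sizing
    SubnetIds=["subnet-0123456789abcdef0"],       # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,          # MB/s per TiB of storage
    },
)
print(response["FileSystem"]["DNSName"])          # mount target for compute nodes
```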

  TariqKipkemei 10 months ago

Selected Answer: B

high performance computing, highly available cloud storage solution = Amazon FSx for Lustre persistent file systems
upvoted 3 times
Question #649 Topic 1

An ecommerce company runs a PostgreSQL database on premises. The database stores data by using high IOPS Amazon Elastic Block Store (Amazon EBS) block storage. The daily peak I/O transactions per second do not exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision disk IOPS performance independent of disk storage capacity.

Which solution will meet these requirements MOST cost-effectively?

A. Configure the General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.

B. Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.

C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.

D. Configure the EBS magnetic volume type to achieve maximum IOPS.

Correct Answer: C

Community vote distribution


C (100%)

  BillaRanga Highly Voted  7 months, 3 weeks ago

GP2 - Volume size and IOPS are linked; max IOPS is 16,000
GP3 - IOPS (up to 16,000) and throughput (up to 1,000 MiB/s) can be increased independently of volume size

GP3 is 20% cheaper than GP2


upvoted 5 times
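Building on the gp2/gp3 comparison above, here is a minimal boto3 sketch of how answer C might look when creating the RDS for PostgreSQL instance; the instance identifier, instance class, Region, and storage size are placeholder assumptions (RDS gp3 only allows provisioning IOPS above the baseline once the volume is large enough, so 400 GiB is used purely as an illustrative value).

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption

# gp3 lets IOPS be set independently of the allocated storage size.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-postgres",   # placeholder identifier
    Engine="postgres",
    DBInstanceClass="db.m6g.large",              # placeholder instance class
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,               # RDS-managed secret; needs a recent boto3
    AllocatedStorage=400,                        # GiB; illustrative size
    StorageType="gp3",
    Iops=15000,                                  # provisioned independently of size
    MultiAZ=True,
)
```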

  TariqKipkemei Most Recent  10 months ago

Selected Answer: C

MOST cost-effective =GP3


upvoted 3 times

  SHAAHIBHUSHANAWS 10 months ago


C
https://aws.amazon.com/ebs/general-purpose/
upvoted 3 times

  Oblako 10 months, 2 weeks ago

Selected Answer: C

Both gp2 and gp3 can provision up to 16,000 IOPS. gp3 is cheaper than gp2.
upvoted 4 times

  lagorb 10 months, 3 weeks ago


gp2 and gp3 can provision up to 16,000 IOPS, and gp3 is cheaper than gp2
upvoted 2 times

  potomac 11 months ago


Selected Answer: C

GP3 is better and cheaper than GP2


upvoted 3 times
Question #650 Topic 1

A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company's online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible.

Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes

B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes

C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes

D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes

Correct Answer: A

Community vote distribution


A (100%)

  superalaga Highly Voted  9 months, 2 weeks ago

Selected Answer: A

You can migrate with both A and B, but option A has the LEAST operational overhead.
A: https://aws.amazon.com/tutorials/move-to-managed/migrate-sql-server-to-amazon-rds/
B: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-microsoft-sql-server-database-to-aurora-mysql-by-using-aws-dms-and-aws-sct.html
upvoted 6 times

  BillaRanga Highly Voted  7 months, 3 weeks ago

Selected Answer: A

B - Not the LEAST operational overhead.

C - DynamoDB is NoSQL and is not compatible with SQL Server.
D - Migrating from Microsoft SQL Server to Aurora MySQL may lose some SQL Server functionality.

A - Read replicas for RDS are easy to create, and replication is asynchronous, which should not be a problem for the analytics team since they can tolerate a 2-3 minute delay.
upvoted 6 times
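To illustrate answer A, the boto3 sketch below creates a read replica of the migrated RDS for SQL Server instance (the question's Enterprise edition requirement is what makes replicas possible) and the reporting tools would then point at the replica endpoint; the identifiers, instance class, and Region are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption

# Create a read replica of the production instance and direct the analytics
# workload at it, keeping the OLTP traffic on the primary.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-prod-reporting",   # placeholder replica name
    SourceDBInstanceIdentifier="sqlserver-prod",       # placeholder source instance
    DBInstanceClass="db.r6i.xlarge",                   # placeholder instance class
)
print(replica["DBInstance"]["DBInstanceStatus"])
```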

  Firdous586 Most Recent  8 months, 2 weeks ago


A is the correct answer since RDS supports the OLAP reporting workload via read replicas, while Aurora is geared toward OLTP
upvoted 4 times

  TariqKipkemei 10 months ago

Selected Answer: A

Only Amazon RDS allows the creation of readable standby DB instances.


upvoted 3 times

  potomac 11 months ago


Selected Answer: A

A is the only choice


upvoted 5 times
Question #651 Topic 1

A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images need to be archived but must be available instantly upon request. After 5 years, only auditors can access the images. The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this process.

A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle rule.

Which solution will meet these requirements MOST cost-effectively?

A. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

B. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

D. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

Correct Answer: C

Community vote distribution


C (82%) Other

  TariqKipkemei Highly Voted  10 months ago

Selected Answer: C

Images cannot be lost = high availability, which rules out S3 One Zone-IA.


Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier
Deep Archive after 5 years.
upvoted 9 times
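For reference, a lifecycle rule matching answer C could be sketched with boto3 as shown below; the bucket name is a placeholder assumption, and 5 years is approximated as 1,825 days.

```python
import boto3

s3 = boto3.client("s3")

# Standard -> Standard-IA at 180 days, -> Glacier Instant Retrieval at
# 360 days, -> Glacier Deep Archive at 5 years (answer C).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-image-bucket",            # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "image-archival",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},     # apply to all objects
                "Transitions": [
                    {"Days": 180, "StorageClass": "STANDARD_IA"},
                    {"Days": 360, "StorageClass": "GLACIER_IR"},
                    {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```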

  dilaaziz Highly Voted  11 months ago

Selected Answer: C

https://aws.amazon.com/s3/storage-classes/glacier/
upvoted 5 times

  Linuslin Most Recent  4 months, 3 weeks ago

Selected Answer: C

"The developer needs to configure an S3 Lifecycle rule."--->One Zone-IA can't transfer to Glacier Instant Retrieval--->A is out.
Check - Unsupported lifecycle transitions
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
"Images cannot be lost = high availability"--->Can't be One Zone-IA--->B is out.
"the images need to be archived but must be available instantly upon request"--->Can't be "Flexible" Retrieval--->D is out.

Only C is the correct answer.


upvoted 1 times

  Neung983 7 months ago

Selected Answer: A

A.
Here's why this option is the most cost-effective:
+S3 One Zone-IA (after 180 days): Offers lower storage costs compared to S3 Standard for infrequently accessed data (180 - 360 days) while
maintaining good availability for retrieval.
+S3 Glacier Instant Retrieval (after 360 days): Provides immediate access to archived images (360 - 5 years) at a significantly lower cost than S3
Standard storage. Retrieval costs are incurred but typically lower than keeping the data in S3 Standard.
+S3 Glacier Deep Archive (after 5 years): Offers the lowest storage cost for long-term archival (beyond 5 years) with retrieval times within 12 hours,
meeting the auditor access requirement and minimizing ongoing storage costs.
upvoted 1 times

  Antitouch 9 months ago

Selected Answer: B

https://aws.amazon.com/s3/storage-classes/glacier/#:~:text=S3%20Glacier%20Flexible%20Retrieval%20delivers,year%20and%20is%20retrieved%20asynchronously.
S3 Glacier Flexible Retrieval delivers low-cost storage, up to 10% lower cost than S3 Glacier Instant Retrieval. Flexible retrieval is cheaper than
Instant retrieval.
S3 Glacier Flexible retrieval storage class provides minutes to 12 hours retrieval of data. Which is within the required time by auditors.
--> We should select flexible retrieval.

The design is not caring about the high availability. The design is caring about cost. One zone-IA is cheaper than standard IA.
--> We should select One Zone IA.
upvoted 1 times

  awsgeek75 8 months, 3 weeks ago


"The images cannot be lost during this process" is a core requirement.
The design cares about data loss and 5 years is a long time and AZ failure will result in data loss.
upvoted 3 times

  pentium75 9 months ago

Selected Answer: C

A, B impose risk of the images being lost in case of AZ failure


D does not allow instant access after 360 days
upvoted 3 times

  ale_brd_111 9 months, 1 week ago


Selected Answer: C

Images cannot be lost = high availability. A exposes images to risk


upvoted 2 times

  Alex1atd 10 months, 2 weeks ago

Selected Answer: C

The images cannot be lost during this process.


upvoted 3 times

  1rob 10 months, 2 weeks ago

Selected Answer: C

"The images cannot be lost during this process" , imho this rules out S3 One zone infrequent access. S3 Glacier Instant Retrieval gives immediate
access. S3 Glacier Flexible Retrieval does not give immediate access. so C.
upvoted 5 times

  EdenWang 10 months, 2 weeks ago


Selected Answer: A

high availability is not mentioned, thus I go for A


upvoted 1 times

  pentium75 9 months ago


"The images cannot be lost during this process."
upvoted 4 times

  TheLaPlanta 6 months, 2 weeks ago


That's not HA
upvoted 1 times

  cciesam 11 months ago

Selected Answer: A

I'll go for A as it doesn't talk about high availability. Considering cost, I'll go for A.
upvoted 3 times

  ekisako 10 months, 4 weeks ago


"The images cannot be lost during this process."
upvoted 4 times
