AWS Study Notes

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of

your AWS resources. Config continuously monitors and records your AWS resource
configurations and allows you to automate the evaluation of recorded configurations against
desired configurations. With Config, you can review changes in configurations and
relationships between AWS resources, dive into detailed resource configuration histories, and
determine your overall compliance against the configurations specified in your internal
guidelines. This enables you to simplify compliance auditing, security analysis, change
management, and operational troubleshooting.

AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to
migrate relational databases, data warehouses, NoSQL databases, and other types of data
stores. You can use AWS DMS to migrate your data into the AWS Cloud, between on-
premises instances (through an AWS Cloud setup) or between combinations of cloud and on-
premises setups. With AWS DMS, you can perform one-time migrations, and you can
replicate ongoing changes to keep sources and targets in sync.

You can migrate data to Amazon S3 using AWS DMS from any of the supported database
sources. When using Amazon S3 as a target in an AWS DMS task, both full load and change
data capture (CDC) data are written in comma-separated value (.csv) format by default.

The comma-separated value (.csv) format is the default storage format for Amazon S3 target
objects. For more compact storage and faster queries, you can instead use Apache Parquet
(.parquet) as the storage format.
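As a hedged sketch, the storage format is chosen through the S3 target endpoint's settings; the fragment below shows the shape of the S3 settings JSON for a Parquet target (the bucket name and role ARN are placeholders):

```json
{
  "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access-role",
  "BucketName": "my-dms-target-bucket",
  "DataFormat": "parquet",
  "ParquetVersion": "parquet-2-0"
}
```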

You can encrypt connections for source and target endpoints by using Secure Sockets Layer
(SSL). To do so, you can use the AWS DMS Management Console or AWS DMS API to
assign a certificate to an endpoint. You can also use the AWS DMS console to manage your
certificates.

You have three mutually exclusive options depending on how you choose to manage the
encryption keys:

1. Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
2. Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
3. Use Server-Side Encryption with Customer-Provided Keys (SSE-C)

AWS DataSync makes it simple and fast to move large amounts of data online between on-
premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon
FSx for Windows File Server. Manual tasks related to data transfers can slow down
migrations and burden IT operations. DataSync eliminates or automatically handles many of
these tasks, including scripting copy jobs, scheduling, and monitoring transfers, validating
data, and optimizing network utilization. The DataSync software agent connects to your
Network File System (NFS), Server Message Block (SMB) storage, and your self-managed
object storage, so you don’t have to modify your applications.

DataSync can transfer hundreds of terabytes and millions of files at speeds up to 10 times
faster than open-source tools, over the Internet or AWS Direct Connect links. You can use
DataSync to migrate active data sets or archives to AWS, transfer data to the cloud for timely
analysis and processing, or replicate data to AWS for business continuity. Getting started
with DataSync is easy: deploy the DataSync agent, connect it to your file system, select your
AWS storage resources, and start moving data between them. You pay only for the data you
move.

Although you can copy data from on-premises to AWS with Storage Gateway, it is not
suitable for transferring large sets of data to AWS. Storage Gateway is mainly used in
providing low-latency access to data by caching frequently accessed data on-premises
while storing archive data securely and durably in Amazon cloud storage services.
Storage Gateway optimizes data transfer to AWS by sending only changed data and
compressing data.

Amazon EMR (Amazon Elastic MapReduce) is a managed cluster platform that
simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS
to process and analyze vast amounts of data. By using these frameworks and related open-
source projects, such as Apache Hive and Apache Pig, you can process data for analytics
purposes and business intelligence workloads. Additionally, you can use Amazon EMR to
transform and move large amounts of data into and out of other AWS data stores and
databases.

Amazon Redshift is the most widely used cloud data warehouse. It makes it fast, simple, and
cost-effective to analyze all your data using standard SQL and your existing Business
Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to
petabytes of structured and semi-structured data, using sophisticated query
optimization, columnar storage on high-performance storage, and massively parallel
query execution.

Decoupling:

Amazon Simple Queue Service (SQS) and Amazon Simple Workflow Service (SWF) are
the services that you can use for creating a decoupled architecture in AWS. Decoupled
architecture is a type of computing architecture that enables computing components or layers
to execute independently while still interfacing with each other.

Amazon SQS offers reliable, highly scalable hosted queues for storing messages while they
travel between applications or microservices. Amazon SQS lets you move data between
distributed application components and helps you decouple these components. Amazon SWF
is a web service that makes it easy to coordinate work across distributed application
components.

AWS WAF is a web application firewall that helps protect your web applications or APIs
against common web exploits that may affect availability, compromise security, or consume
excessive resources. AWS WAF gives you control over how traffic reaches your applications
by enabling you to create security rules that block common attack patterns, such as SQL
injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the
Application Load Balancer that fronts your web servers or origin servers running on EC2, or
Amazon API Gateway for your APIs.

AWS Organizations is a service that allows you to manage multiple AWS accounts easily.
With this service, you can effectively consolidate billing and manage your resources across
multiple accounts. AWS IAM Identity Center can be integrated with your corporate directory
service for centralized authentication. This means you can sign in to multiple AWS accounts
with just one set of credentials. This integration streamlines the authentication process and
makes it easier for companies to switch between accounts.
In addition, you can configure service control policies (SCPs) to manage your AWS accounts.
SCPs help you enforce policies across your organization and control the services and features
accessible to each member account. This way, you can ensure that your organization's
resources are used only as intended and prevent unauthorized access. By setting up AWS
Organizations, integrating AWS IAM Identity Center with your corporate directory service,
and configuring SCPs, you can provide secure and centralized management of your AWS
accounts. This simplifies your management process and helps you maintain better control
over your resources.
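As a hedged illustration of an SCP, the fragment below denies all actions outside two approved regions. This is a simplified sketch of a common pattern; production region-restriction SCPs typically also exempt global services such as IAM and CloudFront, which this version does not.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
```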

An Amazon EBS volume is a durable, block-level storage device that you can attach to a
single EC2 instance. You can use EBS volumes as primary storage for data that requires
frequent updates, such as the system drive for an instance or storage for a database
application. You can also use them for throughput-intensive applications that perform
continuous disk scans. EBS volumes persist independently from the running life of an EC2
instance.

Here is a list of important information about EBS Volumes:

- When you create an EBS volume in an Availability Zone, it is automatically replicated
within that zone to prevent data loss due to a failure of any single hardware component.

- An EBS volume can only be attached to one EC2 instance at a time.

- After you create a volume, you can attach it to any EC2 instance in the same Availability
Zone.

- An EBS volume is off-instance storage that can persist independently from the life of an
instance. During instance creation, you can specify that the EBS volume should not be
deleted when you terminate the EC2 instance.

- EBS volumes support live configuration changes while in production, which means that you
can modify the volume type, volume size, and IOPS capacity without service interruptions.

- Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256).

- EBS volumes offer a 99.999% SLA.

For a web server to communicate with a database server in another subnet, the network
ACLs must be properly set to allow communication between the two subnets. The security
groups must also be properly configured so that the web server can communicate with the
database server.
AWS IAM Identity Center (successor to AWS Single Sign-On) provides single sign-on
access for all of your AWS accounts and cloud applications. It connects with Microsoft
Active Directory through AWS Directory Service to allow users in that directory to sign in to
a personalized AWS access portal using their existing Active Directory user names and
passwords. From the AWS access portal, users have access to all the AWS accounts and
cloud applications that they have permission for.

Users in your self-managed directory in Active Directory (AD) can also have single sign-on
access to AWS accounts and cloud applications in the AWS access portal.
AWS Security Token Service (AWS STS) is the service that you can use to create and
provide trusted users with temporary security credentials that can control access to your AWS
resources. Temporary security credentials work almost identically to the long-term access key
credentials that your IAM users can use.

Consider a scenario where IAM user Alice in the Dev account (the role-assuming account)
needs to access the Prod account (the role-owning account). Here’s how it works:

1. Alice in the Dev account assumes an IAM role (WriteAccess) in the Prod account by
calling AssumeRole.
2. STS returns a set of temporary security credentials.
3. Alice uses the temporary security credentials to access services and resources in the
Prod account. Alice could, for example, make calls to Amazon S3 and Amazon EC2,
which are granted by the WriteAccess role.
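A sketch of the trust policy that would sit on the WriteAccess role in the Prod account to permit the AssumeRole call in step 1 (the Dev account ID 111111111111 is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this trust policy in place, the Dev account's administrator still has to grant Alice an IAM permission allowing `sts:AssumeRole` on the WriteAccess role's ARN.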
Amazon EKS (Amazon Elastic Kubernetes Service) provisions and scales the Kubernetes
control plane, including the API servers and backend persistence layer, across multiple AWS
availability zones for high availability and fault tolerance. Amazon EKS automatically detects
and replaces unhealthy control plane nodes and provides patching for the control plane.
Amazon EKS is integrated with many AWS services to provide scalability and security for
your applications. These services include Elastic Load Balancing for load distribution, IAM
for authentication, Amazon VPC for isolation, and AWS CloudTrail for logging.

Amazon SNS is a fully managed pub/sub messaging service. With Amazon SNS, you can
use topics to simultaneously distribute messages to multiple subscribing endpoints such as
Amazon SQS queues, AWS Lambda functions, HTTP endpoints, email addresses, and
mobile devices (SMS, Push).
Amazon SQS is a message queue service used by distributed applications to exchange
messages through a polling model. It can be used to decouple sending and receiving
components without requiring each component to be concurrently available.

A fanout scenario occurs when a message published to an SNS topic is replicated and pushed
to multiple endpoints, such as Amazon SQS queues, HTTP(S) endpoints, and Lambda
functions. This allows for parallel asynchronous processing.
For example, you can develop an application that publishes a message to an SNS topic
whenever an order is placed for a product. Then, two or more SQS queues that are subscribed
to the SNS topic receive identical notifications for the new order. An Amazon Elastic
Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle
the processing or fulfillment of the order. And you can attach another Amazon EC2 server
instance to a data warehouse for analysis of all orders received.

By default, an Amazon SNS topic subscriber receives every message published to the topic.
You can use Amazon SNS message filtering to assign a filter policy to the topic subscription,
and the subscriber will only receive a message that they are interested in. Using Amazon SNS
and Amazon SQS together, messages can be delivered to applications that require immediate
notification of an event. This method is known as fanout to Amazon SQS queues.
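The filtering behavior can be sketched in a few lines of plain Python. This is an illustrative simulation, not the SNS implementation: the matcher below supports exact string matching only, whereas real SNS filter policies also support prefix, numeric, and anything-but operators. The topic and subscription names are hypothetical.

```python
# Minimal sketch of how an SNS filter policy decides whether a
# subscription receives a published message.

def matches(filter_policy, message_attributes):
    """Deliver only if every policy key has an attribute value in its allow-list."""
    for key, allowed_values in filter_policy.items():
        if message_attributes.get(key) not in allowed_values:
            return False
    return True

# Two hypothetical SQS queue subscriptions on one "orders" topic:
fulfillment_policy = {"event_type": ["order_placed"]}
analytics_policy = {"event_type": ["order_placed", "order_cancelled"]}

message = {"event_type": "order_cancelled"}
print(matches(fulfillment_policy, message))  # False - filtered out
print(matches(analytics_policy, message))    # True - delivered
```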

Amazon API Gateway is a fully managed service that makes it easy for developers to create,
publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS
Management Console, you can create an API that acts as a “front door” for applications to
access data, business logic, or functionality from your back-end services, such as workloads
running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda,
or any web application. Since it can use AWS Lambda, you can run your APIs without
servers.
Amazon API Gateway handles all the tasks involved in accepting and processing up to
hundreds of thousands of concurrent API calls, including traffic management, authorization
and access control, monitoring, and API version management. Amazon API Gateway has no
minimum fees or startup costs. You pay only for the API calls you receive and the amount of
data transferred out.

Amazon Kinesis Video Streams makes it easy to securely stream video from connected
devices to AWS for analytics, machine learning (ML), playback, and other processing.
Kinesis Video Streams automatically provisions and elastically scales all the infrastructure
needed to ingest streaming video data from millions of devices.
Amazon Rekognition Video can detect objects, scenes, faces, celebrities, text, and
inappropriate content in videos. You can also search for faces appearing in a video using your
own repository or collection of face images.
In Auto Scaling, the following statements are correct regarding the cooldown period:

It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances
before the previous scaling activity takes effect.

Its default value is 300 seconds.

It is a configurable setting for your Auto Scaling group.

The following options are incorrect:

- It ensures that before the Auto Scaling group scales out, the EC2 instances have ample
time to cool down.
- It ensures that the Auto Scaling group launches or terminates additional EC2
instances without any downtime.
- Its default value is 600 seconds.

These statements are inaccurate and don't depict what the word "cooldown" actually means
for Auto Scaling. The cooldown period is a configurable setting for your Auto Scaling group
that helps to ensure that it doesn't launch or terminate additional instances before the previous
scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple
scaling policy, it waits for the cooldown period to complete before resuming scaling
activities.
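The gating behavior can be sketched as a small simulation. This is illustrative only, not the Auto Scaling implementation; timestamps are plain seconds, and the real service tracks scaling activity per group.

```python
# Hedged sketch: a cooldown period blocks new simple-scaling activities
# until the previous activity has had time to take effect.

DEFAULT_COOLDOWN = 300  # seconds - the default cooldown for an Auto Scaling group

def can_scale(now, last_scaling_activity, cooldown=DEFAULT_COOLDOWN):
    """A new simple-scaling activity is allowed only after the cooldown elapses."""
    if last_scaling_activity is None:
        return True  # no prior activity, nothing to wait for
    return (now - last_scaling_activity) >= cooldown

print(can_scale(now=100, last_scaling_activity=None))  # True: no prior activity
print(can_scale(now=400, last_scaling_activity=200))   # False: only 200s elapsed
print(can_scale(now=520, last_scaling_activity=200))   # True: 320s >= 300s
```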

AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access
to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage
management and reduce costs for key hybrid cloud storage use cases. These include moving
backups to the cloud, using on-premises file shares backed by cloud storage, and providing
low latency access to data in AWS for on-premises applications.

SNI Custom SSL relies on the SNI extension of the Transport Layer Security protocol, which
allows multiple domains to serve SSL traffic over the same IP address by including the
hostname that viewers are trying to connect to.

You can host multiple TLS-secured applications, each with its own TLS certificate, behind a
single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the
same secure listener on your load balancer. ALB will automatically choose the optimal TLS
certificate for each client. These features are provided at no additional charge.

AWS Glue is a powerful tool that enables data engineers to build and manage ETL (extract,
transform, load) pipelines for processing and analyzing large amounts of data. With AWS
Glue, you can create and manage jobs that extract data from various sources, transform it into
the desired format, and load it into a target data store.

One of the features that make AWS Glue especially useful is job bookmarking. Job
bookmarking is a mechanism that allows AWS Glue to keep track of where a job is left off in
case it gets interrupted or fails for any reason. This way, when the job is restarted, it can pick
up from where it left off instead of starting from scratch.
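The resume-on-restart idea can be sketched with a toy job runner. This is an illustrative simulation, not the Glue API: the bookmark here is just the index of the last successfully processed record.

```python
# Hedged sketch of job bookmarking: a bookmark records progress so a
# restarted job resumes instead of reprocessing everything.

def run_job(records, bookmark, fail_after=None):
    """Process records past the bookmark; return (updated bookmark, work done)."""
    processed = []
    for index, record in enumerate(records):
        if index <= bookmark:
            continue  # already handled in a previous run
        if fail_after is not None and len(processed) >= fail_after:
            return bookmark, processed  # simulate an interruption mid-run
        processed.append(record)
        bookmark = index  # advance the bookmark after each success
    return bookmark, processed

records = ["r0", "r1", "r2", "r3", "r4"]
bookmark, done = run_job(records, bookmark=-1, fail_after=2)  # first run fails early
print(done)  # ['r0', 'r1'] - only the first two records got through
bookmark, done = run_job(records, bookmark=bookmark)          # restart the job
print(done)  # ['r2', 'r3', 'r4'] - resumes where it left off
```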

Service control policies (SCPs) are a type of organization policy that you can use to manage
permissions in your organization. SCPs offer central control over the maximum available
permissions for all accounts in your organization. SCPs help you to ensure your accounts stay
within your organization’s access control guidelines.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that
makes it easy to decouple and scale microservices, distributed systems, and serverless
applications. Building applications from individual components that each perform a discrete
function improves scalability and reliability and is best practice design for modern
applications. SQS makes it simple and cost-effective to decouple and coordinate the
components of a cloud application. Using SQS, you can send, store, and receive messages
between software components at any volume without losing messages or requiring other
services to be always available.

Multi-factor authentication (MFA) in AWS is a simple best practice that adds an extra layer
of protection on top of your user name and password. With MFA enabled, when a user signs
in to an AWS Management Console, they will be prompted for their user name and password
(the first factor—what they know), as well as for an authentication code from their AWS
MFA device (the second factor—what they have). Taken together, these multiple factors
provide increased security for your AWS account settings and resources. You can create an
IAM Policy to restrict access to AWS services for AWS Identity and Access Management
(IAM) users. The IAM Policy that enforces MFA authentication can then be attached to an
IAM Group to quickly apply to all IAM Users.
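An abridged sketch of such an MFA-enforcing policy follows. It is a simplified variant of a widely used pattern: deny everything except a short list of MFA-setup actions whenever the request was not authenticated with MFA.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptMfaSetupIfNoMfa",
      "Effect": "Deny",
      "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ListVirtualMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```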

AWS Cost Explorer is a service provided by Amazon Web Services (AWS) that helps you
visualize, understand, and analyze your AWS costs and usage. It provides a comprehensive
set of tools and features to help you monitor and manage your AWS spending.

Amazon API Gateway lets you create an API that acts as a "front door" for applications to
access data, business logic, or functionality from your back-end services, such as code
running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in
accepting and processing up to hundreds of thousands of concurrent API calls, including
traffic management, authorization, and access control, monitoring, and API version
management. Amazon API Gateway has no minimum fees or startup costs.
AWS Lambda scales your functions automatically on your behalf. Every time an event
notification is received for your function, AWS Lambda quickly locates free capacity within
its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as
many copies of your function as needed without lengthy deployment and configuration
delays.
Active-Active Failover

Use this failover configuration when you want all of your resources to be available the
majority of the time. When a resource becomes unavailable, Route 53 can detect that it's
unhealthy and stop including it when responding to queries.

In active-active failover, all the records that have the same name, the same type (such as A or
AAAA), and the same routing policy (such as weighted or latency) are active unless Route 53
considers them unhealthy. Route 53 can respond to a DNS query using any healthy record.

Hence, Configuring an Active-Active Failover with Weighted routing policy is correct.
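A hedged illustration of that setup: two weighted records with health checks, in the shape of a Route 53 ChangeResourceRecordSets request body (the domain, IP addresses, and health-check IDs are placeholders).

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "server-a",
        "Weight": 50,
        "HealthCheckId": "11111111-aaaa-bbbb-cccc-222222222222",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "server-b",
        "Weight": 50,
        "HealthCheckId": "33333333-dddd-eeee-ffff-444444444444",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.20" }]
      }
    }
  ]
}
```

Because both records are weighted equally and each has a health check, Route 53 answers with either healthy record and drops an endpoint from responses while it is unhealthy.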


Active-Passive Failover

Use an active-passive failover configuration when you want a primary resource or group of
resources to be available the majority of the time and you want a secondary resource or group
of resources to be on standby in case all the primary resources become unavailable. When
responding to queries, Route 53 includes only the healthy primary resources. If all the
primary resources are unhealthy, Route 53 begins to include only the healthy secondary
resources in response to DNS queries.

Configuring an Active-Passive Failover with Weighted Records and configuring an
Active-Passive Failover with Multiple Primary and Secondary Resources are incorrect
because an Active-Passive Failover is mainly used when you want a primary resource or
group of resources to be available most of the time and you want a secondary resource or
group of resources to be on standby in case all the primary resources become unavailable. In
this scenario, all of your resources should be available as much as possible, which is why
you have to use an Active-Active Failover instead.
Configuring an Active-Active Failover with One Primary and One Secondary
Resource is incorrect because you cannot set up an Active-Active Failover with one primary
and one secondary resource. Remember that an Active-Active Failover uses all available
resources all the time, without a primary or a secondary resource.

AWS Systems Manager is a collection of capabilities to help you manage your applications
and infrastructure running in the AWS Cloud. Systems Manager simplifies application and
resource management, shortens the time to detect and resolve operational problems, and helps
you manage your AWS resources securely at scale.
Parameter Store provides secure, hierarchical storage for configuration data and secrets
management. You can store data such as passwords, database strings, Amazon Elastic
Compute Cloud (Amazon EC2) instance IDs and Amazon Machine Image (AMI) IDs, and
license codes as parameter values. You can store values as plain text or encrypted data. You
can then reference values by using the unique name you specified when you created the
parameter. Parameter Store is also integrated with Secrets Manager. You can retrieve Secrets
Manager secrets when using other AWS services that already support references to Parameter
Store parameters.

Expedited retrievals allow you to quickly access your data when occasional urgent requests
for a subset of archives are required. For all but the largest archives (250 MB+), data
accessed using Expedited retrievals is typically made available within 1–5 minutes.
Provisioned Capacity ensures that retrieval capacity for Expedited retrievals is available
when you need it.

To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job
(POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or
AWS SDKs. If you have purchased provisioned capacity, then all expedited retrievals are
automatically served through your provisioned capacity.

Provisioned capacity ensures that your retrieval capacity for expedited retrievals is available
when you need it. Each unit of capacity ensures that at least three expedited retrievals can be
performed every five minutes and provides up to 150 MB/s of retrieval throughput. You
should purchase provisioned retrieval capacity if your workload requires highly reliable and
predictable access to a subset of your data in minutes. Without provisioned capacity,
expedited retrievals are accepted, except for rare situations of unusually high demand.
However, if you require access to expedited retrievals under all circumstances, you must
purchase provisioned retrieval capacity.

To enable the connection to a service running on an instance, the associated network ACL
must allow both inbound traffic on the port that the service is listening on as well as outbound
traffic from ephemeral ports. When a client connects to a server, a random port from the
ephemeral port range (1024-65535) becomes the client's source port.

The designated ephemeral port then becomes the destination port for return traffic from the
service, so outbound traffic from the ephemeral port must be allowed in the network ACL.
By default, network ACLs allow all inbound and outbound traffic. If your network ACL is
more restrictive, then you need to explicitly allow traffic from the ephemeral port range.
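The rule-evaluation order can be sketched as a small simulation. This is illustrative only, not the VPC implementation: network ACL rules are evaluated in ascending rule-number order, the first matching rule applies, and an implicit catch-all "*" rule denies anything unmatched.

```python
# Hedged sketch of network ACL evaluation for a single direction of traffic.
# Each rule is (rule_number, (low_port, high_port), action); rules need not
# be pre-sorted because evaluation sorts them by rule number.

def evaluate(rules, port):
    """Return the action of the lowest-numbered rule matching the port."""
    for rule_number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action
    return "DENY"  # the implicit catch-all "*" rule

inbound = [
    (100, (443, 443), "ALLOW"),     # HTTPS to the service
    (200, (1024, 65535), "ALLOW"),  # return traffic on ephemeral ports
]
print(evaluate(inbound, 443))   # ALLOW
print(evaluate(inbound, 5000))  # ALLOW - inside the ephemeral range
print(evaluate(inbound, 22))    # DENY - no matching rule
```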

To let a Lambda function decrypt data protected by an AWS KMS key, attach the
kms:Decrypt permission to the Lambda function’s execution role, and add a statement to the
AWS KMS key’s policy that grants the function’s execution role the kms:Decrypt
permission.
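A sketch of the key-policy statement (the role ARN is a placeholder); note that in a KMS key policy, "Resource": "*" refers to the key the policy is attached to.

```json
{
  "Sid": "AllowLambdaExecutionRoleToDecrypt",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:role/my-function-execution-role"
  },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
```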

If you need more instances, complete the Amazon EC2 limit increase request form with your
use case, and your limit increase will be considered. Limit increases are tied to the region
they were requested for.

Hence, the correct answer is: There is a vCPU-based On-Demand Instance limit per
region which is why subsequent requests failed. Just submit the limit increase form to
AWS and retry the failed requests once approved.

AWS Database Migration Service helps you migrate your databases to AWS with virtually
no downtime. All data changes to the source database that occur during the migration are
continuously replicated to the target, allowing the source database to be fully operational
during the migration process.
You can set up a DMS task for either one-time migration or ongoing replication. An ongoing
replication task keeps your source and target databases in sync. Once set up, the ongoing
replication task will continuously apply source changes to the target with minimal latency.

Fargate allocates the right amount of compute, eliminating the need to choose instances and
scale cluster capacity. You only pay for the resources required to run your containers, so
there is no over-provisioning and paying for additional servers.

By default, Fargate tasks are given a minimum of 20 GiB of free ephemeral storage, which
meets the storage requirement in the scenario.
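If a task needs more than the default, ephemeral storage can be raised in the task definition. A hedged sketch of the relevant fragment (family, names, and image are placeholders; configurable sizes go up to 200 GiB):

```json
{
  "family": "my-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "ephemeralStorage": { "sizeInGiB": 50 },
  "containerDefinitions": [
    {
      "name": "app",
      "image": "public.ecr.aws/docker/library/nginx:latest",
      "essential": true
    }
  ]
}
```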

If you stop an EBS-backed EC2 instance, the volume is preserved, but the data in any
attached instance store volume will be erased. Keep in mind that an EC2 instance has an
underlying physical host computer. If the instance is stopped, AWS usually moves the
instance to a new host computer. Your instance may stay on the same host computer if there
are no problems with the host computer.

In this scenario, the best option is to create a new IAM group and then add the users that
require access to the S3 bucket. Afterward, apply the policy to the IAM group. This will
enable you to easily add, remove, and manage the users instead of manually attaching a
policy to each of the 100 IAM users.

Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving
it on disks in its data centers and decrypt it when you download the objects.

Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Use Server-Side Encryption with Customer-Provided Keys (SSE-C)

Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted
data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and
related tools.

Use Client-Side Encryption with AWS KMS–Managed Customer Master Key (CMK)

Use Client-Side Encryption Using a Client-Side Master Key



SCPs alone are not sufficient to grant permissions to the accounts in your organization. No
permissions are granted by an SCP. An SCP defines a guardrail, or sets limits, on the actions
that the account's administrator can delegate to the IAM users and roles in the affected
accounts.

RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales
capacity up automatically when actual utilization approaches provisioned storage capacity.
Auto Scaling works with new and existing database instances. You can enable Auto Scaling
with just a few clicks in the AWS Management Console.



AWS Systems Manager is a collection of capabilities to help you manage your applications
and infrastructure running in the AWS Cloud. Systems Manager simplifies application and
resource management, shortens the time to detect and resolve operational problems, and helps
you manage your AWS resources securely at scale.
Parameter Store provides secure, hierarchical storage for configuration data and secrets
management. You can store data such as passwords, database strings, Amazon Elastic
Compute Cloud (Amazon EC2) instance IDs and Amazon Machine Image (AMI) IDs, and
license codes as parameter values. You can store values as plain text or encrypted data. You
can then reference values by using the unique name you specified when you created the
parameter. Parameter Store is also integrated with Secrets Manager. You can retrieve Secrets
Manager secrets when using other AWS services that already support references to Parameter
Store parameters.
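The store-then-reference flow can be sketched with PutParameter and GetParameter request shapes (boto3-style parameters; the hierarchical name and value are placeholders):

```python
# Store a secret as an encrypted SecureString under a hierarchical name.
put_parameter_request = {
    "Name": "/myapp/prod/db-password",   # hierarchical, unique name
    "Value": "s3cr3t",
    "Type": "SecureString",              # encrypted with a KMS key at rest
    "Overwrite": True,
}

# Later, reference the value by the same unique name and decrypt on read.
get_parameter_request = {
    "Name": "/myapp/prod/db-password",
    "WithDecryption": True,
}
```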

Hence, the correct answer is: Create an AWS Backup plan to take daily snapshots with a
retention period of 90 days.
The option that says: Configure an automated backup and set the backup retention
period to 90 days is incorrect because the maximum backup retention period for automated
backup is only 35 days.
Many companies that distribute content over the Internet want to restrict access to documents,
business data, media streams, or content that is intended for selected users, for example, users
who have paid a fee. To securely serve this private content by using CloudFront, you can do
the following:

- Require that your users access your private content by using special CloudFront signed
URLs or signed cookies.

- Require that your users access your Amazon S3 content by using CloudFront URLs, not
Amazon S3 URLs. Requiring CloudFront URLs isn't necessary, but it is recommended to
prevent users from bypassing the restrictions that you specify in signed URLs or signed
cookies. You can do this by setting up an origin access identity (OAI) for your Amazon S3
bucket. You can also configure the custom headers for a private HTTP server or an Amazon
S3 bucket configured as a website endpoint.

All objects and buckets by default are private. The pre-signed URLs are useful if you want
your user/customer to be able to upload a specific object to your bucket, but you don't require
them to have AWS security credentials or permissions.
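A CloudFront signed URL embeds a policy stating which resource may be fetched and until when. The sketch below builds a custom policy document in the documented JSON shape; the distribution domain and object path are placeholders, and the actual signing step (with your CloudFront key pair) is omitted:

```python
import json
import time

# Custom policy for a CloudFront signed URL: grant access to one object
# URL until an expiry time (epoch seconds).
expires = int(time.time()) + 3600  # valid for one hour from now

policy = {
    "Statement": [
        {
            "Resource": "https://d111111abcdef8.cloudfront.net/private/report.pdf",
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
        }
    ]
}

# CloudFront expects the policy serialized without whitespace before signing.
policy_json = json.dumps(policy, separators=(",", ":"))
```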

You can associate the CreationPolicy attribute with a resource to prevent its status from
reaching create complete until AWS CloudFormation receives a specified number of success
signals or the timeout period is exceeded. To signal a resource, you can use the cfn-signal
helper script or SignalResource API. AWS CloudFormation publishes valid signals to the
stack events so that you can track the number of signals sent.
The creation policy is invoked only when AWS CloudFormation creates the associated
resource. Currently, the only AWS CloudFormation resources that support creation policies
are AWS::AutoScaling::AutoScalingGroup, AWS::EC2::Instance,
and AWS::CloudFormation::WaitCondition.

Use the CreationPolicy attribute when you want to wait on resource configuration actions
before stack creation proceeds. For example, if you install and configure software
applications on an EC2 instance, you might want those applications to be running before
proceeding. In such cases, you can add a CreationPolicy attribute to the instance and then
send a success signal to the instance after the applications are installed and configured.
Hence, the option that says: Configure a CreationPolicy attribute to the instance in the
CloudFormation template. Send a success signal after the applications are installed and
configured using the cfn-signal helper script is correct.
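The pattern above can be sketched as a template fragment, built here as a Python dict for illustration (the AMI ID and user data are placeholders; in user data, cfn-signal would run after the applications are installed):

```python
import json

# EC2 instance with a CreationPolicy: CloudFormation holds the resource
# short of CREATE_COMPLETE until it receives 1 success signal, or fails
# the resource if the 15-minute timeout is exceeded.
web_server = {
    "Type": "AWS::EC2::Instance",
    "CreationPolicy": {
        "ResourceSignal": {"Count": 1, "Timeout": "PT15M"}
    },
    "Properties": {
        "ImageId": "ami-12345678",
        "UserData": "(placeholder: install/configure apps, then run cfn-signal)",
    },
}

template = {"Resources": {"WebServer": web_server}}
print(json.dumps(template, indent=2))
```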

Many companies that distribute content over the Internet want to restrict access to documents,
business data, media streams, or content that is intended for selected users, for example, users
who have paid a fee. To securely serve this private content by using CloudFront, you can do
the following:

- Require that your users access your private content by using special CloudFront signed
URLs or signed cookies.

- Require that your users access your Amazon S3 content by using CloudFront URLs, not
Amazon S3 URLs. Requiring CloudFront URLs isn't necessary, but it is recommended to
prevent users from bypassing the restrictions that you specify in signed URLs or signed
cookies. You can do this by setting up an origin access identity (OAI) for your Amazon S3
bucket. You can also configure the custom headers for a private HTTP server or an Amazon
S3 bucket configured as a website endpoint.
All objects and buckets by default are private. The pre-signed URLs are useful if you want
your user/customer to be able to upload a specific object to your bucket, but you don't require
them to have AWS security credentials or permissions.

You can generate a pre-signed URL programmatically using the AWS SDK for Java or the
AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS
Explorer to generate a pre-signed object URL without writing any code. Anyone who
receives a valid pre-signed URL can then programmatically upload an object.

Hence, the correct answers are:

- Restrict access to files in the origin by creating an origin access identity (OAI) and give
it permission to read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed
URLs or signed cookies.

Amazon Pinpoint is an AWS service that you can use to engage with your customers
across multiple messaging channels. You can use Amazon Pinpoint to send push
notifications, in-app notifications, emails, text messages, voice messages, and
messages over custom channels.

If you got your certificate from a third-party CA, import the certificate into ACM or upload it
to the IAM certificate store. Hence, AWS Certificate Manager and IAM certificate
store are the correct answers.

Amazon Kinesis Data Streams

Two important requirements that the chosen AWS service should fulfill are that data must
not go missing (it is stored durably) and that data is streamed in the sequence of
arrival. Kinesis can do the job just
fine because of its architecture. A Kinesis data stream is a set of shards that has a sequence of
data records, and each data record has a sequence number that is assigned by Kinesis Data
Streams. Kinesis can also easily handle the high volume of messages being sent to the
service.

To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch
Logs, AWS offers both a new unified CloudWatch agent, and an older CloudWatch Logs
agent. It is recommended to use the unified CloudWatch agent which has the following
advantages:

- You can collect both logs and advanced metrics with the installation and configuration of
just one agent.

- The unified agent enables the collection of logs from servers running Windows Server.

- If you are using the agent to collect CloudWatch metrics, the unified agent also enables the
collection of additional system metrics, for in-guest visibility.

- The unified agent provides better performance.

CloudWatch Logs Insights enables you to interactively search and analyze your log data in
Amazon CloudWatch Logs. You can perform queries to help you quickly and effectively
respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to
identify potential causes and validate deployed fixes.

CloudWatch Logs Insights includes a purpose-built query language with a few simple but
powerful commands. CloudWatch Logs Insights provides sample queries, command
descriptions, query autocompletion, and log field discovery to help you get started quickly.
Sample queries are included for several types of AWS service logs.
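A typical Logs Insights query uses the language's fields, filter, sort, and limit commands. The sketch below holds such a query as a string, of the kind you would pass to the StartQuery API (the log fields and pattern are illustrative):

```python
# CloudWatch Logs Insights query: find the 20 most recent ERROR messages.
query = """
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
""".strip()
```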

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of
your application, which improves performance and reduces latency. With Lambda@Edge,
you don't have to provision or manage infrastructure in multiple locations around the world.
running. You pay only for the compute time you consume; there is no charge when your code is not
running.

DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically
adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
This enables a table or a global secondary index to increase its provisioned read and write
capacity to handle sudden increases in traffic, without throttling. When the workload
decreases, Application Auto Scaling decreases the throughput so that you don't pay for
unused provisioned capacity.
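The setup can be sketched as the two Application Auto Scaling requests involved: registering the table's capacity as a scalable target, then attaching a target tracking policy (boto3-style parameter shapes; the table name and limits are examples):

```python
# Register the table's read capacity as a scalable target with floor/ceiling.
register_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 500,
}

# Target tracking policy: scale so consumed capacity stays near 70% of
# provisioned capacity, scaling up on traffic spikes and back down after.
scaling_policy = {
    "PolicyName": "OrdersReadScaling",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}
```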

Create a standby replica in another Availability Zone by enabling Multi-AZ deployment,
because it meets both of the requirements.

You can run an Amazon RDS DB instance in several AZs with a Multi-AZ deployment.
Amazon RDS automatically provisions and maintains a secondary standby DB instance in a
different AZ. Your primary DB instance is synchronously replicated across AZs to the
secondary instance to provide data redundancy and failover support, eliminate I/O
freezes, and minimize latency spikes during system backups.

Amazon EBS provides three volume types to best meet the needs of your workloads: General
Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.

General Purpose (SSD) is the new, SSD-backed, general purpose EBS volume type that is
recommended as the default choice for customers. General Purpose (SSD) volumes are
suitable for a broad range of workloads, including small to medium-sized databases,
development and test environments, and boot volumes.

Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance
and are designed for I/O intensive applications such as large relational or NoSQL databases.
Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types.

Magnetic volumes are ideal for workloads where data are accessed infrequently, and
applications where the lowest storage cost is important. Take note that this is a Previous
Generation Volume. The latest low-cost magnetic storage types are Cold HDD (sc1) and
Throughput Optimized HDD (st1) volumes.

AWS Control Tower offers a straightforward way to set up and govern an AWS multi-
account environment, following prescriptive best practices. AWS Control Tower orchestrates
the capabilities of several other AWS services, including AWS Organizations, AWS Service
Catalog, and AWS Single Sign-On, to build a landing zone in less than an hour. It offers a
dashboard to see provisioned accounts across your enterprise, guardrails enabled for policy
enforcement, guardrails enabled for continuous detection of policy non-conformance, and
non-compliant resources organized by accounts and OUs.

AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual
Private Clouds (VPCs) and their on-premises networks to a single gateway. As you grow the
number of workloads running on AWS, you need to be able to scale your networks across
multiple accounts and Amazon VPCs to keep up with the growth.
With AWS Transit Gateway, you only have to create and manage a single connection from the
central gateway to each Amazon VPC, on-premises data center, or remote office across your
network. Transit Gateway acts as a hub that controls how traffic is routed among all the
connected networks which act like spokes. This hub and spoke model significantly simplifies
management and reduces operational costs because each network only has to connect to the
Transit Gateway and not to every other network. Any new VPC is simply connected to the
Transit Gateway and is then automatically available to every other network that is connected
to the Transit Gateway. This ease of connectivity makes it easy to scale your network as you
grow.

A storage optimized instance is designed for workloads that require high, sequential read and
write access to very large data sets on local storage. They are optimized to deliver tens of
thousands of low-latency, random I/O operations per second (IOPS) to applications. Some
instance types can drive more I/O throughput than what you can provision for a single EBS
volume. You can join multiple volumes together in a RAID 0 configuration to use the
available bandwidth for these instances.

AWS Step Functions provides useful guarantees around task assignments. It ensures that a
task is never duplicated and is assigned only once. Thus, even though you may have multiple
workers for a particular activity type (or a number of instances of a decider), AWS Step
Functions will give a specific task to only one worker (or one decider instance). Additionally,
AWS Step Functions keeps at most one decision task outstanding at a time for workflow
execution. Thus, you can run multiple decider instances without worrying about two
instances operating on the same execution simultaneously. These facilities enable you to
coordinate your workflow without worrying about duplicate, lost, or conflicting tasks.

Therefore, the correct answer is: Use the AWS Config managed rule to check if the IAM
user access keys are not rotated within 90 days. Create an Amazon EventBridge
(Amazon CloudWatch Events) rule for the non-compliant keys, and define a target to
invoke a custom Lambda function to deactivate and delete the keys. Amazon
EventBridge (Amazon CloudWatch Events) can check for AWS Config non-compliant events
and then trigger a Lambda for remediation.
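The wiring can be sketched as the EventBridge event pattern that matches Config compliance-change events for the key-rotation rule and routes them to the remediation Lambda function (the rule name shown assumes the AWS Config managed rule identifier `access-keys-rotated`):

```python
# EventBridge event pattern: fire only when the Config rule evaluates a
# resource as NON_COMPLIANT. The rule's target would be the Lambda function
# that deactivates and deletes the stale access keys.
event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": ["access-keys-rotated"],
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}
```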

A security group acts as a virtual firewall for your instance to control inbound and outbound
traffic. When you launch an instance in a VPC, you can assign up to five security groups to
the instance. Security groups act at the instance level, not the subnet level. Therefore, each
instance in a subnet in your VPC can be assigned to a different set of security groups.

You have to design a solution to detect new entries in the DynamoDB table then
automatically trigger a Lambda function to run some tests to verify the processed data.

Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces
of code that automatically respond to events in DynamoDB Streams. With triggers, you can
build applications that react to data modifications in DynamoDB tables.

If you enable DynamoDB Streams on a table, you can associate the stream ARN with a
Lambda function that you write. Immediately after an item in the table is modified, a new
record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda
function synchronously when it detects new stream records.

You can create a Lambda function which can perform a specific action that you specify, such
as sending a notification or initiating a workflow. For instance, you can set up a Lambda
function to simply copy each stream record to persistent storage, such as EFS or S3, to create
a permanent audit trail of write activity in your table.
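A minimal sketch of such a stream-triggered handler, reacting only to newly inserted items (the record layout follows the DynamoDB Streams event format; the `id` attribute and the notion of "verifying" an item are illustrative):

```python
# Lambda handler for a DynamoDB Streams event source: process INSERT
# records only, handing each new item's image to a verification step.
def handler(event, context):
    verified = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # skip MODIFY / REMOVE events
        new_image = record["dynamodb"]["NewImage"]  # DynamoDB-typed attributes
        verified.append(new_image["id"]["S"])       # e.g. run tests on the item
    return {"verified": verified}
```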

In RDS, the Enhanced Monitoring metrics shown in the Process List view are organized as
follows:

- RDS child processes – Shows a summary of the RDS processes that support the DB
instance, for example aurora for Amazon Aurora DB clusters and mysqld for MySQL DB
instances. Process threads appear nested beneath the parent process. Process threads show
CPU utilization only, as other metrics are the same for all threads of the process. The
console displays a maximum of 100 processes and threads. The results are a combination of
the top CPU-consuming and memory-consuming processes and threads. If there are more than
50 processes and more than 50 threads, the console displays the top 50 consumers in each
category. This display helps you identify which processes are having the greatest impact
on performance.

- RDS processes – Shows a summary of the resources used by the RDS management agent,
diagnostics monitoring processes, and other AWS processes that are required to support RDS
DB instances.

- OS processes – Shows a summary of the kernel and system processes, which generally have
minimal impact on performance.

You can configure Amazon Redshift to copy snapshots for a cluster to another region. To
configure cross-region snapshot copy, you need to enable this copy feature for each cluster
and configure where to copy snapshots and how long to keep copied automated snapshots in
the destination region. When a cross-region copy is enabled for a cluster, all new manual and
automatic snapshots are copied to the specified region.
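The per-cluster configuration can be sketched as an EnableSnapshotCopy request (boto3-style parameter shape; the cluster identifier and regions are placeholders):

```python
# Enable cross-region snapshot copy for one Redshift cluster: choose the
# destination region and how long copied automated snapshots are retained.
enable_snapshot_copy = {
    "ClusterIdentifier": "analytics-cluster",
    "DestinationRegion": "us-west-2",
    "RetentionPeriod": 7,  # days to keep copied automated snapshots
}
```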

If your identity store is not compatible with SAML 2.0 then you can build a custom identity
broker application to perform a similar function. The broker application authenticates users,
requests temporary credentials for users from AWS, and then provides them to the user to
access AWS resources.

The application verifies that employees are signed into the existing corporate network's
identity and authentication system, which might use LDAP, Active Directory, or another
system. The identity broker application then obtains temporary security credentials for the
employees.

To get temporary security credentials, the identity broker application calls
either AssumeRole or GetFederationToken, depending on how you want to manage the
policies for users and when the temporary
credentials should expire. The call returns temporary security credentials consisting of an
AWS access key ID, a secret access key, and a session token. The identity broker application
makes these temporary security credentials available to the internal company application. The
app can then use the temporary credentials to make calls to AWS directly. The app caches the
credentials until they expire, and then requests a new set of temporary credentials.
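The broker's call and the credentials it hands back can be sketched as follows (the request mirrors the STS AssumeRole parameters and the response mirrors the STS credentials shape; the role ARN, session name, and credential values are placeholders):

```python
# What the identity broker sends: which role to assume, a session name for
# auditing, and how long the temporary credentials should live.
assume_role_request = {
    "RoleArn": "arn:aws:iam::123456789012:role/EmployeeAccess",
    "RoleSessionName": "employee-jdoe",
    "DurationSeconds": 3600,  # when the temporary credentials expire
}

# What STS returns: the three-part temporary credential set plus its expiry,
# which the broker passes to the internal application.
credentials = {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "(placeholder)",
    "SessionToken": "(placeholder)",
    "Expiration": "2024-01-01T12:00:00Z",
}
```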

AWS DataSync allows you to copy large datasets with millions of files, without having to
build custom solutions with open source tools or license and manage expensive commercial
network acceleration software. You can use DataSync to migrate active data to AWS, transfer
data to the cloud for analysis and processing, archive data to free up on-premises storage
capacity, or replicate data to AWS for business continuity.
AWS DataSync simplifies, automates, and accelerates copying large amounts of data to and
from AWS storage services over the internet or AWS Direct Connect. DataSync can copy
data between Network File System (NFS), Server Message Block (SMB) file servers, self-
managed object storage, or AWS Snowcone, and Amazon Simple Storage Service (Amazon
S3) buckets, Amazon EFS file systems, and Amazon FSx for Windows File Server file
systems.

AWS Application Migration Service (AWS MGN) is the primary migration service
recommended for lift-and-shift migrations to AWS. AWS encourages customers who are
currently using AWS Elastic Disaster Recovery to switch to AWS MGN for future
migrations. AWS MGN enables organizations to move applications to AWS without having
to make any changes to the applications, their architecture, or the migrated servers.
AWS Application Migration Service minimizes time-intensive, error-prone manual
processes by automatically converting your source servers from physical, virtual machines,
and cloud infrastructure to run natively on AWS.

The service simplifies your migration by enabling you to use the same automated process for
a wide range of applications. By launching non-disruptive tests before migrating, you can be
confident that your most critical applications such as SAP, Oracle, and SQL Server, will
work seamlessly on AWS.

Implementation begins by installing the AWS Replication Agent on your source servers.
When you launch Test or Cutover instances, AWS Application Migration Service
automatically converts your source servers to boot and runs natively on AWS.

AWS License Manager is a service that makes it easier for you to manage your software
licenses from software vendors (for example, Microsoft, SAP, Oracle, and IBM) centrally
across AWS and your on-premises environments. This provides control and visibility into the
usage of your licenses, enabling you to limit licensing overages and reduce the risk of non-
compliance and misreporting.

As you build out your cloud infrastructure on AWS, you can save costs by using Bring Your
Own License model (BYOL) opportunities. That is, you can re-purpose your existing license
inventory for use with your cloud resources.

Predictive scaling uses machine learning to predict capacity requirements based on historical
data from CloudWatch. The machine learning algorithm consumes the available historical
data and calculates capacity that best fits the historical load pattern, and then continuously
learns based on new data to make future forecasts more accurate.

An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon
EC2 instance to accelerate High Performance Computing (HPC) and machine learning
applications. EFA enables you to achieve the application performance of an on-premises HPC
cluster with the scalability, flexibility, and elasticity provided by the AWS Cloud.

AWS Storage Gateway connects an on-premises software appliance with cloud-based
storage to provide seamless integration with data security features between your on-premises
IT environment and the AWS storage infrastructure. You can use the service to store data in
the AWS Cloud for scalable and cost-effective storage that helps maintain data security.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in
Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the
AWS Management Console, you can point Athena at your data stored in Amazon S3 and
begin using standard SQL to run ad-hoc queries and get results in seconds.

Use Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key
Management Service (SSE-KMS) – Similar to SSE-S3, but with some additional benefits
and charges for using this service. There are separate permissions for the use of a CMK that
provides added protection against unauthorized access of your objects in Amazon S3. SSE-
KMS also provides you with an audit trail that shows when your CMK was used and by
whom. Additionally, you can create and manage customer-managed CMKs or use AWS
managed CMKs that are unique to you, your service, and your Region.

A company has a web-based ticketing service that utilizes Amazon SQS and a fleet of EC2
instances. The EC2 instances that consume messages from the SQS queue are configured to
poll the queue as often as possible to keep end-to-end throughput as high as possible. The
Solutions Architect noticed that polling the queue in tight loops is using unnecessary CPU
cycles, resulting in increased operational costs due to empty responses.

In this scenario, what should the Solutions Architect do to make the system more cost-
effective?

Long polling helps reduce your cost of using Amazon SQS by reducing the number of empty
responses when there are no messages available to return in reply to
a ReceiveMessage request sent to an Amazon SQS queue and eliminating false empty
responses when messages are available in the queue but aren't included in the response.
- Long polling reduces the number of empty responses by allowing Amazon SQS to wait
until a message is available in the queue before sending a response. Unless the connection
times out, the response to the ReceiveMessage request contains at least one of the available
messages, up to the maximum number of messages specified in the ReceiveMessage action.

- Long polling eliminates false empty responses by querying all (rather than a limited
number) of the servers. Long polling returns messages as soon as any message becomes
available.
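Long polling can be enabled per queue or per request; both are sketched below as request shapes (boto3-style parameters; the queue URL is a placeholder):

```python
# Per queue: every ReceiveMessage call against this queue waits up to
# 20 seconds (the maximum) before returning an empty response.
set_queue_attributes = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/tickets",
    "Attributes": {"ReceiveMessageWaitTimeSeconds": "20"},
}

# Per request: override the wait time on a single ReceiveMessage call.
receive_message = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/tickets",
    "WaitTimeSeconds": 20,         # wait instead of returning empty immediately
    "MaxNumberOfMessages": 10,
}
```

Replacing tight-loop short polling with a 20-second wait eliminates most empty responses, which is what reduces both the CPU cycles and the per-request cost.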

In Amazon Kinesis, the producers continually push data to Kinesis Data Streams and the
consumers process the data in real-time. Consumers (such as a custom application running
on Amazon EC2, or an Amazon Kinesis Data Firehose delivery stream) can store their results
using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3.
- Internet gateway – The Amazon VPC side of a connection to the public Internet.

- NAT gateway – A highly available, managed Network Address Translation (NAT) service for
your resources in a private subnet to access the Internet.

- Virtual private gateway – The Amazon VPC side of a VPN connection.

Amazon EC2 provides enhanced networking capabilities through the Elastic Network
Adapter (ENA). It supports network speeds of up to 100 Gbps for supported instance types.
Elastic Network Adapters (ENAs) provide traditional IP networking features that are required
to support VPC networking.

An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added
capabilities. It provides all of the functionality of an ENA, with additional OS-bypass
functionality. OS-bypass is an access model that allows HPC and machine learning
applications to communicate directly with the network interface hardware to provide low-
latency, reliable transport functionality.
The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach
an EFA to a Windows instance, the instance functions as an Elastic Network Adapter without
the added EFA capabilities.


AWS Network Firewall is a stateful, managed, network firewall, and intrusion detection and
prevention service for your virtual private cloud (VPC). With Network Firewall, you can
filter traffic at the perimeter of your VPC. This includes traffic going to and coming from an
internet gateway, NAT gateway, or over VPN or AWS Direct Connect. Network Firewall
uses Suricata, an open-source intrusion prevention system (IPS), for stateful inspection.

The diagram (omitted here) shows an AWS Network Firewall deployed in a single Availability
Zone and the traffic flow for a workload in a public subnet.

If you do not have an Amazon Aurora Replica (i.e., single instance) and are not running
Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability
Zone as the original instance. This replacement of the original instance is done on a best-
effort basis and may not succeed, for example, if there is an issue that is broadly affecting the
Availability Zone.

Hence, the correct answer is the option that says: Aurora will attempt to create a new DB
Instance in the same Availability Zone as the original instance and is done on a best-
effort basis.
AWS Batch is a powerful tool for developers, scientists, and engineers who need to run a
large number of batch and ML computing jobs. By optimizing compute resources, AWS
Batch enables you to focus on analyzing outcomes and resolving issues, rather than worrying
about the technical details of running jobs.

You can authenticate to your DB instance using AWS Identity and Access Management
(IAM) database authentication. IAM database authentication works with MySQL and
PostgreSQL. With this authentication method, you don't need to use a password when you
connect to a DB instance.
An authentication token is a string of characters that you use instead of a password. After
you generate an authentication token, it's valid for 15 minutes before it expires. If you try to
connect using an expired token, the connection request is denied.
AWS Systems Manager Run Command lets you remotely and securely manage the
configuration of your managed instances. A managed instance is any Amazon EC2 instance
or on-premises machine in your hybrid environment that has been configured for Systems
Manager. Run Command enables you to automate common administrative tasks and perform
ad hoc configuration changes at scale. You can use Run Command from the AWS console,
the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS
SDKs. Run Command is offered at no additional cost.
Amazon SQS automatically deletes messages that have been in a queue for more than the
maximum message retention period. The default message retention period is 4 days. Since the
queue is configured to the default settings and the batch job application only processes the
messages once a week, the messages that are in the queue for more than 4 days are deleted.
This is the root cause of the issue.
To fix this, you can increase the message retention period to a maximum of 14 days using
the SetQueueAttributes action.
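The fix can be sketched as the SetQueueAttributes request that raises retention to the 14-day maximum (boto3-style parameter shape; the queue URL is a placeholder; note the value is given in seconds, as a string):

```python
# Raise the message retention period to the 14-day maximum so messages
# survive until the weekly batch job runs.
fourteen_days = 14 * 24 * 60 * 60  # 1,209,600 seconds

set_queue_attributes = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/weekly-jobs",
    "Attributes": {"MessageRetentionPeriod": str(fourteen_days)},
}
```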

AppSync pipeline resolvers offer an elegant server-side solution to address the common
challenge faced in web applications—aggregating data from multiple database tables. Instead
of invoking multiple API calls across different data sources, which can degrade application
performance and user experience, AppSync pipeline resolvers enable easy retrieval of data
from multiple sources with just a single call. By leveraging Pipeline functions, these resolvers
streamline the process of consolidating and presenting data to end-users.
AWS Trusted Advisor draws upon best practices learned from serving hundreds of
thousands of AWS customers. Trusted Advisor inspects your AWS environment, and then
makes recommendations when opportunities exist to save money, improve system
availability and performance, or help close security gaps. If you have a Basic or Developer
Support plan, you can use the Trusted Advisor console to access all checks in the Service
Limits category and six checks in the Security category.

AWS has an example of the implementation of Quota Monitor CloudFormation template that
you can deploy on your AWS account. The template uses an AWS Lambda function that runs
once every 24 hours. The Lambda function refreshes the AWS Trusted Advisor Service
Limits checks to retrieve the most current utilization and quota data through API calls.
Amazon CloudWatch Events captures the status events from Trusted Advisor. It uses a set of
CloudWatch Events rules to send the status events to all the targets you choose during initial
deployment of the solution: an Amazon Simple Queue Service (Amazon SQS) queue, an
Amazon Simple Notification Service (Amazon SNS) topic or a Lambda function for Slack
notifications.

AWS Wavelength combines the high bandwidth and ultralow latency of 5G networks with
AWS compute and storage services so that developers can innovate and build a new class of
applications.

Wavelength Zones are AWS infrastructure deployments that embed AWS compute and
storage services within telecommunications providers’ data centers at the edge of the 5G
network, so application traffic can reach application servers running in Wavelength Zones
without leaving the mobile providers’ network. This prevents the latency that would result
from multiple hops to the internet and enables customers to take full advantage of 5G
networks. Wavelength Zones extend AWS to the 5G edge, delivering a consistent developer
experience across multiple 5G networks around the world. Wavelength Zones also allow
developers to build the next generation of ultra-low latency applications using the same
familiar AWS services, APIs, tools, and functionality they already use today.
