
AWS Solutions Architect Professional

Cheat Sheet
Quick Bytes for you before the exam!
The information provided in this cheat sheet is for educational purposes only; it was created to help aspirants prepare for the AWS Solutions Architect Professional certification. Though references have been taken from the AWS documentation, it is not intended as a substitute for the official docs. The document may be reused, reproduced, and printed in any form, provided that appropriate sources are credited and the required permissions are received.

Are you ready for AWS Solutions Architect Professional?
Self-assess yourself with Whizlabs FREE TEST.

750+ Hands-on Labs - AWS, GCP, Azure (Whizlabs)
Cloud Sandbox environments - AWS, Azure, GCP & Power BI

Index

Analytics
● Amazon Athena
● Amazon EMR
● AWS Glue
● Amazon Kinesis Data Analytics
● Amazon Data Firehose
● Amazon Kinesis Data Streams
● AWS Lake Formation
● Amazon Managed Streaming for Apache Kafka
● Amazon OpenSearch Service
● Amazon QuickSight

Application Integration
● AWS Simple Workflow Service
● AWS AppSync
● Amazon EventBridge
● Amazon Simple Notification Service
● Amazon Simple Queue Service
● AWS Step Functions

Cloud Financial Management
● AWS Budgets
● AWS Cost and Usage Report
● AWS Cost Explorer

Compute
● AWS Auto Scaling
● AWS Batch
● AWS EC2
● Amazon EC2 Auto Scaling
● AWS Elastic Beanstalk
● AWS Fargate
● AWS Lambda
● AWS Outposts
● AWS Wavelength

Containers
● Amazon Elastic Container Registry
● Amazon Elastic Container Service
● Amazon Elastic Kubernetes Service (EKS)

Database
● Amazon Aurora
● Amazon DocumentDB
● Amazon ElastiCache
● Amazon Keyspaces (for Apache Cassandra)
● Amazon Neptune
● Amazon RDS
● Amazon Redshift

End User Computing
● Amazon WorkSpaces

Frontend Web and Mobile
● AWS Amplify
● Amazon API Gateway

Internet of Things (IoT)
● AWS IoT Analytics
● AWS IoT Core
● AWS IoT Events
● AWS IoT Greengrass

Machine Learning
● Amazon Polly
● Amazon SageMaker
● Amazon Transcribe
● Amazon Comprehend
● Amazon Rekognition
● Amazon Lex

Management and Governance
● AWS CloudFormation
● AWS CloudTrail
● Amazon CloudWatch
● Amazon CloudWatch Logs
● AWS Compute Optimizer
● AWS Config
● AWS Health Dashboard
● AWS Control Tower
● AWS License Manager
● AWS Management Console
● AWS Organizations
● AWS Systems Manager
● AWS Trusted Advisor

Migration and Transfer
● AWS Application Discovery Service
● AWS Database Migration Service
● AWS DataSync
● AWS Migration Hub
● AWS Transfer Family
● Seven common migration strategies (7Rs)

Networking and Content Delivery
● Amazon CloudFront
● AWS Direct Connect
● Elastic Load Balancing
● AWS PrivateLink
● Amazon Route 53
● AWS Transit Gateway
● Amazon VPC

Security, Identity, and Compliance
● AWS Certificate Manager
● Amazon Cognito
● Amazon Detective
● AWS Directory Service
● Amazon GuardDuty
● AWS Identity and Access Management
● Amazon Inspector
● AWS Key Management Service
● AWS Resource Access Manager
● AWS Secrets Manager
● AWS Security Hub
● AWS Security Token Service (AWS STS)
● AWS WAF

Storage
● AWS Backup
● Amazon Elastic Block Store
● Amazon Elastic File System
● Amazon FSx for Windows File Server
● Amazon FSx for Lustre
● Amazon S3
● Amazon S3 Glacier
● AWS Storage Gateway
● AWS Elastic Disaster Recovery

Developer Tools
● AWS CodeCommit
● AWS CodeBuild
● AWS CodeDeploy
● AWS CodePipeline
● AWS Cloud9
● AWS CodeArtifact
● AWS CodeStar
● AWS X-Ray
● Amazon CodeGuru

Media Services
● Amazon Elastic Transcoder

Blockchain
● Amazon Managed Blockchain
Amazon Athena
What is Amazon Athena?
Amazon Athena is an interactive serverless service used to analyze data directly in Amazon
Simple Storage Service using standard SQL ad-hoc queries.
Pricing Details:
● Charges are applied based on the amount of data scanned by each query; standard S3 rates apply separately for storage, requests, and data transfer.
● Canceled queries are also charged based on the amount of data scanned.
● No charges are applied for Data Definition Language (DDL) statements.
● Costs can be reduced by compressing, partitioning, or converting data into a columnar format, since all of these reduce the amount of data scanned.
Functions of Athena:
● It helps to analyze different kinds of data (unstructured, semi-structured, and structured) stored in Amazon S3.
● Using Athena, ad-hoc queries can be executed using ANSI SQL without actually loading the data into Athena.
● It can be integrated with Amazon QuickSight for data visualization and helps generate reports with business intelligence tools.
● It allows SQL clients to connect with a JDBC or an ODBC driver.
● It executes multiple queries in parallel, so there is no need to worry about compute resources.
● It supports various standard data formats, such as CSV, JSON, ORC, Avro, and Parquet.

[Source: AWS Documentation]
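For illustration, below is a minimal Python (boto3) sketch of running an ad-hoc Athena query against data in S3; the database, table, and results-bucket names are hypothetical placeholders, not from this document.

Example (Python, boto3):
import boto3

athena = boto3.client("athena")

# Start an ANSI SQL query directly against data in S3; query results are
# written to the given S3 output location.
response = athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page",
    QueryExecutionContext={"Database": "web_logs"},   # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])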

Amazon EMR
What is Amazon EMR?
Amazon EMR (Elastic Map Reduce) is a service used to process and analyze large amounts of
data in the cloud using Apache Hive, Hadoop, Apache Flink, Spark, etc.

● The main component of EMR is a cluster, which is a collection of Amazon EC2 instances (also known as nodes in EMR).
● It decouples the compute and storage layers, allowing them to scale independently, by storing cluster data on Amazon S3.
● It also controls network access for the instances by configuring instance firewall
settings.
● It offers basic functionalities for maintaining clusters such as monitoring, replacing
failed instances, bug fixes, etc.
● It analyzes machine learning workloads using Apache Spark MLlib and TensorFlow,
clickstream workloads using Apache Spark and Apache Hive, and real-time streaming
workloads from Amazon Kinesis using Apache Flink.

It provides more than one compute instance or container to process the workloads and can be
executed on the following AWS services:
● Amazon EC2
● Amazon EKS
● AWS Outposts
Amazon EMR can be accessed in the following ways:
● EMR Console
● AWS Command Line Interface (AWS CLI)
● Software Development Kit (SDK)
● Web Service API
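As a sketch of the API access mentioned above, the following boto3 call launches a small Spark cluster; it assumes the default EMR service roles exist, and the log bucket name is hypothetical.

Example (Python, boto3):
import boto3

emr = boto3.client("emr")

cluster = emr.run_job_flow(
    Name="demo-spark-cluster",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        # Terminate the cluster when there are no more steps to run.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    LogUri="s3://my-emr-logs/",          # hypothetical bucket
    JobFlowRole="EMR_EC2_DefaultRole",   # assumes default roles exist
    ServiceRole="EMR_DefaultRole",
)
print(cluster["JobFlowId"])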

AWS Glue
What is AWS Glue?
AWS Glue is a serverless ETL (extract, transform, and load) service used to categorize data and move it between various data stores and streams.

AWS Glue works with the following services:

● Amazon Redshift - for data warehouses
● Amazon S3 - for data lakes
● Amazon RDS or EC2 instances - for data stores

Properties of AWS Glue:

● It supports data integration, preparing and combining data for analytics, machine learning, and other application development.
● It has a central metadata repository known as the AWS Glue Data Catalog, and it can automatically generate Python or Scala ETL code.
● It processes semi-structured data using a simple ‘dynamic frame’ in the ETL scripts, similar to an Apache Spark data frame, that organizes data into rows and columns.
● It helps execute ETL jobs in an Apache Spark environment by discovering data and storing the associated metadata in the AWS Glue Data Catalog.
● AWS Glue and Spark can be used together by converting between dynamic frames and Spark data frames to perform all kinds of analysis.
● It allows organizations to work together and perform data integration tasks, like extraction, normalization, combining, loading, and running ETL workloads.
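To make the dynamic-frame idea concrete, here is a minimal sketch meant to run inside a Glue job (where the awsglue libraries are available); the catalog database, table, and output bucket names are hypothetical.

Example (Python, Glue ETL):
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_ctx = GlueContext(SparkContext.getOrCreate())

# Load a table from the Glue Data Catalog as a DynamicFrame.
clicks = glue_ctx.create_dynamic_frame.from_catalog(
    database="web_logs", table_name="clicks"   # hypothetical names
)

# Convert to a Spark DataFrame for analysis, then write out as Parquet.
clicks.toDF().write.parquet("s3://my-data-lake/clicks-parquet/")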

Amazon Kinesis Data Analytics
What is Amazon Kinesis Data Analytics?
Amazon Kinesis Data Analytics is a cloud-native offering within the AWS ecosystem, designed
to simplify the processing and analysis of real-time streaming data. It is an integral component
of the broader Amazon Kinesis family, which is tailored to streamline operations involving
streaming data.

Features
Real-time Data Processing: Kinesis Data Analytics can ingest and process data streams in
real-time, making it well-suited for applications that require immediate insights and responses
to streaming data, such as IoT (Internet of Things) applications, clickstream analysis, and more.

SQL-Based Programming: You can write SQL queries to transform, filter, aggregate, and analyze
streaming data without the need for low-level coding. It may not support very complex SQL
queries or advanced analytical functions found in traditional databases.

Integration with Other AWS Services: Kinesis Data Analytics can easily integrate with other
AWS services like Kinesis Data Streams (for data ingestion), Lambda (for serverless computing),
and various data storage and analytics tools like Amazon S3, Amazon Redshift, and more.

Real-time Analytics Applications: You can use Kinesis Data Analytics to build real-time analytics
applications, perform anomaly detection, generate alerts based on streaming data patterns, and
even create real-time dashboards to visualize your insights.

Scalability: Kinesis Data Analytics is designed to scale automatically based on the volume of
data you're processing, ensuring that your analytics application can handle growing workloads
without manual intervention.
Limitations:
Data Retention: Data retention in Kinesis Data Analytics is generally limited. You may need to
store your data in another AWS service (e.g., Amazon S3) if you require long-term storage of
streaming data.
Throughput: There are limits on the maximum throughput that Kinesis Data Analytics can handle. If you need to process extremely high volumes of streaming data, you may need to consider partitioning your data streams and scaling your application accordingly.

Resource Allocation: AWS manages the underlying infrastructure for Kinesis Data Analytics, but you may have limited control over resource allocation. This means that you might not be able to fine-tune the resources allocated to your application.

Amazon Data Firehose
What is Amazon Data Firehose?
Amazon Data Firehose is a serverless service used to capture, transform, and load streaming
data into data stores and analytics services.

● It synchronously replicates data across three AZs while delivering it to the destinations.
● It allows real-time analysis with existing business intelligence tools and helps to transform, batch, compress, and encrypt the data before delivering it.
● A Kinesis Data Firehose delivery stream is created to send data; each delivery stream keeps data records for one day.
● It delivers data with a minimum latency of 60 seconds, or once at least 32 MB of data has been buffered, whichever comes first.
● Kinesis Data Streams and CloudWatch Events can be used as sources for Kinesis Data Firehose.

It delivers streaming data to the following services:

● Amazon S3
● Amazon Redshift
● Amazon OpenSearch Service (formerly Amazon Elasticsearch Service)
● Splunk
● Custom HTTP endpoints
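For illustration, a minimal Python (boto3) sketch of writing one record to an existing delivery stream; the stream name and record contents are hypothetical.

Example (Python, boto3):
import json
import boto3

firehose = boto3.client("firehose")

record = {"user": "u-123", "action": "click", "page": "/home"}

# Firehose buffers this record and delivers it to the stream's destination.
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # hypothetical stream
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)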

Amazon Kinesis Data Streams

What is Amazon Kinesis?


Amazon Kinesis is a service used to collect, process, and analyze real-time streaming data. It
can be an alternative to Apache Kafka.

What are Amazon Kinesis Data Streams?


Amazon Kinesis Data Streams (KDS) is a scalable real-time data streaming service. It captures
gigabytes of data from sources like website clickstreams, events streams (database and
location-tracking), and social media feeds

● The Kinesis family consists of Kinesis Data Streams, Kinesis Data Analytics, Kinesis
Data Firehose, and Kinesis Video Streams.
● Real-time data can be ingested from producers such as the Kinesis Streams API, the Kinesis Producer Library (KPL), and the Kinesis Agent.
● It allows building custom applications known as Kinesis Data Streams applications (consumers), which read data from a data stream as data records.
● Data streams are divided into shards/partitions, whose data retention is 1 day (by default) and can be extended to 7 days.
● Each shard provides a capacity of 1 MB per second of input data and 2 MB per second of output data.
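To show the shard/partition-key relationship described above, here is a minimal Python (boto3) sketch; the stream name and event fields are hypothetical. Records with the same partition key land on the same shard, preserving their relative order.

Example (Python, boto3):
import json
import boto3

kinesis = boto3.client("kinesis")

event = {"device_id": "sensor-42", "temp_c": 21.5}

kinesis.put_record(
    StreamName="iot-telemetry",                 # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["device_id"],            # maps the record to a shard
)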

AWS Lake Formation
A data lake is a secure repository that stores all the data in its original form and is used for
analysis.

What is AWS Lake Formation?


AWS Lake Formation is a cloud service that is used to create, manage, and secure data lakes. It
automates the complex manual steps required to create data lakes.

AWS Lake Formation integrates with:


● Amazon CloudWatch
● Amazon CloudTrail
● AWS Glue: both use the same Data Catalog
● Amazon Redshift Spectrum
● Amazon EMR
● AWS Key Management Service
● Amazon Athena: Athena users can query AWS Glue Data Catalog tables that have Lake Formation permissions applied to them.

Lake Formation is pointed at the data sources, then crawls the sources and moves the data into
the new Amazon S3 data lake.

It integrates with AWS Identity and Access Management (IAM) to provide fine-grained access to the data stored in data lakes using a simple grant/revoke process.
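As a sketch of that grant/revoke model, the following boto3 call grants SELECT on a data lake table to a principal; the role ARN, database, and table names are hypothetical.

Example (Python, boto3):
import boto3

lf = boto3.client("lakeformation")

lf.grant_permissions(
    Principal={
        # Hypothetical IAM role that should be allowed to query the table.
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst"
    },
    Resource={"Table": {"DatabaseName": "sales", "Name": "orders"}},
    Permissions=["SELECT"],
)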

Pricing Details:
Charges are applied based on the service integrations (AWS Glue, Amazon S3, Amazon EMR,
Amazon Redshift) at a standard rate

Amazon Managed Streaming for Apache Kafka
(Amazon MSK)

It helps to populate machine learning applications, analytical applications, and data lakes, and
stream changes to and from databases using Apache Kafka APIs.

What is Amazon MSK?


Amazon MSK is a managed cluster service used to build and execute Apache Kafka
applications for processing streaming data.

It provides multiple kinds of security for Apache Kafka clusters, including:


● AWS IAM for API Authorization
● Encryption at Rest
● Apache Kafka Access Control Lists (ACLs)
It simplifies application setup by removing the manual configuration tasks that would otherwise be required.

The steps that Amazon MSK manages are:

● Replacing servers during failures


● Handling server patches and upgrades with no downtime
● Maintenance of Apache Kafka clusters
● Maintenance of Apache ZooKeeper
● Multi-AZ replication for Apache Kafka clusters
● Planning scaling events

Amazon MSK Integrates with:


● AWS Glue: to execute Apache Spark jobs on an Amazon MSK cluster
● Amazon Kinesis Data Analytics: to execute Apache Flink jobs on an Amazon MSK cluster
● Lambda Functions

Amazon OpenSearch Service

OpenSearch is a free and open-source search engine for all types of data like textual, numerical, geospatial, structured, and unstructured.

Amazon OpenSearch Service can be integrated with the following services:


● Amazon CloudWatch
● Amazon CloudTrail
● Amazon Kinesis
● Amazon S3
● AWS IAM
● AWS Lambda
● Amazon DynamoDB

What is Amazon OpenSearch Service?


Amazon OpenSearch Service is a managed service that allows users to deploy, manage, and scale OpenSearch and legacy Elasticsearch clusters in the AWS Cloud. The service provides direct access to the OpenSearch and Elasticsearch APIs.

Amazon OpenSearch Service with Kibana (visualization) & Logstash (log ingestion) provides an
enhanced search experience for applications and websites to find relevant data quickly

Amazon OpenSearch Service launches the cluster’s resources, detects failed nodes, and replaces them.

The OpenSearch Service cluster can be scaled with a few clicks in the console.
Pricing Details:

● Charges are applied for each hour of use of EC2 instances and storage volumes
attached to the instances
● Amazon OpenSearch Service does not charge for data transfer between availability
zones

Amazon QuickSight
What is Amazon QuickSight?
● Amazon QuickSight: A scalable cloud-based BI service providing clear insights to
collaborators worldwide.
● Connects to various data sources, consolidating them into single data
dashboards.
● Fully managed with enterprise-grade security, global availability, and built-in
redundancy.
● User management tools support scaling from 10 users to 10,000 without
infrastructure deployment.
● Empowers decision-makers to explore and interpret data interactively.
● Securely accessible from any network device, including mobile devices.

Features:
● Automatically generate accurate forecasts.
● Automatically detect anomalies.
● Uncover latent trends.
● Take action based on critical business factors.
● Transform data into easily understandable narratives, such as headline tiles for your
dashboard.

The platform offers enterprise-grade security with authentication for federated users
and groups via IAM Identity Center, supporting single sign-on with SAML, OpenID
Connect, and AWS Directory Service. It ensures fine-grained permissions for AWS data

access, row-level security, and robust encryption for data at rest. Users can access both
AWS and on-premises data within Amazon Virtual Private Cloud for enhanced security.

Benefits:
● Achieve a 74% cost reduction in BI solutions over three years, with up to a 300%
increase in analytics usage.
● Enjoy no upfront licensing costs and minimal total cost of ownership (TCO).
● Enable collaborative analytics without application installation.
● Aggregate diverse data sources into single analyses and share them as
dashboards.
● Manage dashboard features, permissions, and simplify database permissions
management for viewers accessing shared content.

Amazon Q in QuickSight:
Amazon Q within QuickSight enhances business productivity by leveraging Generative BI
capabilities to expedite decision-making. New dashboard authoring features empower
analysts to swiftly build, discover, and share insights using natural language prompts.
Amazon Q simplifies data comprehension with executive summaries, an improved
context-aware Q&A experience, and customizable interactive data stories.
Pricing:
● QuickSight offers flexible pricing based on user roles, allowing selection of the
model that aligns with business requirements.
● A low $3/month reader fee enables organization-wide access to interactive
analytics and natural language capabilities.
● Choose between per-user pricing and capacity pricing based on business needs.

Links:
https://docs.aws.amazon.com/quicksight/latest/user/welcome.html (Amazon QuickSight - Business Intelligence Tools)
https://aws.amazon.com/quicksight/pricing/

Amazon Simple Workflow Service (Amazon SWF)

Amazon Simple Workflow Service (Amazon SWF) is used to coordinate work amongst
distributed application components.

What is Amazon Simple Workflow Service?


Amazon Simple Workflow Service is a fully managed service for coordinating tasks across distributed application components. A task is a logical representation of work performed by a component of the application.

Tasks are performed by implementing workers, which execute either on Amazon EC2 or on on-premises servers (which means it is not a serverless service).

● Amazon SWF stores tasks and assigns them to workers during execution.
● It controls task implementation and coordination, such as tracking and maintaining the
state using API.
● It helps to create distributed asynchronous applications and supports sequential and
parallel processing.
● It is best suited for human-intervened workflows.
● Amazon SWF is a less-used service; for new applications, AWS Step Functions is generally the better option.

AWS AppSync
What is AWS AppSync?
● AWS AppSync is a serverless service used to build GraphQL API with real-time data
synchronization and offline programming features.
● GraphQL is a data language built to allow apps to fetch data from servers.

● It replaces the functionality of Cognito Sync by providing offline data synchronization.
● It improves performance by providing data caches, provides subscriptions to support real-time updates, and provides client-side data stores to keep offline clients in sync.
● It offers certain advantages over running GraphQL yourself, such as an enhanced coding style and seamless integration with modern tools and frameworks like iOS and Android.
● The AppSync interface provides a live GraphQL API feature that allows users to test and iterate on GraphQL schemas and data sources quickly.
● Along with AppSync, AWS provides the Amplify Framework, which helps build mobile and web applications using GraphQL APIs.

The different data sources supported by AppSync are Amazon DynamoDB, Amazon Aurora, Amazon OpenSearch Service, AWS Lambda, and HTTP endpoints.

Amazon EventBridge
What is Amazon EventBridge?

● A serverless event bus service for Software-as-a-Service (SaaS) and AWS services.

● In simple words, Amazon EventBridge provides an easy solution to integrate SaaS and custom-built applications with more than 17 AWS services, with delivery of real-time data from different event sources. Users can easily set up routing rules to determine the target web service, and multiple target locations (such as AWS Lambda or AWS SNS) can be selected at once.

● It is a fully managed service that takes care of event ingestion, delivery, security, authorization, error handling, and the required infrastructure management tasks to set up and run a highly scalable serverless event bus. EventBridge was formerly called Amazon CloudWatch Events, and it uses the same CloudWatch Events API.

Key Concepts
Event Buses

An event bus receives events. When a user creates a rule, it is associated with a specific event bus, and the rule matches only events received by that event bus. Each account has one default event bus, which receives events from AWS services. Custom event buses can also be created.
Events

An event indicates a change in the environment. By creating rules, you can have AWS services
that act automatically when changes occur in other AWS services, in SaaS applications, or user’s
custom applications.

Schema Registry

A schema registry is a container for schemas. Schemas are available for the events of all AWS services on Amazon EventBridge. Users can create or update their own schemas, or automatically infer schemas from events running on event buses. Each schema can have multiple versions; users can use the latest schema or select earlier versions.
Rules

A rule matches incoming events and routes them to targets for processing. A single rule can
route an event (JSON format) to multiple targets. All pointed targets will be processed in parallel
and in no particular order.

Targets
A target receives events in JSON format and processes them. A rule’s target must be in the same region as the rule.
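To tie the concepts together, here is a minimal Python (boto3) sketch of publishing a custom event to an event bus; the bus name, source, and detail fields are hypothetical. A rule matching this source and detail-type would route the event to its targets.

Example (Python, boto3):
import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "EventBusName": "orders-bus",            # hypothetical custom bus
            "Source": "com.example.orders",
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"order_id": "o-789", "total": 42.5}),
        }
    ]
)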

Features:
● Fully managed, pay-as-you-go.
● Native integration with SaaS providers.
● 90+ AWS services as sources.
● 17 AWS services as targets.
● $1 per million events put into the bus.
● No additional cost for delivery.
● Multiple target locations for delivery.
● Easy to scale and manage.

This service receives input from different sources (such as custom apps, SaaS applications, and AWS services). Amazon EventBridge contains an event source for a SaaS application, responsible for the authentication and security of the source. EventBridge has a schema registry, event buses (default, custom, and partner), and rules for the target services.

Pricing

● There are no additional charges for rules or event delivery.
● Users only pay for events published to the event bus, events ingested for Schema Discovery, and Event Replay.
● Custom events: $1.00 per million requests.
● Third-party events (SaaS): $1.00 per million requests.
● Cross-account events: $1.00 per million requests.

AWS SNS (Simple Notification Service)
What is AWS SNS?

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud.

It provides developers with a highly scalable, flexible, and cost-effective approach to publishing messages from an application and delivering them to subscribers or other applications. It provides push notifications directly to mobile devices and delivers notifications by SMS text message, by email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint.

It allows developers to group multiple recipients using topics. It consists of topics and subscribers.

A topic is an access point that allows recipients to receive identical copies of the same notification. One topic can support deliveries to multiple endpoint types; for example, we can group together Android, iOS, and SMS recipients.
Two types of topics can be defined in the AWS SNS service.
1. A Standard topic is used when message ordering is not critical; messages are delivered in the order they happen to arrive, and duplicates are possible.
2. A FIFO topic is designed to maintain the order of messages between the applications, especially when the events are critical. Duplication is avoided in this case.
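For illustration, a minimal Python (boto3) sketch of publishing to a topic; the topic ARN and message contents are hypothetical. Every subscriber of the topic receives a copy.

Example (Python, boto3):
import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-alerts",  # hypothetical
    Subject="Order shipped",
    Message="Order o-789 left the warehouse.",
)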

Features
● Instantaneous, push-based delivery.
● Simple API and easy integration with AWS services.
● Flexible message delivery over multiple message protocols.
● Cost-effective, with a pay-as-you-go model.
● Fully managed and durable with automatic scalability.
Use cases
● SNS application-to-person: the SNS service publishes messages to a topic, which sends messages to each subscribed customer’s cell phone. This is an example of an application-to-person service.
● SNS application-to-application: here an SNS topic interacts with different AWS services such as AWS Lambda, a Node.js app, and SQS. For example, the Amazon S3 service only needs a configuration with the SNS service, which will then be responsible for sending identical messages to the other AWS services.

Pricing
● Standard Topics: the first 1 million Amazon SNS requests per month are free; after that, requests cost $0.50 per 1 million.
● FIFO Topics: Amazon SNS FIFO topic pricing is based on the number of published
messages, the number of subscribed messages, and their respective amount of payload
data.

Amazon Simple Queue Service (SQS)
What is Amazon Simple Queue Service (SQS)?
Amazon Simple Queue Service (SQS) is a serverless service used to decouple (loosely couple) serverless applications and components.
The queue represents a temporary repository between the producer and consumer of messages.
It scales elastically from a single message to many thousands of messages per second.
The default retention period of messages is four days and can be extended to fourteen days.
SQS messages are automatically deleted after being consumed by the consumers. SQS messages have a maximum size of 256 KB.

There are two SQS Queue types:

Standard Queue -
● Nearly unlimited number of transactions per second.
● Messages may be delivered in a different order from the one in which they were sent.
● Messages can be delivered twice or multiple times.

FIFO Queue -
● Up to 300 messages per second without batching.
● Supports batches of 10 messages per operation, resulting in up to 3,000 messages per second.
● Messages are consumed exactly once.

Delay Queue is a queue that allows users to postpone/delay the delivery of messages to a queue for a specific number of seconds. Messages can be delayed from 0 seconds (default) up to 15 minutes (maximum).

Dead-Letter Queue is a queue for messages that could not be consumed successfully; it is used to handle message failure.

Visibility Timeout is the amount of time during which SQS prevents other consumers from receiving (polling) and processing a message.

● Default visibility timeout - 30 seconds
● Minimum visibility timeout - 0 seconds
● Maximum visibility timeout - 12 hours
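The basic produce/consume loop looks like the following minimal Python (boto3) sketch; the queue URL is hypothetical. Deleting the message after processing prevents redelivery once the visibility timeout expires.

Example (Python, boto3):
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

sqs.send_message(QueueUrl=queue_url, MessageBody="process order o-789")

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    # Delete within the visibility timeout so the message is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])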

AWS Step Functions
What are Step functions?
Step Functions allows developers to offload application orchestration onto a fully managed AWS service. This means you can modularize your code into “steps” and let AWS handle partial failure cases, retries, and error handling scenarios.

Types of step functions:


1. Standard workflow: Standard workflow can be used for long-running, durable, and
auditable workflows.
2. Express Workflow: Express workflow is designed for high volume, and event processing
workloads.
Features:
● Allows creating workflows that follow a fixed or dynamic sequence.
● Inbuilt “Retry” and error handling functionality.
● Supports integration with native AWS services such as Lambda, SNS, ECS, AWS Fargate, etc.
● The GUI supports auditing the workflow process, inputs/outputs, etc.
● The GUI provides support to analyze the running process and detect failures immediately.
● High availability, high scalability, and low cost.
● Manages the state of the application during workflow execution.
● Step Functions is based on the concepts of tasks and state machines.
● Tasks can be defined by using an activity or an AWS Lambda function.
● State machines express an algorithm as a set of states, the relations among them, and their input/output.

Best Practices:

● Set timeouts in state machine definitions; they let the state machine respond when something goes wrong and a response from an activity never arrives.
Example:
"ActivityState": {
  "Type": "Task",
  "Resource": "arn:aws:states:us-east-1:123456789012:activity:abc",
  "TimeoutSeconds": 900,
  "HeartbeatSeconds": 40,
  "Next": "State2"
}
● Always provide an Amazon S3 ARN (Amazon Resource Name) instead of a large payload when passing input to a Lambda function from the state machine.
Example:
{
  "Data": "arn:aws:s3:::MyBucket/data.json"
}
● Handle errors in state machines while invoking AWS Lambda functions.
Example:
"Retry": [ {
  "ErrorEquals": [ "Lambda.CreditServiceException" ],
  "IntervalSeconds": 2,
  "MaxAttempts": 3,
  "BackoffRate": 2
} ]
● Execution history has a hard quota of 25,000 entries. To avoid hitting this quota for long-running executions, implement a pattern that continues the work in a new execution, for example via an AWS Lambda function.

It supports the following AWS services:

● Lambda
● AWS Batch
● DynamoDB
● ECS/Fargate
● SNS
● SQS
● SageMaker
● EMR

Pricing:
● With Step Functions, you pay only for what you use: Standard workflows are charged per state transition, while Express workflows are charged based on the number of requests and their duration.
● $0.025 per 1,000 state transitions (for Standard workflows)
● $1.00 per 1M requests (for Express workflows)

AWS Budgets
What is AWS Budgets?
AWS Budgets enables the customer to set custom budgets to track cost and usage from the
simplest to the complex use cases.

● AWS Budgets can be used to set reservation utilization or coverage targets, allowing you to get alerts by email or SNS notification when the metrics reach the threshold.
● The reservation alerts feature is available for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon OpenSearch Service (Elasticsearch).
● The Budgets can be filtered based on specific dimensions such as Service, Linked Account,
Tags, Availability Zone, API Operation, and Purchase Option (i.e., “Reserved”) and be notified
using SNS.
● AWS Budgets can be accessed from the AWS Management Console’s service links and within
the AWS Billing Console. Budgets API or CLI (command-line interface) can also be used to
create, edit, delete, and view up to 20,000 budgets per payer account.
● AWS Budgets can be integrated with other AWS services such as AWS Cost Explorer, AWS
Chatbot, Amazon Chime room, and AWS Service Catalog.
● Budgets can be created on a monthly, quarterly, or annual basis for AWS resource usage or AWS costs.

The following types of budgets can be created using AWS Budgets:


● Cost budgets
● Usage budgets
● RI utilization budgets
● RI coverage budgets
● Savings Plans utilization budgets
● Savings Plans coverage budgets
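As an illustration of the Budgets API mentioned above, here is a minimal Python (boto3) sketch of creating a monthly cost budget with an 80% actual-spend alert; the account ID, budget name, and email address are hypothetical.

Example (Python, boto3):
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                      # hypothetical payer account
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert when actual monthly costs exceed 80% of the budgeted amount.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ops@example.com"}
            ],
        }
    ],
)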

Best Practices:
● Users can set up to five alerts for each budget. But the most important are:
○ Alerts when current monthly costs exceed the budgeted amount.
○ Alerts when current monthly costs exceed 80% of the budgeted amount.
○ Alerts when forecasted monthly costs exceed the budgeted amount.
● When creating budgets using the Budgets API, a separate IAM user or IAM role should be created for each user if multiple users need access to the Budgets API.
● If consolidated billing in an organization is handled by a management (master) account, IAM policies can control access to budgets by member accounts. Member account owners can create their own budgets but cannot change or edit the budgets of the management account.
● Two related managed policies are provided for budget actions. One policy allows a user to pass a role to the budgets service, and the other allows budgets to execute the action.
● Budget actions are not effective enough to control costs with Auto Scaling groups.

Price details:
● Monitoring the budgets and receiving notifications are free of charge.
● Each subsequent action-enabled budget will experience a $0.10 daily cost after the free quota
ends.

AWS Cost and Usage Report
What is AWS Cost and Usage Report?

AWS Cost & Usage Report is a service that allows users to access the detailed set of AWS cost
and usage data available, including metadata about AWS resources, pricing, Reserved Instances,
and Savings Plans.

AWS Cost and Usage Reports functions:

● It sends report files to your Amazon S3 bucket.
● It updates reports up to three times a day.

Access to the Cost and Usage Reports

● For viewing, reports can be downloaded from the Amazon S3 console; for analyzing, Amazon Athena can be used, or the report can be uploaded into Amazon Redshift or Amazon QuickSight.
● Users with IAM permissions or IAM roles can access and view the reports.
● If a member account in an organization owns or creates a Cost and Usage Report, it can access only the billing data for the period during which it has been a member of the organization.
● If the management (master) account of an AWS Organization wants to block member accounts from setting up a Cost and Usage Report, a Service Control Policy (SCP) can be used.

AWS Cost Explorer
What is AWS Cost Explorer?
AWS Cost Explorer is a UI tool that enables users to analyze costs and usage with the help of graphs, the Cost Explorer cost and usage reports, and/or the Cost Explorer RI reports. It can be accessed from the Billing and Cost Management console.

It provides default reports for analysis, with filters and constraints to create custom reports. An analysis created in Cost Explorer can be saved as a bookmark, downloaded as a CSV file, or saved as a report.

The default reports provided by Cost Explorer are:

● Cost and Usage Reports: provide the following data for understanding costs:
○ AWS Marketplace
○ Daily costs
○ Monthly costs by linked account
○ Monthly costs by service
○ Monthly EC2 running hours costs and usage

● Reserved Instance Reports: provide the following reports for understanding reservations:
○ RI utilization reports: give information about how much cost is saved or overspent by using Reserved Instances (RIs).
○ RI coverage reports: give information about how many hours are saved or overspent by using Reserved Instances (RIs).

● The first time the user signs up for Cost Explorer, it walks through the main parts of the console. It prepares the cost and usage data and displays up to 12 months of historical data (less if less is available), current-month data, and then calculates forecast data for the next 12 months.
● It uses the same set of data that is used to generate the AWS Cost and Usage Reports and the billing reports.
● It provides a custom time period to view the data at a monthly or daily interval.
● It surfaces Savings Plans, which can provide savings of up to 72% on AWS compute usage.
● It provides a way to access the data programmatically using the Cost Explorer API.
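As a sketch of that programmatic access, the following Python (boto3) snippet fetches one month of unblended cost grouped by service; the date range is a hypothetical example.

Example (Python, boto3):
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # hypothetical dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])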

AWS Auto Scaling
What is AWS Auto Scaling?
● AWS Auto Scaling keeps on monitoring your Application and automatically adjusts the
capacity required for steady and predictable performance.
● By using auto scaling it's very easy to set up the scaling of the application automatically with
no manual intervention.
● It allows you to create scaling plans for resources like EC2 instances, Amazon ECS tasks, Amazon DynamoDB tables, and Amazon Aurora Read Replicas.
● It balances Performance Optimization and cost.

Terminologies related to AWS Autoscaling Groups:


Launch Configuration vs Launch Template
o EC2 Auto Scaling uses two types of instance configuration templates: launch configurations and launch templates.
o AWS recommends using launch templates to make sure that you're getting the latest features from Amazon EC2.
o For example, you must use launch templates to use Dedicated Hosts, which enable you to bring your eligible software licenses from vendors, including Microsoft, and use them on EC2.
o If you intend to use a launch configuration with EC2 Auto Scaling, be aware that not all Auto Scaling group features are available.
o If you want to launch both On-Demand and Spot Instances, you have to choose a launch template.

Auto Scaling Lifecycle Hooks:

● A lifecycle hook pauses an EC2 instance as the group launches or terminates it.
● The paused instance remains in a wait state until the configured action is completed.
● The wait state remains active until the timeout period ends.
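For illustration, a minimal Python (boto3) sketch of adding a launch lifecycle hook that holds new instances in the wait state for up to 5 minutes so bootstrap actions can finish; the group and hook names are hypothetical.

Example (Python, boto3):
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="bootstrap-hook",     # hypothetical hook name
    AutoScalingGroupName="web-asg",         # hypothetical group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,                   # seconds to wait before timing out
    DefaultResult="CONTINUE",               # proceed if the hook times out
)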

Monitoring:
● Health Checks: keep checking the health of the instances and remove unhealthy instances from the Target Group.
● CloudWatch Events: Auto Scaling can submit events to CloudWatch for actions performed in the Auto Scaling group, such as launching or terminating an instance.
● CloudWatch Metrics: show you statistics on whether your application is performing as expected.
● Notification Service: Auto Scaling can send a notification to your email when the group launches or terminates an instance.

Charges:
● AWS will not charge you additionally for the Autoscaling Group.
● You will be paying for the AWS Resources that you will use.

AWS Batch

What is AWS Batch?

AWS Batch allows developers, scientists, and engineers to run thousands of computing jobs on the AWS platform. It is a managed service that dynamically provisions the optimal compute resources (such as CPU and memory) based on the volume of submitted jobs. The user just has to focus on the applications (like shell scripts, Linux executables, or Java programs).
It executes workloads on EC2 (including Spot Instances) and AWS Fargate.

Components:

● Jobs - the fundamental applications, running on Amazon EC2 machines in containerized form.
● Job Definitions - define how the job is meant to be run, such as the associated IAM role, vCPU requirement, and container properties.
● Job Queues - jobs reside in a job queue, where they wait until they are scheduled.
● Compute Environments - each job queue is linked to a compute environment, which contains the EC2 instances that run the containerized applications. There are two types of environments: Managed, where the user gives the min and max vCPUs and EC2 instance type and AWS runs it on your behalf, and Unmanaged, where you run your own ECS agent.
● Scheduler - maintains the execution of jobs submitted to the queue, honoring timing and dependencies.
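Tying these components together, here is a minimal Python (boto3) sketch of submitting a containerized job to an existing job queue; the job, queue, and definition names are hypothetical.

Example (Python, boto3):
import boto3

batch = boto3.client("batch")

# Submit a job against an existing queue and registered job definition.
job = batch.submit_job(
    jobName="nightly-etl",         # hypothetical
    jobQueue="etl-queue",          # hypothetical
    jobDefinition="etl-job-def:3", # hypothetical definition and revision
)
print(job["jobId"])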

Best Practices:

● Use Fargate if you want to run the application without getting into EC2 infrastructure
details. Let the AWS batch manage it.
● Use EC2 if your work scale is very large and you want to get into machine specifications
like memory, CPU, GPU.
● Jobs running on Fargate are faster on startup as there is no time lag in scale-out
operation, unlike EC2 where launching new instances may take time.

Use Cases:
● Stock markets and trading - the trading business involves daily processing of large-scale data and loading it into a data warehouse for analytics, so that predictions and decisions are quick enough to grow the business on a regular basis.
● Media houses and the entertainment industry - here a large amount of data in the form of audio, video, and photos is processed daily to cater to customers. These application workloads can be moved to containers on AWS Batch.

Pricing:
● There is no charge for AWS Batch rather you pay for the resources like EC2 and Fargate
you use.

AWS EC2
What is AWS EC2?
● EC2 stands for Elastic Compute Cloud.
● Amazon EC2 is the virtual machine in the Cloud Environment.
● Amazon EC2 provides scalable capacity. Instances can scale up and down automatically
based on the traffic.
● You do not have to invest in the hardware.
● You can launch as many servers as you want and you will have complete control over the
servers and can manage security, networking, and storage.

Instance Type:
● EC2 offers a range of instance types for various use cases.
● The instance type determines the processor and memory capacity of your EC2 instance.

EBS Volume:
● EBS Stands for Elastic Block Storage.
● It is the block-level storage that is assigned to your single EC2 Instance.
● It persists independently from running EC2.
Types of EBS Storage:
➤ General Purpose (SSD)
➤ Provisioned IOPS (SSD)
➤ Throughput Optimized Hard Disk Drive
➤ Cold Hard Disk Drive
➤ Magnetic

Instance Store:
● An instance store is ephemeral block-level storage for the EC2 instance.
● Instance stores can be used for faster processing and temporary storage of the application.

AMI:
AMI stands for Amazon Machine Image.
● The AMI decides the OS, installed dependencies, libraries, and data of your EC2 instances.
● Multiple instances with the same configuration can be launched from a single AMI.
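For illustration, a minimal Python (boto3) sketch of launching one instance from an AMI; the AMI ID, key pair, and security group are hypothetical placeholders.

Example (Python, boto3):
import boto3

ec2 = boto3.client("ec2")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                  # hypothetical key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical security group
)
print(resp["Instances"][0]["InstanceId"])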

Security Group:
A security group acts as a virtual firewall for your EC2 instances.
● It decides which ports and kinds of traffic to allow.
● Security groups act at the instance level, whereas Network ACLs act at the subnet level.
● Security groups can only allow traffic; they can’t define deny rules.
● The security group is considered stateful.
● By default, all outbound traffic is allowed, while inbound rules must be defined explicitly.

Key Pair:
A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance.
● Amazon EC2 instances use two keys: the public key, which is attached to your EC2 instance, and the private key, which stays with you.
● You can access the EC2 instance only if these keys match.
● Keep the private key in a secure place.

Tags:
Tag is a key-value name you assign to your AWS Resources.
● Tags are the identifier of the resource.
● Resources can be organized well using the tags.

Pricing:
● You will get different pricing options such as On-Demand, Savings Plan, Reserved Instances,
and Spot Instances.

Amazon EC2 Auto Scaling

What is Amazon EC2 Auto Scaling?


Amazon EC2 Auto Scaling is a region-specific service used to maintain application availability
and enables users to automatically add or remove EC2 instances according to the compute
workloads.

Features

● The Auto Scaling group is a collection of the minimum number of EC2 used for high
availability.
● It enables users to use Amazon EC2 Auto Scaling features such as fault tolerance, health
check, scaling policies, and cost management.
● The scaling of the Auto Scaling group depends on the size of the desired capacity. It is
not necessary to keep DesiredCapacity and MaxSize equal.
● EC2 Auto Scaling supports automatic Horizontal Scaling (increases or decreases the
number of EC2 instances) rather than Vertical Scaling (increases or decreases EC2
instances like large, small, medium).
● It scales across multiple Availability Zones within the same AWS region.
E.g., with MinSize: '1', MaxSize: '2', and DesiredCapacity: '2', there will be a total of 2 EC2 instances.
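For illustration, a minimal Python (boto3) sketch of creating an Auto Scaling group with those sizes from an existing launch template, spread across two Availability Zones; the group and template names are hypothetical.

Example (Python, boto3):
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                       # hypothetical
    LaunchTemplate={"LaunchTemplateName": "web-template",  # hypothetical
                    "Version": "$Latest"},
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)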

AWS Elastic Beanstalk

What is Amazon Elastic Beanstalk?


● Beanstalk is a compute service for deploying and scaling applications developed in many
popular languages.
● Developers can focus on writing code and don’t need to worry about the underlying
infrastructure required to run the application.
● AWS Elastic Beanstalk is the fastest and simplest way to deploy your application.
● It provides the user interface/dashboard to monitor your application.
● It gives you the flexibility to choose AWS resources such as Amazon EC2 Instance along with
the pricing options which suit your application needs.

AWS Elastic Beanstalk supports two types of Environment:

● Web Tier Environment

This application hosted on the Web Server Environment handles the HTTP and HTTPS requests
from the users.
o Beanstalk Environment: When an environment is launched, Beanstalk automatically
assigns various resources to run the application successfully.
o Elastic Load Balancer: Request is received from the user via Route53 which forwards
the request to ELB. Then ELB distributes the request among various EC2 Instances of
the Autoscaling group.
o Auto Scaling Group: Auto Scaling will automatically add or remove EC2 Instance based
on the load in the application.
o Host Manager: Software components inside every EC2 Instance which is responsible
for the following:
▪ Log files generation
▪ Monitoring
▪ Events in Instance

● Worker Environment
○ A worker is a background process that helps applications handle heavy, resource- and time-intensive operations.
○ It is responsible for tasks such as database clean-up and report generation that help the application remain up and running.
○ In the Worker Environment, Beanstalk installs a daemon on each EC2 instance in the Auto Scaling group.
○ The daemon pulls requests from an SQS queue and executes the task based on the message received.
○ After execution, SQS will delete the message; in case of failure, it will retry sending the message.

Platform Supported
● .Net (on Linux or Windows)
● Docker
● GlassFish
● Go
● Java
● Node.js
● Python
● Ruby
● Tomcat

Deployment Models:

All at Once: Deployment takes place on all instances at the same time. All EC2 instances will be out of service for a short time, and the application will be completely down for that duration.
Rolling: Deploys the new version in batches; unlike All at Once, some instances keep running the old version of the application during the deployment, so there is no complete downtime.
Rolling with additional batch: Deploys the new version in batches, but first provisions an additional group of instances to compensate for the ones being updated.
Immutable: Deploys the new version to a separate group of instances, and the update is immutable.
Traffic splitting: Deploys the new version to a separate group of instances and splits the incoming traffic between the old and the new ones.

Pricing:

● Amazon will not charge you for AWS Elastic Beanstalk.


● Instead, you will be paying for the resources such as EC2 Instance, ELB and Auto Scaling
group where your application is hosted.

AWS Fargate

What is AWS Fargate?


AWS Fargate is a serverless compute service that is used for containers by Amazon Elastic
Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
● It eliminates the tasks required to provision, configure, or scale groups of virtual machines like
Amazon EC2 to run containers.
● It packages the application in containers, by just specifying the CPU and memory
requirements with IAM policies. Fargate task does not share its underlying kernel, memory
resources, CPU resources, or elastic network interface (ENI) with another task.
● It does not support all the task definition parameters that are available in Amazon ECS tasks.
Only a few are valid for Fargate tasks with some limitations.
● Kubernetes can be integrated with AWS Fargate by using controllers. These controllers are
responsible for scheduling native Kubernetes pods onto Fargate.
● Security groups for pods in EKS cannot be used when pods are running on Fargate.

● The following storage types are supported for Fargate tasks:


○ Amazon EFS volumes for persistent storage
○ Ephemeral storage for nonpersistent storage

Benefits:

● Fargate allows users to focus on building and operating the applications rather than focusing
on securing, scaling, patching, and managing servers.
● Fargate automatically scales the compute environment that matches the resource
requirements for the container.
● Fargate provides built-in integrations with other AWS services like Amazon CloudWatch
Container Insights.

Price details:

● Charges are applied for the amount of vCPU and memory consumed by the containerized
applications.
● Fargate’s Savings Plans provide savings of up to 50% in exchange for one or three-year long
term commitment.
● Additional charges will be applied if containers are used with other AWS services.

AWS Lambda
What is AWS Lambda?
● AWS Lambda is a serverless compute service through which you can run your code without
provisioning any Servers.
● It only runs your code when needed and also scales automatically when the request count
increases.
● AWS Lambda follows the Pay per use principle – it means there is no charge when your code
is not running.
● Lambda allows you to run your code for any application or backend service with zero
administration.
● Lambda can run code in response to the events. Example – update in DynamoDB Table or
change in S3 bucket.
● You can even run your code in response to HTTP requests using Amazon API Gateway.

What is Serverless computing?


● Serverless computing is a method of providing backend services on a pay per use basis.
● Serverless/Cloud vendor allows you to write and deploy code without worrying about the
underlying infrastructure.
● Servers are still there, but you are not managing them, and the vendor will charge you based
on usage.

When do you use Lambda?


● When using AWS Lambda, you are only responsible for your code.
● AWS Lambda manages the memory, CPU, Network, and other resources.
● It means you cannot log in to the compute instances or customize the operating system.
● If you want to manage your own compute resources, you can use other compute services
such as EC2, Elastic Beanstalk.
● There will be a level of abstraction, which means you cannot log in to the server or customize the runtime.

How does Lambda work?

Lambda Functions

● A function is a block of code in Lambda.


● You upload your application/code in the form of single or multiple functions.
● You can upload a zip file, or you can upload a file from the S3 bucket as well.
● After deploying the Lambda function, Lambda automatically monitors functions on your
behalf, reporting metrics through Amazon CloudWatch.
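For illustration, a minimal sketch of what such a function looks like in Python; the event shape is a hypothetical API Gateway-style payload. Lambda invokes the handler with the event and a context object on every invocation.

Example (Python):
import json

def lambda_handler(event, context):
    # Log the incoming event and return an API Gateway-style response.
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": json.dumps({"message": "ok"})}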

Lambda Layers

● A Lambda layer is a container/archive which contains additional code such as libraries,


dependencies, or custom runtimes.
● AWS Lambda allows up to five layers per function.
● Layers are immutable.
● A new version will be added if you publish a new layer.
● Layers are by default private but can be shared and made public explicitly.

Lambda Event

● Lambda Event is an entity that invokes the lambda function.


● Lambda supports both synchronous and asynchronous invocation of Lambda functions.
● Lambda supports the following sources as an event:
o AWS DynamoDB
o AWS SQS
o AWS SNS
o CloudWatch Event
o API Gateway

o AWS IoT
o Kinesis
o CloudWatch Logs

Language Supported in AWS Lambda

● NodeJS
● Go
● Java
● Python
● Ruby

Lambda@Edge

● It is the feature of Amazon CloudFront which allows you to run your code closer to the
location of Users of your application.
● It improves performance and reduces latency.
● Just like lambda, you don’t have to manage and provision the infrastructure around the
world.
● Lambda@Edge runs your code in response to the event created by the CDN.

Pricing:
● Charges will be calculated based on the number of requests for the function executed
in a particular duration.
● Duration will be counted on a per 100-millisecond basis.
● Lambda Free tier usage includes 1 million free requests per month.
● It also comes with 400,000 GB-Seconds of compute time per month.

AWS Outposts
What is AWS Outposts?
AWS Outposts enables running AWS services locally and accessing a variety of services
within the local AWS Region. Host applications on-premises using familiar AWS tools
and APIs, ensuring seamless integration. It supports low-latency access for workloads
needing local data processing, data residency compliance, and migration of
applications with local system dependencies.

Features:
● Deploy AWS Services locally to meet low latency and data residency requirements,
enabling on-premises data processing.
● Benefit from a fully managed infrastructure, minimizing the resources, time, and
operational risk involved in managing IT infrastructure.
● Achieve a consistent hybrid experience by utilizing identical hardware, APIs, tools, and
management controls available both on-premises and in the cloud.
● Utilizes Local Gateway, necessitating Border Gateway Protocol (BGP) over a routed
network for connectivity.
● Outposts racks are provided by AWS in a fully assembled state, simplifying the
installation process.
● Installation involves simply plugging the racks into power and network, with
centralized redundant power conversion units and a direct current (DC) distribution
system managed by line mate connectors in the backplane.

Use Cases
● Hybrid Cloud Deployment: Organizations with strict data residency requirements or
latency-sensitive workloads can deploy AWS Outposts to run AWS services locally while
still leveraging the broader capabilities of the AWS cloud.
● Edge Computing: Outposts enables edge computing by bringing AWS infrastructure
and services closer to where data is generated, allowing for real-time processing and
analysis of data at the edge.
● Data Processing at Remote Locations: Companies operating in remote locations with
limited or intermittent connectivity can use Outposts to process data locally and
synchronize with the cloud when connectivity is available.
● Application Modernization: Outposts supports application modernization efforts by
providing a consistent infrastructure platform across on-premises and cloud
environments, facilitating the migration of legacy applications to AWS services.

AWS Wavelength
What is AWS Wavelength?
AWS Wavelength is used to create mobile applications with exceptionally low latencies.
Wavelength integrates storage and computing resources directly into the 5G edge networks of
communications service providers (CSPs). By expanding a virtual private cloud (VPC) to one or
more Wavelength Zones, developers can utilize AWS resources like Amazon EC2 instances to
run applications requiring ultra-low latency and connectivity to AWS services within the Region.

Features
● AWS Wavelength provides infrastructure tailored for mobile edge computing
applications, offering Wavelength Zones within CSP 5G networks.
● Wavelength Zones embed AWS compute and storage services directly into CSP 5G
networks, allowing application traffic from 5G devices to reach servers within these
zones without traversing the internet.
● This setup minimizes latency by eliminating the need for application traffic to pass
through multiple internet hops, leveraging the low latency and high bandwidth of
modern 5G networks.
● In Wavelength Zones, users can create EC2 instances, EBS volumes, VPC subnets, and
carrier gateways, enabling the deployment of various AWS services.
● Systems Manager, CloudWatch, CloudTrail, CloudFormation, and ALB can also be utilized.
● Wavelength services are integrated into a VPC connected via a reliable,
high-bandwidth connection to an AWS Region.

Use Cases:
● Develop media and entertainment applications leveraging AWS Wavelength to
provide high-resolution live video streaming, high-quality audio, and immersive
AR/VR experiences.
● Accelerate machine learning inference tasks at the edge by running AI and
ML-driven video and image analytics, enhancing 5G applications in medical
diagnostics, retail environments, and smart factories.
● Create connected vehicle applications for advanced driver assistance systems,
autonomous driving functionalities, and in-vehicle entertainment experiences.

Amazon Elastic Container Registry
What is Amazon Elastic Container Registry?

Amazon Elastic Container Registry (ECR) is a managed service that allows users to store,
manage, share, and deploy container images and artifacts. It is mainly integrated with Amazon
Elastic Container Service (ECS) to simplify the production workflow.

Features:

● It stores both the container images that users create and any container software bought
through AWS Marketplace.
● It is integrated with Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes
Service (EKS), AWS Lambda, and AWS Fargate for easy deployments.
● AWS Identity and Access Management (IAM) enables resource-level control of each repository
within ECR.
● It supports public and private container image repositories. It allows sharing container
applications privately within the organization or publicly for anyone to download.
● A separate portal called Amazon ECR Public Gallery, helps to access all public repositories
hosted on Amazon ECR Public.
● It stores the container images in Amazon S3 because S3 provides 99.999999999% (11 9’s) of
data durability.
● It allows cross-region and cross-account replication of the data for high availability
applications.
● Encryption in transit is done via HTTPS while transferring container images. Images are also
encrypted at rest using Amazon S3 server-side encryption or customer managed keys in AWS
KMS.
● It is integrated with continuous integration and continuous delivery and also with third-party
developer tools.
● Lifecycle policies are used to manage the lifecycle of the images.
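
As a sketch of the last point, the following boto3 (Python) snippet creates a private
repository and attaches a lifecycle policy that expires untagged images after 14 days
(the repository name and rule values are hypothetical):

import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

ecr.create_repository(
    repositoryName="demo-app",
    imageScanningConfiguration={"scanOnPush": True},
)

lifecycle_policy = {
    "rules": [{
        "rulePriority": 1,
        "description": "Expire untagged images older than 14 days",
        "selection": {
            "tagStatus": "untagged",
            "countType": "sinceImagePushed",
            "countUnit": "days",
            "countNumber": 14,
        },
        "action": {"type": "expire"},
    }]
}

ecr.put_lifecycle_policy(
    repositoryName="demo-app",
    lifecyclePolicyText=json.dumps(lifecycle_policy),
)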

Pricing details:

● Using the AWS Free Tier, new customers get 500 MB-month of storage for one year for private
repositories and 50 GB-month of free storage for public repositories.
● Without signing up, 500 GB of data can be transferred to the internet for free from a public
repository each month.
● By signing up for an AWS account, or authenticating to ECR with an existing AWS account, 5
TB of data can be transferred to the internet for free from a public repository each month.

– Back to Index – 48
Amazon Elastic Container Service
What is Amazon ECS?
Amazon Elastic Container Service (Amazon ECS) is a regional container orchestration service
that allows you to execute, stop, and manage Docker containers on a cluster.

A container is a standard unit of software development that combines code, its dependencies,
and system libraries so that the application runs smoothly from one environment to another.

Images are created from a Dockerfile (text format), which specifies all of the components that
are included in the container. These images are then stored in a registry from where they can
then be downloaded and executed on the cluster.

All containers are defined in a task definition, which runs a single task or tasks within a service.
The task definition (JSON format) defines which container images should run across the
clusters. A service is a configuration that helps run and maintain several tasks
simultaneously in a cluster.

ECS cluster is a combination of tasks or services that can be executed on EC2 Instances or AWS
Fargate, a serverless compute for containers. When using Amazon ECS for the first time, a
default cluster is created.

The container agent runs on each instance within an Amazon ECS cluster. It sends data on the
resource's current running tasks and resource utilization to Amazon ECS. It starts and stops the
tasks whenever it receives a request from Amazon ECS. A task is the representation of a task
definition.

The number of tasks to run on the cluster is specified after the task definition is created within
Amazon ECS. The task scheduler is responsible for placing tasks within the cluster based on
the task definitions (see the sketch below).
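
To make the task-definition and service concepts concrete, here is a minimal boto3
(Python) sketch that registers a Fargate task definition and runs it as a service
(family name, image, and network IDs are hypothetical placeholders):

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# A task definition describes the container(s) a task runs.
task_def = ecs.register_task_definition(
    family="web-app",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)

# A service keeps the desired number of copies of the task running.
ecs.create_service(
    cluster="default",
    serviceName="web-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "ENABLED",
    }},
)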

– Back to Index – 49
Application Load Balancers offer some attractive features:
● It enables containers to use dynamic host port mapping. For that, multiple tasks from the
same service are allowed per container instance.
● It supports path-based routing and priority rules due to which multiple services can use the
same listener port on a single Application Load Balancer.

– Back to Index – 50
Amazon ECS can be integrated with:

● AWS Identity and Access Management


● Amazon EC2 Auto Scaling
● Elastic Load Balancing
● Amazon Elastic Container Registry
● AWS CloudFormation

➢ It saves time by eliminating the need to install, operate, and scale cluster
management infrastructure. With simple API calls, Docker-enabled applications can be
launched and stopped.
➢ It powers other services such as Amazon SageMaker, AWS Batch, and Amazon Lex. It also
integrates with AWS App Mesh to provide rich observability, traffic controls, and security
features to applications.
Use Cases:

The two main use cases in Amazon ECS are:


● Microservices - They are built by an architectural method that decomposes or
decouples complex applications into smaller, independent services.
● Batch Jobs - Docker containers are well suited for batch job workloads. Batch jobs are
short-lived packages processed as a Docker image, so they can be deployed anywhere,
such as in an Amazon ECS task.
Pricing details:

● Amazon ECS provides two charge models:


○ Fargate Launch Type Model - pay for the amount of vCPU and memory resources.
○ EC2 Launch Type Model - pay for the AWS resources created to store and run the
application.

– Back to Index – 51
Amazon Elastic Kubernetes Service(EKS)
What is Amazon Elastic Kubernetes Service(EKS)?
Amazon Elastic Kubernetes Service (Amazon EKS) is a service that enables users to manage
Kubernetes applications in the AWS cloud or on-premises. Any standard Kubernetes application
can be migrated to EKS without altering the code.

The EKS cluster consists of two components:


● Amazon EKS control plane
● Amazon EKS nodes

The Amazon EKS control plane consists of nodes that run the Kubernetes software, such as
etcd and the Kubernetes API server. Amazon EKS runs its own Kubernetes control plane without
sharing control plane infrastructure across other clusters or AWS accounts.

To ensure high availability, Amazon EKS runs Kubernetes control plane instances across
multiple Availability Zones. It automatically replaces unhealthy control plane instances and
provides automated upgrades and patches for the new control planes.

The two methods for creating a new Kubernetes cluster with nodes in Amazon EKS:
● eksctl – A command-line utility, used together with kubectl, for creating and managing
Kubernetes clusters on Amazon EKS.
● AWS Management Console and AWS CLI
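
For example, a control plane can also be created programmatically with boto3 (Python);
the IAM role ARN, subnets, and Kubernetes version below are hypothetical placeholders:

import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="demo-cluster",
    version="1.29",                                   # example version
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
)

# Cluster creation is asynchronous; wait until the control plane is ACTIVE.
eks.get_waiter("cluster_active").wait(name="demo-cluster")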

There are methods that Amazon EKS cluster uses to schedule pods using single or combined
node groups:

● Self-managed nodes - consist of one or more Amazon EC2 instances that are deployed
in an Amazon EC2 Auto Scaling group
● Amazon EKS Managed node groups - helps to automate the provisioning and lifecycle
management of nodes.
● AWS Fargate - run Kubernetes pods on AWS Fargate

Amazon Elastic Kubernetes Service is integrated with many AWS services for unique
capabilities:

● Images - Amazon ECR for container images


● Load distribution - AWS ELB (Elastic Load Balancing)
● Authentication - AWS IAM
● Isolation - Amazon VPC

– Back to Index – 52
Use Cases:
● Using Amazon EKS, Kubernetes clusters and applications can be managed across
hybrid environments.
● EKS with Kubeflow can model machine learning workflows using the latest EC2
GPU-powered instances.
● Users can execute batch workloads on the EKS cluster using the Kubernetes Jobs API
across AWS compute services such as Amazon EC2, Fargate, and Spot Instances.

Price details:
● $0.10 per hour is charged for each Amazon EKS cluster created.
● Using EKS with EC2 - Charged for AWS resources (e.g. EC2 instances or EBS volumes).
● Using EKS with AWS Fargate - Charged for CPU and memory resources starting from
the time to download the container image until the Amazon EKS pod terminates.

– Back to Index – 53
Amazon Aurora
What is Amazon Aurora?
Aurora is a fully managed relational database service offered by AWS. It is compatible only with
PostgreSQL and MySQL. As per AWS, Aurora provides up to 5 times the throughput of traditional
MySQL and up to 3 times the throughput of traditional PostgreSQL.

Features:
● It is supported only in regions that have a minimum of 3 Availability Zones.
● High availability of 99.99%. Data in Aurora is kept as 2 copies in each AZ, with a minimum of 3
AZs, making a total of 6 copies.
● It can have up to 15 Read replicas (RDS has only 5).
● It can scale up to 128 TB per database instance.
● Aurora DB cluster comprises two instances:
○ Primary DB instance – It supports both read/write operations and one primary DB
instance is always present in the DB cluster.
○ Aurora Replica – It supports only read operation. Aurora automatically fails over to its
replica in less time in case a primary DB instance is not available.

Read replicas fetch the same result as the primary instance with a lag of not more than
100 ms.
● Data is highly secure as it resides in VPC. Encryption at rest is done through AWS KMS
and encryption in transit is done by SSL.
● Aurora Global Database - helps span multiple AWS regions for low-latency access
across the globe. It can also be utilized as a backup in case the whole region suffers
an outage or disaster.

– Back to Index – 54
● Aurora Multi-Master – is a feature compatible only with the MySQL edition. It
gives the ability to scale out write operations over multiple AZs, so there is no single point
of failure in the cluster and applications can perform both reads and writes at any node.
● Aurora Serverless - gives the flexibility to scale in and out based on database
load. The user only has to specify the minimum (2 GB of RAM) and maximum (488 GB of
RAM) capacity; a provisioning sketch follows this feature list. This feature of Aurora is
highly beneficial if the user has an intermittent or unpredictable workload. It is available
for both MySQL and PostgreSQL.

● Fault tolerance and self-healing – In Aurora, each set of data is replicated in six
copies across 3 AZs, so it can handle the loss of up to 2 copies without impacting the
write capability and up to 3 copies without impacting the read capability. Aurora storage is
also self-healing, which means disks are continuously scanned for errors and repaired.
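
As referenced above, here is a minimal boto3 (Python) sketch of an Aurora Serverless (v1)
cluster with capacity bounds; identifiers and credentials are hypothetical, and capacity is
given in Aurora Capacity Units (1 ACU is roughly 2 GB of RAM):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless",
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # use Secrets Manager in practice
    ScalingConfiguration={
        "MinCapacity": 1,                 # scale-in floor (ACUs)
        "MaxCapacity": 16,                # scale-out ceiling (ACUs)
        "AutoPause": True,                # pause when idle
        "SecondsUntilAutoPause": 300,
    },
)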

Best practices:
● If the user is not sure about the workload of the database, prefer Aurora
Serverless. Similarly, if a team of developers and testers hits the database only
during particular hours of the day and usage remains minimal during the night, again prefer
Aurora Serverless.
● If write operations and DDL are crucial requirements, choose Multi-Master Aurora for
MySQL. In this manner, all writer nodes are equally functional and the failure of one doesn't
impact the others.
● Aurora Global Database is best for industries such as finance and gaming, as one single DB
instance provides a global footprint. The application enjoys low-latency read operations
on such databases.

Pricing:
● There are no up-front fees.
● On-demand instances are costlier than reserved instances. There is no additional fee
for backup if the retention period is less than a day.
● Data transfer between Aurora DB instance and EC2 in the same AZ is free.
● All data transfer IN to Database is free of charge.
● Data transfer OUT of Database through the internet is chargeable if it exceeds 1
GB/month.

Amazon DocumentDB
What is Amazon DocumentDB?

– Back to Index – 55
DocumentDB is a fully managed document database service by AWS which supports MongoDB
workloads. It is highly recommended for storing, querying, and indexing JSON Data.

Features:
● It is compatible with MongoDB versions 3.6 and 4.0.
● All on-premise MongoDB or EC2 hosted MongoDB databases can be migrated to DocumentDB
by using DMS (Database Migration Service).
● All database patching is automated in a stipulated time interval.
● DocumentDB storage scales automatically in increments of 10GB and maximum up to 64TB.
● Provides up to 15 Read replicas with single-digit millisecond latency.
● All database instances are highly secure as they reside in VPCs which only allow a given set of
users to access through security group permissions.
● It supports role-based access control (RBAC).
● Minimum 6 read copies of data is created in 3 availability zones making it fault-tolerant.
● Self-healing – Data blocks and disks are continuously scanned and repaired automatically.
● All cluster snapshots are user-initiated and stored in S3 till explicitly deleted.
Best Practices:
● It reserves one-third of the instance RAM for its own services, so choose an instance type
with enough RAM so that performance and throughput are not impacted.
● Set up CloudWatch alarms to notify users when the database is reaching its maximum capacity.

Use Case:

● Highly beneficial for workloads that have flexible schemas.


● It removes the overhead of keeping two databases for operations and reporting. Operational
data can be stored and sent in parallel to BI systems for reporting without maintaining two
environments.

Pricing:

● Pricing is based on the instance hours, I/O requests, and backup storage.

Amazon ElastiCache
What is Amazon ElastiCache?

– Back to Index – 56
ElastiCache is a fully managed in-memory data store. It significantly improves latency and
performance for all read-heavy application workloads. In-memory caches are faster than
disk-based databases. It works with both Redis and Memcached protocol based engines.

Features:
● It is high availability as even the data center is under maintenance or outage; the data is still
retrieved from Cache.
● Unlike databases, data is retrieved in a key-value pair fashion.
● Data is stored in nodes, each of which is a unit of network-attached RAM. Each node runs its
own Redis or Memcached protocol. Automatic replacement of failed nodes is configured.

● Memcached features –
○ Data is volatile.
○ Supports only simple data-type.
○ Supports multi-threading.
○ Scaling can be done by adding or removing nodes.
○ Nodes can span in different Availability Zones.
○ Multi-AZ failover is not supported.
● Redis features –
○ Data is non-volatile.
○ Supports complex Data types like strings, hashes, and geospatial-indexes.
○ Doesn’t support multi-threading.
○ Scaling can be done by adding shards and not nodes. A shard is a collection of
primary nodes and read-replicas.
○ Multi-AZ is possible by placing a read replica in another AZ.

– Back to Index – 57
○ In case of failover, traffic can be switched to a read replica in another AZ.

Best practices:
● Storing web sessions. In web applications running behind a load balancer, use Redis
so that if one server is lost, session data can still be retrieved.
● Caching database results. Use Memcached in front of any RDS database where repetitive
queries are fired, to improve latency and performance (see the cache-aside sketch after
this list).
● Live Polling and gaming dashboards. Store frequently accessed data in Memcached to
fetch results quickly.
● Combination of RDS and Elasticache can be utilized to improve architecture on the
backend.
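
The cache-aside pattern behind the "caching database results" practice looks roughly like
this in Python with the redis-py client (the endpoint, key naming, TTL, and the database
helper are hypothetical):

import json
import redis

cache = redis.Redis(
    host="my-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com",  # placeholder
    port=6379,
    decode_responses=True,
)

def get_product(product_id, db_conn):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit: skip the database

    row = db_conn.fetch_product(product_id)   # hypothetical DB helper
    cache.setex(key, 300, json.dumps(row))    # cache miss: store for 5 minutes
    return row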

Pricing:
● Available only for on-demand and reserved nodes.
● Charged for per node hour.
● Partial node hours will be charged as full node hours.
● No charge for data exchange between Elasticache and EC2 within the same AZ.
https://aws.amazon.com/elasticache/pricing/

Amazon Keyspaces (for Apache Cassandra)


What is Amazon Keyspaces (for Apache Cassandra)?

– Back to Index – 58
Keyspaces is an Apache Cassandra-compatible database service in AWS. It is fully managed by
AWS, highly available, and scalable. Server management and patching are handled by AWS. It
scales based on incoming traffic with virtually unlimited storage and throughput.

Features:
● Keyspaces is compatible with Cassandra Query Language (CQL). So your application can be
easily migrated from on-premise to cloud.
● Two operation modes are available, as below:
1. The On-Demand capacity mode is used when the user is not certain about the
incoming load; throughput and scaling are managed by Keyspaces itself. It is the costlier
mode, and you pay only for the resources you use.
2. The Provisioned capacity mode is used when you have predictable application traffic.
The user just needs to specify the maximum reads/writes per second in advance while
configuring the database. It is the less costly mode.
● There is no upper limit for throughput and storage.
● Keyspaces is integrated with Cloudwatch to measure the performance of the database with
incoming traffic.
● Data is replicated across 3 Availability Zones for high durability.
● Point-in-time recovery (PITR) is available to recover data lost due to accidental deletes.
Data can be restored to any second within the last 35 days.

Use Cases:

● Build Applications using open source Cassandra APIs and drivers. Users can use Java,
Python, .NET, Ruby, Perl.
● Highly recommended for applications that demand a low latency platform like trading.
● Use CloudTrail to audit DDL operations. It gives brief information on who accessed the
database, when, which services were used, and the response returned from AWS; attackers
creeping past the database firewall can be detected this way.

Pricing:

● Users only pay for the read and write throughput, storage, and networking resources.

Amazon Neptune
What is Amazon Neptune?
Amazon Neptune is a graph database service used as a web service to build and run
applications that require connected datasets.

– Back to Index – 59
The graph database engine helps to store billions of connections and provides milliseconds
latency for querying them.

It offers a choice from graph models and languages for querying data.
● Property Graph (PG) model with Apache TinkerPop Gremlin graph traversal language,
● W3C standard Resource Description Framework (RDF) model with SPARQL Query
Language.

It is highly available across three AZs and automatically fails over to one of up to 15
low-latency read replicas.

It provides fault-tolerant storage by replicating two copies of data in each of three
Availability Zones (six copies in total).

It provides continuous backup to Amazon S3 and point-in-time recovery from storage failures.

It automatically scales storage capacity and provides encryption at rest and in transit.

Amazon RDS
What is Amazon RDS?
RDS (Relational Database System) in AWS makes it easy to operate, manage, and scale in the
cloud. It provides scalable capacity with a cost-efficient pricing option and automates manual
administrative tasks such as patching, backup setup, and hardware provisioning.

– Back to Index – 60
Engines supported by RDS are given below:

MySQL
● It is the most popular open-source DB in the world.
● Amazon RDS makes it easy to provision the DB in AWS Environment without worrying about
the physical infrastructure.
● In this way, you can focus on application development rather than infrastructure management.

MS SQL
● MS-SQL is a database developed by Microsoft.
● Amazon allows you to provision the DB instance with Provisioned IOPS or Standard Storage.

MariaDB
● MariaDB is also an open-source DB developed by MySQL developers.
● Amazon RDS makes it easy to provision the DB in AWS Environment without worrying about
the physical infrastructure.

PostgreSQL
● Nowadays, PostgreSQL has become the preferred open-source relational DB. Many
enterprises now have started using PostgreSQL powered database engines.

Oracle
● Amazon RDS also provides a fully managed commercial database engine like Oracle.
● Amazon RDS makes it easy to provision the DB in AWS Environment without worrying about
the physical infrastructure.
● You can run Oracle DB Engine with two different licensing models – “License Included” and
“Bring-Your-Own-License (BYOL).”

Amazon Aurora
● It is the relational database engine developed by AWS only.
● It is a MySQL and PostgreSQL-compatible DB engine.
● Amazon claims that it is five times faster than the standard MySQL DB engine and around
three times faster than the PostgreSQL engine.
● The cost of the aurora is also less than the other DB Engines.
● In Amazon Aurora, you can create up to 15 read replicas instead of 5 in other databases.

– Back to Index – 61
Multi AZ Deployment
● Enabling multi-AZ deployment creates a Replica (Copy) of the database in different availability
zones in the same Region.
● Multi-AZ synchronously replicates the data to the standby instance in different AZ.
● Each AZ runs on physically different and independent infrastructure and is designed for high
reliability.
● Multi-AZ deployment is for Disaster recovery not for performance enhancement.

Read Replicas
● Read Replicas allow you to create one or more read-only copies of your database in the same
or different regions.
● Read Replica is mostly for performance enhancement. We can now use Read-Replica with
Multi-AZ as a Part of DR (disaster recovery) as well.
● A Read Replica in another region can be used as a standby database in event of regional
failure/outage. It can also be promoted to the Production database.
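
Here is a minimal boto3 (Python) sketch of the cross-region read replica pattern described
above (identifiers, account, and regions are hypothetical; encrypted sources need extra
parameters):

import boto3

# The client runs in the replica's (destination) region.
rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-replica",
    # For a cross-region replica, the source is referenced by its ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:prod-db",
)

# During a regional failover, the replica can be promoted to a standalone,
# writable database:
# rds.promote_read_replica(DBInstanceIdentifier="prod-db-replica")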

– Back to Index – 62
Storage Type
● General Purpose (SSD): General Purpose storage is suitable for database workloads that
provide a baseline of 3 IOPS/GiB and the ability to burst to 3,000 IOPS.
● Provisioned IOPS (SSD): Provisioned IOPS storage is suitable for I/O-intensive database
workloads. I/O range is from 1,000 to 30,000 IOPS.
Monitoring
● By default, enhanced monitoring is disabled.
● Enabling enhanced monitoring incurs extra charges.
● Enhanced monitoring is not available in the AWS GovCloud(US) Region.
● Enhanced monitoring is not available for the instance class db.m1.small.
● Enhanced monitoring metrics include IOPS, Latency, Throughput, Queue Depth.
● Enhanced monitoring gathers information from an agent installed in DB Instance.
Backups & Restore
● The default backup retention period for automatic backup is 7 days if you use the console, for
CLI and RDS API it’s 1 day.
● Automatic backup can be retained for up to 35 days.
● The minimum Automatic backup retention period is 0 days, which will disable the automatic
backup for the instance.
● 100 Manual snapshots are allowed in a single region.
Charges:
You will be charged based on multiple factors:
● Active RDS Instances
● Storage
● Requests
● Backup Storage
● Enhanced monitoring
● Transfer Acceleration
● Data Transfer for cross-region replication

– Back to Index – 63
Amazon Redshift
What is Amazon Redshift?
Amazon Redshift is a fast, powerful, fully managed, petabyte-scale data warehouse service
in the cloud. This service is highly scalable to a petabyte or more for $1,000 per terabyte per
year, less than a tenth of most other data warehousing solutions.

Redshift can be configured as follows (a provisioning sketch follows this list):

● Single node (160 GB)
● Multi-Node
o Leader Node (manages client connections and receives queries)
o Compute Node (stores data and performs queries and computations). Up to 128
compute nodes.
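
As referenced above, a minimal boto3 (Python) sketch of provisioning a multi-node cluster
(the identifier, node type, and credentials are hypothetical):

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="demo-warehouse",
    ClusterType="multi-node",
    NodeType="ra3.xlplus",            # example node type
    NumberOfNodes=3,                  # compute nodes; the leader node is free
    DBName="analytics",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # use Secrets Manager in practice
)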

Features:
● It employs multiple compression techniques and can often achieve significant compression
relative to traditional relational databases.
● It doesn’t require indexes or materialized views, so uses less space than traditional database
systems.
● Massively parallel processing (MPP): Amazon redshift automatically distributes data and
query load across all nodes. Amazon redshift makes it easy to add nodes to your data
warehouse and maintain fast query performance as data grows in future.
● Automated snapshots are enabled by default with a 1-day retention period.
● The maximum retention period is 35 days.
● Redshift always maintains at least three copies of your data (the original and a replica on the
compute nodes, and a backup in Amazon S3).
● Redshift can also asynchronously replicate your snapshots to S3 in another region for disaster
recovery.
● It is only available in 1 AZ, but snapshots can be restored to a new AZ in the event of an outage.

Security Considerations
● Data encrypted in transit using SSL.
● Encrypted at rest using AES-256 encryption.
● By default, Redshift takes care of key management, or you can:
o Manage your own keys through HSM
o Use AWS Key Management Service (KMS).
Use cases
● If we want to copy data from EMR, S3, and DynamoDB to power a custom Business
intelligence tool. Using a third-party library, we can connect and query redshift for results.

– Back to Index – 64
Pricing:
● Compute Node Hours - the total number of hours run across all compute nodes for the
billing period.
● You are billed for 1 unit per node per hour, so a 3-node data warehouse cluster running
persistently for an entire month would incur 2160 instance hours.
● You will not be charged for leader node hours, only compute nodes will incur charges.

– Back to Index – 65
Amazon WorkSpaces
What is Amazon WorkSpaces?
Amazon WorkSpaces is a managed service used to provision virtual Windows or Linux desktops
for users across the globe.

● WorkSpaces can be accessed from the following client devices:
✔ Android devices, iPads
✔ Windows, macOS, and Ubuntu Linux computers
✔ Chromebooks
✔ Teradici zero client devices - supported only with PCoIP
● For Amazon WorkSpaces, billing takes place either monthly or hourly.

Benefits

● It helps to eliminate the management of on-premises VDI (Virtual Desktop Infrastructure).
● It offers a choice of the PCoIP protocol (port 4172) or the WorkSpaces Streaming Protocol
(WSP, port 4195) based on user requirements such as the type of devices used for
WorkSpaces, the operating system, and network conditions.
● Amazon WorkSpaces Application Manager (Amazon WAM) helps to manage the
applications on Windows WorkSpaces.
● Multi-factor authentication (MFA) and AWS Key Management Service (AWS KMS) are used
for account and data security.
● Each WorkSpace is connected to a virtual private cloud (VPC) with two elastic network
interfaces (ENIs) and AWS Directory Service.

– Back to Index – 66
AWS Amplify

What is AWS Amplify?


AWS Amplify is a set of tools and services provided by Amazon Web Services (AWS) that
enables developers to build scalable and secure web and mobile applications quickly. It offers a
variety of features, including authentication, storage, APIs, analytics, and more, all managed
through a unified console. With AWS Amplify, developers can streamline the development
process, reduce the need for backend infrastructure management, and focus more on building
features and delivering value to users. It supports popular frontend frameworks like React,
Angular, and Vue.js, making it easier for developers to integrate with their existing workflows.

Best Practices
● Use Version Control: Always use version control (e.g., Git) to track changes to your AWS
Amplify projects.
● Modularize Your App: Organize your application into smaller, manageable modules.
● Infrastructure as Code: Leverage the AWS Amplify CLI to define your infrastructure as code
(IaC).
● Environment Management: Use environment variables and separate configurations for
development, testing, and production environments.
● Continuous Integration/Continuous Deployment (CI/CD): Implement CI/CD pipelines to
automate the building, testing, and deployment of your application.
● Data Validation and Sanitization: Always validate and sanitize user input to prevent security
vulnerabilities like cross-site scripting (XSS) and SQL injection.

Use Case
● Personal Blog Website: You want to create a personal blog website to publish articles and
showcase your writing skills.
● E-commerce Mobile App: You're building an e-commerce mobile app that needs features like
user registration, product catalog, and real-time updates for inventory changes.
● Event Scheduling Web App: You want to create a web application for scheduling and
managing events for a local community organization.

– Back to Index – 67
Amazon API Gateway
What is Amazon API Gateway?
Amazon API Gateway is a service which creates, publishes, maintains, monitors and secures
APIs at any scale.

● It helps to create synchronous microservices (with Load Balancers) and forms the app-facing
part of the AWS serverless infrastructure with AWS Lambda; a quick-create sketch appears
after the API-type lists below.
● It handles the tasks involved in processing concurrent API calls.
● It combines with Amazon EC2, AWS Lambda, or any web application (public or private
endpoints) to work as back-end services.

API Gateway creates RESTful APIs that:


● Are HTTP-based.
● Enable stateless and client-server communication.
● Create standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.

API Gateway creates WebSocket APIs that:


● Follow WebSocket protocol and enable stateful, full-duplex communication between client and
server.
● Route incoming messages to the destination based on message content.
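
As referenced earlier, here is a minimal boto3 (Python) sketch of "quick-creating" an HTTP
API that fronts a Lambda function (the function ARN is hypothetical, and API Gateway must
be granted lambda:InvokeFunction on it):

import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

api = apigw.create_api(
    Name="orders-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:111122223333:function:orders-handler",
)

print(api["ApiEndpoint"])  # the public invoke URL for the new API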

Endpoint Types for API Gateway:


Edge-optimized endpoint:
● It signifies reduced latency for requests all around the world.
● CloudFront is also used as the public endpoint.

– Back to Index – 68
Regional endpoint:
● It signifies reduced latency for requests that originate in the same region. You can also
configure your own CDN and protect the API with AWS WAF.

Private endpoint:
● It securely exposes the REST APIs to other services only within the VPC.

API Gateway - Securities:

– Back to Index – 69
● Resource-based policies
● IAM Permissions
● Lambda Authorizer (formerly Custom Authorizers)
● Cognito user pools

Features:
● It helps to create stateful (WebSocket) and stateless (HTTP and REST) APIs.
● It integrates with CloudTrail for logging and monitoring API usage and API changes.
● It integrates with CloudWatch metrics to monitor REST API execution and WebSocket API
execution.
● It integrates with AWS WAF to protect APIs against common web exploits.
● It integrates with AWS X-Ray for understanding and triaging performance latencies.

Price details:
● You pay for API caching, as it is not eligible for the AWS Free Tier.
● API requests are not charged for authorization and authentication failures.
● Method calls that require API keys are not charged when those keys are missing or invalid.
● API Gateway-throttled and plan-throttled requests are not charged when the request rate
exceeds the predefined limits.

– Back to Index – 70
AWS IoT Analytics
What is AWS IoT Analytics?
AWS IoT Analytics streamlines the intricate process of analyzing extensive amounts of IoT data,
eliminating the need for constructing a complex and costly IoT analytics platform.

Features:
● Collect: AWS IoT Analytics seamlessly gathers data from diverse sources, including AWS
IoT Core and other sources like Amazon S3 and Amazon Kinesis, simplifying the
ingestion process.
● Process: It cleanses, filters, transforms, and enriches data using customizable Lambda
functions and logical operations, ensuring data accuracy and relevance for analysis.
● Store: The service stores both raw and processed data in an optimized time-series data
store, offering robust data management capabilities, including access control and data
retention policies.
● Analyze: Users can run ad hoc or scheduled SQL queries for quick insights and perform
time-series analysis to monitor device performance and predict maintenance issues.
● Hosted Notebooks: AWS IoT Analytics supports hosted Jupyter Notebooks for advanced
analytics and machine learning tasks, including statistical classification, LSTM for
time-series prediction, and K-means clustering for device segmentation.
● Automated Execution: The service automates the execution of custom containers and
Jupyter Notebooks, allowing users to perform continuous analysis on scheduled
intervals.
● Incremental Data Capture: AWS IoT Analytics captures incremental data since the last
analysis, optimizing analysis efficiency and cost by focusing only on new data.
● Visualization: Integration with Amazon QuickSight enables users to visualize data sets in
interactive dashboards, while embedded Jupyter Notebooks provide visualization
options within the AWS IoT Analytics console.

Use Cases:
● Contextual Data Enrichment: Agricultural operators enhance moisture sensor
data with predicted rainfall to optimize irrigation equipment efficiency.
● Predictive Maintenance: Utilize prebuilt templates for predictive maintenance
models, such as predicting heating and cooling system failures in cargo vehicles.
● Proactive Supply Replenishment: Monitor inventory levels in IoT-enabled vending
machines and automate accurate merchandise reordering when supplies are low.

– Back to Index – 71
● Process Efficiency Monitoring: Improve efficiency by monitoring IoT applications,
such as identifying optimal truck loads for efficient loading guidelines.

AWS IoT Core

What is AWS IoT Core?


AWS IoT Core is a cloud service that enables users to connect IoT devices (wireless devices,
sensors, and smart appliances) to the AWS cloud without managing servers.

● It supports devices and clients that use the MQTT, MQTT over WebSocket, and HTTPS
protocols to publish and subscribe to messages.

Features

● It provides secure and bi-directional communication with all the devices, even when they
aren’t connected.
● It consists of a device gateway and a message broker that help connect and process
messages and route those messages to other devices or AWS endpoints (see the publish
sketch after this list).
● It helps developers to operate wireless LoRaWAN (low-power long-range Wide Area
Network) devices.
● It helps to create a persistent Device Shadow (a virtual version of devices) so that other
applications or devices can interact.
● It integrates with Amazon services like Amazon CloudWatch, AWS CloudTrail, Amazon
S3, Amazon DynamoDB, AWS Lambda, Amazon Kinesis, Amazon SageMaker, and
Amazon QuickSight to build IoT applications
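
As referenced in the message broker point above, here is a minimal boto3 (Python) sketch of
publishing a telemetry message through the IoT Core data plane over HTTPS (the topic and
payload are hypothetical; devices themselves typically publish over MQTT with X.509
certificates):

import json
import boto3

iot_data = boto3.client("iot-data", region_name="us-east-1")

iot_data.publish(
    topic="factory/line1/temperature",
    qos=1,
    payload=json.dumps({"deviceId": "sensor-42", "celsius": 21.7}),
)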

– Back to Index – 72
AWS IoT Events

What is AWS IoT Events?


AWS IoT Events is a monitoring service that allows users to monitor and respond to device
fleets' events in IoT applications.

Features
● It detects events from IoT sensors such as temperature, motor voltage, motion detectors,
and humidity.
● It builds event monitoring applications in the AWS Cloud that can be accessed through the
AWS IoT Events console.
● It helps to create event logic using conditional statements and trigger alerts when an event
occurs.
● AWS IoT Events accepts data from many IoT sources like sensor devices, AWS IoT Core, and
AWS IoT Analytics.

– Back to Index – 73
AWS IoT Greengrass
What is AWS IoT Greengrass?
AWS IoT Greengrass is a cloud service that groups, deploys, and manages software for all
devices at once and enables edge devices to communicate securely.

Features
● It is used on multiple IoT devices in homes, vehicles, factories, and businesses.
● It provides a pub/sub message manager that stores messages as a buffer to preserve
them in the cloud.

● It synchronizes data on the device with supported AWS cloud services.

● The Greengrass Core is a device that enables the communication between AWS IoT Core
and the AWS IoT Greengrass.
● Devices with IoT Greengrass can process data streams without being online.
● It provides different programming languages, open-source software, and development
environments to develop and test IoT applications on specific hardware.
● It provides encryption and authentication for device data for cloud communications.
● It provides AWS Lambda functions and Docker containers as an environment for code
execution.

– Back to Index – 74
Amazon Polly
What is Amazon Polly?
Amazon Polly is a cloud service used to convert text into speech.

Features
● It requires no setup costs, only pay for the text converted.
● It supports many different languages, and Neural Text-to-Speech (NTTS) voices to create
speech-enabled applications.
● It offers caching and replays of Amazon Polly’s generated speech in a format like MP3.
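
A minimal boto3 (Python) sketch of converting text to an MP3 file (the voice and output
path are illustrative choices):

import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())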

Amazon SageMaker

What is Amazon SageMaker?

Amazon SageMaker is a cloud service that allows developers to prepare, build, train, deploy and
manage machine learning models.

Features
● It provides a secure and scalable environment to deploy a model using SageMaker
Studio or the SageMaker console.
● It has pre-installed machine learning algorithms that are optimized to deliver up to 10x
performance.
● It scales up to petabytes level to train models and manages all the underlying
infrastructure.
● Amazon SageMaker notebook instances are created using Jupyter notebooks to write
code to train and validate the models.
● Amazon SageMaker gets billed in seconds based on the amount of time required to
build, train, and deploy machine learning models.

– Back to Index – 75
Amazon Comprehend
What is Amazon Comprehend?
Amazon Comprehend employs natural language processing (NLP) to extract insights from
document content. It generates insights by identifying entities, key phrases, language,
sentiments, and other common elements within documents. Utilize Amazon Comprehend to
develop new products that leverage document structure understanding.

Features:
● Discover valuable insights from text in various sources such as documents, customer
support tickets, product reviews, emails, and social media feeds.

● Streamline document processing workflows by extracting text, key phrases, topics,
sentiment, and other elements from documents like insurance claims.
● Distinguish your business by training a model to classify documents and recognize
terms without the need for machine learning expertise.
● Secure and manage access to sensitive data by identifying and redacting Personally
Identifiable Information (PII) from documents.
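
A minimal boto3 (Python) sketch of the entity and sentiment detection described above
(the sample text is hypothetical):

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "The delivery was late, but the support team resolved it quickly."

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")

print(sentiment["Sentiment"])                    # e.g. MIXED
print([e["Text"] for e in entities["Entities"]])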

Use cases:
● Analyze business and call center data
● Index and search through product reviews
● Manage legal briefs
● Handle financial document processing
Pricing:
1. Comprehend services are priced based on 100-character units with a minimum
charge of 3 units per request for various APIs.
2. Custom Comprehend entails additional charges for model training and
management.
3. Topic modeling charges are determined by the total size of documents
processed per job.

Links- https://aws.amazon.com/comprehend/
https://aws.amazon.com/comprehend/pricing/

– Back to Index – 76
Amazon Rekognition
What is Amazon Rekognition?
Amazon Rekognition is a cloud-based service that employs advanced computer vision
technology to analyze images and videos without requiring expertise in machine
learning. Its intuitive API allows for quick analysis of images and videos stored in
Amazon S3, offering features like object and text detection, identifying unsafe content,
and analyzing faces. With its face recognition capabilities, Rekognition enables various
applications such as user verification, cataloguing, people counting, and public safety.
Features:
Image Analysis:

● Object, Scene, and Concept Detection: Detect and classify various objects,
scenes, concepts, and celebrities present in images.
● Text Detection: Identify both printed and handwritten text in images, supporting
multiple languages.
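
A minimal boto3 (Python) sketch of the object/scene detection described above, for an
image stored in Amazon S3 (bucket and key are hypothetical):

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "demo-bucket", "Name": "photos/street.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))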

Video Analysis:

● Object, Scene, and Concept Detection: Categorize objects, scenes, concepts, and
celebrities appearing in videos.
● Text Detection: Recognize printed and handwritten text in videos in different
languages.
● People Tracking: Monitor individuals identified in videos as they move across
frames.
● Facial Analysis: Detect, analyze, and compare faces in both live streaming and
recorded videos.

Use cases:
● Simplify content retrieval with Amazon Rekognition's automatic analysis,
enabling easy searchability for images and videos.
● Enhance security with Rekognition's face liveness detection, preventing identity
spoofing beyond traditional passwords.
● Quickly locate individuals across your visual content using Rekognition's efficient
face search feature.
● Ensure content safety with Rekognition's ability to detect explicit, inappropriate,
and violent content, facilitating proactive filtering.

– Back to Index – 77
● Benefit from HIPAA Eligibility, making Amazon Rekognition suitable for handling
protected health information in healthcare applications.

Pricing:
With Amazon Rekognition there are several types of usage, each with its own pricing
details.

● Amazon Rekognition Image pricing
● Amazon Rekognition Video pricing
● Amazon Rekognition Custom Labels pricing
● Amazon Rekognition Face Liveness pricing
● Amazon Rekognition Custom Moderation pricing

Links- https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html
https://aws.amazon.com/rekognition/pricing/
https://aws.amazon.com/rekognition/

– Back to Index – 78
Amazon Lex
What Is Amazon Lex?
Amazon Lex, an AWS service, enables developers to build chatbots with natural
conversation capabilities, leveraging the technology behind Alexa. With seamless
integration and advanced language understanding, Lex simplifies speech recognition
and facilitates the creation of engaging chatbots for intuitive user interactions.
Features:
● Effortlessly integrate AI that comprehends intent, retains context, and automates
basic tasks across multiple languages.

● Design and deploy omnichannel conversational AI with a single click, without the
need to manage hardware or infrastructure.

● Seamlessly connect with other AWS services to access data, execute business
logic, monitor performance, and more.

● Pay only for speech and text requests without any upfront costs or minimum
fees.
Use Cases:
● Enable virtual agents and voice assistants: Provide users with self-service
options through virtual contact center agents and interactive voice
response (IVR), allowing them to perform tasks autonomously, like
scheduling appointments or changing passwords.
● Automate responses to FAQs: Develop conversational solutions that
answer common inquiries, enhancing Connect & Lex conversation flows
with natural language search for frequently asked questions powered by
Amazon Kendra.
● Improve productivity with application bots: Streamline user tasks within
applications using efficient chatbots, seamlessly integrating with
enterprise software through AWS Lambda and maintaining precise access
control via IAM.

– Back to Index – 79
● Extract insights from transcripts: Design chatbots using contact center
transcripts to maximize captured information, reducing design time and
expediting bot deployment from weeks to hours.

Pricing:

Product: Amazon Lex (Conversational AI for Chatbots)
Description: Amazon Lex facilitates conversational interface development for applications
via voice and text.
Free Tier offer (12 months free): 10,000 text requests per month and 5,000 speech requests
per month.
Product pricing details: request and response interaction pricing, streaming conversation
pricing, and automated chatbot designer pricing.

Example of request and response interaction pricing:

Charges for request and response interaction in Amazon Lex are based on the number
of speech or text requests processed by the bot, at $0.004 per speech request and
$0.00075 per text request; these rates determine the total monthly charges.
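
For example (hypothetical volumes): a bot that processes 4,000 speech requests and 10,000
text requests in a month would incur 4,000 × $0.004 + 10,000 × $0.00075 = $16.00 + $7.50 =
$23.50 for that month.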

Links- https://aws.amazon.com/lex/

https://aws.amazon.com/lex/pricing/

https://docs.aws.amazon.com/lex/latest/dg/how-it-works.html

– Back to Index – 80
Amazon Transcribe

What is Amazon Transcribe?


Amazon Transcribe is a service used to convert audio (speech) to text using a Deep Learning
process known as automatic speech recognition (ASR).

Features
● It is best suited for customer service calls, live broadcasts, and media subtitling.
● Amazon Transcribe Medical is used to convert medical speech to text for clinical
documentation.
● It automatically matches text quality similar to manual transcription.
● For Transcribe, charges are applied based on the seconds of speech converted per month.
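
A minimal boto3 (Python) sketch of starting an asynchronous transcription job for an audio
file in S3 (bucket, key, and job name are hypothetical):

import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0001",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "s3://demo-bucket/calls/call-0001.mp3"},
)

# The job runs asynchronously; poll for status and the transcript URL.
job = transcribe.get_transcription_job(TranscriptionJobName="support-call-0001")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])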

– Back to Index – 81
AWS CloudFormation
What is AWS CloudFormation?
AWS CloudFormation is a service that collects AWS and third-party resources and manages
them throughout their life cycles, by launching them together as a stack.

A template is used to create, update, and delete an entire stack as a single unit, without
managing resources individually.

It provides the capability to reuse the template to set the resources easily and repeatedly. It can
be integrated with AWS IAM for security.

It can be integrated with CloudTrail to capture API calls as events.

Templates -
A JSON or YAML formatted text file used for building AWS resources.
Stack -
It is a single unit of resources.

– Back to Index – 82
Change sets -
It allows checking how any change to a resource might impact the running resources.
Stacks can be created using the AWS CloudFormation console and AWS Command Line
Interface (CLI).
Stack updates:
First the changes are submitted and compared with the current state of the stack and only the
changed resources get updated.
There are two methods for updating stacks:
● Direct update - when there is a need to quickly deploy the updates.
● Creating and executing change sets - they are JSON files, providing a preview option for the
changes to be applied.
StackSets are responsible for safely provisioning, updating, or deleting stacks.
Nested Stacks are stacks created within another stack by using the
AWS::CloudFormation::Stack resource.
When common resources are needed across templates, nested stacks allow the same
components to be declared once instead of creating them multiple times. The main
stack is termed the parent stack, and the other belonging stacks are termed child stacks,
which can be implemented by using the '!Ref' intrinsic function.

AWS CloudFormation Registry helps to provision third-party application resources alongside
AWS resources. Examples of third-party resources are incident management and version
control tools.

Price details:
● AWS does not charge for using AWS CloudFormation, charges are applied for the services that
the CloudFormation template comprises.
● AWS CloudFormation supports the following namespaces: AWS::*, Alexa::*, and Custom::*. If
anything else is used except these namespaces, charges are applied per handler operation.
● Free tier - 1000 handler operations per month per account
● Handler operation - $0.0009 per handler operation

Example:
CloudFormation template for creating EC2 instance

EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-1234xyz
    KeyName: aws-keypair
    InstanceType: t2.micro
    SecurityGroups:
      - !Ref EC2SecurityGroup
    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: 50

– Back to Index – 83

AWS CloudTrail

What is AWS CloudTrail?


AWS CloudTrail is defined as a global service that permits users to enable operational and risk
auditing of the AWS account.
It allows users to view, search, download, archive, analyze, and respond to account activity
across the AWS infrastructure.
It records actions as an event taken by a user, role, or an AWS service in the AWS Management
Console, AWS Command Line Interface, and AWS SDKs and APIs.

AWS CloudTrail mainly integrates with:


● Amazon S3 can be used to retrieve log files.
● Amazon SNS can be used to notify about log file delivery to the bucket with Amazon Simple
Queue Service (SQS).
● Amazon CloudWatch for monitoring and AWS Identity and Access Management (IAM) for
security.

– Back to Index – 84
CloudTrail events of the past 90 days recorded by CloudTrail can be viewed in the CloudTrail
console and can be downloaded in CSV or JSON file.

Trail log files can be aggregated from multiple accounts to a single bucket and can be shared
between accounts.

AWS CloudTrail Insights enables AWS users to identify and respond to unusual activities of API
calls by analyzing CloudTrail management events.

There are three types of CloudTrail events:


● Management events or control plane operations
○ Example - Amazon EC2 CreateSubnet API operations and CreateDefaultVpc API
operations
● Data events
○ Example - S3 Bucket GetObject, DeleteObject, and PutObject API operations
● CloudTrail Insights events (unusual activity events)
○ Example - Amazon S3 deleteBucket API, Amazon EC2 AuthorizeSecurityGroupIngress
API

Example of CloudTrail log file:


IAM log file -
The below example shows that the IAM user Rohit called the CreateUser action to create a
new user named Nayan.

{"Records": [{
    "eventVersion": "1.0",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "PR_ID",
        "arn": "arn:aws:iam::210123456789:user/Rohit",
        "accountId": "210123456789",
        "accessKeyId": "KEY_ID",
        "userName": "Rohit"
    },
    "eventTime": "2021-01-24T21:18:50Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "ap-south-2",
    "sourceIPAddress": "176.1.0.1",
    "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7",
    "requestParameters": {"userName": "Nayan"},
    "responseElements": {"user": {
        "createDate": "Jan 24, 2021 9:18:50 PM",
        "userName": "Nayan",
        "arn": "arn:aws:iam::128x:user/Nayan",
        "path": "/",
        "userId": "12xyz"
    }}
}]}

– Back to Index – 85

CloudWatch monitors and manages the activity of AWS services and resources, reporting on
their health and performance, whereas CloudTrail records logs of all actions performed
inside the AWS environment.

Price details:
● Charges are applied based on the usage of Amazon S3.
● Charges are applied based on the number of events analyzed in the region.
● The first copy of Management events within a region is free, but charges are applied for
additional copies of management events at $2.00 per 100,000 events.
● Data events are charged at $0.10 per 100,000 events.
● CloudTrail Insights events provide visibility into unusual activity and are charged at $0.35 per
100,000 write management events analyzed.

Amazon CloudWatch
What is Amazon CloudWatch?
Amazon CloudWatch is a service that helps to monitor and manage services by providing data
and actionable insights for AWS applications and infrastructure resources.
It monitors AWS resources such as Amazon RDS DB instances, Amazon EC2 instances, and
Amazon DynamoDB tables, as well as any log files generated by the applications.

Amazon CloudWatch can be accessed by the following methods:


● Amazon CloudWatch console
● AWS CLI
● CloudWatch API
● AWS SDKs

Amazon CloudWatch is used together with the following services:


● Amazon Simple Notification Service (Amazon SNS)
● Amazon EC2 Auto Scaling
● AWS CloudTrail
● AWS Identity and Access Management (IAM)

– Back to Index – 86
It collects monitoring data in the form of logs, metrics, and events from AWS resources,
applications, and services that run on AWS and on-premises servers. Some metrics are
displayed on the home page of the CloudWatch console. Additional custom dashboards to
display metrics can be created by the user.

Alarms can be created using CloudWatch Alarms that monitor metrics and send notifications or
make automatic changes to the resources based on actions whenever a threshold is breached.
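
A minimal boto3 (Python) sketch of such an alarm: notify an SNS topic when average EC2
CPU utilization exceeds 80% for two consecutive 5-minute periods (the instance ID and
topic ARN are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # seconds per evaluation period
    EvaluationPeriods=2,       # breach must persist for two periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)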

The CloudWatch console provides cross-account functionality, which gives cross-account
visibility into dashboards, alarms, and metrics without signing in and out of
different accounts. This functionality becomes more useful if the accounts are managed by
AWS Organizations.

CloudWatch Container Insights are used to collect and summarize metrics and logs from
containerized applications. These Insights are available for Amazon ECS, Amazon EKS, and
Kubernetes platforms on Amazon EC2.

CloudWatch Lambda Insights are used to collect and summarize system-level metrics including
CPU time, memory, disk, and network for serverless applications running on AWS Lambda.

The CloudWatch agent is installed on the EC2 instance to provide the following features:

● It collects system-level metrics from Amazon EC2 instances or on-premises servers
across operating systems.
● It collects custom metrics from the applications using the StatsD and collectd
protocols.
StatsD - supported on both Linux servers and Windows Server
collectd - supported only on Linux servers.
● The metrics from the CloudWatch agent can be collected and stored in CloudWatch
just like any other CloudWatch metrics.
● The default namespace for the CloudWatch agent metrics is CWAgent, and can be
changed while configuring the agent.

– Back to Index – 87
Amazon CloudWatch Logs

What is Amazon CloudWatch Logs?


Amazon CloudWatch Logs is a service provided by Amazon Web Services (AWS) that enables
you to monitor, store, and access log data from various AWS resources and applications. It is
designed to help you centralize and gain insights from logs generated by your AWS resources,
applications, and services in a scalable and cost-effective manner.

Features
● Log Collection: CloudWatch Logs allows you to collect log data from a wide range of AWS
resources and services, including Amazon EC2 instances, Lambda functions, AWS CloudTrail,
AWS Elastic Beanstalk, and custom applications running on AWS or on-premises.
● Log Storage: It provides a secure and durable repository for your log data.

– Back to Index – 88
● Real-time Monitoring: You can set up CloudWatch Alarms to monitor log data in real time and
trigger notifications or automated actions when specific log events or patterns are detected.
● Log Queries: CloudWatch Logs Insights allows you to run ad-hoc queries on your log data to
extract valuable information and troubleshoot issues. You can use a simple query language to
filter and analyze logs (see the sketch after this list).
● Log Retention: You can define retention policies for your log data, specifying how long you
want to retain logs before they are automatically archived or deleted. This helps in cost
management and compliance with data retention policies.
● Log Streams: Within a log group, log data is organized into log streams, which represent
individual sources of log data. This organization makes it easy to distinguish between different
sources of log data.
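
As referenced in the Log Queries point, here is a minimal boto3 (Python) sketch of running
a Logs Insights query over the last hour of a log group (the log group name and query
string are hypothetical):

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

now = int(time.time())
query = logs.start_query(
    logGroupName="/aws/lambda/orders-handler",
    startTime=now - 3600,
    endTime=now,
    queryString=(
        "fields @timestamp, @message "
        "| filter @message like /ERROR/ "
        "| sort @timestamp desc | limit 20"
    ),
)

# Results are retrieved asynchronously using the query ID.
results = logs.get_query_results(queryId=query["queryId"])
while results["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    results = logs.get_query_results(queryId=query["queryId"])
print(results["results"])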

Use Cases:
● Application Debugging: Developers want to troubleshoot and debug issues in a
microservices-based application.
● Cost Monitoring for EC2 Instances: An organization wants to track and control costs
associated with their Amazon EC2 instances.
● Security and Compliance Auditing: A company needs to monitor and audit user activities
across its AWS environment to ensure compliance with security policies.

AWS Compute Optimizer


What is AWS Compute Optimizer?
AWS Compute Optimizer assists in optimizing the provisioning of various AWS resources,
including Amazon EC2 instances, Amazon EBS volumes, ECS services on AWS Fargate, and
AWS Lambda functions. It analyzes utilization data to prevent both overprovisioning and
underprovisioning, ensuring that resources are efficiently allocated to meet workload demands.
Features:
● AWS Compute Optimizer addresses performance challenges by applying suggestions
that pinpoint underutilized resources.

– Back to Index – 89
● It utilizes AI- and ML-driven analytics to optimize workload sizes based on your specific
workload needs, resulting in potential cost reductions of up to 25%.
● It enhances savings and provides insight into memory usage by activating Amazon
CloudWatch metrics to monitor resource utilization.
● It improves cost efficiency by automating license optimization recommendations
post-authentication to optimize licensing expenses.

Use Cases:

● Tailored Rightsizing Recommendations: AWS Compute Optimizer offers customizable
rightsizing recommendations to optimize EC2 and Auto Scaling group instances
according to your workload requirements.

● Utilize External Metrics: Enhance EC2 instance and Auto Scaling group optimization
by leveraging historical data and third-party metrics from your Application Performance
Monitoring (APM) tools.

● Facilitate Migration to AWS Graviton CPUs: Identify EC2 workloads that offer
significant benefits with minimal migration effort when transitioning to AWS Graviton
CPUs.

● Licensing Optimization: AWS Compute Optimizer provides automated license
recommendations for commercial applications like Microsoft SQL Server, helping to
reduce licensing costs effectively.

AWS Config
What is AWS Config?
AWS Config is a service that continuously monitors and evaluates the configurations of the AWS
resources (services).

It helps to view configuration changes performed over a specific period of time using AWS
Config console and AWS CLI.

– Back to Index – 90
It evaluates AWS resource configurations based on specific settings and creates a snapshot of
the configurations to provide a complete inventory of resources in the account.

It retrieves previous configurations of resources and generates notifications whenever a
resource is created, modified, or deleted.

It uses Config rules to evaluate configuration settings of the AWS resources. AWS Config also
checks any condition violation in the rules.
There can be 150 AWS Config rules per region.
● Managed Rules
● Custom Rules

It is integrated with AWS IAM, to create permission policies attached to the IAM role, Amazon S3
buckets, and Amazon Simple Notification Service (Amazon SNS) topics.

It is also integrated with AWS CloudTrail, which provides a record of actions taken by a user
or an AWS service by capturing all AWS Config API calls as events.

AWS Config provides an aggregator (a resource) to collect AWS Config configuration and
compliance data from:
● Multiple accounts and multiple regions.
● Single account and multiple regions.
● An organization in AWS Organizations.
● Accounts in the organization that have AWS Config enabled.

Use Cases:
● It enables the user to code custom rules in AWS Lambda that define best-practice guidelines
for resource configurations. Users can also automate the assessment of resource configuration
changes to ensure compliance and self-governance across the AWS infrastructure (a minimal
custom-rule sketch follows this list).
● Data from AWS Config allows users to continuously monitor the configurations for potential
security weaknesses. After any security alert, Config allows the user to review the configuration
history and understand the risk factor.
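
A minimal sketch of a custom Config rule handler in AWS Lambda; the compliance check
(requiring a CostCenter tag) is a hypothetical example, and the rule must be registered in
AWS Config pointing at this function:

import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    # AWS Config passes the evaluated resource in the invoking event.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # Hypothetical guideline: every resource must carry a CostCenter tag.
    compliant = "CostCenter" in (item.get("tags") or {})

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )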

Price details:
● Charges are applied based on the total number of configuration items recorded at the rate of
$0.003 per configuration item recorded per AWS Region in the AWS account.
● For Config rules, charges are applied based on the number of AWS Config rules evaluated.
● Additional charges are applied if AWS Config integrates with other AWS Services at a standard
rate.

AWS Health Dashboard
What is AWS Health Dashboard?
AWS Health dashboard keeps you informed about service events, scheduled modifications, and
account notifications, enabling effective management and action-taking. Access your
personalized health information and receive event updates through Amazon EventBridge or by
logging into the AWS Health Dashboard. Additionally, integrate AWS Health into your workflows
programmatically via the AWS Health API, accessible with AWS Premium Support.

Features:
● AWS Health serves as the central hub for event data, seamlessly integrating with over 200
AWS services, ensuring comprehensive coverage for both operational incidents and planned
changes.
● Get actionable insights promptly to troubleshoot issues and prepare for upcoming changes
effectively, facilitating swift resolution and proactive management.
● With AWS Health integrated into AWS Organizations, gain a consolidated view of service
health across your organization, streamlining operational management across teams.
● Effortlessly receive AWS Health events via Amazon EventBridge or programmatically integrate
with the AWS Health API, along with pre-built integrations with IT Service Management (ITSM)
tools for enhanced automation and efficiency.

Use Cases:
● Receive Proactive Notifications: Stay informed with timely alerts about events
impacting your resources, enabling you to proactively address any potential impact and
minimize disruptions.

● Prepare for Lifecycle Events: Gain visibility into upcoming planned lifecycle events and
monitor the progress of actions taken by your team at the resource level to ensure
uninterrupted operations of your applications.

● Efficient Event Monitoring: Streamline the monitoring and tracking of AWS Health
events across your organization using programmatic methods or pre-built integrations
with popular IT Service Management (ITSM) tools.

● Incident Troubleshooting: Quickly identify whether an application alarm is caused by
your application or underlying AWS resources, facilitating efficient incident
troubleshooting and resolution. A sketch of polling Health events programmatically
follows.
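
A minimal boto3 sketch of listing open and upcoming Health events; note that the AWS
Health API requires a Business or Enterprise Support plan and is served from the
us-east-1 endpoint:

import boto3

# The AWS Health API is served from a global endpoint in us-east-1.
health = boto3.client("health", region_name="us-east-1")

# List events that are currently open or upcoming for this account.
resp = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]},
    maxResults=10,
)
for event in resp["events"]:
    print(event["service"], event["eventTypeCode"], event["statusCode"])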

AWS Control Tower
What is AWS Control Tower?
AWS Control Tower is an extension to AWS Organizations providing additional controls. AWS
Control Tower helps create a Landing Zone which is a well-architected Multi-Account baseline
based on AWS best practices. An AWS Organization will be created if it does not already exist.

Features
● As a part of the Landing Zone, Control Tower sets up a series of OUs - Security OU, Sandbox
OU, and Production OU.
● Within the Security OU, the Control Tower creates the Audit & Log Archive accounts.
● The Sandbox & Production OUs do not contain any default accounts. Accounts related to
development and production environments can be added to these OUs.
● Control Tower integrates with AWS Identity Center. The directory sources for SSO can be AWS
Identity Center directories (default), SAML IdPs, and Microsoft AD.
● The Root user in the Management Account can perform actions that are disallowed by
Guardrails similar to AWS Organizations where SCPs cannot affect the Root user in the
Management Account.
● Control Tower comes with a Dashboard providing oversights into the Landing Zone and
central administrative views across all Accounts, OUs, Guardrails & policies.
● Control Tower offers Account Factory which is a configurable Account Template for
standardizing provisioning of new Accounts with Pre-approved Account configurations.

Use cases
● AWS Control Tower provides two configuration options
○ Launch AWS Control Tower in a new AWS Organization.
○ Launch AWS Control Tower in an existing AWS Organization.
● Guardrails created by AWS Control Tower for governance & compliance fall under the
following categories
○ Preventive Guardrails - These are based on SCPs that disallow certain API actions.
○ Detective Guardrails - Implemented using AWS Config & Lambda functions that
monitor & govern compliance.

AWS License Manager
What is AWS License Manager?
● AWS License Manager is a service that manages software licenses in AWS and on-premises
environments from vendors such as Microsoft, SAP, Oracle, and IBM.
● It supports the Bring-Your-Own-License (BYOL) model, which means that users can bring their
existing licenses for third-party workloads (Microsoft Windows Server, SQL Server) to AWS.
● It enables administrators to create customized licensing rules that help to prevent licensing
violations (using more licenses than the agreement).
● The rules operate by stopping the instance from launching or by notifying administrators
about the infringement (violation of the licensing agreement).
● Administrators use rule-based controls on the consumption of licenses, to set limits on new
and existing cloud deployments.
● Hard limit - does not allow the launch of non-compliant instances.
● Soft limit - allows the launch of non-compliant instances but sends an alert to the administrators.
● It provides control and visibility of all the licenses to the administrators with the help of the
AWS License Manager dashboard.
● It allows administrators to specify Dedicated Host management preferences for allocation
and capacity utilization.
● AWS License Manager’s managed entitlements provide built-in controls to software vendors
(ISVs) and administrators so that they can assign licenses to approved users and workloads.
● AWS Systems Manager can manage licenses on physical or virtual servers hosted outside of
AWS using AWS License Manager.
● AWS Systems Manager helps to discover software running on existing EC2 instances and then
rules can be attached and validated in EC2 instances allowing the licenses to be tracked using
the License Manager’s dashboard.
● AWS Organizations along with AWS License Manager allows cross-account discovery
of computing resources in the organization by using service-linked roles and enabling trusted
access between License Manager and Organizations.
AWS License Manager is integrated with the following services:
● AWS Marketplace
● Amazon EC2
● Amazon RDS
● AWS Systems Manager
● AWS Identity and Access Management (IAM)
● AWS Organizations
● AWS CloudFormation
● AWS X-Ray
Price details:
● Charges are applied at normal AWS rates only for the AWS resources integrated with AWS
License Manager.

AWS Management Console
What is AWS Management Console?

AWS Management Console is a web application that consists of many service consoles for
managing Amazon Web Services.
It is the first screen visible when a user signs in. It provides access to the other service
consoles and a user interface for exploring AWS.

AWS Management Console provides a Services option on the navigation bar that allows
choosing services from the Recently visited list or the All services list.

On the navigation bar, there is also an option to select a Region.

On the navigation bar, there is a Search box to find any AWS service by entering all or part of
the name of the service. The Console is also available as an app for Android and iOS, with
maximized horizontal and vertical space and larger buttons for a better touch experience.

AWS Organizations
What is AWS Organizations?
AWS Organizations is a global service that enables users to consolidate and manage multiple
AWS accounts into an organization.

It includes account management and consolidated billing capabilities that help meet the
budgetary and security needs of the business better.
● The main account is the management account – it cannot be changed.
● Other accounts are member accounts that can only be part of a single organization.

AWS Organizations can be accessed in the following ways:


● AWS Management Console
● AWS Command Line Tools
○ AWS Command Line Interface (AWS CLI)
○ AWS Tools for Windows PowerShell.
● AWS SDKs
● AWS Organizations HTTPS Query API
Features:
● AWS Organizations provides security boundaries using multiple member accounts.
● It makes it easy to share critical common resources across the accounts.
● It organizes accounts into organizational units (OUs), which are groups of accounts that serve
specified applications.
● Service Control Policies (SCPs) can be created to provide governance boundaries for the OUs.
SCPs ensure that users in the accounts only perform actions that meet security requirements (a
minimal SCP sketch follows the migration steps below).
● Cost allocation tags can be used in individual AWS accounts to categorize and track the AWS
costs.

● It integrates with the following services:
○ AWS CloudTrail - Manages auditing and logs all events from accounts.
○ AWS Backup - Monitor backup requirements.
○ AWS Control Tower - to establish cross-account security audits and view policies
applied across accounts.
○ Amazon GuardDuty - Managed security services, such as detecting threats.
○ AWS Resource Access Manager (RAM) - Can reduce resource duplication by sharing
critical resources within the organization.
● Steps to be followed for migrating a member account:
○ Remove the member account from the old Organization.
○ Send an invitation to the member account from the new Organization.
○ Accept the invitation to the new Organization from the member account.
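
A minimal boto3 sketch of creating and attaching an SCP; the denied actions and the OU ID
are illustrative assumptions, and SCPs must already be enabled for the organization:

import json
import boto3

org = boto3.client("organizations")

# Hypothetical guardrail: deny leaving the organization and disabling CloudTrail.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="baseline-guardrail",
    Description="Deny risky actions in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an OU (placeholder ID) so it applies to all accounts under it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-example-12345678",  # hypothetical OU ID
)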

Price details:
● AWS Organizations is free. Charges are applied to the usage of other AWS resources.
● The management account is responsible for paying charges of all resources used by the
accounts in the organization.
● AWS Organizations provides consolidated billing that combines the usage of resources from
all accounts, and AWS allocates each member account a portion of the overall volume discount
based on the account's usage.

AWS Systems Manager
What is AWS Systems Manager?
AWS Systems Manager is a service that helps users manage EC2 and on-premises systems at
scale. It not only provides insights into the state of the infrastructure but also detects
problems easily.
Additionally, it can automate patching for enhanced compliance. This AWS service works for
both Windows and Linux operating systems.

Features:
● Easily integrated with CloudWatch metrics/dashboards and AWS Config.
● It helps to discover and audit the software installed.
● Compliance management
● We can group more than 100 resource types into applications, business units, and
environments.
● It helps to view instance information such as operating system patch levels, install software
and see the compliance with the desired state.
● It associates configurations with resources and identifies discrepancies (configuration drift).
● Distribute multiple software versions safely across the instances.
● Increase security by running commands or maintaining scripts remotely instead of logging in
to instances.
● Patch your instances on a schedule to keep them compliant.
● Helps managers to automate workflows.
● It helps to reduce errors by securely storing configuration parameters in a centralized service.

How does Systems Manager work?

First, the user needs to install the SSM Agent on the systems to be managed. If an instance
can't be controlled with SSM, it's probably an issue with the SSM Agent. Also, make sure all
the EC2 instances have a proper IAM role to allow SSM actions. A minimal Run Command sketch
follows.
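
A minimal boto3 sketch of using Run Command against a managed instance; the instance ID is
a placeholder, and the instance must have the SSM Agent running plus an IAM role permitting
SSM:

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Run shell commands on a managed instance using an AWS-managed document.
resp = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime", "df -h"]},
)
print(resp["Command"]["CommandId"])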



Pricing:
● App Config:
○ Get Configuration API Calls: $0.2 per 1M Get Configuration calls
○ Configurations Received: $0.0008 per configuration received
● Parameter Store:
○ Standard: No additional charge.
○ Advanced: $0.05 per advance parameter per month.
● Change Manager:
○ Number of change requests: $0.296 per change request.
○ Get, Describe, Update, and GetOpsSummary API requests: $0.039 per 1,000 requests.



AWS Trusted Advisor
What is AWS Trusted Advisor?
Trusted Advisor provides checks based on best practices in the Cost Optimization,
Security, Fault Tolerance, Performance, and Service Limits categories.
● Cost Optimization - Provides recommendations to Organizations for saving money on their
AWS infrastructure by terminating unused & idle resources, using Reserved capacity for
continuous usage.
● Security - Provides recommendations to Organizations for improving the security of their
applications by Restricting access using SG/ NACL, Checking permissions on S3 Buckets, and
enabling various security features.
● Fault Tolerance - Increasing availability and redundancy of applications using Auto Scaling,
Performing Health checks, configuring Multi-AZ environments, and taking backups.
● Performance - Provides recommendations to Organizations for improving the performance of
applications by taking advantage of provisioned throughput, and monitoring of over-utilized
instances
● Service Limits - Notifies Organizations when their resource usage exceeds 80% of a service limit.

Use cases:

● Optimization of cost & efficiency - Trusted Advisor helps identify resources that are not used
to capacity or idle resources and provides recommendations to lower costs.
● Address Security Gaps - Trusted Advisor performs Security checks of your AWS environment
based on security best practices. It flags off errors or warnings depending on the severity of the
security threat e.g. Open SG/NACL ports for unrestricted external user access, and open access
permissions for S3 buckets in Accounts.
● Performance Improvement - Trusted Advisor checks for usage & configuration of your AWS
resources and provides recommendations that can improve performance e.g. it can check for
Provisioned IOPS EBS volumes on EC2 instances that are not EBS-optimized.



AWS Application Discovery Service
What is AWS Application Discovery Service?

Amazon Web Services Application Discovery Service (Application Discovery Service) helps you
plan application migration projects. It automatically identifies servers, virtual machines (VMs),
and network dependencies in your on-premises data centers.
Features:
● Agentless discovery using Amazon Web Services Application Discovery Service
Agentless Collector (Agentless Collector), which doesn't require you to install an agent
on each host.
● Agent-based discovery using the Amazon Web Services Application Discovery Agent
(Application Discovery Agent) collects a richer set of data than agentless discovery,
which you install on one or more hosts in your data center.
● Amazon Web Services Partner Network (APN) solutions integrate with Application
Discovery Service, enabling you to import details of your on-premises environment
directly into Amazon Web Services Migration Hub (Migration Hub) without using
Agentless Collector or Application Discovery Agent.

Use cases:

● Discover on-premises server and database inventory


● Map network communication patterns
● Mobilize for migration

Pricing:
You can use the AWS Application Discovery Service to discover your on-premises servers and plan
your migrations at no charge.

You only pay for the AWS resources (e.g., Amazon S3, Amazon Athena, or Amazon Kinesis Firehose)
that are provisioned to store your on-premises data. You only pay for what you use, as you use it;
there are no minimum fees and no upfront commitments.



AWS Database Migration Service
What is AWS Database Migration Service?
AWS Database Migration Service is a cloud service used to migrate relational databases from
on-premises, Amazon EC2, or Amazon RDS to AWS securely.

It does not require stopping the running application while migrating databases, minimizing
downtime.
It performs homogeneous as well as heterogeneous migrations between different database
platforms, for example:
MySQL - MySQL (homogeneous migration)
Oracle - Amazon Aurora (heterogeneous migration)

AWS DMS supports the following data sources and targets engines for migration:
● Sources: Oracle, Microsoft SQL Server, PostgreSQL, Db2 LUW, SAP, MySQL, MariaDB,
MongoDB, and Amazon Aurora.
● Targets: Oracle, Microsoft SQL Server, PostgreSQL, SAP ASE, MySQL, Amazon Redshift,
Amazon S3, and Amazon DynamoDB.

It performs all the management steps required during the migration, such as monitoring, scaling,
error handling, network connectivity, replicating during failure, and software patching.
AWS DMS with AWS Schema Conversion Tool (AWS SCT) helps to perform heterogeneous
migration.



AWS DataSync
What is AWS DataSync?
AWS DataSync is a secure, reliable, managed migration Service that automates the movement
of data online between storage systems. AWS DataSync provides the capability to move data
between AWS storage, On-premises File Systems, Edge locations, and other Cloud Storage
services like Azure. AWS DataSync helps you simplify your migration planning and reduce costs
associated with the data transfer.

Features
● Data movement workloads using AWS DataSync support migration scheduling, bandwidth
throttling, task filtering, and logging.
● AWS DataSync provides enhanced performance using compression, and parallel transfers for
transferring data at speed.
● AWS DataSync supports In-Flight encryption using TLS and encryption at rest.
● AWS DataSync provides capabilities for Data Integrity Verification ensuring that all data is
transferred successfully.
● AWS DataSync integrates with AWS Management tools like CloudWatch, CloudTrail, and
EventBridge.
● With DataSync, you only pay for the data you transfer without any minimum cost.
● AWS DataSync can copy data to and from Amazon S3 buckets, Amazon EFS file systems, and
all Amazon FSx file system types.
● AWS DataSync supports Internet, VPN, and Direct Connect to transfer data between
On-premises data centers, Cloud environments & AWS

Use cases
● Migration of application data residing on on-premises storage systems (Windows Server, NAS
file systems, object storage) to AWS.
● Archival of On-premises storage data to AWS to free capacity & reduce costs for continuously
investing in storage infrastructure.
● Continuous replication of data present On-premises or on existing Cloud platforms for Data
Protection and Disaster Recovery

Best Practices
● In General, when planning a Data Migration, migration tools need to be evaluated, check for
available bandwidth for online migration, and understand the source & destination migration
data sources.
● For using DataSync to transfer data from On-premises storage to AWS, an Agent needs to be
deployed and activated at On-premises locations. Use the Agent’s local console as a tool for
accessing various configurations



○ System resources
○ Network connectivity
○ Getting Agent activation key
○ View Agent ID & AWS region where the agent is activated
● A common pattern that can be used as a best practice is to use a combination of AWS
DataSync & AWS Storage Gateway: DataSync can be used to archive On-premises data to
AWS, while Storage Gateway can be used to access commonly used data at On-premises.
● DataSync effectively manages the transfer of data between different storage devices without
you having to write migration scripts or keep track of data that is transferred
● AWS DataSync can also be triggered using a Lambda function in case a migration schedule is
not defined (see the sketch after this list).
● Data transfers between AWS services like S3 -> S3 or S3 -> EFS do not require the DataSync
Agent. It is used only for data transfers from On-premises to AWS
● You pay 1.25 cents per gigabyte of data transferred.
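
A minimal sketch of a Lambda handler that starts an execution of a pre-created DataSync
task; the task ARN is a placeholder:

import boto3

datasync = boto3.client("datasync")

def lambda_handler(event, context):
    # Kick off an execution of a pre-created DataSync task.
    resp = datasync.start_task_execution(
        TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0abc123def456789a"
    )
    return {"taskExecutionArn": resp["TaskExecutionArn"]}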



AWS Migration Hub
What is AWS Migration Hub?
AWS Migration Hub (Migration Hub) offers a centralized platform for discovering current
servers, planning migrations, and monitoring application migration progress. It provides
visibility into application portfolios, streamlining planning and tracking. Migration Hub
allows visualization of connections and status regardless of the migration tool used.
You can start migrating immediately or first discover servers, organizing them into
applications, and monitor progress from within the hub.
Benefits of AWS Migration Hub:
● Streamlined process: Discover, Assess, Analyze, Plan, Execute, and Manage all
from one central location.
● Guided expertise: Speed up migration and modernization projects with tailored
journey templates.
● Effective resources: Utilize specialized services proven to align with your
transformation objectives.
● No cost: Begin your migration planning or tracking for free using AWS Migration
Hub.
Use Cases:
● Assessment & Planning for Migration: Identify applications for migration and
modernization, and develop execution strategies.
● Executing Migration: Utilize pre-built guided migration journey templates and
specialized services while fostering collaboration across teams to achieve
migration objectives.
● Application Modernization: Accelerate application refactoring, streamline
development, and manage existing applications and microservices as a unified
application.

Pricing:
● AWS Migration Hub is free for collecting and storing discovery data, planning, or
tracking migrations to AWS in your home region.
● Costs for migration tools and AWS resource consumption are the user's
responsibility.
● Refactor Spaces, an optional feature, incurs usage-based charges without
upfront fees, based on hours of refactor environments and API requests.
● Users receive 2,160 free environment hours per month for 90 days, allowing for
running 3 Refactor Spaces environments free for 3 months.
● After this period, the price is $0.028 per environment per hour ($20 per month per
environment if run continuously).
● The service also costs $0.000002 per API request, with 500,000 API requests
free per month included in the AWS Free Tier indefinitely.



AWS Transfer Family

What is AWS Transfer Family?


AWS Transfer Family is a fully managed & secure service that enables transfer of files using
SFTP, FTPS & FTP. The destination storage services to which files are transferred are S3, and
EFS. It helps you to seamlessly migrate File Transfer workloads to AWS without having any
impact on existing application integrations or configuration.

Features
● AWS Transfer Family provides a fully managed endpoint for transferring files into and out of
S3, EFS.
● The SSH File Transfer Protocol (SFTP) provides file transfer over SSH.
● File Transfer Protocol over SSL (FTPS) is FTP over a TLS-encrypted channel.
● Plain File Transfer Protocol (FTP) does not use a secure channel for transferring files.
● AWS Transfer Family exhibits high availability across the globe.
● AWS Transfer Family provides compliance with regulations within your Region.
● Using a pay-as-you-use model, the AWS Transfer Family service becomes cost-effective and is
simple to use.
● AWS Transfer Family has the ability to use custom Identity Providers using AWS API Gateway
& Lambda.

Use cases
● IAM Roles are used to grant access to S3 buckets for file transfer clients in a secure way.
● Users can use Route 53 to migrate an existing File Transfer hostname for use in AWS.
● SFTP & FTPS protocols can be set up to be accessible from the public internet while FTP is
limited for access from inside a VPC using VPC endpoints.



Seven common migration strategies (7Rs)
Migration strategies are approaches used to transition workloads to the AWS Cloud, typically
categorized as the 7 Rs:

Retire
Retain
Rehost
Relocate
Repurchase
Replatform
Refactor or re-architect.

Features:
● For large migrations, common strategies include Rehost, Replatform, Relocate, and Retire, as
they offer simpler and more efficient migration paths compared to Refactor, which involves
modernizing applications during migration and is more complex to manage.
● Rehosting, relocating, or replatforming applications initially, and then considering
modernization post-migration, is recommended for large-scale migrations to streamline the
process and reduce complexity.
● Choosing the right migration strategies is crucial for large-scale migrations and should be
based on careful assessment during the mobilize phase or initial portfolio evaluation. Each
strategy has its own set of use cases and considerations for implementation.
● Retain: This strategy involves keeping applications in the source environment
temporarily or for future migration, without making immediate changes.
● Rehost: Often referred to as "lift and shift," this strategy moves applications to
the AWS Cloud without modifications.
● Relocate: This strategy involves moving instances or objects within the AWS
environment, such as to a different VPC, Region, or AWS account.
● Repurchase: Also known as "drop and shop," this strategy replaces the existing
application with a new version or product offering greater business value, such as
improved accessibility, maintenance-free infrastructure, and pay-as-you-go pricing.
● Replatform: This strategy, also known as "lift, tinker, and shift" or "lift and
reshape," involves migrating the application to the cloud while optimizing it for
efficiency, cost reduction, or leveraging cloud capabilities.
● Refactor or re-architect: This strategy focuses on modifying the application's
architecture to fully utilize cloud-native features, enhancing agility, performance,
and scalability. It's driven by business needs to scale, accelerate releases, and
reduce costs.



AWS Application Migration Service
What is AWS Application Migration Service?
The AWS Application Migration Service streamlines and automates the conversion of your
source servers to operate seamlessly on AWS, reducing the need for labor-intensive and
error-prone manual tasks. Additionally, it simplifies the process of modernizing applications by
providing both built-in and customizable optimization choices.

Features
● Transition applications from various source infrastructures running supported
operating systems seamlessly.
● Enhance applications during the migration process by incorporating features like
disaster recovery and converting operating systems or licenses.
● Ensure uninterrupted business operations while replicating applications.
● Minimize expenses by utilizing a single tool capable of handling a diverse range of
applications, eliminating the requirement for specialized skills in individual applications.

Use cases
● Transfer on-premises applications like SAP, Oracle, and SQL Server from physical
servers, VMware vSphere, Microsoft Hyper-V, and other existing on-premises
infrastructure.
● Migrate cloud-based applications from other public cloud platforms to AWS,
accessing a vast array of over 200 services designed to reduce costs, enhance
availability, and foster innovation.
● Seamlessly move Amazon EC2 workloads between AWS Regions, Availability Zones,
or accounts to meet various business requirements, enhance resilience, and ensure
compliance.
● Modernize applications by applying tailored modernization actions or choosing from
pre-defined options such as cross-Region disaster recovery, Windows Server version
upgrade, and Windows MS-SQL BYOL to AWS license conversion.



Amazon CloudFront
What is Amazon CloudFront?
Amazon CloudFront is a content delivery network (CDN) service that securely delivers any kind
of data to customers worldwide with low latency and high transfer speeds.

It uses edge locations (a network of small data centers) to cache copies of the data for the
lowest latency. If the data is not present at edge locations, the request is sent to the source
server, and data gets transferred from there.

It is integrated with AWS services such as


● Amazon S3,
● Amazon EC2,
● Elastic Load Balancing,
● Amazon Route 53,
● AWS Elemental Media Services.

The AWS origins from where CloudFront gets its traffic or requests are:
● Amazon S3
● Amazon EC2
● Elastic Load Balancing
● Customized HTTP origin

It provides a programmable and secure edge CDN computing feature through
AWS Lambda@Edge.

● It provides operations such as dynamic origin load balancing, custom bot
management, or building serverless origins.
● It has a built-in security feature to protect data from side-channel attacks such
as Spectre and Meltdown.
● Field-level encryption with HTTPS - sensitive data remains encrypted throughout,
starting from upload.
● AWS Shield Standard - against DDoS attacks.
● AWS Shield Standard + AWS WAF + Amazon Route53 - against more complex
attacks than DDoS.

Amazon CloudFront Access Controls:


Signed URLs:
● Use this to restrict access to individual files (a signed-URL generation sketch follows
this section).
Signed Cookies:
● Use this to provide access to multiple restricted files.
● Use this if the user does not want to change current URLs.
Geo Restriction:
● Use this to restrict access to the data based on the geographic location of the website
viewers.
Origin Access Identity (OAI):
● Outside access is restricted using signed URLs and signed cookies, but what if someone
tries to access objects directly via the Amazon S3 URL, bypassing the CloudFront signed
URL and signed cookies? To restrict that, OAI is used.
● Use OAI as a special CloudFront user, and associate it with your CloudFront distribution
to secure Amazon S3 content.
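
A minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner;
it assumes the third-party rsa package, a CloudFront key-pair ID, and a local private key
file, all of which are placeholders:

import datetime

import rsa  # third-party package: pip install rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign with the private key that matches the CloudFront public key.
    with open("private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

# Placeholder key-pair ID associated with the CloudFront distribution.
signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)

# The URL is valid until the given expiry; afterwards CloudFront denies access.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",
    date_less_than=datetime.datetime(2030, 1, 1),
)
print(url)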

Pricing Details:
● You pay for:
○ Data Transfer Out to Internet / Origin
○ Number of HTTP/HTTPS requests.
○ Each custom SSL certificate associated with CloudFront distributions
○ Field-level encryption requests.
○ Execution of Lambda@Edge
● You do not pay for:
○ Data transfer between AWS regions and CloudFront.
○ AWS ACM SSL/TLS certificates and Shared CloudFront certificates.



AWS Direct Connect
What is AWS Direct Connect?
AWS Direct Connect is a cloud service that helps to establish a dedicated connection
from an on-premises network to one or more VPCs in the same region.

A private VIF with AWS Direct Connect helps transfer business-critical data from the
data center, office, or colocation environment into AWS, bypassing your Internet service
provider and reducing network congestion.

Private virtual interface: It helps to connect an Amazon VPC using private IP addresses.
Public virtual interface: It helps to connect AWS services located in any AWS region
(except China) from your on-premises data center using public IP addresses.

Methods of connecting to a VPC:


● AWS Managed VPN.
● AWS Direct Connect.
● AWS Direct Connect plus a VPN.
● AWS VPN CloudHub.
● Transit VPC.
● VPC Peering.
● AWS PrivateLink.
● VPC Endpoints.



Direct Connect gateway:
It is a globally available service used to connect multiple Amazon VPCs across different
regions or AWS accounts.

It can be integrated with either of the following gateways:


● Transit gateway - it is a network hub used to connect multiple VPCs to an on-premise
network in the same region.
● Virtual private gateway - It is a distributed edge routing function on the edges of VPC.

Features:
● AWS Management Console helps to configure AWS Direct Connect service quickly and
easily.
● It helps to choose the dedicated connection providing a more consistent network
experience over Internet-based connections.
● It works with all AWS services that are accessible over the Internet.
● It helps to scale by using 1Gbps and 10 Gbps connections based on the capacity
needed.

Price details:
● Pay only for what you use. There is no minimum fee.
● Charges for Dedicated Connection port hours are consistent across all AWS Direct
Connect locations globally except Japan.
● Data Transfer OUT charges are dependent on the source AWS Region.



AWS Elastic Load Balancer
What is AWS Elastic Load Balancer?
● ELB stands for Elastic Load Balancer.
● It distributes the incoming traffic to multiple targets such as instances, containers,
Lambda functions, IP addresses, etc.
● It spans single or multiple Availability Zones.
● It provides high availability, scaling, and security for the application.

Types of Elastic Load Balancer

➢ Application Load Balancer


o It is best suited for load balancing of the web applications and websites.
o It routes traffic to targets within Amazon VPC based on the content of the
request.

➢ Network Load Balancer


o It is best suited for applications that require ultra-high performance.
o This load balancer also acts as a single point of contact for the clients.
o This Load Balancer distributes the incoming traffic to the multiple targets.
o The listener checks the connection request from the clients using the protocol
and ports we specify.
o It supports TCP, UDP and TLS protocol.



➢ Gateway Load Balancer
o It is like other load balancers, but it is for third-party appliances.
o It provides load balancing and auto scaling for a fleet of third-party appliances.
o It is used for security, network analytics, and similar use cases.

➢ Classic Load Balancer
o It operates at both the request level and the connection level.
o It is for EC2 instances built in the old EC2-Classic network.
o It is an old-generation load balancer.
o AWS recommends using the Application or Network Load Balancer instead.

Listeners
● A listener is a process that checks for connection requests, using the protocol and port
that you configured.
● You can add HTTP, HTTPS or both.
Target Group
● It is the destination of the ELB.
● Different target groups can be created for different types of requests.
● For example, one target group (e.g., a fleet of instances) can handle general requests
while other target groups handle other types of requests, such as microservices.
● Currently, three types of targets are supported by ELB: instance, IP, and Lambda function.
Health Check
● Health checks verify the health of targets regularly; if any target is unhealthy,
traffic will not be sent to that target.
● You can define the number of consecutive health-check failures after which the load
balancer stops sending traffic to those targets.
● Example: If 4 EC2 instances are registered as targets behind an Application Load
Balancer and one of the EC2 instances is unhealthy, the load balancer will not send
traffic to that EC2 instance. A target-group sketch with explicit health-check settings
follows.
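
A minimal boto3 sketch of creating a target group with explicit health-check settings; the
name, VPC ID, health-check path, and thresholds are illustrative assumptions:

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create a target group with explicit health-check settings.
resp = elbv2.create_target_group(
    Name="web-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0abc123def456789a",   # placeholder VPC ID
    TargetType="instance",           # targets can also be "ip" or "lambda"
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",       # hypothetical health endpoint
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,         # consecutive successes to mark a target healthy
    UnhealthyThresholdCount=4,       # consecutive failures before traffic stops
)
print(resp["TargetGroups"][0]["TargetGroupArn"])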

Use Cases:

● Web Application Deployed on Multiple Servers: If a web application/website is deployed on
multiple EC2 instances, we can distribute the traffic among those instances using an
Application Load Balancer.
● Building a Hybrid Cloud: Elastic Load Balancing offers the ability to load balance across AWS
and on-premises resources, using a single load balancer. You can achieve this by registering all
of your resources to the same target group and associating the target group with a load
balancer.



● Migrating to AWS: ELB supports the load balancing capabilities critical for you to migrate to
AWS. ELB is well positioned to load balance both traditional as well as cloud native applications
with auto scaling capabilities that eliminate the guess work in capacity planning.

Charges:
● Charges will be based on each hour or partial hour that the ELB is running.
● Charges will also depend on LCUs (Load Balancer Capacity Units).



AWS PrivateLink
What is AWS PrivateLink?
AWS PrivateLink is a network service used to connect to AWS services hosted by other AWS
accounts (referred to as endpoint services) or AWS Marketplace.
Whenever an interface VPC endpoint (interface endpoint) is created for service in the VPC, an
Elastic Network Interface (ENI) in the required subnet with a private IP address is also created
that serves as an entry point for traffic destined to the service.

Interface endpoints
● It serves as an entry point for traffic destined to an AWS service or a VPC endpoint
service.
Gateway endpoints
● It is a gateway in the route table that routes traffic only to Amazon S3 and DynamoDB.

Features:
● It is integrated with AWS Marketplace services so that the services can be directly attached to
the endpoint.
● It provides security by keeping traffic off the public internet, reducing exposure to threats
such as brute-force and DDoS attacks.



● It helps to connect services across different accounts and Amazon VPCs without any firewall
rules, VPC peering connection, or managing VPC Classless Inter-Domain Routing (CIDRs).
● It helps to migrate on-premises applications to the AWS cloud more securely. Services can be
securely accessed from the cloud and on-premises via AWS Direct Connect and AWS VPN. A
minimal interface-endpoint sketch follows.
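
A minimal boto3 sketch of creating an interface endpoint for Amazon Kinesis Data Streams;
the VPC, subnet, and security group IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint inside an existing VPC; an ENI with a private IP
# is created in the subnet as the entry point for traffic to the service.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc123def456789a",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0abc123def456789a"],
    SecurityGroupIds=["sg-0abc123def456789a"],
    PrivateDnsEnabled=True,  # resolve the service's default DNS name to the ENI
)
print(resp["VpcEndpoint"]["VpcEndpointId"])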

Pricing details:

● PrivateLink is charged based on the use of endpoints.



Amazon Route 53
What is Amazon Route 53?
Route53 is a managed DNS (Domain Name System) service where DNS is a collection of rules
and records intended to help clients/users understand how to reach any server by its domain
name.

Route 53 hosted zone is a collection of records for a specified domain that can be managed
together.
There are two types of zones:
● Public hosted zone – It determines how traffic is routed on the Internet.
● Private hosted zone – It determines how traffic is routed within VPC.

Route 53 TTL (seconds):


● It is the amount of time for which a DNS resolver caches information about a record,
reducing query latency.
● There is no default TTL for any record type; always specify a TTL of 60 seconds or less so
that clients/users can respond quickly to changes in health status.

The most common records supported in Route 53 are:


● A: hostname to IPv4
● AAAA: hostname to IPv6
● CNAME: hostname to hostname
● Alias: hostname to AWS resource.

Other supported records are:


● CAA (certification authority authorization)
● MX (mail exchange record)



● NAPTR (name authority pointer record)
● NS (name server record)
● PTR (pointer record)
● SOA (start of authority record)
● SPF (sender policy framework)
● SRV (service locator)
● TXT (text record)

Route 53 Routing Policies:


Simple:
● It is used when there is a need to redirect traffic to a single resource.
● It does not support health checks.

Weighted:
● It is similar to Simple, but you can specify a weight associated with each resource.
● It supports health checks (a record-update sketch follows this list).

Failover:
● If the primary resource is down (based on health checks), it will route to a secondary
destination.
● It supports health checks.

Geo-location:
● It routes traffic based on the geographic location the request originates from.

Geo-proximity:
● It routes traffic based on the location of resources, to the closest region within a
geographic area.

Latency based:
● It routes traffic to the destination that has the least latency.

Multi-value answer:
● It distributes DNS responses across multiple IP addresses.
● If a web server becomes unavailable after a resolver caches a response, a user can try
up to eight other IP addresses from the response to reduce downtime.
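
A minimal boto3 sketch of upserting weighted A records; the hosted zone ID, record name,
and IP addresses are placeholders:

import boto3

route53 = boto3.client("route53")

def upsert_weighted_record(zone_id, name, ip, identifier, weight):
    # UPSERT creates the record if absent, updates it otherwise.
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "SetIdentifier": identifier,  # required for weighted records
                    "Weight": weight,
                    "TTL": 60,  # short TTL so clients react quickly to changes
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )

# Hypothetical split: ~80% of traffic to one target, ~20% to a canary.
upsert_weighted_record("Z0123456789ABCDEFGHIJ", "app.example.com", "192.0.2.10", "primary", 80)
upsert_weighted_record("Z0123456789ABCDEFGHIJ", "app.example.com", "192.0.2.20", "canary", 20)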

Use cases:
● When users register a domain with Route 53, it becomes the authoritative DNS
service for that domain and creates a public hosted zone.
● Users can have their domain registered in one AWS account and the hosted zone in
another AWS account.
● For private hosted zones, the following VPC settings must be ‘true’:
○ enableDnsHostnames.
○ enableDnsSupport.
● Health checks can be pointed at:
○ Endpoints (can be IP addresses or domain names.)



○ Status of other health checks.
○ Status of a CloudWatch alarm.
● Route53 as a Registrar: A domain name registrar is an organization that manages the
reservation of Internet domain names.
● Domain Registrar != DNS
Price details:
● There are no contracts or any down payments for using Amazon Route 53.
● Route 53 charges annually for each domain name registered via Route 53.
● Different rates are applied for Standard Queries, Latency Based Routing Queries, Geo
DNS and Geo Proximity Queries.



AWS Transit Gateway
What is AWS Transit Gateway?
AWS Transit Gateway is a network hub used to interconnect multiple VPCs. It can be used to
attach all hybrid connectivity by controlling your organization's entire AWS routing configuration
in one place.
● There can be more than one Transit Gateway per region, but they cannot be peered within a
single region.
● It helps to solve the problem of complex VPC peering connections.
● It can be connected with an AWS Direct Connect gateway from a different AWS account.
● Resource Access Manager (RAM) cannot integrate AWS Transit Gateway with Direct Connect
gateway.
● To implement redundancy, Transit Gateway also allows multi-user gateway connections.
● Transit Gateway VPN attachment is a feature to create an IPsec VPN connection between
your remote network and the Transit Gateway.
● Transit Gateway Network Manager is used to manage and monitor networking resources and
connections to remote branch locations.
● It reduces the complexity of maintaining VPN connections with hundreds of VPCs, which
become very useful for large enterprises.
● It supports attaching Amazon VPCs with IPv6 CIDRs.



A Transit Gateway can be created in the following ways:
● AWS CLI
● AWS Management Console
● AWS CloudFormation

Price details:
● Users will be charged for your AWS Transit Gateway on an hourly basis.



AWS VPC
What is AWS VPC?
Amazon Virtual Private Cloud (VPC) is a service that allows users to create a virtual dedicated
network for resources.
Security Groups:
● Default Security Group:
○ Inbound rule - Allows inbound traffic only from resources associated with the same
security group
○ Outbound rule - Allows all outbound traffic
● Custom Security Group (by default):
○ Inbound rule - Allows no inbound traffic
○ Outbound rule - Allows all outbound traffic

Network ACLs (access control list):
● Default Network ACL:
○ Inbound rule - Allows all inbound traffic
○ Outbound rule - Allows all outbound traffic
● Custom Network ACL (by default):
○ Inbound rule - Denies all inbound traffic
○ Outbound rule - Denies all outbound traffic



Components of VPC:
Subnets
● The subnet is a core component of the VPC.
● Resources will reside inside the Subnet only.
● Subnets are the logical division of the IP Address.
● One Subnet should not overlap another subnet.
● A subnet can be private or public.
● Resources in Public Subnet will have internet access.
● Resources in the Private Subnet will not have internet access.
● If private subnet resources want internet accessibility then we will need a NAT gateway or NAT
instance in a public subnet.

Route Tables
● Route tables will decide where the network traffic will be directed.
● One Subnet can connect to one route table at a time.
● But one Route table can connect to multiple subnets.
● If a route table has a route to the Internet Gateway and is associated with a subnet,
that subnet is considered a public subnet.
● A private subnet is not associated with a route table that routes to the Internet
Gateway. A minimal sketch of creating a VPC with a public subnet follows.
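
A minimal boto3 sketch of creating a VPC with one public subnet; the CIDR blocks are
illustrative:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a VPC and one subnet inside it.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# An Internet Gateway attached to the VPC gives it a path to the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A route table with a default route to the IGW, associated with the subnet,
# is what makes the subnet "public".
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)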

NAT Devices
● NAT stands for Network Address Translation.
● It allows resources in the Private subnet to connect to the internet if required.

NAT Instance
● It is an EC2 Instance.
● It will be deployed in the Public Subnet.
● NAT Instance allows you to initiate IPv4 Outbound traffic to the internet.
● It will not allow the instance to receive inbound traffic from the internet.

NAT Gateway
● Nat Gateway is Managed by AWS.
● NAT will be using the elastic IP address.
● You will be charged for NAT gateway on a per hour basis and data processing rates.
● NAT is not for IPv6 traffic.
● NAT gateway allows you to initiate IPv4 Outbound traffic to the internet.
● It will not allow the instance to receive inbound traffic from the internet.

DHCP Options Set:


● DHCP stands for Dynamic Host Configuration Protocol.



● It is the standard for passing the various configuration information to hosts over the TCP/IP
Network.
● DHCP contains information such as domain name, domain name server.
● All this information will be contained in Configuration parameters.
● DHCP will be created automatically while creating VPC.

PrivateLink
● PrivateLink is a technology that will allow you to access services privately without internet
connectivity and it will use the private IP Addresses.
Endpoints
● It allows you to create connections between your VPC and supported AWS services.
● The endpoints are powered by PrivateLink.
● The traffic will not leave the AWS network.
● It means endpoints will not require Internet Gateway, Virtual Private Gateway, NAT
components.
● The public IP address is not required for communication.
● Communication will be established between the VPC and other services with high availability.
Types of Endpoints
● Interface Endpoints
o It is an entry point for traffic interception.
o It will route the traffic to the service that you configure.
o It will use an ENI with a private IP address.
o For Example: it will allow instances to connect to Amazon Kinesis through
interface endpoint.

● Gateway Load balancer Endpoints


o It is an entry point for traffic interception.
o It will route the traffic to the service that you configure.
o It will use load balancers to route the traffic.
o For Example Security Inspection.

● Gateway Endpoints
o It is a gateway that you defined in Route Table as a Target.
o And the destination will be the supported AWS Services.
o Amazon S3, DynamoDB supports Gateway Endpoint.
Egress-Only Internet Gateway
● An egress-only internet gateway is designed only for IPv6 address communications.
● It is a highly available, horizontally scaled component that allows outbound-only rules
for IPv6 traffic.
● It will not allow inbound connections to your EC2 instances.

VPC Peering:



● VPC peering establishes a connection between two VPCs.
● EC2 Instances in both the VPC can communicate with each other as if they are in the same
network.
● Peering connections can be established between VPCs in the same region, VPCs in a different
region or VPCs in another AWS Account as well.

VPN
● A Virtual Private Network (VPN) establishes secure connections between multiple
networks, i.e., an on-premises network, client space, and the AWS Cloud, so that they
act as one private network.
● VPN provides a highly available, elastic, and managed solution to protect your network
traffic.
AWS Site-to-Site VPN
o AWS Site-to-Site VPN creates encrypted tunnels between your network and
your Amazon Virtual Private Clouds or AWS Transit Gateways.
AWS Client VPN
o AWS Client VPN connects your users to AWS or on-premises resources using a
VPN software client.
Use Cases:
● Host a simple public-facing website.
● Host multi-tier web applications.
● Used for disaster recovery as well.
Pricing:
● No additional charges for creating a custom VPC.
● NAT Gateway does not come under the free tier; you will be charged on a per-hour basis.
● NAT Gateway data processing charge and data transfer charges will be separate.
● You will get charged per hour basis for traffic mirroring.



AWS Certificate Manager (ACM)
What is AWS Certificate Manager?
AWS Certificate Manager is a service that allows a user to provide, manage, renew and deploy
public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) X.509 certificates.

The certificates can be integrated with AWS services either by issuing them directly with ACM or
importing third-party certificates into the ACM management system.

SSL Server Certificates:


● HTTPS transactions require server certificates X.509 that bind the public key in the certificate
to provide authenticity.
● The certificates are signed by a certificate authority (CA) and contain the server’s name, the
validity period, the public key, the signature algorithm, and more.

The different types of SSL certificates are:


● Extended Validation Certificates (EV SSL) - most expensive SSL certificate type
● Organization Validated Certificates (OV SSL) - validate a business’ creditably
● Domain Validated Certificates (DV SSL) - provide minimal encryption
● Wildcard SSL Certificate - secures base domain and subdomains
● Multi-Domain SSL Certificate (MDC) - secures up to hundreds of domains and subdomains
● Unified Communications Certificate (UCC) - single certificate secures multiple domain names.

Ways to deploy managed X.509 certificates:


1. AWS Certificate Manager (ACM) - useful for large customers who need a secure web
presence.
● ACM certificates are deployed using Amazon API Gateway, Elastic Load Balancing, Amazon
CloudFront.
2. ACM Private CA - useful for large customers building a public key infrastructure (PKI) inside
the AWS cloud and intended for private use within an organization.
● It helps create a certificate authority (CA) hierarchy and issue certificates to authenticate
users, computers, applications, services, servers, and other devices.
● Private certificates by Private CA for applications provide variable certificate lifetimes or
resource names.

ACM certificates are supported by the following services:


● Elastic Load Balancing
● Amazon CloudFront
● AWS Elastic Beanstalk
● Amazon API Gateway



● AWS Nitro Enclaves (an Amazon EC2 feature)
● AWS CloudFormation

Benefits:
● It automates the creation and renewal of private certificates for on-premises and AWS
resources.
● It provides an easy process to create certificates: request one directly from ACM, or upload
and install a third-party certificate once received (a request sketch follows this list).
● SSL/TLS provides data-in-transit security, and SSL/TLS certificates authorize the identity of
sites and connections between browsers and applications.
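
A minimal boto3 sketch of requesting a public certificate with DNS validation; the domain
names are placeholders:

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a public certificate covering the base domain and all subdomains.
resp = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["*.example.com"],
    ValidationMethod="DNS",  # add the returned CNAME record to prove ownership
)
print(resp["CertificateArn"])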

Price details:
● The certificates created by AWS Certificate Manager for using ACM-integrated services are
free.
● With AWS Certificate Manager Private Certificate Authority, monthly charges are applied for
the operation of the private CA and the private certificates issued.



Amazon Cognito
What is Amazon Cognito?
Amazon Cognito is a service used for authentication, authorization, and user management for
web or mobile applications. Amazon Cognito enables users to sign in through social identity
providers such as Google, Facebook, and Amazon, and through enterprise identity providers
such as Microsoft Active Directory via SAML.

Amazon Cognito authorizes a unique identifier for each user and acts as an OpenID token
provider trusted by AWS Security Token Service (STS) to access temporary, limited-permission
AWS credentials.

The two main components of Amazon Cognito are


User pools are user repositories (where user profile details are kept) that provide sign-up and
sign-in options for your app users. User pools provide:
● sign-up and sign-in services through a built-in customizable web UI.
● user directory and user profiles.
● security features such as multi-factor authentication (MFA), checks for compromised
credentials, account takeover protection, and phone and email verification.
● helps in customized workflows and user migration through AWS Lambda triggers.

Identity pools provide temporary AWS credentials to the users so that they could access other
AWS resources without re-entering their credentials. Identity pools support the following identity
providers
● Amazon Cognito user pools.
● Third-party sign-in facility.
● OpenID Connect (OIDC) providers.
● SAML identity providers.
● Developer authenticated identities.
Amazon Cognito allows user pools and identity pools to be used separately or together.

Amazon Cognito Federated Identities


● It is a service that provides limited temporary security credentials to mobile devices and other
untrusted environments.
● It helps to create a unique identity for the user over the lifetime of an application.



Features:
● Advanced security features of Amazon Cognito provide risk-based authentication and
protection from the use of compromised credentials.
● To add user sign-up and sign-in pages to your apps, Android, iOS, and JavaScript SDKs for
Amazon Cognito can be used.
● Cognito User Pools provide a user directory that scales to millions of users.
● Amazon Cognito uses popular identity management standards like OAuth 2.0, OpenID
Connect, and SAML 2.0.
● Users' identities can be verified using SMS or a Time-based One-Time Password (TOTP)
generator, like Google Authenticator. A user-pool sign-up sketch follows.
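
A minimal boto3 sketch of the user pool sign-up and confirmation flow; the app client ID,
username, password, and confirmation code are placeholders:

import boto3

# cognito-idp is the API for user pools.
idp = boto3.client("cognito-idp", region_name="us-east-1")

# Register a new user in the user pool.
idp.sign_up(
    ClientId="1example23456789",  # placeholder app client ID
    Username="jane",
    Password="CorrectHorse1!",
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)

# The user then confirms with the emailed verification code.
idp.confirm_sign_up(
    ClientId="1example23456789",
    Username="jane",
    ConfirmationCode="123456",
)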

Pricing Details: (pay only for what you use)


● Amazon Cognito is mainly charged for identity management and data synchronization.
● There are volume-based pricing tiers above the free tier for users who sign in directly with their
credentials from a User Pool or with social identity providers such as Apple, Google, Facebook,
and Amazon.



Amazon Detective

What is Amazon Detective?

Amazon Detective is a security service that automatically collects log data from your AWS
resources and organizes it into a Security Behavior Graph to help analyze, investigate, and
identify the root cause of security findings. Prerequisites:
● For Amazon Detective to be enabled, GuardDuty should be enabled for your account for at
least 48 hours.
● For Amazon Detective to be enabled, the volume of data flowing into Amazon Detective’s
Security Behavior Graph for your account should be less than the maximum allowed by Detective.
● Amazon Detective is a Regional service and needs to be enabled for each Region.
Features:
● It is recommended to use an Administrator Account for Amazon Detective, GuardDuty &
Security Hub for the following integration points to work seamlessly
○ Details of GuardDuty findings can be pivoted from the finding details to Amazon
Detective’s finding profile.
○ While investigating a GuardDuty finding in Amazon Detective, an option to archive the
finding can be chosen.
● In order to reduce the amount of time it takes for Detective to receive updates of GuardDuty
findings, it is recommended to update the Amazon CloudWatch notification frequency to 15
minutes in GuardDuty rather than its default frequency of 6 hours.

Use cases:
● Triage security findings/alerts - Explore whether GuardDuty findings need to be examined
further. Amazon Detective helps users to see whether a finding is a concern.
● Incident investigation - Since Amazon Detective allows for viewing analysis & summaries
going back up to a year, it can help answer questions like how long has the security issue been
there, and the resources affected because of that.
● Threat Hunting - Examine indicators like IP addresses and users to see what interactions they
have had with the environment. Detective’s Security Behavior Graph will help here.



AWS Directory Service
What is AWS Directory Service?
AWS Directory Service, also known as AWS Managed Microsoft Active Directory (AD), enables
multiple ways to use Microsoft Active Directory (AD) with other AWS services.
● Trust relationships can be set up from on-premises Active Directories into the AWS cloud to
extend authentication.
● It runs on a Windows Server, can perform schema extensions, and works with SharePoint,
Microsoft SQL Server, and .Net apps.
● The directory remains available for use during the patching (updating) process for AWS
Managed Microsoft AD.
● Using AWS Managed Microsoft AD, it becomes easy to migrate AD-dependent applications
and Windows workloads to AWS.
● A trust relationship can be created between AWS Managed Microsoft AD and existing
on-premises Microsoft Active using single sign-on (SSO).

AWS Directory Service provides the following directory types to choose from:
● Simple AD
● Amazon Cognito
● AD Connector

Simple AD:
● It is an inexpensive Active Directory-compatible service driven by SAMBA 4.
● It is an isolated or self-supporting AD directory type.
● It can be used when there is a need for less than 5000 users.



● It cannot be joined with on-premise AD.
● It is not compatible with RDS SQL Server.
● It provides some features like
○ Applying Kerberos-based SSO,
○ Assigning Group policies,
○ Managing user accounts and group memberships,
○ Helping in joining a Linux domain or Windows-based EC2 instances.
● It does not support the following functionalities.
○ Multi-factor authentication (MFA),
○ Trust relationships,
○ DNS dynamic update,
○ Schema extensions,
○ Communication over LDAPS,
○ PowerShell AD cmdlets.
Amazon Cognito:
● It is a user directory type that provides sign-up and sign-in for the application using Amazon
Cognito User Pools.
● It can create customized fields and store that data in the user directory.
● It helps to federate users from a SAML IdP with Amazon Cognito user pools and provide
standard authentication tokens after they authenticate with a SAML IdP (identities from external
identity providers).
AD Connector:
● It is like a gateway used for redirecting directory requests to the on-premises Active Directory.
● For this, there must be an existing AD, and the VPC must be connected to the on-premises network via VPN or Direct Connect.
● It is compatible with Amazon WorkSpaces, Amazon WorkDocs, Amazon QuickSight, Amazon
Chime, Amazon Connect, Amazon WorkMail, and Amazon EC2.
● It is also not compatible with RDS SQL Server.
● It supports multi-factor authentication (MFA) via existing RADIUS-based MFA infrastructure.

Use cases:
● It provides a Sign In option to AWS Cloud Services with AD Credentials.
● It provides Directory Services to AD-Aware Workloads.
● It enables a single-sign-on (SSO) feature to Office 365 and other Cloud applications.
● It helps to extend On-Premises AD to the AWS Cloud by using AD trusts.

Pricing:
● Prices vary by region for the directory service
● Hourly charges are applied for each additional account to which a directory is shared.
● Charges are applied per GB for the data transferred “out” to other AWS Regions where the
directory is deployed.

– Back to Index – 135


Amazon GuardDuty
What is Amazon GuardDuty?
Amazon GuardDuty is a managed threat detection service offered by Amazon Web Services (AWS)
that continuously monitors and analyzes activity within AWS accounts to identify potential
security threats and vulnerabilities. It uses machine learning algorithms and threat intelligence
feeds to detect suspicious behavior such as unauthorized access, compromised instances, and
malicious activity. GuardDuty examines AWS CloudTrail logs, VPC Flow Logs, and DNS logs to
identify anomalies and security issues. When it detects a threat, GuardDuty generates detailed
findings and alerts that provide actionable insights for remediation. By automating threat
detection and response, GuardDuty helps AWS users enhance the security posture of their cloud
environments and protect against various cyber threats.

Features:
● Ensure that GuardDuty has complete visibility over logs for complete detection coverage - e.g., enable VPC Flow Logs in all Regions and on the network interfaces you plan to monitor for threat detection.
● GuardDuty is Region-specific and it is recommended to enable GuardDuty for all Regions for
complete threat visibility.
● It is recommended to analyze GuardDuty monitoring activities with CloudTrail to ensure that
users are not tampering with GuardDuty itself.
● It is recommended to integrate GuardDuty with EventBridge & Lambda for automating risk
mitigation.
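
As a rough illustration of the EventBridge + Lambda integration above, the following boto3 sketch routes GuardDuty findings to a remediation function (the rule name, function ARN, and account ID are hypothetical):

import json
import boto3

events = boto3.client("events")

# Match every GuardDuty finding published to the default event bus.
events.put_rule(
    Name="guardduty-findings-to-lambda",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Route matched events to a (hypothetical) remediation Lambda function.
# The function also needs a resource-based permission allowing
# events.amazonaws.com to invoke it (lambda add-permission), omitted here.
events.put_targets(
    Rule="guardduty-findings-to-lambda",
    Targets=[{"Id": "remediate",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:Remediate"}],
)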

Use cases:
● Security analysts can carry out investigations using the security event findings from GuardDuty. It provides context, metadata, and impacted-resource details, from which the root cause can be identified using the GuardDuty console’s integration with Amazon Detective.
● GuardDuty can be used to identify files containing malware - EBS volumes can be scanned for malware that causes suspicious behavior on instances and container workloads running on EC2.
● When GuardDuty is enabled, the associated log sources it accesses (such as AWS CloudTrail events, VPC Flow Logs, and DNS logs) need not be enabled separately; GuardDuty consumes them directly by default.
● You cannot add your own log sources to GuardDuty beyond the ones it supports natively.

– Back to Index – 136


AWS IAM
What is Identity Access and Management?
● IAM stands for Identity and Access Management.
● AWS IAM is a service that helps you control access to AWS resources securely.
● You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use AWS resources.
● You can manage and use resources in your AWS account without having to share your
password or access key.
● It enables you to manage access to AWS services and resources securely.
● We can attach Policies to AWS users, groups, and roles.
Principal:
An entity that makes a call for an action or operation on an AWS resource. Users, groups, and roles are all AWS principals. The AWS account root user is the first principal.

IAM User & Root User


● Root User - When you first create an AWS account, you begin with an email (username) and password with complete access to all AWS services and resources in the account. This is the AWS account root user.
● IAM User - A user that you create in AWS.
o It represents the person or service who interacts with AWS.
o IAM users’ primary purpose is to give people the ability to sign in to AWS individually
without sharing the password with others.
o Access permissions depend on the policies assigned to the IAM user.
IAM Group
● A group is a collection of IAM users.
● You can assign specific permission to a group and add the users to that group.
● For example, you could have a group called DB Admins and give it the permissions that database administrators typically need.

IAM Role
● An IAM role is like a user with policies attached to it that decide what an identity can or cannot do.
● It does not have any credentials/password attached to it.
● A role can be assigned to a federated user who signs in from an external identity provider.
● IAM users can temporarily assume a role and get different permissions for the task.

IAM Policies
● It decides what level of access an Identity or AWS Resource will possess.

– Back to Index – 137


● A Policy is an object associated with identity and defines their level of access to a certain
resource.
● These policies are evaluated when an IAM principal (user or role) makes a request.
● Policies are JSON based documents.
● Permissions inside policies decide if the request is allowed or denied.
● Resource-based policies:
○ These are JSON-based policy documents attached to a resource, such as an Amazon S3 bucket.
○ They grant permission to perform an action on that resource and define under what conditions it applies.
○ Resource-based policies are inline policies only; there are no managed resource-based policies.
○ IAM itself supports only one type of resource-based policy, called a trust policy, which is attached to a role.
● Identity-based policies: These policies control what actions an identity can perform, on which resources, and under which conditions.
○ Managed policies: Standalone policies that can be attached to multiple users, groups, and roles in the AWS account (see the sketch after this list).
▪ AWS managed policies: These policies are created and managed by AWS.
▪ Customer managed policies: These policies are created and managed by you. They provide more precise control than AWS managed policies.
○ Inline policies: Policies attached directly to an individual user, group, or role, maintaining a one-to-one relationship between the policy and the identity.
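
To make the policy types above concrete, here is a minimal boto3 sketch that creates a customer managed identity-based policy and attaches it to a user (the policy name, user name, and bucket are hypothetical examples):

import json
import boto3

iam = boto3.client("iam")

# A JSON policy document: allow read-only access to one (hypothetical) bucket.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
    }],
}

# Create a customer managed policy, then attach it to an IAM user.
resp = iam.create_policy(PolicyName="ReadExampleBucket",
                         PolicyDocument=json.dumps(policy_doc))
iam.attach_user_policy(UserName="dev-user", PolicyArn=resp["Policy"]["Arn"])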

IAM Security Best Practices:


● Grant least possible access rights.
● Enable multi-factor authentication (MFA).
● Monitor activity in your AWS account using CloudTrail.
● Use policy conditions for extra security.
● Create a strong password policy for your users.
● Remove unnecessary credentials.

Pricing:
● Amazon provides IAM Service at no additional charge.
● You will be charged for the services used by your account holders.

– Back to Index – 138


Amazon Inspector
What is Amazon Inspector?
Amazon Inspector is a vulnerability management service which continuously scans AWS
resources for software vulnerabilities and network accessibility. When activated, Amazon
Inspector discovers known vulnerabilities in EC2 instances, container images/ECR, Lambda
functions and provides a consolidated view of vulnerabilities across compute environments.

Features:
● Automation of vulnerability management - Upon activation, it automatically scans and
discovers vulnerabilities in AWS resources like EC2, Lambda functions and container workloads.
These vulnerabilities could compromise workloads or expose resources to malicious use.
● Amazon Inspector provides multi-account support with AWS Organizations. By assigning an
Inspector Delegated Administrator(DA) account for your Organization, it can seamlessly start
and configure all member accounts and consolidate all findings.
● Amazon Inspector integrates with the AWS Systems Manager Agent to collect software inventory and configurations from EC2 instances, which are then used to assess workloads for vulnerabilities.
● Findings from Amazon Inspector can be suppressed based on defined criteria. Findings that
are deemed by an Organization as acceptable can be suppressed by creating suppression rules.
● A highly contextualized risk score is generated by Amazon Inspector for each finding.
● When a vulnerability has been patched or remediated, Amazon Inspector provides automatic closure of those findings.
● Amazon Inspector provides detailed monitoring of organization-wide environment coverage. It
helps to avoid gaps in coverage.
● Amazon Inspector provides integration with AWS Security Hub and EventBridge for its
findings. They can be used to automate workflows like Ticketing.
● Amazon Inspector scans Lambda functions for security vulnerabilities such as injection flaws and missing encryption, based on AWS best practices. Using generative AI and automated reasoning, it provides in-context code remediations for multiple classes of vulnerabilities, reducing the effort required to fix them.
● Amazon Inspector integrates with CI/CD tools like Jenkins for container image assessments
pushing proactive security measures early in the software development cycle.

Use cases:
● Use Common Vulnerabilities & Exposures (CVE) and network accessibility for creating
contextual risk scores to Prioritize Patch remediation.
● Support compliance requirements like PCI DSS, NIST CSF and other regulations by utilizing
Amazon Inspector scans.

– Back to Index – 139


AWS Key Management Service
What is AWS Key Management Service?
AWS Key Management Service (AWS KMS) is a secure service to create and control encryption keys. It is integrated with other AWS services, such as Amazon EBS and Amazon S3, to provide data-at-rest security with encryption keys.
KMS keys are regional, which means you can’t send keys outside the Region in which they are created.
Customer master keys (CMKs):
The CMK contains metadata, such as key ID, creation date, description, key state, and key
material to encrypt and decrypt data. AWS KMS supports symmetric and asymmetric CMKs:
● Symmetric CMK constitutes a 256-bit key that is used for encryption and decryption.
● An asymmetric CMK resembles an RSA key pair that is used for encryption and decryption or
signing and verification (but not both), or an elliptic curve (ECC) key pair that is used for signing
and verification.
● Both symmetric CMKs and the private keys of asymmetric CMKs never leave AWS KMS
unencrypted.
Customer managed CMKs:
● Customer-managed CMKs are CMKs that are created, owned, and managed by the user, who has full control over them.
● Customer-managed CMKs are visible on the Customer-managed keys page of the AWS KMS
Management Console.
● Customer-managed CMKs can be used in cryptographic operations.
AWS managed CMKs:
● AWS managed CMKs are CMKs that are created, managed, and used on the user’s behalf by
an AWS service that is integrated with AWS KMS.
● AWS managed CMKs are visible on the AWS managed keys page of the AWS KMS
Management Console.
● They cannot be used directly in your own cryptographic operations; the owning service uses them on your behalf.

Envelope encryption is the method of encrypting plain text data with a data key and
then encrypting the data key under another key. Envelope encryption offers several benefits:
● Protecting data keys.
● Encrypting the same data under multiple master keys.
● Combining the strengths of multiple algorithms.
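
A minimal envelope-encryption sketch with boto3 and the third-party cryptography package (the key alias is hypothetical, and local AES-GCM is one possible choice for the data-key step):

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

kms = boto3.client("kms")

# 1. Ask KMS for a data key under a CMK (returned in plaintext and encrypted form).
key = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")

# 2. Encrypt the payload locally with the plaintext data key ...
nonce = os.urandom(12)
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, b"secret payload", None)

# 3. ... then discard the plaintext key; store only CiphertextBlob + ciphertext.
#    To decrypt later, ask KMS to unwrap the data key first.
data_key = kms.decrypt(CiphertextBlob=key["CiphertextBlob"])["Plaintext"]
payload = AESGCM(data_key).decrypt(nonce, ciphertext, None)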

Features:
● Master keys generated in AWS KMS can be automatically rotated once per year, without the need to re-encrypt previously encrypted data.

– Back to Index – 140


● Using AWS CloudTrail, each request to AWS KMS is recorded in a log file that is delivered to
the specified Amazon S3 bucket. Log information includes details of the user, time, date, API
action, and the key used.
● This service automatically scales as the encryption grows.
● For the high availability of data and keys, KMS stores multiple copies of an encrypted version
of keys.

Benefits:
Key Management Service (KMS) with Server-side Encryption in S3.
● Manage encryption for AWS services.
Price details:
● Provides a free tier of 20,000 requests/month across all regions where the service is available.
● Each customer master key (CMK) that you create in AWS Key Management Service (KMS)
costs $1 per month until deleted.
● Creation and storage of AWS-managed CMKs are not charged as they are created on the
user’s behalf by AWS.
● Customer-managed CMKs are scheduled for deletion but it will incur charges if deletion is
canceled during the waiting period.

– Back to Index – 141


AWS Resource Access Manager
What is AWS Resource Access Manager?

AWS Resource Access Manager (RAM) is a service that permits users to share their resources
across AWS accounts or within their AWS Organization.

Resources that can be integrated with AWS RAM are:


● AWS App Mesh
● Amazon Aurora
● AWS Certificate Manager Private Certificate Authority
● AWS CodeBuild
● EC2 Image Builder
● AWS Glue
● AWS License Manager
● AWS Network Firewall
● AWS Outposts
● AWS Resource Groups

– Back to Index – 142


Benefits:
● The resource sharing feature of AWS RAM reduces customers’ need to create duplicate
resources in each of their accounts.
● It controls the consumption of shared resources using existing policies and permissions.
● It can be integrated with Amazon CloudWatch and AWS CloudTrail to provide detailed visibility
into shared resources and accounts.
● Access control policies in AWS Identity & Access Management (IAM) and Service Control
Policies in AWS Organizations provide security and governance controls to AWS Resource
Access Manager (RAM).
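
As a sketch of how sharing works in practice, the boto3 call below shares a subnet (a common RAM use) with another account; all IDs and ARNs are hypothetical:

import boto3

ram = boto3.client("ram")

share = ram.create_resource_share(
    name="shared-subnets",
    resourceArns=["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234"],
    principals=["222222222222"],      # an account ID, or an organization/OU ARN
    allowExternalPrincipals=False,    # restrict sharing to your AWS Organization
)
print(share["resourceShare"]["resourceShareArn"])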

Price details:
● The charges only differ based on the resource type. No charges are applied for creating
resource shares and sharing your resources across accounts.

– Back to Index – 143


AWS Secrets Manager
What is AWS Secrets Manager?

AWS Secrets Manager is a service that replaces secret credentials in the code like passwords,
with an API call to retrieve the secret programmatically. The service provides a feature to rotate,
manage, and retrieve database passwords, OAuth tokens, API keys, and other secret credentials.
It ensures in-transit encryption of the secret between AWS and the system to retrieve the secret.

Secrets Manager can easily rotate credentials for AWS databases without any additional programming. Rotating secrets for other databases or services, though, requires a Lambda function that instructs Secrets Manager how to interact with the database or service.
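
A minimal sketch of the programmatic retrieval described above, using boto3 (the secret name and its JSON layout are hypothetical):

import json
import boto3

secrets = boto3.client("secretsmanager")

# Replace the hardcoded password in application code with an API call.
resp = secrets.get_secret_value(SecretId="prod/app/db-credentials")
creds = json.loads(resp["SecretString"])  # e.g. {"username": "...", "password": "..."}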

Accessing Secrets Manager:


● AWS Management Console
○ Note: binary secret data (SecretBinary) must be stored using the CLI or SDKs; the console works with text (SecretString).
● AWS Command Line Tools
○ AWS Command Line Interface
○ AWS Tools for Windows PowerShell
● AWS SDKs
● Secrets Manager HTTPS Query API

Secret rotation is available for the following Databases:


● MySQL on Amazon RDS
● PostgreSQL on Amazon RDS
● Oracle on Amazon RDS
● MariaDB on Amazon RDS
● Amazon DocumentDB
● Amazon Redshift
● Microsoft SQL Server on Amazon RDS
● Amazon Aurora on Amazon RDS

Features:
● It provides security and compliance facilities by rotating secrets safely without the need for
code deployment.
● With Secrets Manager, IAM policies and resource-based policies can assign specific
permissions for developers to retrieve secrets and passwords used in the development
environment or the production environment.

– Back to Index – 144


● Secrets can be secured with encryption keys managed by AWS Key Management Service
(KMS).
● It integrates with AWS CloudTrail and AWS CloudWatch to log and monitor services for
centralized auditing.

Use cases:
● Store sensitive information as part of the encrypted secret value, either in the SecretString or
SecretBinary field.
● Use a Secrets Manager open-source client component to cache secrets and update them only
when there is a need for rotation.
● When an API request quota exceeds, the Secrets Manager throttles the request and returns a
‘ThrottlingException’ error. To resolve this, retry the requests.
● It integrates with AWS Config and facilitates tracking of changes in Secrets Manager.

Price details:
● There are no upfront costs or long-term contracts.
● Charges are based on the total number of secrets stored and API calls made.
● AWS charges at the current AWS KMS rate if the customer master keys(CMK) are created
using AWS KMS.

– Back to Index – 145


AWS Security Hub
What is AWS Security Hub?
AWS Security Hub is a service that provides an extensive view of the security aspects of AWS
and helps to protect the environment against security industry standards and best practices.

It provides an option to aggregate, organize, and prioritize the security alerts, or findings from
multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS
IAM Access Analyzer, AWS Firewall Manager, and also from AWS Partner solutions.

It helps the Payment Card Industry Data Security Standard (PCI DSS) and the Center for Internet
Security (CIS) AWS Foundations Benchmark with a set of security configuration best practices
for AWS. If any problem occurs, AWS Security Hub recommends remediation steps.

Enabling (or disabling) AWS Security Hub can be quickly done through:
● AWS Management Console
● AWS CLI
● Infrastructure-as-Code tools such as Terraform

If AWS architecture is divided across multiple regions, it needs to enable Security Hub within
each region.
The most powerful aspect of using Security Hub is the continuous automated compliance
checks using CIS AWS Foundations Benchmark.

The CIS AWS Foundations Benchmark consists of 43 best practice checks (such as “Ensure IAM
password policy requires at least one uppercase letter” and “Ensure IAM password policy
requires at least one number“).

Benefits:
● It collects data using a standard findings format and reduces the need for time-consuming
data conversion efforts.
● Integrated dashboards are provided to show the current security and compliance status.
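
A small boto3 sketch of pulling prioritized findings from Security Hub (the filter values are examples):

import boto3

securityhub = boto3.client("securityhub")

# Fetch up to 10 active, critical-severity findings in the current Region.
resp = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)
for finding in resp["Findings"]:
    print(finding["Severity"]["Label"], finding["Title"])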

Price details:
● Charges applied for usage of other services that Security Hub interacts with, such as AWS
Config items, but not for AWS Config rules that are enabled by Security Hub security standards.
● Using the Master account’s Security Hub, the monthly cost includes the costs associated with
all of the member accounts.
● Using a Member account’s Security Hub, the monthly cost is only for the member account.
● Charges are applied only for the current Region, not for all Regions in which Security Hub is
enabled.

– Back to Index – 146


AWS Security Token Service (AWS STS)

What is AWS STS?


● AWS Security Token Service (AWS STS) is a web service for requesting temporary, limited-privilege credentials for IAM users or federated users.
● AWS STS provides additional security but may incur additional operational overhead, so if your requirement is simple and straightforward within a single AWS account, IAM users and roles can suffice. Also, you can avoid AWS STS if you are looking for long-term access to AWS resources.

● Use AWS STS when you need to enhance security, delegate permissions, or provide temporary,
controlled access to AWS resources for users, applications, or services in a flexible and granular
manner. It helps you follow security best practices and reduce the reliance on long-lived
credentials, improving overall security posture in your AWS environment.

Features:
● Imagine you have two AWS accounts: Account A and Account B. You want to allow an IAM
user in Account A to access an S3 bucket in Account B without sharing long-term credentials.
You can use AWS STS to accomplish this.
● You have a web application running on an Amazon EC2 instance that needs to access an
Amazon S3 bucket securely. Instead of storing long-term credentials on the EC2 instance, you
can use AWS STS to grant temporary access to the S3 bucket.
● AWS STS provides several API operations that allow you to manage temporary security credentials and perform various identity and access management tasks.
● Some of the key AWS STS API operations (a usage sketch follows below):
○ AssumeRole
○ AssumeRoleWithSAML
○ AssumeRoleWithWebIdentity
○ GetSessionToken
○ DecodeAuthorizationMessage
○ GetCallerIdentity
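
A minimal AssumeRole sketch with boto3, matching the cross-account scenario above (the role ARN is hypothetical; its trust policy must allow the caller):

import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/CrossAccountS3Reader",
    RoleSessionName="demo-session",
    DurationSeconds=900,  # temporary credentials, here valid for 15 minutes
)
creds = resp["Credentials"]

# Use the temporary credentials to call S3 in the target account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])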

Price details:
● AWS STS itself does not have any additional charges. However, if you use it with other AWS
services, you will be charged for the other services.
● For example, if you use STS to grant permissions to an application to write data to an Amazon
S3 bucket, you'll be charged for the S3 usage.

– Back to Index – 147


AWS WAF

What is AWS WAF?

AWS WAF stands for Amazon Web Services Web Application Firewall. It is a managed service
provided by AWS that helps protect web applications from common web exploits that could
affect application availability, compromise security, or consume excessive resources.
AWS WAF provides an additional layer of security for your web applications, helping to protect
them from common web vulnerabilities and attacks such as SQL injection, cross-site scripting
(XSS), and distributed denial-of-service (DDoS) attacks.

Features:
● Combine AWS WAF with other AWS services such as AWS Shield (for DDoS protection) and
Amazon CloudFront (for content delivery) to create a robust, multi-layered security strategy.
● If you're using AWS Managed Rule Sets, ensure that you keep them up to date. AWS regularly
updates these rule sets to protect against emerging threats.
● Enable logging for AWS WAF to capture detailed information about web requests and potential
threats. Use Amazon CloudWatch or a SIEM solution to monitor and analyze these logs.
● Implement rate-limiting rules to protect APIs from abuse and DDoS attacks. Set appropriate
rate limits based on expected traffic patterns.
● Tailor your web access control lists (web ACLs) to the specific needs of your application.
● Periodically review your AWS WAF rules to make adjustments based on changing application
requirements and emerging threats.

– Back to Index – 148


AWS Backup
What is AWS Backup?
AWS Backup is a secure service that automates and governs data backup (protection) in the
AWS cloud and on-premises.

Features:
● It offers a backup console, backup APIs, and the AWS Command Line Interface (AWS CLI) to
manage backups across the AWS resources like instances and databases.
● It offers backup functionalities based on policies, tags, and resources.
● It provides scheduled backup plans (policies) to automate backup of AWS resources across
AWS accounts and regions.
● It offers incremental backups to minimize storage costs. The first backup captures a full copy of the data; successive backups capture only the incremental changes.
● It provides backup retention plans to retain and expire backups automatically. Automated
backup retention also helps to minimize storage costs for backup.
● It provides a dashboard in the AWS Backup console to monitor backup and restore activities.
● It offers an enhanced solution by providing separate encryption keys for encrypting multiple
AWS resources.
● It provides lifecycle policies configured to transition backups from Amazon EFS to cold
storage automatically.
● It is tightly integrated with Amazon EC2 to schedule backup jobs and the storage (EBS) layer. It
also simplifies recovery by restoring whole EC2 instances from a single point.
● It supports cross-account backup and restores either manually or automatically within the
AWS organizations.
● It allows backups and restores to different regions, especially during any disaster, to reduce
downtime and maintain business continuity.
● It integrates with Amazon CloudWatch, AWS CloudTrail, and Amazon SNS to monitor, audit API
activities and notifications.
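
An on-demand backup job can also be started programmatically; a boto3 sketch (vault name, ARNs, and lifecycle values are hypothetical):

import boto3

backup = boto3.client("backup")

job = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:ec2:us-east-1:123456789012:volume/vol-0abc1234def567890",
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    # Retention: move to cold storage after 30 days, expire after a year.
    Lifecycle={"MoveToColdStorageAfterDays": 30, "DeleteAfterDays": 365},
)
print(job["BackupJobId"])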

Use cases:
● It can use AWS Storage Gateway volumes for hybrid storage backup. AWS Storage Gateway
volumes are secure and compatible with Amazon EBS, which helps restore volumes to
on-premises or the AWS environment.

Price details:
● AWS charges monthly based on the amount of backup storage used and the amount of
backup data restored.

– Back to Index – 149


AWS EBS - Elastic Block Store
What is AWS EBS?
Amazon Elastic Block Store (AWS EBS) is a persistent block-level storage (volume) service
designed to be used with Amazon EC2 instances. EBS is AZ specific & automatically replicated
within its AZ to protect from component failure, offering high availability and durability.

Types of EBS volumes:
● General Purpose SSD (gp2, gp3) - the default; balanced price/performance
● Provisioned IOPS SSD (io1, io2) - highest performance for I/O-intensive workloads
● Throughput Optimized HDD (st1) - low-cost HDD for frequently accessed, throughput-intensive workloads
● Cold HDD (sc1) - lowest-cost HDD for infrequently accessed data

Features:
● High Performance (Provides single-digit-millisecond latency for high-performance)
● Highly Scalable (Scale to petabytes)
● Offers high availability (guaranteed 99.999% by Amazon) & Durability
● Offers seamless encryption of data at rest through Amazon Key Management Service (KMS).
● Automate Backups through data lifecycle policies using EBS Snapshots to S3 Storage.

– Back to Index – 150


● EBS volumes can be detached from an EC2 instance and attached to another one quickly.
Key Points to Remember:
● Backup/Migration: To move a volume across AZs, you first need to take a snapshot.
● Provisioned capacity: capacity needs to be provisioned in advance (GBs & IOPS)
● You can increase the capacity of the drive over time.
● It can be detached from an EC2 instance and attached to another one quickly.
● It’s locked to Single Availability Zone (AZ)
● The default volume type is General Purpose SSD (gp2)
● EBS volumes can be mounted in parallel using RAID settings:
○ RAID 0 (increase performance)
○ RAID 1 (increase fault tolerance)
● It’s a network drive (i.e. not a physical drive).
● An unencrypted volume can be encrypted by copying its snapshot with encryption enabled.
● Snapshots of encrypted volumes are encrypted by default.
● When you share an encrypted snapshot, you must also share the customer-managed CMK used to encrypt the snapshot.
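
The snapshot-based migration mentioned above, sketched with boto3 (the volume ID and AZ are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Step 1: snapshot the source volume (snapshots are stored in S3).
snap = ec2.create_snapshot(VolumeId="vol-0abc1234def567890",
                           Description="pre-migration snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Step 2: restore the snapshot as a new volume in a different AZ.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                        AvailabilityZone="us-east-1b",
                        VolumeType="gp3")
print(vol["VolumeId"])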

Pricing:
● You will get billed for all the provisioned capacity & snapshots on S3 Storage + Sharing Cost
between AZs/Regions

EBS vs Instance Store


Instance Store (ephemeral storage):
● It is ideal for temporary block-level storage like buffers, caches, and temporary content.
● Data on an instance store volume persists only during the life of the associated instance (it is volatile storage - data is lost if the instance is stopped or crashes).
● Physically attached to the EC2 instance - hence, the lowest possible latency.
● Massive IOPS - high performance.
● The root volume of an instance store-backed instance can be a maximum of 10 GiB.
● An instance store volume cannot be attached to an instance once the instance is up and running.
● An instance store volume can be used as a root volume.
● You cannot create a snapshot of an instance store volume.

EBS :
● Persistent Storage.
● Reliable & Durable Storage.
● EBS volume can be detached from one instance and attached to another instance.
● EBS boots faster than instance stores.

– Back to Index – 151


AWS EFS - Elastic File Storage
What is AWS EFS?
Amazon Elastic File System (Amazon EFS) provides a scalable, fully managed elastic distributed
file system based on NFS. It is persistent file storage & can be easily scaled up to petabytes. It is
designed to share parallelly with thousands of EC2 instances to provide better throughput and
IOPS.
It is a regional service automatically replicated across multiple AZ’s to provide High Availability
and durability.

Types of EFS Storage Classes:
● EFS Standard - for frequently accessed files
● EFS Standard-Infrequent Access (Standard-IA) - lower-cost storage for less frequently accessed files
● EFS One Zone / One Zone-IA - lower-cost classes that store data in a single Availability Zone

EFS Access Modes :


1) Performance Modes:
● General Purpose: low latency at the cost of lower throughput.
● Max I/O: high throughput at the cost of higher latency.
2) Throughput Modes :
● Bursting (default): throughput grows as the file system grows
● Provisioned: specify throughput in advance. (fixed capacity)
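
Both mode choices are pinned at creation time; a boto3 sketch (the creation token and throughput figure are hypothetical):

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="app-shared-fs",     # idempotency token
    PerformanceMode="generalPurpose",  # or "maxIO"; cannot be changed later
    ThroughputMode="provisioned",      # or "bursting" (the default)
    ProvisionedThroughputInMibps=64,
    Encrypted=True,
)
print(fs["FileSystemId"])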

Features:
● Fully Managed and Scalable, Durable, Distributed File System (NFSv4)
● Highly Available & Consistent low latencies. (EFS is based on SSD volumes)
● POSIX Compliant (NFS) Distributed File System.
● EC2 instances can access EFS across AZs, regions, VPCs & on-premises through AWS Direct
Connect or AWS VPN.
● Provides EFS Lifecycle Management for the better price-performance ratio
● It can be integrated with AWS DataSync for moving data from on-premises storage to AWS EFS
● Supports automatic/scheduled backups of EFS (AWS Backup)
● It can be integrated with CloudWatch & CloudTrail for monitoring and tracking.
● EFS supports encryption both in transit (TLS) and at rest (AWS Key Management Service (KMS))
● Different Access Modes: Performance and Throughput for the better cost-performance
tradeoff.
● EFS is more expensive than EBS.

– Back to Index – 152


● Once your file system is created, you cannot change the performance mode
● Not suitable for boot volumes & highly transactional data (SQL/NoSQL databases)
● Read-after-write consistency for data access.
● Integrated with IAM for access rights & security.

Use Cases: (Sharing Files Across instances/containers)


● Mission critical business applications
● Microservice based Applications
● Container storage
● Web serving and content management
● Media and entertainment file storage
● Database Backups
● Analytics and Machine Learning

Best Practices:
● Monitor using CloudWatch and track API calls using CloudTrail
● Leverage IAM services for access rights and security
● Test before fully migrating mission critical workload for performance and throughput.
● Separate out your latency-sensitive workloads. Storing these workloads on separate volumes
ensures dedicated I/O and burst capabilities.

Pricing:
● Pay for what you have used based on Access Mode/Storage Type + Backup Storage.

– Back to Index – 153


Amazon FSx for Windows File Server
What is Amazon FSx for Windows File Server?

● Amazon FSx for Windows File Server is an FSx solution that offers a scalable and shared file
storage system on the Microsoft Windows server.
● Using the Server Message Block (SMB) protocol, Amazon FSx file systems can be accessed from multiple Windows servers.
● It offers to choose from HDD and SSD storage, offers high throughput, and IOPS with
sub-millisecond latencies for Windows workloads.
● Using SMB protocol, Amazon FSx can connect file systems to Amazon EC2, Amazon ECS,
Amazon WorkSpaces, Amazon AppStream 2.0 instances, and on-premises servers using AWS
Direct Connect or AWS VPN.
● It provides high availability (Multi-AZ deployments) with an active and standby file server in
separate AZs.
● It automatically and synchronously replicates data in the standby Availability Zone (AZ) to
manage failover.
● Using AWS DataSync with Amazon FSx helps to migrate self-managed file systems to
Windows storage systems.
● It offers identity-based authentication using Microsoft Active Directory (AD).
● It automatically encrypts data at rest with the help of AWS Key Management Service (AWS
KMS). It uses SMB Kerberos session keys to encrypt data in transit.

Use cases:
● Large organizations which require shared access to multiple data sets between multiple users
can use Amazon FSx for Windows File Server.
● Using Windows file storage, users can easily migrate self-managed applications to AWS using
AWS DataSync.
● It helps execute business-critical Microsoft SQL Server database workloads easily and
automatically handles SQL Server Failover and data replication.
● Using Amazon FSx for Windows File Server, users can easily process media workloads with
low latencies and high throughput.
● It enables users to execute high intensive analytics workloads, including business intelligence
and data analytics applications.

Price details:
● Charges are applied monthly based on the storage and throughput capacity used by the file system and its backups.
● The cost of storage and throughput depends on the deployment type, either single-AZ or multi-AZ.

– Back to Index – 154


Amazon FSx for Lustre
What is Amazon FSx for Lustre?
● Amazon FSx for Lustre is an FSx solution that offers scalable storage for the Lustre system
(parallel and high-performance file storage system).
● It supports fast processing workloads like custom electronic design automation (EDA) and
high-performance computing (HPC).
● It provides shared file storage with hundreds of gigabytes of throughput, sub-millisecond
latencies, and millions of IOPS.
● It offers a choice between SSD and HDD for storage.
● It integrates with Amazon S3 to process data concurrently using parallel data-transfer
techniques.
● It presents datasets in S3 as files (instead of objects) and automatically updates with the latest data to run the workload.
● It offers unreplicated (scratch) file systems for shorter-term data processing.
● It can be used with existing Linux-based applications without any changes.
● It offers network access control using POSIX permissions or Amazon VPC Security Groups.
● It easily provides data-at-rest and in-transit encryption.
● AWS Backup can also be used to backup Lustre file systems.
● It integrates with SageMaker to process machine learning workloads.

Use cases:
● The workloads which require shared file storage and multiple compute instances use Amazon
FSx for Lustre for high throughput and low latency.
● It is also applicable in media and big data workloads to process a large amount of data.

Price details:
● Charges are applied monthly in GB based on the storage capacity used for the file system.
● Backups are stored incrementally, which helps in storage cost savings.

– Back to Index – 155


Amazon S3
What is Amazon S3?
S3 stands for Simple Storage Service. Amazon S3 is object storage that allows us to store any
kind of data in the bucket. It provides availability in multiple AZs, durability, security, and
performance at a very low cost. Any type of customer can use it to store and protect any
amount of data for use cases, like static and dynamic websites, data analytics, and backup.

Basics of S3?
● It is object-based storage.
● Files are stored in Buckets.
● The bucket is a kind of folder.
● Files (objects) can be from 0 bytes to 5 TB in size.
● S3 bucket names must be unique globally.
● When you upload a file in S3, you will receive an HTTP 200 code if the upload was successful.
● S3 offers Strong consistency for PUTs of new objects, overwrites or delete of current object
and List operations.
● By Default, all the Objects in the bucket are private.

Properties of Amazon S3.


● Versioning: This allows you to keep multiple versions of Objects in the same bucket.
● Static Website Hosting: S3 can be used to host a Static Website, which does not require any
server-side Technology.
● Encryption: Encrypt objects at rest with Amazon S3 managed keys (SSE-S3) or AWS KMS managed keys (SSE-KMS).
● Objects Lock: Block Version deletion of the object for a defined period. Object lock can be
enabled during the bucket creation only.
● Transfer Acceleration: Transfer Acceleration takes advantage of Amazon CloudFront’s globally
distributed edge locations and enables the fast, easy, and secure transfer of files.

Permissions & Management.


● Access Control List: ACLs are used to grant read/write permissions to other AWS accounts.
● Bucket Policy: JSON-based access policies that grant advanced permissions on your S3 resources.
● CORS: Cross-Origin Resource Sharing allows cross-origin access to your S3 resources.

– Back to Index – 156


Charges:
You will be charged based on multiple factors:
● Storage
● Requests
● Storage Management Pricing (Life Cycle Policies)
● Transfer Acceleration
● Data Transfer Pricing

Miscellaneous Topic
● Access Point: Named network endpoints with dedicated access policies that simplify managing data access for shared datasets in S3.
● Life Cycle: By Configuring Lifecycle, you can make a transition of objects to different storage
classes.
● Replication: This feature will allow you to replicate data between buckets within the same or
different region.
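
As an illustration of the Life Cycle topic above, a boto3 sketch that transitions (hypothetical) log objects through cheaper storage classes and then expires them:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},  # delete objects after one year
        }],
    },
)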

Storage Class/Pricing model of S3


● S3 Standard
● S3 Standard-IA (Infrequent Access)
● S3 Intelligent Tiering (No need to mentioned Life Cycle Policy)
● S3 One Zone-IA (Kept in a Single Zone)
● S3 Glacier (For Archiving Purpose)
● S3 Glacier Deep Archive (For Archiving Purpose)

– Back to Index – 157


Amazon S3 Glacier
What is Amazon S3 Glacier?
Amazon S3 Glacier is a web service with vaults that offer long-term data archiving and data backup. It is the cheapest S3 storage class and offers 99.999999999% (11 9s) data durability. It helps to retain unlimited data like photos, videos, documents (as TAR or ZIP files), data lakes, analytics, IoT, machine learning, and compliance data. With the S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, objects are automatically stored across multiple Availability Zones in a specific Region.
S3 Glacier provides the following data retrieval options:
● Expedited retrievals -
○ It retrieves data in 1-5 minutes.
● Standard retrievals -
○ It retrieves data between 3-5 hours.
● Bulk retrievals -
○ It retrieves data between 5-12 hours.
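
Retrievals are asynchronous jobs; a boto3 sketch of initiating one (vault name and archive ID are hypothetical, and the Tier field selects one of the three options above):

import boto3

glacier = boto3.client("glacier")

job = glacier.initiate_job(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName="media-archive",
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",
        "Tier": "Standard",  # or "Expedited" / "Bulk"
    },
)
print(job["jobId"])  # poll describe-job, then get-job-output when ready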

Features:
● It integrates with AWS IAM to allow vaults to grant permissions to the users.
● It integrates with AWS CloudTrail to log and monitor API call activities for auditing.
● A vault is a place for storing archives with certain functionalities like to create, delete, lock, list,
retrieve, tag, and configure.
● Vaults can be set with access policies for additional security by the users.
● Amazon S3 Glacier jobs are operations (such as archive retrievals or select queries) that run to retrieve archived data.
● It uses Amazon SNS to notify when the jobs complete.
● It uses ‘S3 Glacier Select’ to query specific archive objects or bytes for analytics instead of
complete archives.
● S3 Glacier Select operates on uncompressed comma-separated values (CSV format) and
output results to Amazon S3.
● Amazon S3 Glacier Select uses SQL queries using SELECT, FROM, and WHERE.
● It offers only SSE-KMS and SSE-S3 encryption.
● Amazon S3 Glacier does not provide real-time data retrieval of the archives.

Use Cases:
● It helps to store and archive media data that can increase up to the petabyte level.
● Organizations that generate, analyze, and archive large data can make use of Amazon S3
Glacier and S3 Glacier Deep Archive storage classes.
● Amazon S3 Glacier replaces tape libraries for storage because it does not require high upfront
cost and maintenance.

– Back to Index – 159


Price details:
● Free Usage Tier - Users can retrieve with standard retrieval up to 10 GB of archive data per
month for free.
● Data transfer out from S3 Glacier in the same region is free.

– Back to Index – 160


AWS Storage Gateway
What is the AWS Storage Gateway?
AWS Storage Gateway is a hybrid cloud storage service that allows your on-premise storage & IT
infrastructure to seamlessly integrate with AWS Cloud Storage Services. It Can be AWS Provided
Hardware or Compatible Virtual Machine.

Purpose of Using AWS Storage Gateway(hybrid Cloud Storage) :


● To Fulfill Licensing Requirements.
● To Achieve Data-Compliance Requirements.
● To Reduce Storage & Management Cost.
● For Easy and Effective Application Storage-Lifecycle & Backup Automation.
● For Hybrid Cloud & Easy Cloud Migration.

Volume Gateway (iSCSI)


● To access virtual block-level storage stored on-premises.
● It can be asynchronously backed up and stored as snapshots on AWS S3 for high reliability & durability.
○ Stored Volume Gateway: All application data is stored on-premises and only the backup is stored on AWS S3.
○ Cached Volume Gateway: Only hot/cached data is stored on-premises and all other application data is stored on AWS S3.

– Back to Index – 161


File Gateway (NFSv4 / SMB)
● To Access Object-based Storage ( AWS S3 Service )
● Supports NFS Mount Point for accessing S3 Storage to the local system as Virtual Local File
System
● Leverage the benefits of AWS S3 Storage Service

Tape Gateway (VTL)


● It is virtual local tape storage.
● It uses the Virtual Tape Library(VTL) by iSCSI protocol.
● It is cost-effective archive storage (AWS S3) for cloud backup.

– Back to Index – 162


Features of AWS Storage Gateway
● Cost-Effective Storage Management
● To achieve Low Latency on-premise.
● Greater Control over Data still take advantage of the cloud (Hybrid Cloud)
● Compatible and Compliance
● To meets license requirement
● Supports both hardware and software gateway
● Easy on-premise to Cloud Migrations
● Standard Protocol for storage access like NFS/SMB/iSCSI

Use Cases:
● Cost-Effective Backups and Disaster Recovery Management
● Migration to/from Cloud
● Managed Cache: Integration of local (on-premises) storage with cloud storage (hybrid cloud)
● To achieve low latency by storing data on-premises and still leverage cloud benefits

Pricing :
● Charges are applied on what you use with the AWS Storage Gateway and based on the type
and amount of storage you use.

– Back to Index – 163


AWS Elastic Disaster Recovery
What is AWS Elastic Disaster Recovery?
AWS Elastic Disaster Recovery (AWS DRS) ensures fast and reliable recovery of both on-premises and cloud-based applications. It utilizes cost-effective storage and minimal compute resources for replication to a staging area subnet within your AWS account. Users can conduct non-disruptive tests, launch recovery instances within minutes, and decide whether to maintain applications on AWS or replicate data back to the primary site.

Features:
● Launch Management Settings: Control recovery instance launches for source servers, with options for default settings for new servers and bulk modifications for existing ones.
● AZ Modification: Modify the recovery Availability Zone for multiple source servers to streamline cross-AZ recovery.
● Post-launch Actions: Define automatic actions post-launch, including custom AWS SSM commands or pre-defined actions like CloudWatch agent installation.
● Network Components Replication: Replicate and recover network components (subnet settings, security groups, etc.) to ensure readiness and security.
● Automated Network Configuration: Automate VPC configuration replication for smoother recovery, enhanced security, and resource efficiency.

Use Cases:
● Recovery into Existing Instances: Recover into pre-defined existing instances,
preserving metadata and security parameters.
● Implement AWS Elastic Disaster Recovery to ensure fast and reliable recovery of
on-premises applications in the event of a disaster.
● AWS Elastic Disaster Recovery enables organizations to recover cloud-based
applications swiftly and efficiently.
● Organizations can leverage AWS Elastic Disaster Recovery to perform point-in-time
recovery of applications. By capturing and replicating data at regular intervals,
organizations can restore applications to a specific point in time, reducing data loss and
maintaining data integrity.
● With AWS Elastic Disaster Recovery, organizations can conduct non-disruptive tests to
validate their disaster recovery strategies.
● AWS Elastic Disaster Recovery offers failback capabilities, allowing organizations to
seamlessly return applications to their primary environment once the disaster has been
resolved.

– Back to Index – 164


AWS CodeCommit
What is AWS CodeCommit?
AWS CodeCommit is a managed source control service used to store and manage private
repositories in the AWS cloud, such as Git.

Features:
● It works with existing Git-based repositories, tools, and commands in addition to AWS CLI
commands and APIs.
● CodeCommit repositories support pull requests, version differencing, merge requests between
branches, and notifications through emails about any code changes.
● As compared to Amazon S3 versioning of individual files, AWS CodeCommit supports tracking batched changes across multiple files.
● It provides encryption at rest and in transit for the files in the repositories.
● It provides high availability, durability, and redundancy.
● It eliminates the need to back up and scale the source control servers.

Use Cases:
● AWS CodeCommit offers high availability, scalability, and durability for Git repositories.
● AWS CodeCommit provides built-in security features such as encryption, access
control, and integration with AWS Identity and Access Management (IAM).
● It enables teams to collaborate effectively on codebases regardless of their
geographical locations.
● It integrates seamlessly with other AWS services such as AWS CodePipeline and AWS
CodeBuild to automate the CI/CD process.

– Back to Index – 165


AWS CodeBuild
What is AWS CodeBuild?
AWS CodeBuild is a continuous integration service in the cloud used to compile source
code, run tests, and build packages for deployment.

Features:
● AWS Code Services family consists of AWS CodeBuild, AWS CodeCommit, AWS
CodeDeploy, and AWS CodePipeline that provide complete and automated continuous
integration and delivery (CI/CD).
● It provides prepackaged and customized build environments for many programming
languages and tools.
● It scales automatically to process multiple separate builds concurrently.
● It can be used as a build or test stage of a pipeline in AWS CodePipeline.
● It requires VPC ID, VPC subnet IDs, and VPC security group IDs to access resources in
a VPC to perform build or test.
● Charges are applied based on the amount of time taken by AWS CodeBuild to
complete the build.
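
Builds can be triggered programmatically as well as from a pipeline; a boto3 sketch (the project name and variable are hypothetical):

import boto3

codebuild = boto3.client("codebuild")

# The project defines the source, environment image, and buildspec.
build = codebuild.start_build(
    projectName="example-web-app",
    environmentVariablesOverride=[
        {"name": "STAGE", "value": "ci", "type": "PLAINTEXT"},
    ],
)
print(build["build"]["id"])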

Use Cases:
● It integrates with AWS services like AWS Lambda, Amazon S3, Amazon ECR, and AWS CodeArtifact, enabling developers to deploy applications to AWS cloud services easily.
● It optimizes build performance by automatically provisioning and scaling build
resources based on workload demands.
● It offers pre-configured build environments with popular programming languages,
runtime versions, and build tools pre-installed.

– Back to Index – 166


AWS CodeDeploy
What is AWS CodeDeploy?
AWS CodeDeploy is a service that helps to automate application deployments to a
variety of compute services such as Amazon EC2, AWS Fargate, AWS ECS, and
on-premises instances.
Features:
● It helps maximize application availability during deployments by rolling out changes incrementally and tracking application health, with automatic stop and rollback when errors are detected.
● It can fetch the content for deployment from Amazon S3 buckets, Bitbucket, or GitHub repositories.
● It can deploy different types of application content such as code, Lambda functions, configuration files, scripts, and even multimedia files.
● It can scale with the infrastructure to deploy on multiple instances across development, test, and production environments.
● It can integrate with existing continuous delivery workflows such as AWS CodePipeline, GitHub, and Jenkins.

Deployment types:
In-place deployment:
● All the instances in the deployment group are stopped, updated with new revision and
started again after the deployment is complete.
● Useful for EC2/On-premises compute platform.
Blue/green deployment:
● The instances in the deployment group of the original environment are replaced by a
new set of instances of the replacement environment.
● Using Elastic Load Balancer, traffic gets rerouted from the original environment to the
replacement environment and instances of the original environment get terminated
after the deployment is complete.
● Useful for EC2/On-Premises, AWS Lambda and Amazon ECS compute platform.
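
Either deployment type is kicked off the same way; a boto3 sketch that deploys a revision stored in S3 (application, group, bucket, and key are hypothetical):

import boto3

codedeploy = boto3.client("codedeploy")

resp = codedeploy.create_deployment(
    applicationName="web-app",
    deploymentGroupName="prod-fleet",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-artifacts",
            "key": "web-app/build-42.zip",
            "bundleType": "zip",
        },
    },
    # One of the built-in configs; controls how many instances update at once.
    deploymentConfigName="CodeDeployDefault.OneAtATime",
)
print(resp["deploymentId"])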

– Back to Index – 167


AWS CodePipeline
What is AWS CodePipeline?
AWS CodePipeline is a Continuous Integration(CI) and Continuous Delivery (CD) service.
It helps automate the build, test, and deployment phases of your software release
process. It can create a workflow that automates the steps required to release your
application, allowing you to deliver new features and updates more quickly and reliably.
Features:
● Pipeline: A pipeline in AWS CodePipeline is a series of stages and actions that define the steps your code must go through from source code to production deployment. Each stage represents a different part of your CI/CD process, as follows:
● Source Stage: This is the first stage of a pipeline, where you specify the source code
repository (e.g., AWS CodeCommit, GitHub, Amazon S3, etc.) that contains your
application code. When changes are detected in the source repository, CodePipeline
automatically triggers the pipeline.
● Build Stage: In this stage, you can use AWS CodeBuild or another build tool to compile
your source code, run tests, and generate deployable artifacts, such as executable files
or container images.
● Test Stage: You can integrate testing tools and frameworks in this stage to
automatically test your application, ensuring that it meets the required quality
standards. Common testing tools include AWS CodeBuild, AWS Device Farm, or
third-party services.
● Deployment Stage: This stage is responsible for deploying your application to various
environments, such as development, testing, staging, and production. AWS
CodePipeline supports deployment to different AWS services like AWS Elastic
Beanstalk, AWS Lambda, Amazon ECS, or custom deployment targets.
● Approval Actions: In some cases, you may want to introduce manual approval steps
before promoting changes to production. AWS CodePipeline allows you to include
approval actions, where designated individuals or teams can review and approve the
changes before they proceed to the next stage.
● Notifications: AWS CodePipeline can send notifications through Amazon SNS (Simple
Notification Service) or other notification mechanisms to alert stakeholders about
pipeline events and status changes.
● Integration with Other AWS Services: AWS CodePipeline seamlessly integrates with
various AWS services and tools, such as AWS CodeBuild, AWS CodeDeploy, AWS
CodeCommit, AWS Elastic Beanstalk, AWS Lambda, and more, making it easy to build a
comprehensive CI/CD pipeline in the AWS ecosystem.
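
Besides source-change triggers, a pipeline run can be started manually; a one-call boto3 sketch (the pipeline name is hypothetical):

import boto3

codepipeline = boto3.client("codepipeline")

# Kick off a run of an existing pipeline outside the normal source trigger.
resp = codepipeline.start_pipeline_execution(name="web-app-pipeline")
print(resp["pipelineExecutionId"])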

– Back to Index – 168


Use Cases:
● Web Application Deployment: You have a web application hosted on AWS (e.g., AWS
Elastic Beanstalk, Amazon S3 static website, or an EC2 instance), and you want to
automate the deployment process.
● Serverless Application Deployment: You're developing a serverless application using
AWS Lambda, API Gateway, and other AWS services, and you want to automate the
deployment process whenever changes are made to your code or infrastructure.
● Continuous Integration and Continuous Deployment for Containerized Applications:
You have a containerized application (e.g., Docker containers) and want to automate the
building, testing, and deployment of containers to a container orchestration platform
like Amazon ECS or Amazon EKS.

Pricing:
● AWS CodePipeline has a flexible pay-as-you-go pricing model. It costs $1.00 per active
pipeline per month, and there are no upfront fees.
● You get the first 30 days for free to encourage experimentation. An active pipeline is
one that has been around for more than 30 days and had at least one code change go
through in a month.
● As part of the AWS Free Tier, you receive one free active pipeline monthly, which
applies across all AWS regions.
● Note: Additional charges may apply for storing and accessing pipeline artifacts in
Amazon S3, as well as for actions triggered by other AWS and third-party services
integrated into your pipeline.

– Back to Index – 169


AWS Cloud9
What is AWS Cloud9?
AWS Cloud9 represents a cloud-hosted integrated development environment (IDE)
offered by Amazon Web Services (AWS). It is designed to facilitate collaborative
software development, making it easier for developers to write, debug, and deploy code
in the cloud. Because the AWS Cloud9 IDE is cloud-based, it lets you write, run, and debug your code within the browser itself, meaning there is no need to install any IDE on your local machine.

Features:
● Cloud-Based IDE: AWS Cloud9 is entirely cloud-based, which means you can access it
from any device with an internet connection.
● Code Collaboration: AWS Cloud9 includes features for real-time collaboration among
developers. Multiple team members can work on the same codebase simultaneously,
making it easier to collaborate on projects.
● Built-In Code Editor: The IDE comes with a built-in code editor that supports popular
programming languages such as Python, JavaScript, Java, and many others. It also
provides code highlighting, autocompletion, and code formatting features.
● Terminal Access: Developers can access a fully functional terminal within the IDE,
enabling them to run commands and manage their AWS resources directly from the
same interface where they write code.
● Integrated Debugger: AWS Cloud9 includes debugging tools that help developers
identify and fix issues in their code. This includes features like breakpoints, step-through
debugging, and variable inspection.
● Version Control Integration: It supports integration with popular version control
systems like Git, allowing developers to easily manage and track changes to their code.
● Serverless Development: AWS Cloud9 is well-suited for serverless application
development. It includes AWS Lambda function support and can be used to build and
test serverless applications.
● Cloud Integration: As part of the AWS ecosystem, AWS Cloud9 can seamlessly
interact with other AWS services, making it easier to deploy and manage applications on
AWS infrastructure.
● Customization: Developers can customize the IDE to suit their preferences by
installing plugins and configuring settings.
● Cost Management: AWS Cloud9 offers cost-efficient pricing models, including a free
tier with limited resources and pay-as-you-go pricing for additional resources.

– Back to Index – 170


Pricing:
AWS Cloud9 is free to use. You're only charged for specific resources you use, like EC2
instances or storage. Connecting to an existing Linux server via SSH is also free. No
minimum fees or upfront commitments; you pay as you go for any additional AWS
resources used within AWS Cloud9.

Best Practices:
● Resource Monitoring: Keep an eye on resource usage, especially if you're using an
EC2 instance for your AWS Cloud9 environment. Monitor CPU, memory, and storage to
ensure you're not over-provisioning or running into performance issues.
● Environment Cleanup: When you're done with a development environment, terminate
it to avoid incurring unnecessary charges. AWS CloudFormation can help automate
environment creation and cleanup.

– Back to Index – 171


AWS CodeArtifact
What is AWS CodeArtifact?
AWS CodeArtifact is a fully managed comprehensive software artifact repository
service. It is designed to help organizations store, manage, and share software artifacts
such as libraries, packages, and dependencies. AWS CodeArtifact can be used to
improve the software development and deployment workflow, particularly for teams
working with multiple programming languages and dependencies

Features:
● Centralized Artifact Repository: AWS CodeArtifact provides a centralized location for
storing and managing software artifacts.
● Support for Multiple Package Formats: AWS CodeArtifact supports multiple package
formats, including popular ones like npm (Node.js), Maven (Java), PyPI (Python), and
others.
● Security and Access Control: AWS CodeArtifact integrates with AWS Identity and
Access Management (IAM), allowing you to control who can access and publish
artifacts.
● Dependency Resolution: AWS CodeArtifact can be used to resolve dependencies for
your projects.
● Integration with Popular Tools: AWS CodeArtifact seamlessly integrates with popular
build and deployment tools like AWS CodePipeline, AWS CodeBuild, and AWS
CodeDeploy.

– Back to Index – 172


AWS CodeStar
What is AWS CodeStar?
AWS CodeStar is a fully managed development service offered by Amazon Web
Services (AWS) that aims to simplify the development and deployment of applications
on AWS. It provides a set of tools and services that help developers quickly build, test,
and deploy applications on AWS cloud infrastructure.

Features:
● Project Templates: AWS CodeStar offers pre-configured project templates for various
programming languages and application types. These templates provide a starting point
for developers, saving them time on initial setup and configuration.
● Integrated Development Tools: AWS CodeStar integrates with popular development
tools such as AWS Cloud9, Visual Studio Code, and others, making it easier for
developers to write code and collaborate on projects.
● Continuous Integration/Continuous Deployment (CI/CD): Developers can automate
the building, testing, and deployment of their applications, helping to maintain a reliable
and efficient development workflow all these can be achieved using AWS CodePipeline.

Use Cases:
● Rapid Project Initialization & Deployment: With AWS CodeStar, a startup can select
a pre-configured project template (e.g., a Python web app using Flask deployed on AWS
Elastic Beanstalk). CodeStar automatically provisions the necessary AWS services such
as AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. The startup
can then immediately start coding and see changes deployed in real time.
● Standardizing Development Across Multiple Projects: Using AWS CodeStar, an IT
department can create custom project templates that align with the company's best
practices and standards. Each team can then use these templates when starting a new
project, ensuring a consistent development and deployment process across the
enterprise.

Pricing:
● AWS CodeStar incurs no additional fees. You are charged only for the AWS
resources you provision within your AWS CodeStar projects, such as Amazon EC2
instances, AWS Lambda executions, Amazon Elastic Block Store volumes, or Amazon
S3 buckets.
● There are no minimum fees or upfront commitments.



AWS X-Ray
What is AWS X-Ray?
AWS X-Ray is a service that lets you visually analyze and trace requests through
microservices-based applications.
Features:
● It provides end-to-end information about requests, responses, and calls made to
other AWS resources as they travel through the application's underlying components,
which may span multiple microservices.
● It builds a service graph from the trace data collected across AWS resources. The
graph shows how front-end and back-end services call one another to process requests
and keep data flowing.
● The graph helps you troubleshoot issues and improve the performance of your
applications.

The X-Ray SDKs are available for the following languages:
● Go
● Java
● Node.js
● Python
● Ruby
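For Python, here is a minimal sketch using the aws_xray_sdk package; it assumes the
X-Ray daemon (or an equivalent collector) is running locally to forward trace data,
and the service and segment names are placeholders:

    import boto3
    from aws_xray_sdk.core import xray_recorder, patch_all

    # Instrument supported libraries (boto3, requests, ...) so downstream
    # calls show up as subsegments in the service graph.
    patch_all()
    xray_recorder.configure(service="demo-app")

    @xray_recorder.capture("list_buckets")  # recorded as a subsegment
    def list_buckets():
        return boto3.client("s3").list_buckets()["Buckets"]

    # Outside Lambda you open and close the top-level segment yourself.
    xray_recorder.begin_segment("demo-request")
    try:
        print(len(list_buckets()))
    finally:
        xray_recorder.end_segment()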



AWS CodeGuru
What is AWS CodeGuru?
CodeGuru is a developer tool designed to enhance code quality and optimize
application performance by offering intelligent recommendations. It analyzes code to
identify areas for improvement and pinpoint the most costly lines of code. By integrating
CodeGuru into your development workflow, you can automate code reviews, receive
continuous performance monitoring in production, and access actionable insights to
enhance code quality, optimize application performance, and reduce costs.

Features:
● AWS CodeGuru offers several features to help developers improve code quality and
application performance.
● CodeGuru provides automated code reviews powered by machine learning. It analyzes
code for best practices, potential defects, and opportunities for optimization, and
gives developers actionable recommendations to improve code quality and
maintainability.
● CodeGuru offers detailed insights into code quality metrics, including code
duplication, code complexity, and adherence to coding standards.
● Developers can identify areas for improvement and prioritize refactoring efforts based
on data-driven insights.
● CodeGuru helps optimize AWS resource usage and reduce costs by identifying
inefficient code patterns and resource-intensive operations.
● CodeGuru seamlessly integrates with popular development tools and IDEs, including
AWS CodeCommit, GitHub, and AWS CodePipeline.

Use Cases:
● It can also be used to perform automated code reviews on third-party libraries and
dependencies.
● It can be used to modernize legacy codebases by identifying outdated code patterns,
deprecated APIs, and performance bottlenecks.
● It seamlessly integrates with CI/CD pipelines, enabling automated code reviews and
performance profiling as part of the development workflow (see the sketch below).
● Its performance profiler helps developers optimize application performance by
identifying resource-intensive code paths, memory leaks, and performance bottlenecks.
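As an illustration of that integration, a minimal Python (boto3) sketch that pulls
recommendations from completed full-repository analyses in CodeGuru Reviewer:

    import boto3

    reviewer = boto3.client("codeguru-reviewer")

    # List recent full-repository analyses (as opposed to pull-request reviews).
    summaries = reviewer.list_code_reviews(Type="RepositoryAnalysis")["CodeReviewSummaries"]

    for review in summaries:
        recs = reviewer.list_recommendations(CodeReviewArn=review["CodeReviewArn"])
        for rec in recs["RecommendationSummaries"]:
            # Each recommendation targets a file and line range.
            print(rec["FilePath"], rec["StartLine"], rec["Description"][:80])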



Amazon Elastic Transcoder
What is Amazon Elastic Transcoder?
Amazon Elastic Transcoder delivers a cloud-based service for media transcoding,
offering developers and businesses a scalable, intuitive, and cost-effective method to
convert media files into formats suitable for diverse devices, including smartphones,
tablets, and computers.

Features:
● User-Friendly Interface: Accessible via AWS Management Console, API, or SDKs,
Elastic Transcoder offers intuitive controls for starting transcoding tasks with system
presets for optimal settings.
● Scalability: Seamlessly handles large volumes of media files and varying sizes,
leveraging AWS services like S3, EC2, DynamoDB, SWF, and SNS for parallel processing
and reliability.
● Cost-Effective Pricing: Pay based on output media duration with no minimum volumes
or long-term commitments, ensuring affordability for transcoding needs.
● Managed Service: Elastic Transcoder manages transcoding tasks, including scaling
and codec updates, freeing users to focus on content creation.
● Secure Content Handling: User assets remain secure within their S3 buckets,
accessed through IAM roles, following best security practices.
● Seamless Content Delivery: Utilizes S3 and CloudFront for storing, transcoding, and
delivering content seamlessly, with simplified permissions for distribution.
● AWS Integration: Integrates with AWS services like Glacier for storage, CloudFront for
distribution, and CloudWatch for monitoring, enabling end-to-end media solutions.

Use Cases:
● Transcoding Pipelines: Enable concurrent transcoding workflows, allowing for
flexibility in handling tasks like short or long content transcoding and allocation
based on resolutions or storage.
● Transcoding Jobs: Convert media files, generating multiple output files with
different formats and bit rates. Jobs run within pipelines, facilitating simultaneous
processing (see the sketch after this list).
● System Transcoding Presets: Simplify transcoding settings for various devices with
presets ensuring broad compatibility or optimized quality and size.
● Custom Transcoding Presets: Customize presets for specific output targets, ensuring
consistency across pipelines.
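To make the pipeline/job relationship concrete, here is a minimal Python (boto3)
sketch that submits a job to an existing pipeline; the pipeline ID, object keys, and
preset ID are placeholders:

    import boto3

    transcoder = boto3.client("elastictranscoder")

    job = transcoder.create_job(
        PipelineId="1111111111111-abcde1",          # existing pipeline (placeholder)
        Input={"Key": "uploads/source-video.mp4"},  # object in the pipeline's input bucket
        Outputs=[{
            "Key": "outputs/video-720p.mp4",
            "PresetId": "1351620000001-000010",     # a system preset ID, e.g. Generic 720p
        }],
    )
    print(job["Job"]["Id"], job["Job"]["Status"])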



Amazon Managed Blockchain (AMB)
What is AMB?
Amazon Managed Blockchain (AMB) is a fully managed AWS service that provides reliable
blockchain APIs without requiring you to run specialized infrastructure. It powers your
applications with actionable, real-time blockchain data, letting you focus on
innovation and speed to market.
Features:
● Simplified Web3 Development: AMB streamlines development for both public and
private blockchain networks.
● Effortless Access: AMB Access provides instant, serverless connections to blockchain
networks, eliminating complex infrastructure management.
● Seamless Data Integration: AMB Query offers developer-friendly APIs for integrating
real-time and historical blockchain data with AWS services.
● Scalability and Security: AMB supports secure scaling for institutional-grade and
consumer-facing applications.

Supported Blockchains: AMB currently supports Ethereum, Polygon, Bitcoin, and
Hyperledger Fabric, offering access and query capabilities for each blockchain.

Use Cases:
● Enable token-gated experiences: Utilize standardized APIs to access users'
historical token balances for verifying event ticket NFTs (see the sketch below).
● Develop a digital asset wallet: Create multichain wallets with developer-friendly
APIs for transaction history and fully managed public blockchain nodes.
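As a sketch of the token-gating use case above, the following Python (boto3) snippet
calls the AMB Query API to read a wallet's current balance of a token; the contract
and owner addresses below are placeholders:

    import boto3

    query = boto3.client("managedblockchain-query")

    balance = query.get_token_balance(
        ownerIdentifier={"address": "0x1111111111111111111111111111111111111111"},
        tokenIdentifier={
            "network": "ETHEREUM_MAINNET",
            "contractAddress": "0x2222222222222222222222222222222222222222",
        },
    )
    # The balance is returned as a string in the token's smallest unit.
    print(balance["balance"])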

Pricing:
● Pay for actual usage, scaling dynamically without upfront infrastructure investment.
● Choose between the dedicated service, which charges based on node instance,
storage, API requests, and data transfer sizes, or the serverless service, which
charges based on API request count and complexity.

Links: https://aws.amazon.com/managed-blockchain/
https://aws.amazon.com/managed-blockchain/pricing/

