Cheat Sheet: AWS Solutions Architect Professional
Quick Bytes for you before the exam!
The information provided in this cheat sheet is for educational purposes only; it was created in our effort to
help aspirants prepare for the AWS Solutions Architect Professional certification. Though references have
been taken from the AWS documentation, it is not intended as a substitute for the official docs. The
document can be reused, reproduced, and printed in any form; ensure that appropriate sources are
credited and required permissions are received.
750+ Hands-on-Labs
Hands-on Labs - AWS, GCP, Azure (Whizlabs)
Index

Analytics
Amazon Athena
Amazon EMR
AWS Glue
Amazon Kinesis Data Analytics
Amazon Data Firehose
Amazon Kinesis Data Streams
AWS Lake Formation
Amazon Managed Streaming for Apache Kafka (Amazon MSK)
Amazon OpenSearch Service
Amazon QuickSight

Application Integration
AWS Simple Workflow Service
AWS AppSync
Amazon EventBridge
Amazon Simple Notification Service
Amazon Simple Queue Service
AWS Step Functions

Cloud Financial Management
AWS Budgets
AWS Cost and Usage Report
AWS Cost Explorer

Compute
AWS Auto Scaling
AWS Batch
AWS EC2
Amazon EC2 Auto Scaling
AWS Elastic Beanstalk
AWS Fargate
AWS Lambda
AWS Outposts
AWS Wavelength

Container
Amazon Elastic Container Registry
Amazon Elastic Container Service
Amazon Elastic Kubernetes Service (EKS)

Database
Amazon Aurora
Amazon DocumentDB
Amazon ElastiCache
Amazon Keyspaces (for Apache Cassandra)
Amazon Neptune
Amazon RDS
Amazon Redshift

Machine Learning
Amazon Polly
Amazon SageMaker
Amazon Transcribe
Amazon Comprehend
Amazon Rekognition
Amazon Lex

Management & Governance
AWS CloudFormation
AWS CloudTrail
Amazon CloudWatch
AWS Control Tower
AWS License Manager
AWS Management Console
AWS Organizations
AWS Systems Manager
AWS Trusted Advisor

Storage
AWS Backup
Amazon S3
Amazon Athena
What is Amazon Athena?
Amazon Athena is an interactive, serverless query service used to analyze data directly in Amazon
Simple Storage Service (S3) using standard ad-hoc SQL queries.
Pricing Details:
● Charges are applied based on the amount of data scanned by each query; standard S3 rates
apply for storage, requests, and data transfer.
● Canceled queries are also charged based on the amount of data scanned before cancellation.
● No charges are applied for Data Definition Language (DDL) statements.
● Costs can be reduced if data is compressed, partitioned, or converted into a columnar format,
since this reduces the amount of data scanned.
Functions of Athena:
● It helps to analyze different kinds of data (unstructured, semi-structured, and structured)
stored in Amazon S3.
● Using Athena, ad-hoc queries can be executed using ANSI SQL without actually loading
the data into Athena.
● It can be integrated with Amazon QuickSight for data visualization and helps generate
reports with business intelligence tools.
● It allows SQL clients to connect with a JDBC or an ODBC driver.
● It executes multiple queries in parallel, so there is no need to worry about compute resources.
● It supports various standard data formats, such as CSV, JSON, ORC, Avro, and Parquet.
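Example: a minimal sketch of running an ad-hoc query with the boto3 Athena client (the database, table, and results bucket below are assumed placeholders, not part of this cheat sheet):

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start an ad-hoc ANSI SQL query against data already in S3;
# Athena writes the results to the given S3 output location.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",   # assumed table
    QueryExecutionContext={"Database": "analytics_db"},                    # assumed database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},     # assumed bucket
)

# Poll for completion (simplified; production code should back off and time out).
query_id = response["QueryExecutionId"]
status = athena.get_query_execution(QueryExecutionId=query_id)
print(query_id, status["QueryExecution"]["Status"]["State"])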
Amazon EMR
What is Amazon EMR?
Amazon EMR (Elastic MapReduce) is a service used to process and analyze large amounts of
data in the cloud using Apache Hive, Hadoop, Apache Flink, Spark, etc.
● The main component of EMR is the cluster, which is a collection of Amazon EC2 instances
(known as nodes in EMR).
● It decouples the compute and storage layers, allowing each to scale independently, by
storing cluster data on Amazon S3.
● It also controls network access for the instances by configuring instance firewall
settings.
● It offers basic functionalities for maintaining clusters such as monitoring, replacing
failed instances, bug fixes, etc.
● It analyzes machine learning workloads using Apache Spark MLlib and TensorFlow,
clickstream workloads using Apache Spark and Apache Hive, and real-time streaming
workloads from Amazon Kinesis using Apache Flink.
It provides more than one compute instance or container to process the workloads and can be
executed on the following AWS services:
● Amazon EC2
● Amazon EKS
● AWS Outposts
Amazon EMR can be accessed in the following ways:
● EMR Console
● AWS Command Line Interface (AWS CLI)
● Software Development Kit (SDK)
● Web Service API
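Example: a hedged sketch of launching a small transient Spark cluster with boto3 (the name, release label, instance types, and log bucket are all assumed placeholders):

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a small transient Spark cluster that terminates when its work
# is done; logs and cluster data live in S3, decoupling storage from compute.
response = emr.run_job_flow(
    Name="demo-cluster",                      # assumed name
    ReleaseLabel="emr-6.15.0",                # assumed EMR release
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after steps finish
    },
    LogUri="s3://my-emr-logs/",               # assumed bucket
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])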
AWS Glue
What is AWS Glue?
AWS Glue is a serverless ETL (extract, transform, and load) service used to categorize data and
move it between various data stores and streams.
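Example: a hedged sketch of triggering an existing Glue ETL job with boto3 (the job name is an assumed placeholder):

import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Kick off an existing ETL job run; Glue provisions the serverless
# capacity, runs the job script, and tears everything down afterwards.
run = glue.start_job_run(JobName="nightly-etl")          # assumed job name
status = glue.get_job_run(JobName="nightly-etl", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])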
Amazon Kinesis Data Analytics
What is Amazon Kinesis Data Analytics?
Amazon Kinesis Data Analytics is a cloud-native offering within the AWS ecosystem, designed
to simplify the processing and analysis of real-time streaming data. It is an integral component
of the broader Amazon Kinesis family, which is tailored to streamline operations involving
streaming data.
Features
Real-time Data Processing: Kinesis Data Analytics can ingest and process data streams in
real-time, making it well-suited for applications that require immediate insights and responses
to streaming data, such as IoT (Internet of Things) applications, clickstream analysis, and more.
SQL-Based Programming: You can write SQL queries to transform, filter, aggregate, and analyze
streaming data without the need for low-level coding. It may not support very complex SQL
queries or advanced analytical functions found in traditional databases.
Integration with Other AWS Services: Kinesis Data Analytics can easily integrate with other
AWS services like Kinesis Data Streams (for data ingestion), Lambda (for serverless computing),
and various data storage and analytics tools like Amazon S3, Amazon Redshift, and more.
Real-time Analytics Applications: You can use Kinesis Data Analytics to build real-time analytics
applications, perform anomaly detection, generate alerts based on streaming data patterns, and
even create real-time dashboards to visualize your insights.
Scalability: Kinesis Data Analytics is designed to scale automatically based on the volume of
data you're processing, ensuring that your analytics application can handle growing workloads
without manual intervention.
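Example: to illustrate the SQL-based model, a hedged sketch using the legacy kinesisanalytics API; the application name is assumed, and the SQL is a toy one-minute tumbling-window count:

import boto3

kda = boto3.client("kinesisanalytics", region_name="us-east-1")

# A toy continuous query: count records per minute from the input stream.
sql = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (event_count INTEGER);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
  INSERT INTO "DESTINATION_SQL_STREAM"
  SELECT STREAM COUNT(*) FROM "SOURCE_SQL_STREAM_001"
  GROUP BY STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);
"""

kda.create_application(
    ApplicationName="clickstream-counter",  # assumed name
    ApplicationCode=sql,
)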
Limitations:
Data Retention: Data retention in Kinesis Data Analytics is generally limited. You may need to
store your data in another AWS service (e.g., Amazon S3) if you require long-term storage of
streaming data.
Throughput: There are limits on the maximum throughput that Kinesis Data Analytics can
handle. If you need to process extremely high volumes of streaming data, you may need to
consider partitioning your data streams and scaling your application accordingly.
Resource Allocation: AWS manages the underlying infrastructure for Kinesis Data Analytics,
but you may have limited control over resource allocation. This means that you might not be
able to fine-tune the resources allocated to your application.
Amazon Data Firehose
What is Amazon Data Firehose?
Amazon Data Firehose is a serverless service used to capture, transform, and load streaming
data into data stores and analytics services.
● It synchronously replicates data across three AZs while delivering it to the
destinations.
● It allows real-time analysis with existing business intelligence tools and helps to
transform, batch, compress, and encrypt the data before delivering it.
● A delivery stream is created to send data; each delivery stream retains data records
for up to one day in case delivery fails.
● It has a minimum latency of 60 seconds, or a minimum of 32 MB of data transferred at a time.
● Kinesis Data Streams and CloudWatch Events can be used as sources for the
delivery stream.
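Example: a minimal sketch of pushing one record into an existing delivery stream with boto3 (the stream name is an assumed placeholder):

import boto3, json

firehose = boto3.client("firehose", region_name="us-east-1")

# Firehose batches, optionally transforms/compresses, and delivers
# the record to the configured destination (e.g., S3 or Redshift).
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # assumed stream name
    Record={"Data": json.dumps({"page": "/home", "ms": 42}).encode() + b"\n"},
)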
Amazon Kinesis Data Streams
● The Kinesis family consists of Kinesis Data Streams, Kinesis Data Analytics, Kinesis
Data Firehose, and Kinesis Video Streams.
● Real-time data can be ingested from producers, such as the Kinesis Streams API,
the Kinesis Producer Library (KPL), and the Kinesis Agent.
● It allows building custom applications known as Kinesis Data Streams applications
(consumers), which read data from a data stream as data records.
● Data streams are divided into shards/partitions, whose data retention is 1 day
(by default) and can be extended to 7 days.
● Each shard provides a capacity of 1 MB per second of input data and 2 MB per
second of output data.
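Example: a hedged sketch of a producer writing a record to a stream with boto3 (stream name and partition key are assumed placeholders):

import boto3, json

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Records with the same partition key land on the same shard,
# preserving per-key ordering within the 1 MB/s shard ingest limit.
kinesis.put_record(
    StreamName="orders-stream",                 # assumed stream name
    Data=json.dumps({"order_id": 123, "total": 49.95}).encode(),
    PartitionKey="customer-42",                 # assumed key
)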
AWS Lake Formation
A data lake is a secure repository that stores all the data in its original form and is used for
analysis.
Lake Formation is pointed at the data sources, then crawls the sources and moves the data into
the new Amazon S3 data lake.
It integrates with AWS Identity and Access Management (IAM) to provide fine-grained access to
the data stored in data lakes using a simple grant/revoke process.
Pricing Details:
Charges are applied based on the integrated services (AWS Glue, Amazon S3, Amazon EMR,
Amazon Redshift) at their standard rates.
Amazon Managed Streaming for Apache Kafka
(Amazon MSK)
It helps to populate machine learning applications, analytical applications, and data lakes, and
stream changes to and from databases using Apache Kafka APIs.
Amazon OpenSearch Service
OpenSearch is a free and open-source search engine for all types of data, like textual,
numerical, geospatial, structured, and unstructured; Amazon OpenSearch Service is its managed
offering on AWS.
Amazon OpenSearch Service with Kibana (visualization) and Logstash (log ingestion) provides an
enhanced search experience for applications and websites to find relevant data quickly.
Amazon OpenSearch Service launches the cluster's resources, detects failed nodes, and
replaces them.
The OpenSearch Service cluster can be scaled with a few clicks in the console.
Pricing Details:
● Charges are applied for each hour of use of EC2 instances and for storage volumes
attached to the instances.
● Amazon OpenSearch Service does not charge for data transfer between Availability
Zones.
Amazon QuickSight
What is Amazon QuickSight?
● Amazon QuickSight: A scalable cloud-based BI service providing clear insights to
collaborators worldwide.
● Connects to various data sources, consolidating them into single data
dashboards.
● Fully managed with enterprise-grade security, global availability, and built-in
redundancy.
● User management tools support scaling from 10 users to 10,000 without
infrastructure deployment.
● Empowers decision-makers to explore and interpret data interactively.
● Securely accessible from any network device, including mobile devices.
Features:
● Automatically generate accurate forecasts.
● Automatically detect anomalies.
● Uncover latent trends.
● Take action based on critical business factors.
● Transform data into easily understandable narratives, such as headline tiles for your
dashboard.
The platform offers enterprise-grade security with authentication for federated users
and groups via IAM Identity Center, supporting single sign-on with SAML, OpenID
Connect, and AWS Directory Service. It ensures fine-grained permissions for AWS data
access, row-level security, and robust encryption for data at rest. Users can access both
AWS and on-premises data within Amazon Virtual Private Cloud for enhanced security.
Benefits:
● Achieve a 74% cost reduction in BI solutions over three years, with up to a 300%
increase in analytics usage.
● Enjoy no upfront licensing costs and minimal total cost of ownership (TCO).
● Enable collaborative analytics without application installation.
● Aggregate diverse data sources into single analyses and share them as
dashboards.
● Manage dashboard features, permissions, and simplify database permissions
management for viewers accessing shared content.
Amazon Q in QuickSight:
Amazon Q within QuickSight enhances business productivity by leveraging Generative BI
capabilities to expedite decision-making. New dashboard authoring features empower
analysts to swiftly build, discover, and share insights using natural language prompts.
Amazon Q simplifies data comprehension with executive summaries, an improved
context-aware Q&A experience, and customizable interactive data stories.
Pricing:
● QuickSight offers flexible pricing based on user roles, allowing selection of the
model that aligns with business requirements.
● A low $3/month reader fee enables organization-wide access to interactive
analytics and natural language capabilities.
● Choose between per-user pricing and capacity pricing based on business needs.
Links: https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
Amazon QuickSight - Business Intelligence Tools
https://aws.amazon.com/quicksight/pricing/
Amazon Simple Workflow Service (Amazon SWF)
Amazon Simple Workflow Service (Amazon SWF) is used to coordinate work amongst
distributed application components.
Tasks are performed by implementing workers, which execute either on Amazon EC2 or on
on-premises servers (which means it is not a serverless service).
● Amazon SWF stores tasks and assigns them to workers during execution.
● It controls task implementation and coordination, such as tracking and maintaining the
state using API.
● It helps to create distributed asynchronous applications and supports sequential and
parallel processing.
● It is best suited for human-intervened workflows.
● Amazon SWF is now rarely used; AWS Step Functions is the preferred option over SWF
for new applications.
AWS AppSync
What is AWS AppSync?
● AWS AppSync is a serverless service used to build GraphQL APIs with real-time data
synchronization and offline programming features.
● GraphQL is a data language built to allow apps to fetch data from servers.
Amazon EventBridge
What is Amazon EventBridge?
● A serverless event bus service for Software-as-a-Service (SaaS) applications and AWS services.
● It is a fully managed service that takes care of event ingestion, delivery, security,
authorization, error handling, and required infrastructure management tasks to set up
and run a highly scalable serverless event bus. EventBridge was formerly called Amazon
CloudWatch Events, and it uses the same CloudWatch Event API.
Key Concepts
Event Buses
An event bus receives events. When a user creates a rule associated with a specific event bus,
the rule matches only events received by that event bus. Each account has one default event
bus, which receives events from AWS services; custom event buses can also be created.
Events
An event indicates a change in an environment. By creating rules, you can have AWS services
act automatically when changes occur in other AWS services, in SaaS applications, or in users'
custom applications.
Schema Registry
A schema registry is a container for schemas. Schemas are available for the events of all AWS
services on Amazon EventBridge. Users can create or update their own schemas or
automatically infer schemas from events running on event buses. Each schema can have
multiple versions; users can use the latest schema or select earlier versions.
Rules
A rule matches incoming events and routes them to targets for processing. A single rule can
route an event (in JSON format) to multiple targets. All matched targets are processed in
parallel and in no particular order.
Targets
A target processes events and receives them in JSON format. A rule's target must be in the
same Region as the rule.
Features:
● Fully managed, pay-as-you-go.
● Native integration with SaaS providers.
● 90+ AWS services as sources.
● 17 AWS services as targets.
● $1 per million events put into the bus.
● No additional cost for delivery.
● Multiple target locations for delivery.
● Easy to scale and manage.
This service receives input from different sources (such as custom apps, SaaS applications,
and AWS services). Amazon EventBridge contains an event source for a SaaS application that
is responsible for authentication and security of the source. EventBridge has a schema
registry, event buses (default, custom, and partner), and rules that route events to the target
services.
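Example: a hedged sketch of publishing a custom event to the default bus with boto3 (the source and detail-type strings are assumed placeholders):

import boto3, json

events = boto3.client("events", region_name="us-east-1")

# Publish a custom event; any rule on the default bus whose pattern
# matches Source/DetailType will route it to its targets in parallel.
events.put_events(
    Entries=[{
        "Source": "com.example.orders",        # assumed source
        "DetailType": "OrderPlaced",           # assumed detail type
        "Detail": json.dumps({"order_id": 123}),
        "EventBusName": "default",
    }]
)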
AWS SNS (Simple Notification Service)
What is AWS SNS?
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set
up, operate, and send notifications from the cloud.
It provides developers with a highly scalable, flexible, and cost-effective approach to publishing
messages from an application and delivering them to subscribers or other applications. It
provides push notifications directly to mobile devices and delivers notifications by SMS text
message, by email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint.
A topic is an access point that allows recipients to receive identical copies of the same
notification. One topic can support deliveries to multiple endpoint types; for example, Android,
iOS, and SMS recipients can be grouped together.
Two types of topics can be defined in the AWS SNS service.
1. A Standard topic is used when incoming messages do not need to be ordered; messages
are delivered as they are received.
2. A FIFO topic is designed to maintain the order of messages between applications,
especially when events are critical. Duplication is avoided in this case.
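Example: a minimal sketch of creating a topic, subscribing an endpoint, and publishing with boto3 (the topic name and email address are assumed placeholders):

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# One topic fans the same message out to every confirmed subscriber.
topic_arn = sns.create_topic(Name="order-alerts")["TopicArn"]   # assumed name

sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="ops@example.com")                       # assumed address

sns.publish(TopicArn=topic_arn,
            Subject="Order received",
            Message="Order #123 has been placed.")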
Features
● Instantaneous, push-based delivery.
● Simple API and easy integration with AWS services.
● Flexible message delivery over multiple message protocols.
● Cost-effective, with a pay-as-you-go model.
● Fully managed and durable with automatic scalability.
Use cases
● SNS application-to-person: the SNS service publishes messages to a topic, which sends
messages to each customer's cell phone. This is an example of application-to-person
messaging.
● SNS application-to-application: in this type of service, an SNS topic interacts with
different AWS services such as AWS Lambda, a Node.js app, and SQS. For example, the
Amazon S3 service only needs a configuration with SNS, which is then responsible for
sending identical messages to the other AWS services.
Pricing
● Standard topics: the first 1 million Amazon SNS requests per month are free; after that,
the cost is $0.50 per 1 million requests.
● FIFO Topics: Amazon SNS FIFO topic pricing is based on the number of published
messages, the number of subscribed messages, and their respective amount of payload
data.
Amazon Simple Queue Service (SQS)
What is Amazon Simple Queue Service (SQS)?
Amazon Simple Queue Service (SQS) is a serverless service used to decouple (loosely couple)
serverless applications and components.
The queue represents a temporary repository between the producer and consumer of
messages.
It can scale from 1 to 10,000 messages per second.
The default retention period of messages is four days and can be extended to fourteen days.
Messages have a maximum size of 256 KB. A message must be explicitly deleted by the
consumer after processing; otherwise it becomes visible to other consumers again once its
visibility timeout expires.
Standard Queue -
● Unlimited number of transactions per second.
● Messages may be delivered in any order.
● Messages can be delivered twice or more (at-least-once delivery).
FIFO Queue -
● 300 messages per second.
● Supports batches of 10 messages per operation, resulting in up to 3,000 messages per second.
● Messages get consumed exactly once.
A Delay Queue allows users to postpone the delivery of messages to a queue for a specific
number of seconds. Messages can be delayed from 0 seconds (default) to 15 minutes
(maximum).
A Dead-Letter Queue is a queue for messages that could not be consumed successfully; it is
used to handle message failure. Visibility Timeout is the amount of time during which SQS
prevents other consumers from receiving (polling) and processing a message.
● Default visibility timeout - 30 seconds
● Minimum visibility timeout - 0 seconds
● Maximum visibility timeout - 12 hours
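Example: a hedged sketch of the send/receive/delete cycle with boto3 (the queue name is an assumed placeholder):

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="work-queue")["QueueUrl"]  # assumed name

# Producer: enqueue a message (optionally delayed, up to 15 minutes).
sqs.send_message(QueueUrl=queue_url, MessageBody="process order 123",
                 DelaySeconds=0)

# Consumer: long-poll, process, then explicitly delete the message;
# until deleted, the visibility timeout hides it from other consumers.
for msg in sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=10).get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])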
AWS Step Functions
What are Step Functions?
Step Functions allows developers to offload application orchestration onto a fully managed AWS
service. This means you can modularize your code into "steps" and let AWS handle partial
failure cases, retries, and error-handling scenarios.
Best Practices:
● Set timeouts in state machine definitions; this helps with task responsiveness when
something goes wrong in getting a response from an activity.
Example:
"ActivityState": {
"Type": "Task",
"Resource":
"arn:aws:states:us-east-1:123456789012:activity:abc",
"TimeoutSeconds": 900,
"HeartbeatSeconds": 40,
"Next": "State2" }
● Always provide the Amazon S3 ARN (Amazon Resource Name) of the object instead of a
large payload when passing input to a Lambda function from the state machine.
Example:
{
  "Data": "arn:aws:s3:::MyBucket/data.json"
}
● Handle errors in state machines while invoking AWS Lambda functions.
Example:
"Retry": [ {
"ErrorEquals": [ "Lambda.CreditServiceException"]
"IntervalSeconds": 2,
"MaxAttempts": 3,
"BackoffRate": 2
}]
● Execution history has a hard quota of 25,000 events. To avoid reaching this quota for
long-running executions, implement a pattern that starts a new execution using an AWS
Lambda function.
Step Functions integrates directly with many AWS services, such as the following (see the
boto3 sketch after this list):
● Lambda
● AWS Batch
● DynamoDB
● ECS/Fargate
● SNS
● SQS
● SageMaker
● EMR
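Example: a hedged sketch of starting a state machine execution with boto3 (the state machine ARN and input are assumed placeholders):

import boto3, json

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# Start an execution, passing an S3 ARN instead of a large payload,
# as recommended in the best practices above.
sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:demo",  # assumed
    input=json.dumps({"Data": "arn:aws:s3:::MyBucket/data.json"}),
)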
Pricing:
● With Step Functions Express Workflows, you pay only for what you use. You are charged
based on the number of requests for your workflow and its duration.
● $0.025 per 1,000 state transitions (for Standard workflows).
● $1.00 per 1 million requests (for Express workflows).
AWS Budgets
What is AWS Budgets?
AWS Budgets enables the customer to set custom budgets to track cost and usage from the
simplest to the complex use cases.
● AWS Budgets can be used to set reservation utilization or coverage targets, allowing you to get
alerts by email or SNS notification when the metrics reach the threshold.
● The reservation alerts feature is provided for Amazon EC2, Amazon RDS, Amazon Redshift,
Amazon ElastiCache, and Elasticsearch.
● The Budgets can be filtered based on specific dimensions such as Service, Linked Account,
Tags, Availability Zone, API Operation, and Purchase Option (i.e., “Reserved”) and be notified
using SNS.
● AWS Budgets can be accessed from the AWS Management Console’s service links and within
the AWS Billing Console. Budgets API or CLI (command-line interface) can also be used to
create, edit, delete, and view up to 20,000 budgets per payer account.
● AWS Budgets can be integrated with other AWS services such as AWS Cost Explorer, AWS
Chatbot, Amazon Chime room, and AWS Service Catalog.
● AWS Budgets can be created on a monthly, quarterly, or annual basis for AWS resource
usage or AWS costs.
Best Practices:
● Users can set up to five alerts for each budget. But the most important are:
○ Alerts when current monthly costs exceed the budgeted amount.
○ Alerts when current monthly costs exceed 80% of the budgeted amount.
○ Alerts when forecasted monthly costs exceed the budgeted amount.
● When creating budgets using the Budgets API, a separate IAM user or IAM role should be
created for each user if multiple users need access to the Budgets API.
● If consolidated billing in an organization is handled by a management (master) account, IAM
policies can control access to budgets by member accounts. Member account owners can
create their own budgets but cannot change or edit budgets of the management account.
● Two related managed policies are provided for budget actions: one allows a user to pass a
role to the Budgets service, and the other allows Budgets to execute the action.
● Budget actions are not effective enough to control costs with Auto Scaling groups.
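Example: a hedged sketch of creating a cost budget with an 80% actual-spend alert via the Budgets API (the account ID, amount, and email address are assumed placeholders):

import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",                         # assumed payer account
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        # Alert when actual monthly costs exceed 80% of the budgeted amount.
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}],
    }],
)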
Price details:
● Monitoring the budgets and receiving notifications are free of charge.
● Each subsequent action-enabled budget incurs a $0.10 daily cost after the free quota ends.
AWS Cost and Usage Report
What is AWS Cost and Usage Report?
AWS Cost & Usage Report is a service that allows users to access the detailed set of AWS cost
and usage data available, including metadata about AWS resources, pricing, Reserved Instances,
and Savings Plans.
✔ Reports can be downloaded from the Amazon S3 console for viewing; for analysis, Amazon
Athena can be used, or the report can be uploaded into Amazon Redshift or Amazon
QuickSight.
✔ Users with IAM permissions or IAM roles can access and view the reports.
✔ If a member account in an organization owns or creates a Cost and Usage Report, it has
access only to billing data for the period during which it has been a member of the organization.
✔ If the management (master) account of an AWS Organization wants to block member
accounts from setting up a Cost and Usage Report, a Service Control Policy (SCP) can be used.
AWS Cost Explorer
What is AWS Cost Explorer?
AWS Cost Explorer is a UI tool that enables users to analyze costs and usage with the help
of graphs, the Cost Explorer cost and usage reports, and/or the Cost Explorer RI report. It can
be accessed from the Billing and Cost Management console.
It provides default reports for analysis, with filters and constraints to create custom reports.
An analysis in Cost Explorer can be saved as a bookmark, downloaded as a CSV file, or saved
as a report.
● The first time the user signs up for Cost Explorer, it walks through the main parts of
the console. It prepares the data regarding costs and usage, displays up to 12 months
of historical data (possibly less if usage is shorter), shows current-month data, and then
calculates forecast data for the next 12 months.
● It uses the same set of data that is used to generate the AWS Cost and Usage Reports
and the billing reports.
● It provides a custom time period to view the data at a monthly or daily interval.
● It surfaces Savings Plans recommendations, which can provide savings of up to 72% on
AWS compute usage.
● It provides a way to access the data programmatically using the Cost Explorer API.
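Example: a hedged sketch of querying monthly unblended cost via the Cost Explorer API with boto3 (the date range is an assumed example):

import boto3

ce = boto3.client("ce", region_name="us-east-1")

# Retrieve month-by-month unblended cost, grouped by service.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},   # assumed range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for period in resp["ResultsByTime"]:
    print(period["TimePeriod"]["Start"], len(period["Groups"]), "services")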
AWS Auto Scaling
What is AWS Auto Scaling?
● AWS Auto Scaling keeps on monitoring your Application and automatically adjusts the
capacity required for steady and predictable performance.
● By using auto scaling it's very easy to set up the scaling of the application automatically with
no manual intervention.
● It allows you to create scaling plans for resources such as Amazon EC2 instances,
Amazon ECS tasks, Amazon DynamoDB tables, and Amazon Aurora Read Replicas.
● It balances performance optimization and cost.
Monitoring:
● Health Check: It keeps checking the health of instances and removes unhealthy instances
from the target group.
● CloudWatch Events: Auto Scaling can emit events to CloudWatch for any action performed
in the Auto Scaling group, such as launching or terminating an instance.
● CloudWatch Metrics: These show statistics on whether your application is performing as
expected.
● Notification Service: Auto Scaling can send a notification to your email when the Auto
Scaling group launches an instance or an instance gets terminated.
Charges:
● AWS will not charge you additionally for the Autoscaling Group.
● You will be paying for the AWS Resources that you will use.
AWS Batch
AWS Batch allows developers, scientists, and engineers to run thousands of computing jobs on
the AWS platform. It is a managed service that dynamically provisions the optimal compute
resources (e.g., CPU, memory) based on the volume of submitted jobs. The user just has to
focus on the applications (such as shell scripts, Linux executables, or Java programs).
It executes workloads on EC2 (including Spot instances) and AWS Fargate.
Best Practices:
● Use Fargate if you want to run the application without getting into EC2 infrastructure
details; let AWS Batch manage it.
● Use EC2 if your workload scale is very large and you want control over machine
specifications like memory, CPU, and GPU.
● Jobs running on Fargate are faster on startup as there is no time lag in scale-out
operation, unlike EC2 where launching new instances may take time.
Use Cases:
● Stock markets and trading – The trading business involves daily processing of
large-scale data and loading it into a data warehouse for analytics, so that
predictions and decisions are quick enough to grow the business on a regular basis.
● Media houses and the entertainment industry – Large amounts of data in the
form of audio, video, and photos are processed daily to serve customers.
These application workloads can be moved to containers on AWS Batch.
Pricing:
● There is no charge for AWS Batch itself; rather, you pay for the resources (like EC2
and Fargate) that you use.
AWS EC2
What is AWS EC2?
● EC2 stands for Elastic Compute Cloud.
● Amazon EC2 is the virtual machine in the Cloud Environment.
● Amazon EC2 provides scalable capacity. Instances can scale up and down automatically
based on the traffic.
● You do not have to invest in the hardware.
● You can launch as many servers as you want and you will have complete control over the
servers and can manage security, networking, and storage.
Instance Type:
● Amazon EC2 provides a range of instance types for various use cases.
● The instance type determines the processor and memory of your EC2 instance.
EBS Volume:
● EBS Stands for Elastic Block Storage.
● It is the block-level storage that is assigned to your single EC2 Instance.
● It persists independently from running EC2.
➤ Types of EBS Storage
➤ General Purpose (SSD)
➤ Provisioned IOPS (SSD)
➤ Throughput Optimized Hard Disk Drive
➤ Cold Hard Disk Drive
➤ Magnetic
Instance Store:
Instance store is the ephemeral block-level storage for the EC2 instance.
● Instance stores can be used for faster processing and temporary storage of the application.
AMI:
AMI Stands for Amazon Machine Image.
● An AMI determines the OS and contains the dependencies, libraries, and data of your EC2 instances.
● Multiple instances with the same configuration can be launched using a single AMI.
Security Group:
A Security group acts as a virtual firewall for your EC2 Instances.
● It controls which ports are open and what kind of traffic is allowed.
● Security groups act at the instance level, whereas network ACLs act at the subnet
level.
● Security groups can only have allow rules; they can't have deny rules.
● The security group is stateful.
● By default, all outbound traffic is allowed, while inbound rules need to be defined.
Key Pair:
A key pair, consisting of a private key and a public key, is a set of security credentials that you
can use to prove your identity while connecting to an instance.
● Amazon EC2 uses two keys: the public key, which is stored on your EC2 instance, and the
private key, which stays with you.
● You can get access to the EC2 instance only if these keys match.
● Keep the private key in a secure place.
● Keep the private key in a secure place.
Tags:
A tag is a key-value pair that you assign to your AWS resources.
● Tags act as identifiers of a resource.
● Resources can be organized well using tags.
Pricing:
● You will get different pricing options such as On-Demand, Savings Plan, Reserved Instances,
and Spot Instances.
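Example: a hedged sketch of launching an On-Demand instance with boto3, tying together AMI, instance type, key pair, security group, and tags (all IDs and names are assumed placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from an AMI, attaching a security group and key pair.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # assumed AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # assumed key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # assumed security group
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-server"}],
    }],
)
print(resp["Instances"][0]["InstanceId"])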
Amazon EC2 Auto Scaling
Features
● An Auto Scaling group is a collection of EC2 instances treated as a logical unit for
scaling and high availability.
● It enables users to use Amazon EC2 Auto Scaling features such as fault tolerance, health
check, scaling policies, and cost management.
● The scaling of the Auto Scaling group depends on the size of the desired capacity. It is
not necessary to keep DesiredCapacity and MaxSize equal.
● EC2 Auto Scaling supports automatic horizontal scaling (increasing or decreasing the
number of EC2 instances) rather than vertical scaling (changing the instance size,
e.g., small, medium, large).
● It scales across multiple Availability Zones within the same AWS region.
E.g., with MinSize: '1', MaxSize: '2', and DesiredCapacity: '2', the group runs a total of
two EC2 instances.
AWS Elastic Beanstalk
● Web Server Environment: An application hosted in the Web Server Environment handles
the HTTP and HTTPS requests from users.
o Beanstalk Environment: When an environment is launched, Beanstalk automatically
assigns various resources to run the application successfully.
o Elastic Load Balancer: Request is received from the user via Route53 which forwards
the request to ELB. Then ELB distributes the request among various EC2 Instances of
the Autoscaling group.
o Auto Scaling Group: Auto Scaling will automatically add or remove EC2 Instance based
on the load in the application.
o Host Manager: Software components inside every EC2 Instance which is responsible
for the following:
▪ Log files generation
▪ Monitoring
▪ Events in Instance
● Worker Environment
○ A worker is a background process that helps applications handle resource-heavy
and time-intensive operations.
○ It is responsible for tasks such as database clean-up and report generation, which
helps the application remain up and running.
○ In the Worker Environment, Beanstalk installs a daemon on each EC2 Instance
in the Auto Scaling Group.
○ The daemon pulls requests from the SQS queue and executes the task based on the
message received.
○ After successful execution, SQS deletes the message; in case of failure, the message
will be retried.
Platform Supported
● .Net (on Linux or Windows)
● Docker
● GlassFish
● Go
● Java
● Node.js
● Python
● Ruby
● Tomcat
Deployment Models:
All at Once: Deployment takes place on all instances at the same time. All your EC2 Instances
will be out of service for a short time, and the application will be completely down for that
duration.
Rolling: Deploys the new version in batches; unlike All at Once, part of the fleet keeps running
the old version of the application during the update, so there is no complete downtime.
Rolling with additional batch: Deploy the new version in batches. But before that, provision an
additional group of instances to compensate for the updating one.
Immutable: Deploy the new version to a separate group of instances, and the update will be
immutable.
Traffic splitting: Deploy the new version to a separate group of instances and split the incoming
traffic between the older and the new ones.
Pricing:
● There is no additional charge for Elastic Beanstalk itself; you pay for the underlying AWS
resources (such as EC2 instances, load balancers, and S3) that your application consumes.
AWS Fargate
Benefits:
● Fargate allows users to focus on building and operating the applications rather than focusing
on securing, scaling, patching, and managing servers.
● Fargate automatically scales the compute environment that matches the resource
requirements for the container.
● Fargate provides built-in integrations with other AWS services like Amazon CloudWatch
Container Insights.
Price details:
● Charges are applied for the amount of vCPU and memory consumed by the containerized
applications.
● Fargate's Savings Plans provide savings of up to 50% in exchange for a one- or three-year
commitment.
● Additional charges will be applied if containers are used with other AWS services.
AWS Lambda
What is AWS Lambda?
● AWS Lambda is a serverless compute service through which you can run your code without
provisioning any Servers.
● It only runs your code when needed and also scales automatically when the request count
increases.
● AWS Lambda follows the Pay per use principle – it means there is no charge when your code
is not running.
● Lambda allows you to run your code for any application or backend service with zero
administration.
● Lambda can run code in response to events, for example an update to a DynamoDB table or
a change in an S3 bucket.
● You can even run your code in response to HTTP requests using Amazon API Gateway.
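Example: a minimal sketch of a Python handler reacting to an S3 event (the fields follow the standard S3 event shape; the logic is illustrative only):

import json

def lambda_handler(event, context):
    """Triggered by an S3 event; logs which object changed."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Object changed: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("ok")}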
How does Lambda work?
Key concepts include Lambda functions (your code), Lambda layers (shared libraries and
dependencies), and Lambda event sources, such as:
o AWS IoT
o Kinesis
o CloudWatch Logs
Supported runtimes include:
● NodeJS
● Go
● Java
● Python
● Ruby
Lambda@Edge
● It is a feature of Amazon CloudFront that allows you to run your code closer to the
users of your application.
● It improves performance and reduces latency.
● Just like lambda, you don’t have to manage and provision the infrastructure around the
world.
● Lambda@Edge runs your code in response to the event created by the CDN.
Pricing:
● Charges will be calculated based on the number of requests for the function executed
in a particular duration.
● Duration will be counted on a per 100-millisecond basis.
● Lambda Free tier usage includes 1 million free requests per month.
● It also comes with 400,000 GB-Seconds of compute time per month.
AWS Outposts
What is AWS Outposts?
AWS Outposts enables running AWS services locally and accessing a variety of services
within the local AWS Region. Host applications on-premises using familiar AWS tools
and APIs, ensuring seamless integration. It supports low-latency access for workloads
needing local data processing, data residency compliance, and migration of
applications with local system dependencies.
Features:
● Deploy AWS Services locally to meet low latency and data residency requirements,
enabling on-premises data processing.
● Benefit from a fully managed infrastructure, minimizing the resources, time, and
operational risk involved in managing IT infrastructure.
● Achieve a consistent hybrid experience by utilizing identical hardware, APIs, tools, and
management controls available both on-premises and in the cloud.
● Utilizes a Local Gateway, necessitating Border Gateway Protocol (BGP) over a routed
network for connectivity.
● Outposts racks are provided by AWS in a fully assembled state, simplifying the
installation process.
● Installation involves simply plugging the racks into power and network, with
centralized redundant power conversion units and a direct current (DC) distribution
system managed by blind mate connectors in the backplane.
Use Cases
● Hybrid Cloud Deployment: Organizations with strict data residency requirements or
latency-sensitive workloads can deploy AWS Outposts to run AWS services locally while
still leveraging the broader capabilities of the AWS cloud.
● Edge Computing: Outposts enables edge computing by bringing AWS infrastructure
and services closer to where data is generated, allowing for real-time processing and
analysis of data at the edge.
● Data Processing at Remote Locations: Companies operating in remote locations with
limited or intermittent connectivity can use Outposts to process data locally and
synchronize with the cloud when connectivity is available.
● Application Modernization: Outposts supports application modernization efforts by
providing a consistent infrastructure platform across on-premises and cloud
environments, facilitating the migration of legacy applications to AWS services.
AWS Wavelength
What is AWS Wavelength?
Amazon Wavelength is used to create mobile applications with exceptionally low latencies.
Wavelength integrates storage and computing resources directly into the 5G edge networks of
communications service providers (CSPs). By expanding a virtual private cloud (VPC) to one or
more Wavelength Zones, developers can utilize AWS resources like Amazon EC2 instances to
run applications requiring ultra-low latency and connectivity to AWS services within the Region.
Features
● AWS Wavelength provides infrastructure tailored for mobile edge computing
applications, offering Wavelength Zones within CSP 5G networks.
● Wavelength Zones embed AWS compute and storage services directly into CSP 5G
networks, allowing application traffic from 5G devices to reach servers within these
zones without traversing the internet.
● This setup minimizes latency by eliminating the need for application traffic to pass
through multiple internet hops, leveraging the low latency and high bandwidth of
modern 5G networks.
● In Wavelength Zones, users can create EC2 instances, EBS volumes, VPC subnets, and
carrier gateways, enabling the deployment of various AWS services.
● Services such as Systems Manager, CloudWatch, CloudTrail, CloudFormation, and ALB
can also be utilized.
● Wavelength services are integrated into a VPC connected via a reliable,
high-bandwidth connection to an AWS Region.
Use Cases:
● Develop media and entertainment applications leveraging AWS Wavelength to
provide high-resolution live video streaming, high-quality audio, and immersive
AR/VR experiences.
● Accelerate machine learning inference tasks at the edge by running AI and
ML-driven video and image analytics, enhancing 5G applications in medical
diagnostics, retail environments, and smart factories.
● Create connected vehicle applications for advanced driver assistance systems,
autonomous driving functionalities, and in-vehicle entertainment experiences.
Amazon Elastic Container Registry
What is Amazon Elastic Container Registry?
Amazon Elastic Container Registry (ECR) is a managed service that allows users to store,
manage, share, and deploy container images and artifacts. It is mainly integrated with Amazon
Elastic Container Service (ECS), for simplifying the production workflow.
Features:
● It stores both the containers created by users and any container software bought through
AWS Marketplace.
● It is integrated with Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes
Service (EKS), AWS Lambda, and AWS Fargate for easy deployments.
● AWS Identity and Access Management (IAM) enables resource-level control of each repository
within ECR.
● It supports public and private container image repositories. It allows sharing container
applications privately within the organization or publicly for anyone to download.
● A separate portal called Amazon ECR Public Gallery, helps to access all public repositories
hosted on Amazon ECR Public.
● It stores the container images in Amazon S3 because S3 provides 99.999999999% (11 9’s) of
data durability.
● It allows cross-region and cross-account replication of the data for high availability
applications.
● Container images are transferred over HTTPS and encrypted at rest using Amazon S3
server-side encryption or customer keys managed by AWS KMS.
● It integrates with continuous integration and continuous delivery (CI/CD) pipelines and
with third-party developer tools.
● Lifecycle policies are used to manage the lifecycle of the images.
Pricing details:
● Using AWS Free Tier, new customers get 500 MB-month of storage for one year for private
repositories and 50 GB-month of free storage for public repositories.
● Without Sign-up, 500 GB of data can be transferred to the internet for free from a public
repository each month.
● By signing up for an AWS account, or authenticating to ECR with an existing AWS account,
5 TB of data can be transferred to the internet for free from a public repository each month.
Amazon Elastic Container Service
What is Amazon ECS?
Amazon Elastic Container Service (Amazon ECS) is a regional container orchestration service
that allows you to execute, stop, and manage Docker containers on a cluster.
A container is a standard unit of software development that combines code, its dependencies,
and system libraries so that the application runs smoothly from one environment to another.
Images are created from a Dockerfile (text format), which specifies all of the components that
are included in the container. These images are then stored in a registry from where they can
then be downloaded and executed on the cluster.
All containers are defined in a task definition, which runs a single task or tasks within a service.
The task definition (JSON format) defines which container images should run across the
clusters. A service is a configuration that helps to run and maintain several tasks
simultaneously in a cluster.
An ECS cluster is a grouping of tasks or services that can be executed on EC2 Instances or on
AWS Fargate, a serverless compute engine for containers. When using Amazon ECS for the first
time, a default cluster is created.
The container agent runs on each instance within an Amazon ECS cluster. It sends data on the
resource's current running tasks and resource utilization to Amazon ECS. It starts and stops the
tasks whenever it receives a request from Amazon ECS. A task is the representation of a task
definition.
The number of tasks to run in your cluster is specified after the task definition is created within
Amazon ECS. The task scheduler is responsible for placing tasks within your cluster based on
the task definitions.
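Example: a hedged sketch of registering a minimal Fargate task definition with boto3 (the family name, image URI, and sizes are assumed placeholders):

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition: the JSON-like structure that tells ECS
# which container image to run and with what CPU/memory.
ecs.register_task_definition(
    family="web-app",                          # assumed family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # assumed image
        "portMappings": [{"containerPort": 80}],
        "essential": True,
    }],
)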
Application Load Balancers offer some attractive features:
● It enables containers to use dynamic host port mapping, so multiple tasks from the
same service are allowed per container instance.
● It supports path-based routing and priority rules due to which multiple services can use the
same listener port on a single Application Load Balancer.
Amazon ECS benefits and integrations:
➢ It decreases time consumption by eliminating the need to install, operate, and scale
your own cluster management infrastructure. With API calls, Docker-enabled applications
can be launched and stopped.
➢ It powers other services such as Amazon SageMaker, AWS Batch, and Amazon Lex. It also
integrates with AWS App Mesh to provide rich observability, traffic controls, and security
features for applications.
Amazon Elastic Kubernetes Service(EKS)
What is Amazon Elastic Kubernetes Service(EKS)?
Amazon Elastic Kubernetes Service (Amazon EKS) is a service that enables users to manage
Kubernetes applications in the AWS cloud or on-premises. Any standard Kubernetes application
can be migrated to EKS without altering the code.
The Amazon EKS control plane consists of nodes that run Kubernetes software, such as
etcd and the Kubernetes API server. Amazon EKS runs a dedicated Kubernetes control plane
for each cluster, without sharing control plane infrastructure across clusters or AWS accounts.
To ensure high availability, Amazon EKS runs Kubernetes control plane instances across
multiple Availability Zones. It automatically replaces unhealthy control plane instances and
provides automated version upgrades and patching for the control plane.
The two methods for creating a new Kubernetes cluster with nodes in Amazon EKS:
● eksctl – a command-line utility for creating and managing Kubernetes clusters on
Amazon EKS, used together with kubectl.
● AWS Management Console and AWS CLI
There are several methods that an Amazon EKS cluster can use to schedule pods, using single
or combined node groups:
● Self-managed nodes - consist of one or more Amazon EC2 instances that are deployed
in an Amazon EC2 Auto Scaling group
● Amazon EKS Managed node groups - helps to automate the provisioning and lifecycle
management of nodes.
● AWS Fargate - run Kubernetes pods on AWS Fargate
Amazon Elastic Kubernetes Service is integrated with many AWS services for unique
capabilities, such as Elastic Load Balancing for load distribution, IAM for authentication, and
Amazon VPC for isolation.
Use Cases:
● Using Amazon EKS, Kubernetes clusters and applications can be managed across
hybrid environments.
● EKS with kubeflow can model machine learning workflows using the latest EC2
GPU-powered instances.
● Users can execute batch workloads on the EKS cluster using the Kubernetes Jobs API
across AWS compute services such as Amazon EC2, Fargate, and Spot Instances.
Price details:
● $0.10 per hour is charged for each Amazon EKS cluster created.
● Using EKS with EC2 - Charged for AWS resources (e.g. EC2 instances or EBS volumes).
● Using EKS with AWS Fargate - Charged for CPU and memory resources starting from
the time to download the container image until the Amazon EKS pod terminates.
Amazon Aurora
What is Amazon Aurora?
Aurora is a fully managed relational database service offered by AWS, compatible only with
PostgreSQL and MySQL. As per AWS, Aurora provides five times the throughput of standard
MySQL and three times the throughput of standard PostgreSQL.
Features:
● It is supported only in Regions that have a minimum of 3 Availability Zones.
● High availability of 99.99%. Data in Aurora is kept as 2 copies in each AZ, with a minimum
of 3 AZs, making a total of 6 copies.
● It can have up to 15 Read replicas (RDS has only 5).
● It can scale up to 128 TB per database instance.
● Aurora DB cluster comprises two instances:
○ Primary DB instance – It supports both read/write operations and one primary DB
instance is always present in the DB cluster.
○ Aurora Replica – It supports only read operations. Aurora automatically fails over to a
replica quickly in case the primary DB instance becomes unavailable.
Read replicas return the same results as the primary instance with a lag of not more than
100 ms.
● Data is highly secure as it resides in VPC. Encryption at rest is done through AWS KMS
and encryption in transit is done by SSL.
● Aurora Global Database – spans multiple AWS Regions for low-latency access across
the globe. It can also serve as a backup in case an entire Region suffers an outage
or disaster.
● Aurora Multi-Master – is a feature compatible only with the MySQL edition. It gives the
ability to scale out write operations over multiple AZs, so there is no single point of
failure in the cluster and applications can perform both reads and writes at any node.
● Aurora Serverless – gives you the flexibility to scale in and out based on database load.
The user only has to specify the minimum (2 GB of RAM) and maximum (488 GB of RAM)
capacity units. This feature of Aurora is highly beneficial for intermittent or
unpredictable workloads. It is available for both MySQL and PostgreSQL.
● Fault tolerance and self-healing – In Aurora, each set of data is replicated as six copies
across 3 AZs, so it can handle the loss of up to 2 copies without impacting write
availability and up to 3 copies without impacting read availability. Aurora storage is
also self-healing, meaning disks are continuously scanned for errors and repaired.
Best practices:
● If you are not sure about the workload of the database, prefer Aurora Serverless.
It is also a good fit when a team of developers and testers hits the database only
during particular hours of the day and usage remains minimal at night.
● If write operations and DDL are crucial requirements, choose Multi-Master Aurora for
MySQL. In this manner, all writer nodes are equally functional, and the failure of one
doesn't impact the others.
● Aurora Global Database is best for industries such as finance and gaming, where a
single database provides a global footprint and applications enjoy low-latency read
operations.
Pricing:
● There are no up-front fees.
● On-demand instances are costlier than reserved instances. There is no additional fee
for backup if the retention period is less than a day.
● Data transfer between Aurora DB instance and EC2 in the same AZ is free.
● All data transfer IN to Database is free of charge.
● Data transfer OUT of Database through the internet is chargeable if it exceeds 1
GB/month.
Amazon DocumentDB
What is Amazon DocumentDB?
DocumentDB is a fully managed document database service by AWS that supports MongoDB
workloads. It is highly recommended for storing, querying, and indexing JSON data.
Features:
● It is compatible with MongoDB versions 3.6 and 4.0.
● All on-premises MongoDB or EC2-hosted MongoDB databases can be migrated to DocumentDB
using DMS (Database Migration Service).
● All database patching is automated in a stipulated time interval.
● DocumentDB storage scales automatically in increments of 10 GB, up to a maximum of 64 TB.
● Provides up to 15 Read replicas with single-digit millisecond latency.
● All database instances are highly secure as they reside in VPCs which only allow a given set of
users to access through security group permissions.
● It supports role-based access control (RBAC).
● A minimum of 6 copies of the data is created across 3 Availability Zones, making it fault-tolerant.
● Self-healing – Data blocks and disks are continuously scanned and repaired automatically.
● All cluster snapshots are user-initiated and stored in S3 till explicitly deleted.
Best Practices:
● It reserves one-third of the RAM for its own services, so choose an instance type with enough
RAM so that performance and throughput are not impacted.
● Set up CloudWatch alerts to notify users when the database is reaching its maximum capacity.
Pricing:
● Pricing is based on the instance hours, I/O requests, and backup storage.
Amazon ElastiCache
What is Amazon ElastiCache?
ElastiCache is a fully managed in-memory data store. It significantly improves latency and
performance for read-heavy application workloads, since in-memory caches are faster than
disk-based databases. It works with both Redis and Memcached protocol-based engines.
Features:
● It is highly available: even when a data center is under maintenance or suffers an outage,
the data is still retrieved from the cache.
● Unlike databases, data is retrieved in a key-value pair fashion.
● Data is stored in nodes, each of which is a unit of network-attached RAM. Each node runs
its own Redis or Memcached protocol engine. Automatic replacement of failed nodes is
configured.
● Memcached features –
○ Data is volatile.
○ Supports only simple data-type.
○ Supports multi-threading.
○ Scaling can be done by adding or removing nodes.
○ Nodes can span in different Availability Zones.
○ Multi-AZ failover is not supported.
● Redis features –
○ Data is non-volatile.
○ Supports complex Data types like strings, hashes, and geospatial-indexes.
○ Doesn’t support multi-threading.
○ Scaling can be done by adding shards, not nodes. A shard is a collection of
primary nodes and read replicas.
○ Multi-AZ is possible by placing a read replica in another AZ.
○ In case of failover, traffic can be switched to the read replica in another AZ.
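Example: a hedged sketch of the key-value access pattern against a Redis-based cluster, using the third-party redis-py client (the endpoint host name is an assumed placeholder):

import redis

# Connect to the cluster's primary endpoint (assumed host name);
# ElastiCache exposes the standard Redis protocol on port 6379.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

# Cache a database result for 5 minutes, then read it back by key.
cache.set("user:42:profile", '{"name": "Ada"}', ex=300)
print(cache.get("user:42:profile"))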
Best practices:
● Storing web sessions: in web applications running behind a load balancer, use Redis
so that if one server is lost, the data can still be retrieved.
● Caching database results: use Memcached in front of any RDS database where repetitive
queries are fired, to improve latency and performance.
● Live polling and gaming dashboards: store frequently accessed data in Memcached to
fetch results quickly.
● A combination of RDS and ElastiCache can be utilized to improve the backend
architecture.
Pricing:
● Available only for on-demand and reserved nodes.
● Charged per node-hour.
● Partial node-hours are billed as full node-hours.
● No charge for data exchange between ElastiCache and EC2 within the same AZ.
https://aws.amazon.com/elasticache/pricing/
Amazon Keyspaces (for Apache Cassandra)
What is Amazon Keyspaces?
Keyspaces is an Apache Cassandra-compatible database service in AWS. It is fully managed by
AWS, highly available, and scalable; server management and patching are handled by Amazon.
It scales based on incoming traffic, with virtually unlimited storage and throughput.
Features:
● Keyspaces is compatible with Cassandra Query Language (CQL). So your application can be
easily migrated from on-premise to cloud.
● Two operation modes are available:
1. On-Demand capacity mode is used when the user is not certain about the incoming
load, so throughput and scaling are managed by Keyspaces itself. It is costlier, and
you pay only for the resources you use.
2. Provisioned capacity mode is used when you have predictable application traffic.
The user just needs to specify the maximum reads/writes per second in advance while
configuring the database. It is less costly.
● There is no upper limit for throughput and storage.
● Keyspaces is integrated with Cloudwatch to measure the performance of the database with
incoming traffic.
● Data is replicated across 3 Availability Zones for high durability.
● Point-in-time recovery (PITR) is available to recover data lost due to accidental deletes.
Data can be recovered to any second within the last 35 days.
Use Cases:
● Build Applications using open source Cassandra APIs and drivers. Users can use Java,
Python, .NET, Ruby, Perl.
● Highly recommended for applications that demand a low latency platform like trading.
● Use CloudTrail to audit DDL operations. It gives brief information on who accessed the
database, when, what services were used, and the response returned from AWS. Hackers
creeping past the database firewall can be detected this way.
Pricing:
● Users only pay for the read and write throughput, storage, and networking resources.
Amazon Neptune
What is Amazon Neptune?
Amazon Neptune is a graph database service used as a web service to build and run
applications that require connected datasets.
The graph database engine helps to store billions of connections and provides milliseconds
latency for querying them.
It offers a choice from graph models and languages for querying data.
● Property Graph (PG) model with Apache TinkerPop Gremlin graph traversal language,
● W3C standard Resource Description Framework (RDF) model with SPARQL Query
Language.
It is highly available across three AZs and automatically fails over to one of up to 15 low-latency read
replicas.
It provides fault-tolerant storage by replicating six copies of data across three Availability
Zones.
It provides continuous backup to Amazon S3 and point-in-time recovery from storage failures.
It automatically scales storage capacity and provides encryption at rest and in transit.
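As a sketch of querying the Property Graph model above with Gremlin, using the open-source gremlinpython driver (the cluster endpoint is hypothetical):

from gremlin_python.driver import client, serializer

# Neptune serves Gremlin over WebSockets on port 8182 at the /gremlin path.
gremlin = client.Client(
    "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# Count vertices, then fetch a small sample with their properties.
print(gremlin.submit("g.V().count()").all().result())
print(gremlin.submit("g.V().limit(3).valueMap(true)").all().result())
gremlin.close()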
Amazon RDS
What is Amazon RDS?
RDS (Relational Database Service) in AWS makes it easy to set up, operate, manage, and scale a
relational database in the cloud. It provides scalable capacity with a cost-efficient pricing option and
automates manual administrative tasks such as patching, backup setup, and hardware provisioning.
Engines supported by RDS are given below:
MySQL
● It is the most popular open-source DB in the world.
● Amazon RDS makes it easy to provision the DB in AWS Environment without worrying about
the physical infrastructure.
● In this way, you can focus on application development rather than infrastructure management.
MS SQL
● MS SQL is a database developed by Microsoft.
● Amazon allows you to provision the DB Instance with provisioned IOPS or Standard Storage.
MariaDB
● MariaDB is also an open-source DB developed by MySQL developers.
● Amazon RDS makes it easy to provision the DB in AWS Environment without worrying about
the physical infrastructure.
PostgreSQL
● Nowadays, PostgreSQL has become the preferred open-source relational DB, and many
enterprises have started using PostgreSQL-powered database engines.
Oracle
● Amazon RDS also provides a fully managed commercial database engine like Oracle.
● Amazon RDS makes it easy to provision the DB in AWS Environment without worrying about
the physical infrastructure.
● You can run Oracle DB Engine with two different licensing models – “License Included” and
“Bring-Your-Own-License (BYOL).”
Amazon Aurora
● It is the relational database engine developed by AWS only.
● It is a MySQL and PostgreSQL-compatible DB engine.
● Amazon claims that it is five times faster than the standard MySQL DB engine and around
three times faster than the PostgreSQL engine.
● The cost of Aurora is also lower than that of the other DB engines.
● In Amazon Aurora, you can create up to 15 read replicas instead of 5 in other databases.
Multi AZ Deployment
● Enabling multi-AZ deployment creates a Replica (Copy) of the database in different availability
zones in the same Region.
● Multi-AZ synchronously replicates the data to the standby instance in different AZ.
● Each AZ runs on physically different and independent infrastructure and is designed for high
reliability.
● Multi-AZ deployment is for disaster recovery, not for performance enhancement.
Read Replicas
● Read Replicas allow you to create one or more read-only copies of your database in the same
or different regions.
● Read Replica is mostly for performance enhancement. Read Replicas can now also be used with
Multi-AZ as a part of DR (disaster recovery), as shown in the sketch below.
● A Read Replica in another region can be used as a standby database in the event of a regional
failure/outage. It can also be promoted to be the production database.
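A minimal boto3 sketch of creating and later promoting a Read Replica (the instance identifiers are hypothetical):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of a hypothetical source instance. For a
# cross-Region replica, pass the source instance's full ARN and create
# the client in the destination Region instead.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",
    SourceDBInstanceIdentifier="mydb",
)

# Promoting the replica later detaches it from the source and makes it a
# standalone, writable instance (e.g., during a regional failover):
# rds.promote_read_replica(DBInstanceIdentifier="mydb-replica-1")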
Storage Type
● General Purpose (SSD): General Purpose storage is suitable for most database workloads; it
provides a baseline of 3 IOPS/GiB and the ability to burst to 3,000 IOPS.
● Provisioned IOPS (SSD): Provisioned IOPS storage is suitable for I/O-intensive database
workloads. I/O range is from 1,000 to 30,000 IOPS.
Monitoring
● By default, enhanced monitoring is disabled.
● Enabling enhanced monitoring incurs extra charges.
● Enhanced monitoring is not available in the AWS GovCloud(US) Region.
● Enhanced monitoring is not available for the instance class db.m1.small.
● Enhanced monitoring metrics include IOPS, Latency, Throughput, Queue Depth.
● Enhanced monitoring gathers information from an agent installed on the DB instance.
Backups & Restore
● The default backup retention period for automatic backups is 7 days if you use the console; for
the CLI and RDS API it is 1 day.
● Automatic backup can be retained for up to 35 days.
● The minimum Automatic backup retention period is 0 days, which will disable the automatic
backup for the instance.
● 100 Manual snapshots are allowed in a single region.
Charges:
You will be charged based on multiple factors:
● Active RDS Instances
● Storage
● Requests
● Backup Storage
● Enhanced monitoring
● Transfer Acceleration
● Data Transfer for cross-region replication
Amazon Redshift
What is Amazon Redshift?
Amazon Redshift is a fast, powerful, fully managed, petabyte-scale data warehouse service
in the cloud. The service is highly scalable to a petabyte or more for $1,000 per terabyte per year,
less than a tenth of the cost of most other data warehousing solutions.
Features:
● It employs multiple compression techniques and can often achieve significant compression
relative to traditional relational databases.
● It doesn’t require indexes or materialized views, so uses less space than traditional database
systems.
● Massively parallel processing (MPP): Amazon Redshift automatically distributes data and
query load across all nodes. Amazon Redshift makes it easy to add nodes to your data
warehouse and maintain fast query performance as data grows in the future.
● Automated backups are enabled by default with a 1-day retention period.
● The maximum backup retention period is 35 days.
● Redshift always maintains at least three copies of your data (the original and a replica on the
compute nodes, and a backup in Amazon S3).
● Redshift can also asynchronously replicate your snapshots to S3 in another region for disaster
recovery.
● It is only available in 1 AZ, but snapshots can be restored to a new AZ in the event of an outage.
Security Considerations
● Data encrypted in transit using SSL.
● Encrypted at rest using AES-256 encryption.
● By default, Redshift takes care of key management; alternatively you can:
○ Manage your own keys through HSM
○ Use AWS Key Management Service (KMS)
Use cases
● Copy data from EMR, S3, and DynamoDB into Redshift to power a custom business
intelligence tool. Using a third-party library, we can connect to and query Redshift for results.
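As a sketch of the third-party-library approach above, Redshift is reachable with standard PostgreSQL drivers such as psycopg2 (the endpoint, credentials, and sales table are hypothetical):

import psycopg2

# Hypothetical cluster endpoint; 5439 is the default Redshift port.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="secret")

with conn.cursor() as cur:
    # A typical BI-style aggregate; MPP spreads the work across all nodes.
    cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region;")
    for row in cur.fetchall():
        print(row)
conn.close()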
Pricing:
● Compute Node Hours - the total number of hours you run across all your compute nodes for the
billing period.
● You are billed for 1 unit per node per hour, so a 3-node data warehouse cluster running
persistently for an entire month would incur 2160 instance hours.
● You will not be charged for leader node hours, only compute nodes will incur charges.
Amazon WorkSpaces
What is Amazon WorkSpaces?
Amazon WorkSpaces is a managed service used to provision virtual Windows or Linux desktops
for users across the globe.
Benefits
AWS Amplify
Best Practices
● Use Version Control: Always use version control (e.g., Git) to track changes to your AWS
Amplify projects.
● Modularize Your App: Organize your application into smaller, manageable modules.
● Infrastructure as Code: Leverage the AWS Amplify CLI to define your infrastructure as code
(IaC).
● Environment Management: Use environment variables and separate configurations for
development, testing, and production environments.
● Continuous Integration/Continuous Deployment (CI/CD): Implement CI/CD pipelines to
automate the building, testing, and deployment of your application.
● Data Validation and Sanitization: Always validate and sanitize user input to prevent security
vulnerabilities like cross-site scripting (XSS) and SQL injection.
Use Case
● Personal Blog Website: You want to create a personal blog website to publish articles and
showcase your writing skills.
● E-commerce Mobile App: You're building an e-commerce mobile app that needs features like
user registration, product catalog, and real-time updates for inventory changes.
● Event Scheduling Web App: You want to create a web application for scheduling and
managing events for a local community organization.
Amazon API Gateway
What is Amazon API Gateway?
Amazon API Gateway is a service that creates, publishes, maintains, monitors, and secures
APIs at any scale.
● It helps to create synchronous microservices with load balancers, and together with AWS Lambda
it forms the app-facing part of the AWS serverless infrastructure.
● It handles the tasks involved in processing concurrent API calls.
● It combines with Amazon EC2, AWS Lambda or any web application (public or private
endpoints) to work as back-end services.
Regional endpoint:
● It reduces latency for requests that originate in the same Region. It can also be
configured with a CDN and protected by AWS WAF.
Private endpoint:
● It securely exposes the REST APIs to other services only within the VPC.
API Gateway supports the following access-control mechanisms:
● Resource-based policies
● IAM Permissions
● Lambda Authorizer (formerly Custom Authorizers)
● Cognito user pools
Features:
● It helps to create stateful (WebSocket) and stateless (HTTP and REST) APIs.
● It integrates with CloudTrail for logging and monitoring API usage and API changes.
● It integrates with CloudWatch metrics to monitor REST API execution and WebSocket API
execution.
● It integrates with AWS WAF to protect APIs against common web exploits.
● It integrates with AWS X-Ray for understanding and triaging performance latencies.
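For illustration, a boto3 sketch that "quick creates" an HTTP API fronting a Lambda function (the names and function ARN are hypothetical; the function also needs a resource-based permission allowing API Gateway to invoke it):

import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

# Quick create: API Gateway sets up the default route, integration,
# and stage for the target Lambda automatically.
api = apigw.create_api(
    Name="orders-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:123456789012:function:orders-handler",
)
print(api["ApiEndpoint"])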
Price details:
● You pay for API Caching as it is not eligible for the AWS Free Tier.
● API requests are not charged for authorization and authentication failures.
● Method calls that require API keys are not charged when those keys are missing or invalid.
● API Gateway-throttled and plan-throttled requests are not charged if the request rate exceeds
the predefined limits.
AWS IoT Analytics
What is AWS IoT Analytics?
AWS IoT Analytics streamlines the intricate process of analyzing extensive amounts of IoT data,
eliminating the need for constructing a complex and costly IoT analytics platform.
Features:
● Collect: AWS IoT Analytics seamlessly gathers data from diverse sources, including AWS
IoT Core and other sources like Amazon S3 and Amazon Kinesis, simplifying the
ingestion process.
● Process: It cleanses, filters, transforms, and enriches data using customizable Lambda
functions and logical operations, ensuring data accuracy and relevance for analysis.
● Store: The service stores both raw and processed data in an optimized time-series data
store, offering robust data management capabilities, including access control and data
retention policies.
● Analyze: Users can run ad hoc or scheduled SQL queries for quick insights and perform
time-series analysis to monitor device performance and predict maintenance issues.
● Hosted Notebooks: AWS IoT Analytics supports hosted Jupyter Notebooks for advanced
analytics and machine learning tasks, including statistical classification, LSTM for
time-series prediction, and K-means clustering for device segmentation.
● Automated Execution: The service automates the execution of custom containers and
Jupyter Notebooks, allowing users to perform continuous analysis on scheduled
intervals.
● Incremental Data Capture: AWS IoT Analytics captures incremental data since the last
analysis, optimizing analysis efficiency and cost by focusing only on new data.
● Visualization: Integration with Amazon QuickSight enables users to visualize data sets in
interactive dashboards, while embedded Jupyter Notebooks provide visualization
options within the AWS IoT Analytics console.
Use Cases:
● Contextual Data Enrichment: Agricultural operators enhance moisture sensor
data with predicted rainfall to optimize irrigation equipment efficiency.
● Predictive Maintenance: Utilize prebuilt templates for predictive maintenance
models, such as predicting heating and cooling system failures in cargo vehicles.
● Proactive Supply Replenishment: Monitor inventory levels in IoT-enabled vending
machines and automate accurate merchandise reordering when supplies are low.
● Process Efficiency Monitoring: Improve efficiency by monitoring IoT applications,
such as identifying optimal truck loads for efficient loading guidelines.
AWS IoT Core
Features
● It provides secure and bi-directional communication with all the devices, even when they
aren’t connected.
● It consists of a device gateway and a message broker that helps connect and process
messages and routes those messages to other devices or AWS endpoints.
● It helps developers to operate wireless LoRaWAN (low-power long-range Wide Area
Network) devices.
● It helps to create a persistent Device Shadow (a virtual version of devices) so that other
applications or devices can interact.
● It integrates with Amazon services like Amazon CloudWatch, AWS CloudTrail, Amazon
S3, Amazon DynamoDB, AWS Lambda, Amazon Kinesis, Amazon SageMaker, and
Amazon QuickSight to build IoT applications
AWS IoT Events
Features
● It detects events from IoT sensors such as temperature, motor voltage, motion detectors, and
humidity.
● It builds event monitoring applications in the AWS Cloud that can be accessed through the AWS
IoT Events console.
● It helps to create event logic using conditional statements and trigger alerts when an event
occurs.
● AWS IoT Events accepts data from many IoT sources like sensor devices, AWS IoT Core, and
AWS IoT Analytics.
AWS IoT Greengrass
What is AWS IoT Greengrass?
AWS IoT Greengrass is a cloud service that groups, deploys, and manages software for all
devices at once and enables edge devices to communicate securely.
Features
● It is used on multiple IoT devices in homes, vehicles, factories, and businesses.
● It provides a pub/sub message manager that stores messages as a buffer to preserve
them in the cloud.
● The Greengrass Core is a device that enables the communication between AWS IoT Core
and the AWS IoT Greengrass.
● Devices with IoT Greengrass can process data streams without being online.
● It provides different programming languages, open-source software, and development
environments to develop and test IoT applications on specific hardware.
● It provides encryption and authentication for device data for cloud communications.
● It provides AWS Lambda functions and Docker containers as an environment for code
execution.
Amazon Polly
What is Amazon Polly?
Amazon Polly is a cloud service used to convert text into speech.
Features
● There are no setup costs; you pay only for the text converted.
● It supports many different languages, and Neural Text-to-Speech (NTTS) voices to create
speech-enabled applications.
● It offers caching and replays of Amazon Polly’s generated speech in a format like MP3.
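A minimal boto3 sketch of converting text to speech and saving the MP3 stream (the voice and file name are just examples):

import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Synthesize a short phrase with a neural (NTTS) voice.
resp = polly.synthesize_speech(
    Text="Your order has shipped.",
    OutputFormat="mp3",
    VoiceId="Joanna",
    Engine="neural",
)

# The audio arrives as a stream that can be cached and replayed.
with open("speech.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())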
Amazon SageMaker
Amazon SageMaker is a cloud service that allows developers to prepare, build, train, deploy and
manage machine learning models.
Features
● It provides a secure and scalable environment to deploy a model using SageMaker
Studio or the SageMaker console.
● It has pre-installed machine learning algorithms that are optimized to deliver up to 10x
performance.
● It scales up to petabytes level to train models and manages all the underlying
infrastructure.
● Amazon SageMaker notebook instances are created using Jupyter notebooks to write
code to train and validate the models.
● Amazon SageMaker gets billed in seconds based on the amount of time required to
build, train, and deploy machine learning models.
Amazon Comprehend
What is Amazon Comprehend?
Amazon Comprehend employs natural language processing (NLP) to extract insights from
document content. It generates insights by identifying entities, key phrases, language,
sentiments, and other common elements within documents. Utilize Amazon Comprehend to
develop new products that leverage document structure understanding.
Features:
● Discover valuable insights from text in various sources such as documents, customer
support tickets, product reviews, emails, and social media feeds.
● Secure and manage access to sensitive data by identifying and redacting Personally
Identifiable Information (PII) from documents.
Use cases:
● Analyze business and call center data
● Index and search through product reviews
● Manage legal briefs
● Handle financial document processing
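As a sketch of the review-analysis use case above, using boto3 (the sample text is made up):

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
review = "The checkout was quick, but delivery took two weeks."

# Detect overall sentiment and the entities mentioned in the review.
sentiment = comprehend.detect_sentiment(Text=review, LanguageCode="en")
entities = comprehend.detect_entities(Text=review, LanguageCode="en")

print(sentiment["Sentiment"])                      # e.g. MIXED
print([e["Text"] for e in entities["Entities"]])   # e.g. ['two weeks']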
Pricing:
1. Comprehend services are priced based on 100-character units with a minimum
charge of 3 units per request for various APIs.
2. Custom Comprehend entails additional charges for model training and
management.
3. Topic modeling charges are determined by the total size of documents
processed per job.
Links- https://aws.amazon.com/comprehend/
https://aws.amazon.com/comprehend/pricing/
Amazon Rekognition
What is Amazon Rekognition?
Amazon Rekognition is a cloud-based service that employs advanced computer vision
technology to analyze images and videos without requiring expertise in machine
learning. Its intuitive API allows for quick analysis of images and videos stored in
Amazon S3, offering features like object and text detection, identifying unsafe content,
and analyzing faces. With its face recognition capabilities, Rekognition enables various
applications such as user verification, cataloguing, people counting, and public safety.
Features:
Image Analysis:
● Object, Scene, and Concept Detection: Detect and classify various objects,
scenes, concepts, and celebrities present in images.
● Text Detection: Identify both printed and handwritten text in images, supporting
multiple languages.
Video Analysis:
● Object, Scene, and Concept Detection: Categorize objects, scenes, concepts, and
celebrities appearing in videos.
● Text Detection: Recognize printed and handwritten text in videos in different
languages.
● People Tracking: Monitor individuals identified in videos as they move across
frames.
● Facial Analysis: Detect, analyze, and compare faces in both live streaming and
recorded videos.
Use cases:
● Simplify content retrieval with Amazon Rekognition's automatic analysis,
enabling easy searchability for images and videos.
● Enhance security with Rekognition's face liveness detection, preventing identity
spoofing beyond traditional passwords.
● Quickly locate individuals across your visual content using Rekognition's efficient
face search feature.
● Ensure content safety with Rekognition's ability to detect explicit, inappropriate,
and violent content, facilitating proactive filtering.
● Benefit from HIPAA Eligibility, making Amazon Rekognition suitable for handling
protected health information in healthcare applications.
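For illustration, a boto3 sketch of label detection on an image in S3 (the bucket and key are hypothetical):

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect objects and scenes, keeping only reasonably confident predictions.
resp = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-media-bucket", "Name": "photos/dog.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)
for label in resp["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))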
Pricing:
With Amazon Rekognition, there are four different types of usage, each with its own pricing
details.
Links- https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html
https://aws.amazon.com/rekognition/pricing/
https://aws.amazon.com/rekognition/
Amazon Lex
What Is Amazon Lex?
Amazon Lex, an AWS service, enables developers to build chatbots with natural
conversation capabilities, leveraging the technology behind Alexa. With seamless
integration and advanced language understanding, Lex simplifies speech recognition
and facilitates the creation of engaging chatbots for intuitive user interactions.
Features:
● Effortlessly integrate AI that comprehends intent, retains context, and automates
basic tasks across multiple languages.
● Design and deploy omnichannel conversational AI with a single click, without the
need to manage hardware or infrastructure.
● Seamlessly connect with other AWS services to access data, execute business
logic, monitor performance, and more.
● Pay only for speech and text requests without any upfront costs or minimum
fees.
Use Cases:
● Enable virtual agents and voice assistants: Provide users with self-service
options through virtual contact center agents and interactive voice
response (IVR), allowing them to perform tasks autonomously, like
scheduling appointments or changing passwords.
● Automate responses to FAQs: Develop conversational solutions that
answer common inquiries, enhancing Connect & Lex conversation flows
with natural language search for frequently asked questions powered by
Amazon Kendra.
● Improve productivity with application bots: Streamline user tasks within
applications using efficient chatbots, seamlessly integrating with
enterprise software through AWS Lambda and maintaining precise access
control via IAM.
● Extract insights from transcripts: Design chatbots using contact center
transcripts to maximize captured information, reducing design time and
expediting bot deployment from weeks to hours.
Pricing:
Charges for request and response interaction in Amazon Lex are based on the number
of speech or text requests processed by the bot, with speech requests at $0.004 each
and text requests at $0.00075, determining the total monthly charges.
Links- https://aws.amazon.com/lex/
https://aws.amazon.com/lex/pricing/
https://docs.aws.amazon.com/lex/latest/dg/how-it-works.html
Amazon Transcribe
What is Amazon Transcribe?
Amazon Transcribe is a cloud service that uses automatic speech recognition to convert speech to text.
Features
● It is best suited for customer service calls, live broadcasts, and media subtitling.
● Amazon Transcribe Medical is used to convert medical speech to text for clinical
documentation.
● It automatically produces text of a quality comparable to manual transcription. For
Transcribe, charges are applied based on the seconds of speech converted per month.
AWS CloudFormation
What is AWS CloudFormation?
AWS CloudFormation is a service that collects AWS and third-party resources and manages
them throughout their life cycles, by launching them together as a stack.
A template is used to create, update, and delete an entire stack as a single unit, without
managing resources individually.
It provides the capability to reuse the template to set the resources easily and repeatedly. It can
be integrated with AWS IAM for security.
Templates -
A JSON or YAML formatted text file used for building AWS resources.
Stack -
It is a single unit of resources.
Change sets -
It allows checking how any change to a resource might impact the running resources.
Stacks can be created using the AWS CloudFormation console and AWS Command Line
Interface (CLI).
Stack updates:
First the changes are submitted and compared with the current state of the stack and only the
changed resources get updated.
There are two methods for updating stacks:
● Direct update - when there is a need to quickly deploy the updates.
● Creating and executing change sets - they are JSON files, providing a preview option for the
changes to be applied.
StackSets are responsible for safely provisioning, updating, or deleting stacks across multiple accounts and Regions.
Nested Stacks are stacks created within another stack by using the
AWS::CloudFormation::Stack resource.
When there is a need for common resources in the template, nested stacks can be used by
declaring the same components once instead of creating them multiple times. The main
stack is termed the parent stack, and the belonging stacks are termed child stacks, which can
be referenced by using the ‘!Ref’ intrinsic function.
Price details:
● AWS does not charge for using AWS CloudFormation, charges are applied for the services that
the CloudFormation template comprises.
● AWS CloudFormation supports the following namespaces: AWS::*, Alexa::*, and Custom::*. If
any other namespace is used, charges are applied per handler operation.
● Free tier - 1000 handler operations per month per account
● Handler operation - $0.0009 per handler operation
Example:
CloudFormation template for creating EC2 instance
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: 1234xyz
    KeyName: aws-keypair
    InstanceType: t2.micro
    SecurityGroups:
      - !Ref EC2SecurityGroup
    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: 50
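The template above could also be launched from code; a boto3 sketch, assuming the template is saved locally as ec2.yaml:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack and block until every resource in it is created.
with open("ec2.yaml") as f:
    cfn.create_stack(StackName="demo-ec2", TemplateBody=f.read())
cfn.get_waiter("stack_create_complete").wait(StackName="demo-ec2")

# Deleting the stack later removes all of its resources as a single unit:
# cfn.delete_stack(StackName="demo-ec2")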
AWS CloudTrail
What is AWS CloudTrail?
AWS CloudTrail is a service that records account activity and API calls across AWS services as events.
CloudTrail events of the past 90 days recorded by CloudTrail can be viewed in the CloudTrail
console and can be downloaded in CSV or JSON file.
Trail log files can be aggregated from multiple accounts to a single bucket and can be shared
between accounts.
AWS CloudTrail Insights enables AWS users to identify and respond to unusual activities of API
calls by analyzing CloudTrail management events.
{"Records": [{
"eventVersion": "1.0", "userIdentity": {
"type": "IAMUser", "principalId":
"PR_ID", "arn": "arn:aws:iam::210123456789:user/Rohit",
"accountId": "210123456789",
"accessKeyId": "KEY_ID",
"userName": "Rohit"
"eventTime": "2021-01-24T21:18:50Z",
"eventSource": "iam.amazonaws.com",
eventName": "CreateUser",
"awsRegion": "ap-south-2",
"sourceIPAddress": "176.1.0.1",
"userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7",
"requestParameters": {"userName": "Nayan"},
"responseElements": {"user": {
"createDate": "Jan 24, 2021 9:18:50 PM",
– Back to Index – 85
"userName": "Nayan",
"arn": "arn:aws:iam::128x:user/Nayan",
"path": "/", "userId": "12xyz"
}}
}]}
CloudWatch monitors and manages the activity of AWS services and resources, reporting on
their health and performance, whereas CloudTrail records logs of all actions performed
inside the AWS environment.
Price details:
● Charges are applied based on the usage of Amazon S3.
● Charges are applied based on the number of events analyzed in the region.
● The first copy of Management events within a region is free, but charges are applied for
additional copies of management events at $2.00 per 100,000 events.
● Data events are charged at $0.10 per 100,000 events.
● CloudTrail Insights events provide visibility into unusual activity and are charged at $0.35 per
100,000 write management events analyzed.
Amazon CloudWatch
What is Amazon CloudWatch?
Amazon CloudWatch is a service that helps to monitor and manage services by providing data
and actionable insights for AWS applications and infrastructure resources.
It monitors AWS resources such as Amazon RDS DB instances, Amazon EC2 instances, and Amazon
DynamoDB tables, as well as any log files generated by the applications.
It collects monitoring data in the form of logs, metrics, and events from AWS resources,
applications, and services that run on AWS and on-premises servers. Some metrics are
displayed on the home page of the CloudWatch console. Additional custom dashboards to
display metrics can be created by the user.
Alarms can be created using CloudWatch Alarms that monitor metrics and send notifications or
make automatic changes to the resources based on actions whenever a threshold is breached.
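A minimal boto3 sketch of such an alarm (the instance ID and SNS topic ARN are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the instance averages above 80% CPU for two consecutive
# 5-minute periods; the SNS topic receives the notification.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0abc123",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc123"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)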
CloudWatch Container Insights are used to collect and summarize metrics and logs from
containerized applications. These Insights are available for Amazon ECS, Amazon EKS, and
Kubernetes platforms on Amazon EC2.
CloudWatch Lambda Insights are used to collect and summarize system-level metrics including
CPU time, memory, disk, and network for serverless applications running on AWS Lambda.
Amazon CloudWatch Logs
Features
● Log Collection: CloudWatch Logs allows you to collect log data from a wide range of AWS
resources and services, including Amazon EC2 instances, Lambda functions, AWS CloudTrail,
AWS Elastic Beanstalk, and custom applications running on AWS or on-premises.
● Log Storage: It provides a secure and durable repository for your log data.
● Real-time Monitoring: You can set up CloudWatch Alarms to monitor log data in real time and
trigger notifications or automated actions when specific log events or patterns are detected.
● Log Queries: CloudWatch Logs Insights allows you to run ad-hoc queries on your log data to
extract valuable information and troubleshoot issues. You can use a simple query language to
filter and analyze logs (see the sketch after this list).
● Log Retention: You can define retention policies for your log data, specifying how long you
want to retain logs before they are automatically archived or deleted. This helps in cost
management and compliance with data retention policies.
● Log Streams: Within a log group, log data is organized into log streams, which represent
individual sources of log data. This organization makes it easy to distinguish between different
sources of log data.
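A sketch of a Logs Insights query run from boto3 (the log group name is hypothetical):

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Query the last hour of logs for ERROR lines, then poll for results.
now = int(time.time())
q = logs.start_query(
    logGroupName="/aws/lambda/orders-handler",
    startTime=now - 3600,
    endTime=now,
    queryString="fields @timestamp, @message"
                " | filter @message like /ERROR/ | limit 20",
)
while True:
    resp = logs.get_query_results(queryId=q["queryId"])
    if resp["status"] not in ("Scheduled", "Running"):
        break
    time.sleep(1)
print(resp["results"])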
Use Cases:
● Application Debugging: Developers want to troubleshoot and debug issues in a
microservices-based application.
● Cost Monitoring for EC2 Instances: An organization wants to track and control costs
associated with their Amazon EC2 instances.
● Security and Compliance Auditing: A company needs to monitor and audit user activities
across its AWS environment to ensure compliance with security policies.
AWS Compute Optimizer
Features:
● It utilizes AI- and ML-driven analytics to optimize workload sizes based on your specific
workload needs, resulting in potential cost reductions of up to 25%.
● It enhances savings and gives insight into memory usage by activating Amazon
CloudWatch metrics to monitor resource utilization.
● It improves cost efficiency by automating license optimization recommendations
post-authentication to optimize licensing expenses.
Use Cases:
● Utilize External Metrics: Enhance EC2 instance and Auto Scaling group optimization
by leveraging historical data and third-party metrics from your Application Performance
Monitoring (APM) tools.
● Facilitate Migration to AWS Graviton CPUs: Identify EC2 workloads that offer
significant benefits with minimal migration effort when transitioning to AWS Graviton
CPUs.
AWS Config
What is AWS Config?
AWS Config is a service that continuously monitors and evaluates the configurations of the AWS
resources (services).
It helps to view configuration changes performed over a specific period of time using AWS
Config console and AWS CLI.
It evaluates AWS resource configurations based on specific settings and creates a snapshot of
the configurations to provide a complete inventory of resources in the account.
It uses Config rules to evaluate configuration settings of the AWS resources. AWS Config also
checks any condition violation in the rules.
There can be up to 150 AWS Config rules per Region. Rules are of two types:
● Managed Rules
● Custom Rules
It is integrated with AWS IAM, to create permission policies attached to the IAM role, Amazon S3
buckets, and Amazon Simple Notification Service (Amazon SNS) topics.
It is also integrated with AWS CloudTrail, which provides a record of user actions or an AWS
Service by capturing all API calls as events in AWS Config.
AWS Config provides an aggregator (a resource) to collect AWS Config configuration and
compliance data from:
● Multiple accounts and multiple regions.
● Single account and multiple regions.
● An organization in AWS Organizations
● The Accounts in the organization which have AWS Config enabled.
Use Cases:
● It enables the user to code custom rules in AWS Lambda that define best-practice guidelines for
resource configurations (see the sketch after this list). Users can also automate the assessment of
resource configuration changes to ensure compliance and self-governance across your AWS
infrastructure.
● Data from AWS Config allows users to continuously monitor the configurations for potential
security weaknesses. After any security alert, Config allows the user to review the configuration
history and understand the risk factor.
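Following the custom-rules use case above, a minimal sketch of the Lambda function behind a custom Config rule; the "t2.micro only" policy is a deliberately simple example:

import json
import boto3

config = boto3.client("config")

def handler(event, context):
    # AWS Config invokes the rule with the changed resource's
    # configuration item embedded in the invoking event.
    item = json.loads(event["invokingEvent"])["configurationItem"]
    compliant = item["configuration"].get("instanceType") == "t2.micro"

    # Report the evaluation back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )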
Price details:
● Charges are applied based on the total number of configuration items recorded at the rate of
$0.003 per configuration item recorded per AWS Region in the AWS account.
● For Config rules, charges are applied based on the number of AWS Config rules evaluated.
● Additional charges are applied if AWS Config integrates with other AWS Services at a standard
rate.
AWS Health Dashboard
What is AWS Health Dashboard?
AWS Health Dashboard keeps you informed about service events, scheduled modifications, and
account notifications, enabling effective management and action-taking. Access your
personalized health information and receive event updates through Amazon EventBridge or by
logging into the AWS Health Dashboard. Additionally, integrate AWS Health into your workflows
programmatically via the AWS Health API, accessible with AWS Premium Support.
Features:
● AWS Health serves as the central hub for event data, seamlessly integrating with over 200
AWS services, ensuring comprehensive coverage for both operational incidents and planned
changes.
● Get actionable insights promptly to troubleshoot issues and prepare for upcoming changes
effectively, facilitating swift resolution and proactive management.
● With AWS Health integrated into AWS Organizations, gain a consolidated view of service
health across your organization, streamlining operational management across teams.
● Effortlessly receive AWS Health events via Amazon EventBridge or programmatically integrate
with the AWS Health API, along with pre-built integrations with IT Service Management (ITSM)
tools for enhanced automation and efficiency.
Use Cases:
● Receive Proactive Notifications: Stay informed with timely alerts about events
impacting your resources, enabling you to proactively address any potential impact and
minimize disruptions.
● Prepare for Lifecycle Events: Gain visibility into upcoming planned lifecycle events and
monitor the progress of actions taken by your team at the resource level to ensure
uninterrupted operations of your applications.
● Efficient Event Monitoring: Streamline the monitoring and tracking of AWS Health
events across your organization using programmatic methods or pre-built integrations
with popular IT Service Management (ITSM) tools.
AWS Control Tower
What is AWS Control Tower?
AWS Control Tower is an extension to AWS Organizations providing additional controls. AWS
Control Tower helps create a Landing Zone which is a well-architected Multi-Account baseline
based on AWS best practices. An AWS Organization will be created if it does not already exist.
Features
● As a part of the Landing Zone, Control Tower sets up a series of OUs - Security OU, Sandbox
OU, and Production OU.
● Within the Security OU, the Control Tower creates the Audit & Log Archive accounts.
● The Sandbox & Production OUs does not contain any default accounts. Accounts related to
Development and Production environments can be added to these OUs.
● Control Tower integrates with AWS IAM Identity Center. The directory sources for SSO can be
Identity Center directories (default), SAML IdPs, and Microsoft AD.
● The Root user in the Management Account can perform actions that are disallowed by
Guardrails similar to AWS Organizations where SCPs cannot affect the Root user in the
Management Account.
● Control Tower comes with a Dashboard providing oversights into the Landing Zone and
central administrative views across all Accounts, OUs, Guardrails & policies.
● Control Tower offers Account Factory which is a configurable Account Template for
standardizing provisioning of new Accounts with Pre-approved Account configurations.
Use cases
● AWS Control Tower provides two configuration options
○ Launch AWS Control Tower in a new AWS Organization.
○ Launch AWS Control Tower in an existing AWS Organization.
● Guardrails created by AWS Control Tower for governance & compliance fall under the
following categories
○ Preventive Guardrails - These are based on SCPs that disallow certain API actions.
○ Detective Guardrails - Implemented using AWS Config & Lambda functions that
monitor & govern compliance.
AWS License Manager
What is AWS License Manager?
● AWS License Manager is a service that manages software licenses in AWS and on-premises
environments from vendors such as Microsoft, SAP, Oracle, and IBM.
● It supports Bring-Your-Own-License (BYOL) feature which means that users can manage their
existing licenses for third-party workloads (Microsoft Windows Server, SQL Server) to AWS.
● It enables administrators to create customized licensing rules that help to prevent licensing
violations (using more licenses than the agreement).
● The rules operate by stopping the instance from launching or by notifying administrators
about the infringement (violation of the licensing agreement).
● Administrators use rule-based controls on the consumption of licenses, to set limits on new
and existing cloud deployments.
● Hard limit - does not allow the launch of non-compliant instances
● Soft limit - allows the launch of non-compliant instances but sends an alert to the administrators
● It provides control and visibility of all the licenses to the administrators with the help of the
AWS License Manager dashboard.
● It allows administrators to specify Dedicated Host management preferences for allocation
and capacity utilization.
● AWS License Manager’s managed entitlements provide built-in controls to software vendors
(ISVs) and administrators so that they can assign licenses to approved users and workloads.
● AWS Systems Manager can manage licenses on physical or virtual servers hosted outside of
AWS using AWS License Manager.
● AWS Systems Manager helps to discover software running on existing EC2 instances and then
rules can be attached and validated in EC2 instances allowing the licenses to be tracked using
the License Manager’s dashboard.
● AWS Organizations along with AWS License Manager helps to allow cross-account discovery
of computing resources in the organization by using service-linked roles and enabling trusted
access between License Manager and Organizations.
AWS License Manager is integrated with the following services:
● AWS Marketplace
● Amazon EC2
● Amazon RDS
● AWS Systems Manager
● AWS Identity and Access Management (IAM)
● AWS Organizations
● AWS CloudFormation
● AWS X-Ray
Price details:
● Charges are applied at normal AWS rates only for the AWS resources integrated with AWS
License Manager.
AWS Management Console
What is AWS Management Console?
AWS Management Console is a web application that consists of many service consoles for
managing Amazon Web Services.
It can be visible at the time a user first signs in. It provides access to other service consoles and
a user interface for exploring AWS.
AWS Management Console provides a Services option on the navigation bar that allows
choosing services from the Recently visited list or the All services list.
On the navigation bar, there is a Search box to search for any AWS service by entering all or part of
the name of the service. The Console is also available as an app for Android and iOS, with
maximized horizontal and vertical space and larger buttons for a better touch experience.
AWS Organizations
What are AWS Organizations?
AWS Organizations is a global service that enables users to consolidate and manage multiple
AWS accounts into an organization.
It includes account management and consolidated billing capabilities that help to better meet the
budgetary and security needs of the business.
● The main account is the management account – it cannot be changed.
● Other accounts are member accounts that can only be part of a single organization.
● It integrates with the following services:
○ AWS CloudTrail - Manages auditing and logs all events from accounts.
○ AWS Backup - Monitor backup requirements.
○ AWS Control Tower - to establish cross-account security audits and view policies
applied across accounts.
○ Amazon GuardDuty - Managed security services, such as detecting threats.
○ AWS Resource Access Manager (RAM) - Can reduce resource duplication by sharing
critical resources within the organization.
● Steps to be followed for migrating a member account:
○ Remove the member account from the old Organization.
○ Send an invitation to the member account from the new Organization.
○ Accept the invitation to the new Organization from the member account.
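The invitation and acceptance steps above can also be scripted; a boto3 sketch with a hypothetical member account ID:

import boto3

# Run from the new organization's management account.
org = boto3.client("organizations")
org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"})

# The member account then accepts the pending handshake:
# member = boto3.client("organizations")   # member-account credentials
# hs = member.list_handshakes_for_account()["Handshakes"][0]
# member.accept_handshake(HandshakeId=hs["Id"])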
Price details:
● AWS Organizations is free. Charges are applied to the usage of other AWS resources.
● The management account is responsible for paying charges of all resources used by the
accounts in the organization.
● AWS Organizations provides consolidated billing that combines the usage of resources from
all accounts, and AWS allocates each member account a portion of the overall volume discount
based on the account's usage.
AWS Systems Manager
What is AWS Systems Manager?
AWS Systems Manager is a service that helps users to manage EC2 and on-premises systems
at scale. It not only gives insights about the state of the infrastructure but also detects problems
easily.
Additionally, patching can be automated for enhanced compliance. This AWS service works for
both Windows and Linux operating systems.
Features:
● Easily integrated with CloudWatch metrics/dashboards and AWS Config.
● It helps to discover and audit the software installed.
● Compliance management
● We can group more than 100 resource types into applications, business units, and
environments.
● It helps to view instance information such as operating system patch levels and installed
software, and to check compliance with the desired state.
● Associate configurations with resources and find out the discrepancies.
● Distribute multiple software versions safely across the instances.
● Improve security by running commands or maintaining scripts in a secure, auditable way.
● Patch your instances on a schedule to keep them compliant.
● Helps managers to automate workflows.
● It helps to reduce errors by securely applying configurable parameters into centralized service.
AWS Trusted Advisor
Use cases:
● Optimization of cost & efficiency - Trusted Advisor helps identify resources that are not used
to capacity or idle resources and provides recommendations to lower costs.
● Address Security Gaps - Trusted Advisor performs Security checks of your AWS environment
based on security best practices. It flags off errors or warnings depending on the severity of the
security threat e.g. Open SG/NACL ports for unrestricted external user access, and open access
permissions for S3 buckets in Accounts.
● Performance Improvement - Trusted Advisor checks for usage & configuration of your AWS
resources and provides recommendations that can improve performance e.g. it can check for
Provisioned IOPS EBS volumes on EC2 instances that are not EBS-optimized.
AWS Application Discovery Service
Amazon Web Services Application Discovery Service (Application Discovery Service) helps you
plan application migration projects. It automatically identifies servers, virtual machines (VMs),
and network dependencies in your on-premises data centers.
Features:
● Agentless discovery using Amazon Web Services Application Discovery Service
Agentless Collector (Agentless Collector), which doesn't require you to install an agent
on each host.
● Agent-based discovery using the Amazon Web Services Application Discovery Agent
(Application Discovery Agent), which you install on one or more hosts in your data center; it
collects a richer set of data than agentless discovery.
● Amazon Web Services Partner Network (APN) solutions integrate with Application
Discovery Service, enabling you to import details of your on-premises environment
directly into Amazon Web Services Migration Hub (Migration Hub) without using
Agentless Collector or Application Discovery Agent.
Use cases:
Pricing:
You can use the AWS Application Discovery Service to discover your on-premises servers and plan
your migrations at no charge.
You only pay for the AWS resources (e.g., Amazon S3, Amazon Athena, or Amazon Kinesis Firehose)
that are provisioned to store your on-premises data. You only pay for what you use, as you use it;
there are no minimum fees and no upfront commitments.
AWS Database Migration Service (AWS DMS)
It does not stop the running application while performing the migration of databases, resulting
in downtime minimization.
It performs homogeneous as well as heterogeneous migrations between different database
platforms, for example:
MySQL → MySQL (homogeneous migration)
MySQL → Amazon Aurora (heterogeneous migration)
AWS DMS supports the following data sources and targets engines for migration:
● Sources: Oracle, Microsoft SQL Server, PostgreSQL, Db2 LUW, SAP, MySQL, MariaDB,
MongoDB, and Amazon Aurora.
● Targets: Oracle, Microsoft SQL Server, PostgreSQL, SAP ASE, MySQL, Amazon Redshift,
Amazon S3, and Amazon DynamoDB.
It performs all the management steps required during the migration, such as monitoring, scaling,
error handling, network connectivity, replicating during failure, and software patching.
AWS DMS with AWS Schema Conversion Tool (AWS SCT) helps to perform heterogeneous
migration.
AWS DataSync
Features
● Data movement workloads using AWS DataSync support migration scheduling, bandwidth
throttling, task filtering, and logging.
● AWS DataSync provides enhanced performance using compression, and parallel transfers for
transferring data at speed.
● AWS DataSync supports In-Flight encryption using TLS and encryption at rest.
● AWS DataSync provides capabilities for Data Integrity Verification ensuring that all data is
transferred successfully.
● AWS DataSync integrates with AWS Management tools like CloudWatch, CloudTrail, and
EventBridge.
● With DataSync, you only pay for the data you transfer without any minimum cost.
● AWS DataSync can copy data to and from Amazon S3 buckets, Amazon EFS file systems, and
all Amazon FSx file system types.
● AWS DataSync supports Internet, VPN, and Direct Connect to transfer data between
On-premises data centers, Cloud environments & AWS
Use cases
● Application data migration from on-premises storage systems like Windows Server,
NAS file systems, and object storage to AWS.
● Archival of On-premises storage data to AWS to free capacity & reduce costs for continuously
investing in storage infrastructure.
● Continuous replication of data present On-premises or on existing Cloud platforms for Data
Protection and Disaster Recovery
Best Practices
● In general, when planning a data migration, evaluate the available migration tools, check the
available bandwidth for online migration, and understand the source and destination migration
data stores.
● For using DataSync to transfer data from On-premises storage to AWS, an Agent needs to be
deployed and activated at On-premises locations. Use the Agent’s local console as a tool for
accessing various configurations
Pricing:
● AWS Migration Hub is free for collecting and storing discovery data, planning, or
tracking migrations to AWS in your home region.
● Costs for migration tools and AWS resource consumption are the user's
responsibility.
● Refactor Spaces, an optional feature, incurs usage-based charges without
upfront fees, based on hours of refactor environments and API requests.
● Users receive 2,160 free environment hours per month for 90 days, allowing for
running 3 Refactor Spaces environments free for 3 months.
● After this period, the price is $0.028 per environment per hour ($20 per month per
environment if run continuously).
● The service also costs $0.000002 per API request, with 500,000 API requests
free per month included in the AWS Free Tier indefinitely.
AWS Transfer Family
Features
● AWS Transfer Family provides a fully managed endpoint for transferring files into and out of
S3, EFS.
● The Secure File Transfer Protocol (SFTP) provides file transfer over SSH.
● File Transfer Protocol over SSL (FTPS) is FTP over a TLS-encrypted channel.
● Plain File Transfer Protocol (FTP) does not use a secure channel for transferring files.
● AWS Transfer Family exhibits high availability across the globe.
● AWS Transfer Family provides compliance with regulations within your Region.
● Using a pay-as-you-use model, the AWS Transfer Family service becomes cost-effective and is
simple to use.
● AWS Transfer Family has the ability to use custom Identity Providers using AWS API Gateway
& Lambda.
Use cases
● IAM Roles are used to grant access to S3 buckets for file transfer clients in a secure way.
● Users can use Route 53 to migrate an existing File Transfer hostname for use in AWS.
● SFTP & FTPS protocols can be set up to be accessible from the public internet while FTP is
limited for access from inside a VPC using VPC endpoints.
The seven common migration strategies (the 7 Rs) are:
● Retire
● Retain
● Rehost
● Relocate
● Repurchase
● Replatform
● Refactor or re-architect
Features:
● For large migrations, common strategies include Rehost, Replatform, Relocate, and Retire, as
they offer simpler and more efficient migration paths compared to Refactor, which involves
modernizing applications during migration and is more complex to manage.
● Rehosting, relocating, or replatforming applications initially, and then considering
modernization post-migration, is recommended for large-scale migrations to streamline the
process and reduce complexity.
● Choosing the right migration strategies is crucial for large-scale migrations and should be
based on careful assessment during the mobilize phase or initial portfolio evaluation. Each
strategy has its own set of use cases and considerations for implementation.
● Retain: This strategy involves keeping applications in the source environment
temporarily or for future migration, without making immediate changes.
● Rehost: Often referred to as "lift and shift," applications are moved to the AWS Cloud
without modifications.
● Relocate: This strategy involves moving instances or objects within the AWS
environment, such as to a different VPC, Region, or AWS account.
● Repurchase: Also known as "drop and shop," this strategy replaces the existing
application with a new version or product offering greater business value, such as
improved accessibility, maintenance-free infrastructure, and pay-as-you-go pricing.
● Replatform: This strategy, also known as "lift, tinker, and shift" or "lift and
reshape," involves migrating the application to the cloud while optimizing it for
efficiency, cost reduction, or leveraging cloud capabilities.
● Refactor or re-architect: This strategy focuses on modifying the application's
architecture to fully utilize cloud-native features, enhancing agility, performance,
and scalability. It's driven by business needs to scale, accelerate releases, and
reduce costs.
AWS Application Migration Service (MGN)
Features
● Transition applications from various source infrastructures running supported
operating systems seamlessly.
● Enhance applications during the migration process by incorporating features like
disaster recovery and converting operating systems or licenses.
● Ensure uninterrupted business operations while replicating applications.
● Minimize expenses by utilizing a single tool capable of handling a diverse range of
applications, eliminating the requirement for specialized skills in individual applications.
Use cases
● Transfer on-premises applications like SAP, Oracle, and SQL Server from physical
servers, VMware vSphere, Microsoft Hyper-V, and other existing on-premises
infrastructure.
● Migrate cloud-based applications from other public cloud platforms to AWS,
accessing a vast array of over 200 services designed to reduce costs, enhance
availability, and foster innovation.
● Seamlessly move Amazon EC2 workloads between AWS Regions, Availability Zones,
or accounts to meet various business requirements, enhance resilience, and ensure
compliance.
● Modernize applications by applying tailored modernization actions or choosing from
pre-defined options such as cross-Region disaster recovery, Windows Server version
upgrade, and Windows MS-SQL BYOL to AWS license conversion.
Amazon CloudFront
It uses edge locations (a network of small data centers) to cache copies of the data for the
lowest latency. If the data is not present at an edge location, the request is sent to the origin
server, and the data gets transferred from there.
The AWS origins from where CloudFront gets its traffic or requests are:
● Amazon S3
● Amazon EC2
● Elastic Load Balancing
● Customized HTTP origin
It provides programmable and secure edge CDN computing through CloudFront Functions and Lambda@Edge.
Pricing Details:
● You pay for:
○ Data Transfer Out to Internet / Origin
○ A number of HTTP/HTTPS Requests.
○ Each custom SSL certificate associated with CloudFront distributions
○ Field-level encryption requests.
○ Execution of Lambda@Edge
● You do not pay for:
○ Data transfer between AWS regions and CloudFront.
○ AWS ACM SSL/TLS certificates and Shared CloudFront certificates.
AWS Direct Connect
A private VIF with AWS Direct Connect helps to transfer business-critical data from the
data center, office, or colocation environment into AWS, bypassing your Internet service
provider and removing network congestion.
Private virtual interface: It helps to connect an Amazon VPC using private IP addresses.
Public virtual interface: It helps to connect AWS services located in any AWS region
(except China) from your on-premises data center using public IP addresses.
Features:
● AWS Management Console helps to configure AWS Direct Connect service quickly and
easily.
● It helps to choose the dedicated connection providing a more consistent network
experience over Internet-based connections.
● It works with all AWS services that are accessible over the Internet.
● It helps to scale by using 1Gbps and 10 Gbps connections based on the capacity
needed.
Price details:
● Pay only for what you use. There is no minimum fee.
● Charges for Dedicated Connection port hours are consistent across all AWS Direct
Connect locations globally except Japan.
● Data Transfer OUT charges are dependent on the source AWS Region.
Elastic Load Balancing (ELB)
➢ Classic Load Balancer
● It is an old generation Load Balancer.
● AWS recommends using an Application or Network Load Balancer instead.
Listeners
● A listener is a process that checks for connection requests, using the protocol and port
that you configured.
● You can add HTTP, HTTPS or both.
Target Group
● It is the destination of the ELB.
● Different target groups can be created for different types of requests.
● For example, one target group (i.e., a fleet of instances) will handle the general
requests, while other target groups handle other types of requests, such as those for
microservices.
● Currently, three types of target supported by ELB: Instance, IP and Lambda Functions.
Health Check
● Health checks check the health of targets regularly; if any target is unhealthy, traffic
will not be sent to that target.
● We can define the number of consecutive health-check failures after which the Load
Balancer stops sending traffic to those targets.
● Example: If 4 EC2 instances are registered as targets behind an Application Load Balancer
and one of the EC2 instances is unhealthy, the Load Balancer will not send traffic to that EC2
instance (see the sketch below).
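A boto3 sketch of wiring up such health checks (the VPC and instance IDs are hypothetical):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Target group that health-checks /health and marks a target unhealthy
# after 3 consecutive failed checks.
tg = elbv2.create_target_group(
    Name="web-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0abc123",
    HealthCheckPath="/health",
    UnhealthyThresholdCount=3,
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register instances; the load balancer stops routing to any that fail.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[
    {"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}])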
Use Cases:
Charges:
● Charges will be based on each hour or partial hour that the ELB is running.
● Charges will also depend on the LCUs (Load Balancer Capacity Units) consumed.
Interface endpoints
● It serves as an entry point for traffic destined to an AWS service or a VPC endpoint
service.
Gateway endpoints
● It is a gateway in the route table that routes traffic only to Amazon S3 and DynamoDB.
Features:
● It is integrated with AWS Marketplace services so that the services can be directly attached to
the endpoint.
● It provides security by not allowing the public internet and reducing the exposure to threats,
such as brute force and DDoS attacks.
Pricing details:
Amazon Route 53
A Route 53 hosted zone is a collection of records for a specified domain that can be managed
together.
There are two types of zones:
● Public hosted zone – It determines how traffic is routed on the Internet.
● Private hosted zone – It determines how traffic is routed within VPC.
Failover:
● If the primary resource is down (based on health checks), traffic is routed to a secondary
destination (see the sketch after these routing policies).
● It supports health checks.
Geo-location:
● It routes traffic based on the geographic location of your users.
Geo-proximity:
● It routes traffic based on the location of resources, to the closest Region within a
geographic area.
Latency based:
● It routes traffic to the destination that provides the least latency.
Multi-value answer:
● It distributes DNS responses across multiple IP addresses.
● If a web server becomes unavailable after a resolver caches a response, a user can try
up to eight other IP addresses from the response to reduce downtime.
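As a sketch of the failover policy above in code, a boto3 call that upserts the PRIMARY record tied to a health check (the zone ID, health check ID, and addresses are hypothetical; a matching SECONDARY record would point at the standby endpoint):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]},
)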
Use cases:
● When users register a domain with Route 53, it becomes the authoritative DNS
service for that domain and creates a public hosted zone.
● Users can have their domain registered in one AWS account and the hosted zone in
another AWS account.
● For private hosted zones, the following VPC settings must be ‘true’:
○ enableDnsHostnames.
○ enableDnsSupport.
● Health checks can be pointed at:
○ Endpoints (can be IP addresses or domain names.)
Price details:
● Users are charged for AWS Transit Gateway on an hourly basis.
Route Tables
● Route tables decide where network traffic is directed.
● A subnet can be associated with only one route table at a time.
● But one route table can be associated with multiple subnets.
● If a route table has a route to the Internet Gateway and is associated with a subnet, that subnet is considered a public subnet.
● A private subnet is one whose route table has no route to an Internet Gateway.
NAT Devices
● NAT stands for Network Address Translation.
● It allows resources in the Private subnet to connect to the internet if required.
NAT Instance
● It is an EC2 instance.
● It is deployed in the public subnet.
● A NAT instance allows you to initiate IPv4 outbound traffic to the internet.
● It does not allow the instance to receive inbound traffic initiated from the internet.
NAT Gateway
● NAT Gateway is managed by AWS.
● A NAT Gateway uses an Elastic IP address.
● You are charged for a NAT Gateway on a per-hour basis plus data processing rates.
● NAT Gateways do not handle IPv6 traffic (use an egress-only internet gateway instead).
● A NAT Gateway allows you to initiate IPv4 outbound traffic to the internet, as sketched below.
● It does not allow instances to receive inbound traffic initiated from the internet.
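A minimal boto3 (Python) sketch of the pattern above: create a NAT Gateway in a public subnet using an Elastic IP allocation, then route the private subnet's internet-bound IPv4 traffic through it. All IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# The NAT Gateway lives in a public subnet and uses an Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    AllocationId="eipalloc-0123456789abcdef0",
)

# Send the private subnet's internet-bound IPv4 traffic through it.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)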
PrivateLink
● PrivateLink is a technology that allows you to access services privately, without internet connectivity, using private IP addresses.
Endpoints
● It allows you to create connections between your VPC and supported AWS services.
● The endpoints are powered by PrivateLink.
● The traffic will not leave the AWS network.
● This means endpoints do not require an Internet Gateway, Virtual Private Gateway, or NAT components.
● The public IP address is not required for communication.
● Communication will be established between the VPC and other services with high availability.
Types of Endpoints
● Interface Endpoints
o It is an entry point for traffic destined for a supported service.
o It routes the traffic to the service that you configure.
o It uses an ENI with a private IP address.
o For example, it allows instances to connect to Amazon Kinesis through an interface endpoint.
● Gateway Endpoints
o It is a gateway that you define in a route table as a target.
o The destination is the supported AWS service.
o Amazon S3 and DynamoDB support Gateway endpoints (creating one is sketched below).
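For example, a Gateway endpoint for S3 can be created with one boto3 (Python) call; the endpoint is then added as a target in the listed route tables. The Region in the service name and all IDs are placeholder assumptions:

import boto3

ec2 = boto3.client("ec2")

# The gateway endpoint becomes a target in the given route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)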
Egress-Only Internet Gateway
● An egress-only internet gateway is designed only for IPv6 communication.
● It is a highly available, horizontally scaled component that allows outbound-only traffic for IPv6.
● It does not allow inbound connections to your EC2 instances.
VPC Peering:
● A VPC peering connection is a networking connection between two VPCs that enables traffic to be routed between them privately using private IP addresses.
VPN
● A Virtual Private Network (VPN) establishes secure connections between multiple networks, i.e., an on-premises network, client devices, and the AWS Cloud, with all traffic between them encrypted.
● VPN provides a highly available, elastic, and managed solution to protect your network traffic.
AWS Site-to-Site VPN
o AWS Site-to-Site VPN creates encrypted tunnels between your network and
your Amazon Virtual Private Clouds or AWS Transit Gateways.
AWS Client VPN
o AWS Client VPN connects your users to AWS or on-premises resources using a
VPN software client.
Use Cases:
● Host a simple public-facing website.
● Host multi-tier web applications.
● Used for disaster recovery as well.
Pricing:
● There are no additional charges for creating a custom VPC.
● NAT Gateways are not covered by the free tier; you are charged on a per-hour basis.
● NAT Gateway data processing charges and data transfer charges are billed separately.
● Traffic mirroring is also charged on a per-hour basis.
The certificates can be integrated with AWS services either by issuing them directly with ACM or
importing third-party certificates into the ACM management system.
Benefits:
● It automates the creation and renewal of private certificates for on-premises and AWS
resources.
● It provides an easy process to create certificates: just submit a CSR to a Certificate Authority, or upload and install the certificate once received (requesting one via the API is sketched below).
● SSL/TLS provides data-in-transit security, and SSL/TLS certificates authenticate the identity of sites and secure connections between browsers and applications.
Price details:
● The certificates created by AWS Certificate Manager for using ACM-integrated services are
free.
● With AWS Certificate Manager Private Certificate Authority, monthly charges are applied for
the operation of the private CA and the private certificates issued.
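As referenced above, requesting a public certificate with DNS validation is a single API call. A boto3 (Python) sketch with a placeholder domain follows:

import boto3

acm = boto3.client("acm")

# Request a public certificate validated via DNS records.
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)
print(response["CertificateArn"])  # ARN used by ACM-integrated services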
Amazon Cognito assigns a unique identifier to each user and acts as an OpenID token provider trusted by AWS Security Token Service (STS) to issue temporary, limited-permission AWS credentials.
Identity pools provide temporary AWS credentials to users so that they can access other AWS resources without re-entering their credentials. Identity pools support the following identity providers:
● Amazon Cognito user pools.
● Third-party sign-in facility.
● OpenID Connect (OIDC) providers.
● SAML identity providers.
● Developer authenticated identities.
Amazon Cognito allows user pools and identity pools to be used separately or together (fetching credentials from an identity pool is sketched below).
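A minimal boto3 (Python) sketch of the identity pool flow described above, using the unauthenticated (guest) path for brevity; the pool ID is a placeholder, and production use would pass a Logins map from one of the supported providers:

import boto3

cognito = boto3.client("cognito-identity")

# 1. Obtain a unique identity ID from the identity pool (guest flow).
identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-1111-1111-1111-111111111111"
)

# 2. Exchange it for temporary, limited-permission AWS credentials.
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])
print(creds["Credentials"]["AccessKeyId"])  # temporary credentials via STS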
Use cases:
● Triage security findings/alerts - Explore whether GuardDuty findings need to be examined further. Amazon Detective helps users see whether a finding is a real concern.
● Incident investigation - Since Amazon Detective allows for viewing analysis & summaries
going back up to a year, it can help answer questions like how long has the security issue been
there, and the resources affected because of that.
● Threat hunting - Access indicators such as IP addresses and users to see what interactions they have had with the environment. Detective’s behavior graph helps here.
AWS Directory Service provides the following directory types to choose from:
● Simple AD
● Amazon Cognito
● AD Connector
Simple AD:
● It is an inexpensive Active Directory-compatible service powered by Samba 4.
● It is a standalone, self-contained AD directory type.
● It can be used when there is a need for fewer than 5,000 users.
Use cases:
● It provides a sign-in option to AWS Cloud services with AD credentials.
● It provides directory services to AD-aware workloads.
● It enables single sign-on (SSO) to Office 365 and other cloud applications.
● It helps to extend On-Premises AD to the AWS Cloud by using AD trusts.
Pricing:
● Prices vary by region for the directory service
● Hourly charges are applied for each additional account to which a directory is shared.
● Charges are applied per GB for the data transferred “out” to other AWS Regions where the
directory is deployed.
Features:
● Ensure that GuardDuty has complete visibility over logs for complete detection coverage, e.g., consider enabling VPC Flow Logs for all Regions and the network interfaces you plan to monitor for threat detection.
● GuardDuty is Region-specific, so it is recommended to enable GuardDuty in all Regions for complete threat visibility.
● It is recommended to analyze GuardDuty monitoring activities with CloudTrail to ensure that users are not tampering with GuardDuty itself.
● It is recommended to integrate GuardDuty with EventBridge and Lambda for automating risk mitigation (enabling the service is sketched below).
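As sketched below, enabling GuardDuty and pulling its findings takes only a few boto3 (Python) calls; note that create_detector fails if a detector already exists in the Region, so this is a first-run sketch only:

import boto3

guardduty = boto3.client("guardduty")

# Enable GuardDuty in the current Region (one detector per Region).
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# List and fetch current findings for triage or EventBridge-style automation.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)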
Use cases:
● Security analysts can be assisted to carry out investigations using the Security event findings
from GuardDuty. It provides Context, Metadata, and impacted resource details using which the
root cause can be detected using GuardDuty console integration with Amazon Detective.
● GuardDuty can be used to identify files containing malware: EBS volumes can be scanned for files containing malware that creates suspicious behavior on instances and container workloads running on EC2.
● When GuardDuty is enabled, the associated log sources that it accesses (VPC Flow Logs, DNS logs) need not be enabled separately; they are all enabled by default and made accessible to GuardDuty.
● You cannot add your own log sources to GuardDuty other than the five mentioned above.
IAM Role
● An IAM Role is like a user with policies attached to it that decide what an identity can or cannot do.
● It does not have any credentials/password attached to it.
● A role can be assigned to a federated user who signs in from an external identity provider.
● IAM users can temporarily assume a role and get different permissions for the task.
IAM Policies
● It decides what level of access an identity or AWS resource possesses (creating a policy is sketched below).
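For illustration, here is a boto3 (Python) sketch that creates a customer-managed policy granting read-only access to a single placeholder S3 bucket; the policy name and ARNs are assumptions:

import json
import boto3

iam = boto3.client("iam")

# An identity-based policy allowing read-only access to one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="ExampleS3ReadOnly",
    PolicyDocument=json.dumps(policy_document),
)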
Pricing:
● Amazon provides IAM Service at no additional charge.
● You are charged only for the services used by the identities in your account.
Features:
● Automation of vulnerability management: upon activation, it automatically scans for and discovers vulnerabilities in AWS resources such as EC2 instances, Lambda functions, and container workloads. These vulnerabilities could compromise workloads or make resources targets for malicious use.
● Amazon Inspector provides multi-account support with AWS Organizations. By assigning an
Inspector Delegated Administrator(DA) account for your Organization, it can seamlessly start
and configure all member accounts and consolidate all findings.
● Amazon Inspector integrates with the AWS Systems Manager Agent to collect software inventory and configurations from EC2 instances, which are then used to assess workloads for vulnerabilities.
● Findings from Amazon Inspector can be suppressed based on defined criteria. Findings that
are deemed by an Organization as acceptable can be suppressed by creating suppression rules.
● A highly contextualized risk score is generated by Amazon Inspector for each finding.
● When a vulnerability has been patched or remediated, Amazon Inspector provides automatic closure of those findings.
● Amazon Inspector provides detailed monitoring of organization-wide environment coverage. It
helps to avoid gaps in coverage.
● Amazon Inspector provides integration with AWS Security Hub and EventBridge for its
findings. They can be used to automate workflows like Ticketing.
● Amazon Inspector scans Lambda functions for security vulnerabilities such as injection flaws and missing encryption, based on AWS best practices. Using generative AI and automated reasoning, it provides in-context code remediations for multiple classes of vulnerabilities, reducing the effort required to fix them.
● Amazon Inspector integrates with CI/CD tools like Jenkins for container image assessments
pushing proactive security measures early in the software development cycle.
Use cases:
● Use Common Vulnerabilities & Exposures (CVE) and network accessibility for creating
contextual risk scores to Prioritize Patch remediation.
● Support compliance requirements like PCI DSS, NIST CSF and other regulations by utilizing
Amazon Inspector scans.
Envelope encryption is the method of encrypting plaintext data with a data key and then encrypting the data key under another key (a minimal sketch follows the list below). Envelope encryption offers several benefits:
● Protecting data keys.
● Encrypting the same data under multiple master keys.
● Combining the strengths of multiple algorithms.
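A minimal envelope-encryption sketch in Python (boto3 plus the third-party cryptography package), assuming a placeholder key alias; only the encrypted data key is stored alongside the ciphertext:

import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# 1. Ask KMS for a data key under a master key (alias is a placeholder).
data_key = kms.generate_data_key(KeyId="alias/example-key", KeySpec="AES_256")

# 2. Encrypt the payload locally with the plaintext data key.
cipher = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
ciphertext = cipher.encrypt(b"sensitive payload")

# 3. Store only the ciphertext plus the *encrypted* data key;
#    kms.decrypt() can recover the data key again later.
stored = {"ciphertext": ciphertext, "encrypted_key": data_key["CiphertextBlob"]}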
Features:
● Master keys generated in AWS KMS are automatically rotated once per year, without the need to re-encrypt previously encrypted data.
Benefits:
● It integrates with server-side encryption in Amazon S3 (SSE-KMS).
● It manages encryption for AWS services.
Price details:
● Provides a free tier of 20,000 requests/month across all regions where the service is available.
● Each customer master key (CMK) that you create in AWS Key Management Service (KMS)
costs $1 per month until deleted.
● Creation and storage of AWS-managed CMKs are not charged as they are created on the
user’s behalf by AWS.
● Customer-managed CMKs that are scheduled for deletion still incur charges if the deletion is canceled during the waiting period.
AWS Resource Access Manager (RAM) is a service that permits users to share their resources
across AWS accounts or within their AWS Organization.
Price details:
● The charges only differ based on the resource type. No charges are applied for creating
resource shares and sharing your resources across accounts.
AWS Secrets Manager is a service that replaces hard-coded secret credentials, such as passwords, with an API call to retrieve the secret programmatically (a sketch follows below). The service provides features to rotate, manage, and retrieve database passwords, OAuth tokens, API keys, and other secret credentials. It ensures in-transit encryption of the secret between AWS and the system retrieving it.
Secrets Manager can rotate credentials for AWS databases without any additional programming, though rotating secrets for other databases or services requires a Lambda function that instructs Secrets Manager how to interact with the database or service.
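Replacing a hard-coded password with the API call looks like the following boto3 (Python) sketch; the secret name is a placeholder:

import boto3

secrets = boto3.client("secretsmanager")

# Retrieve a secret at runtime instead of hard-coding it.
value = secrets.get_secret_value(SecretId="prod/app/db-password")
password = value["SecretString"]  # or value["SecretBinary"] for binary secrets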
Features:
● It provides security and compliance facilities by rotating secrets safely without the need for
code deployment.
● With Secrets Manager, IAM policies and resource-based policies can assign specific
permissions for developers to retrieve secrets and passwords used in the development
environment or the production environment.
Use cases:
● Store sensitive information as part of the encrypted secret value, either in the SecretString or
SecretBinary field.
● Use a Secrets Manager open-source client component to cache secrets and update them only
when there is a need for rotation.
● When an API request quota is exceeded, Secrets Manager throttles the request and returns a ‘ThrottlingException’ error. To resolve this, retry the request.
● It integrates with AWS Config and facilitates tracking of changes in Secrets Manager.
Price details:
● There are no upfront costs or long-term contracts.
● Charges are based on the total number of secrets stored and API calls made.
● AWS charges at the current AWS KMS rate if the customer master keys(CMK) are created
using AWS KMS.
It provides an option to aggregate, organize, and prioritize the security alerts, or findings from
multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS
IAM Access Analyzer, AWS Firewall Manager, and also from AWS Partner solutions.
It supports the Payment Card Industry Data Security Standard (PCI DSS) and the Center for Internet Security (CIS) AWS Foundations Benchmark, each a set of security configuration best practices for AWS. If any problem is detected, AWS Security Hub recommends remediation steps.
Enabling (or disabling) AWS Security Hub can be quickly done through,
● AWS Management Console
● AWS CLI
● By using infrastructure-as-code tools such as Terraform
If your AWS architecture spans multiple Regions, Security Hub must be enabled in each Region.
The most powerful aspect of using Security Hub is the continuous automated compliance
checks using CIS AWS Foundations Benchmark.
The CIS AWS Foundations Benchmark consists of 43 best practice checks (such as “Ensure IAM
password policy requires at least one uppercase letter” and “Ensure IAM password policy
requires at least one number“).
Benefits:
● It collects data using a standard findings format and reduces the need for time-consuming
data conversion efforts.
● Integrated dashboards are provided to show the current security and compliance status.
Price details:
● Charges applied for usage of other services that Security Hub interacts with, such as AWS
Config items, but not for AWS Config rules that are enabled by Security Hub security standards.
● Using the Master account’s Security Hub, the monthly cost includes the costs associated with
all of the member accounts.
● Using a Member account’s Security Hub, the monthly cost is only for the member account.
● Charges are applied only for the current Region, not for all Regions in which Security Hub is
enabled.
● Use AWS STS when you need to enhance security, delegate permissions, or provide temporary,
controlled access to AWS resources for users, applications, or services in a flexible and granular
manner. It helps you follow security best practices and reduce the reliance on long-lived
credentials, improving overall security posture in your AWS environment.
Features:
● Imagine you have two AWS accounts: Account A and Account B. You want to allow an IAM
user in Account A to access an S3 bucket in Account B without sharing long-term credentials.
You can use AWS STS to accomplish this.
● You have a web application running on an Amazon EC2 instance that needs to access an
Amazon S3 bucket securely. Instead of storing long-term credentials on the EC2 instance, you
can use AWS STS to grant temporary access to the S3 bucket.
● Several API operations in AWS STS:
AWS STS provides several API operations that allow you to manage temporary security
credentials and perform various identity and access management tasks.
● Some of the key AWS STS API operations are: AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, GetSessionToken, DecodeAuthorizationMessage, and GetCallerIdentity (AssumeRole is sketched below).
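A minimal boto3 (Python) sketch of AssumeRole for the cross-account scenario above; the role ARN, session name, and account ID are placeholders:

import boto3

sts = boto3.client("sts")

# Assume a role in Account B from Account A; no long-term keys are shared.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/CrossAccountS3Access",
    RoleSessionName="example-session",
    DurationSeconds=3600,
)

# Use the temporary credentials to access the cross-account bucket.
creds = assumed["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)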
Price details:
● AWS STS itself does not have any additional charges. However, if you use it with other AWS
services, you will be charged for the other services.
● For example, if you use STS to grant permissions to an application to write data to an Amazon
S3 bucket, you'll be charged for the S3 usage.
AWS WAF stands for Amazon Web Services Web Application Firewall. It is a managed service
provided by AWS that helps protect web applications from common web exploits that could
affect application availability, compromise security, or consume excessive resources.
AWS WAF provides an additional layer of security for your web applications, helping to protect
them from common web vulnerabilities and attacks such as SQL injection, cross-site scripting
(XSS), and distributed denial-of-service (DDoS) attacks.
Features:
● Combine AWS WAF with other AWS services such as AWS Shield (for DDoS protection) and
Amazon CloudFront (for content delivery) to create a robust, multi-layered security strategy.
● If you're using AWS Managed Rule Sets, ensure that you keep them up to date. AWS regularly
updates these rule sets to protect against emerging threats.
● Enable logging for AWS WAF to capture detailed information about web requests and potential
threats. Use Amazon CloudWatch or a SIEM solution to monitor and analyze these logs.
● Implement rate-limiting rules to protect APIs from abuse and DDoS attacks; set appropriate rate limits based on expected traffic patterns (a sketch follows this list).
● Tailor your web access control lists (web ACLs) to the specific needs of your application.
● Periodically review your AWS WAF rules to make adjustments based on changing application
requirements and emerging threats.
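As referenced in the rate-limiting item above, here is a boto3 (Python) sketch of a web ACL containing a single rate-based rule; the names, scope, and the 2,000-requests-per-5-minutes limit are illustrative assumptions:

import boto3

wafv2 = boto3.client("wafv2")

# A web ACL whose one rule blocks any source IP exceeding the rate limit.
wafv2.create_web_acl(
    Name="example-web-acl",
    Scope="REGIONAL",  # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit",
        "Priority": 1,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rate-limit",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "example-web-acl",
    },
)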
Features:
● It offers a backup console, backup APIs, and the AWS Command Line Interface (AWS CLI) to
manage backups across the AWS resources like instances and databases.
● It offers backup functionalities based on policies, tags, and resources.
● It provides scheduled backup plans (policies) to automate backup of AWS resources across
AWS accounts and regions.
● It offers incremental backup to minimize storage costs. The first backup stores a full copy of the data, and successive backups store only the incremental changes.
● It provides backup retention plans to retain and expire backups automatically. Automated
backup retention also helps to minimize storage costs for backup.
● It provides a dashboard in the AWS Backup console to monitor backup and restore activities.
● It offers an enhanced solution by providing separate encryption keys for encrypting multiple
AWS resources.
● It provides lifecycle policies configured to transition backups from Amazon EFS to cold
storage automatically.
● It is tightly integrated with Amazon EC2 to schedule backup jobs and the storage (EBS) layer. It
also simplifies recovery by restoring whole EC2 instances from a single point.
● It supports cross-account backup and restores either manually or automatically within the
AWS organizations.
● It allows backups and restores to different regions, especially during any disaster, to reduce
downtime and maintain business continuity.
● It integrates with Amazon CloudWatch, AWS CloudTrail, and Amazon SNS to monitor and audit API activities and to send notifications (starting an on-demand backup job is sketched below).
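Starting an on-demand backup job, as referenced above, is one boto3 (Python) call; the vault name, resource ARN, and role ARN are placeholders:

import boto3

backup = boto3.client("backup")

# Back up one EBS volume into a vault using a service role.
backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:ec2:us-east-1:111111111111:volume/vol-0123456789abcdef0",
    IamRoleArn="arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole",
)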
Use cases:
● It can use AWS Storage Gateway volumes for hybrid storage backup. AWS Storage Gateway
volumes are secure and compatible with Amazon EBS, which helps restore volumes to
on-premises or the AWS environment.
Price details:
● AWS charges monthly based on the amount of backup storage used and the amount of
backup data restored.
Types of EBS:
● General Purpose SSD (gp2/gp3)
● Provisioned IOPS SSD (io1/io2)
● Throughput Optimized HDD (st1)
● Cold HDD (sc1)
Features:
● High Performance (Provides single-digit-millisecond latency for high-performance)
● Highly Scalable (Scale to petabytes)
● Offers high availability (guaranteed 99.999% by Amazon) & Durability
● Offers seamless encryption of data at rest through Amazon Key Management Service (KMS).
● Automate backups through data lifecycle policies using EBS snapshots to S3 storage (snapshot creation is sketched below).
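As referenced above, creating a point-in-time EBS snapshot is a single boto3 (Python) call; the volume ID and description are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create a point-in-time snapshot of a volume (stored by the service).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)
print(snapshot["SnapshotId"])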
Pricing:
● You are billed for all provisioned capacity and snapshots on S3 storage, plus sharing costs between AZs/Regions.
EBS :
● Persistent Storage.
● Reliable & Durable Storage.
● EBS volume can be detached from one instance and attached to another instance.
● EBS boots faster than instance stores.
Features:
● Fully Managed and Scalable, Durable, Distributed File System (NFSv4)
● Highly Available & Consistent low latencies. (EFS is based on SSD volumes)
● POSIX Compliant (NFS) Distributed File System.
● EC2 instances can access EFS across AZs, regions, VPCs & on-premises through AWS Direct
Connect or AWS VPN.
● Provides EFS Lifecycle Management for a better price-performance ratio
● It can be integrated with AWS DataSync for moving data between on-premises storage and Amazon EFS
● Supports automatic/scheduled backups of EFS (AWS Backup)
● It can be integrated with CloudWatch & CloudTrail for monitoring and tracking.
● EFS supports encryption both in transit (TLS) and at rest (AWS Key Management Service (KMS))
● Offers different Performance and Throughput modes for a better cost-performance tradeoff.
● EFS is more expensive than EBS.
Best Practices:
● Monitor using CloudWatch and track API calls using CloudTrail
● Leverage IAM for access rights and security
● Test before fully migrating mission-critical workloads to verify performance and throughput.
● Separate out your latency-sensitive workloads. Storing these workloads on separate volumes
ensures dedicated I/O and burst capabilities.
Pricing:
● Pay for what you have used based on Access Mode/Storage Type + Backup Storage.
● Amazon FSx for Windows File Server is an FSx solution that offers a scalable, shared file storage system on Microsoft Windows Server.
● Using the Server Message Block (SMB) protocol, Amazon FSx file systems can be accessed from multiple Windows servers.
● It offers to choose from HDD and SSD storage, offers high throughput, and IOPS with
sub-millisecond latencies for Windows workloads.
● Using SMB protocol, Amazon FSx can connect file systems to Amazon EC2, Amazon ECS,
Amazon WorkSpaces, Amazon AppStream 2.0 instances, and on-premises servers using AWS
Direct Connect or AWS VPN.
● It provides high availability (Multi-AZ deployments) with an active and standby file server in
separate AZs.
● It automatically and synchronously replicates data in the standby Availability Zone (AZ) to
manage failover.
● Using AWS DataSync with Amazon FSx helps to migrate self-managed file systems to
Windows storage systems.
● It offers identity-based authentication using Microsoft Active Directory (AD).
● It automatically encrypts data at rest with the help of AWS Key Management Service (AWS
KMS). It uses SMB Kerberos session keys to encrypt data in transit.
Use cases:
● Large organizations which require shared access to multiple data sets between multiple users
can use Amazon FSx for Windows File Server.
● Using Windows file storage, users can easily migrate self-managed applications to AWS using
AWS DataSync.
● It helps execute business-critical Microsoft SQL Server database workloads easily and
automatically handles SQL Server Failover and data replication.
● Using Amazon FSx for Windows File Server, users can easily process media workloads with
low latencies and high throughput.
● It enables users to execute highly intensive analytics workloads, including business intelligence and data analytics applications.
Price details:
● Charges are applied monthly based on the storage and throughput capacity used for the file system and backups.
● The cost of storage and throughput depends on the deployment type, either single-AZ or
multi-AZ
Use cases:
● Workloads that require shared file storage across multiple compute instances use Amazon FSx for Lustre for high throughput and low latency.
● It is also applicable in media and big data workloads to process a large amount of data.
Price details:
● Charges are applied monthly in GB based on the storage capacity used for the file system.
● Backups are stored incrementally, which helps in storage cost savings.
Basics of S3?
● It is object-based storage.
● Files are stored in Buckets.
● A bucket is a kind of folder.
● Objects can range from 0 bytes to 5 TB in size.
● S3 bucket names must be globally unique.
● When you upload a file to S3, you receive an HTTP 200 code if the upload was successful (see the sketch below).
● S3 offers strong consistency for PUTs of new objects, overwrites or deletes of existing objects, and list operations.
● By default, all objects in a bucket are private.
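The upload behavior above can be seen directly in a boto3 (Python) sketch; the bucket name is a placeholder and must be globally unique:

import boto3

s3 = boto3.client("s3")

# Upload an object; a successful PUT returns HTTP 200.
response = s3.put_object(
    Bucket="example-unique-bucket-name",
    Key="docs/report.txt",
    Body=b"hello",
)
print(response["ResponseMetadata"]["HTTPStatusCode"])  # 200 on success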
Miscellaneous Topic
● Access Point: access points are named network endpoints attached to a bucket that simplify managing access to shared data sets; each access point has its own access policy and can be restricted to a VPC or left open to internet origins.
● Lifecycle: by configuring lifecycle rules, you can transition objects to different storage classes automatically (a sketch follows).
● Replication: this feature allows you to replicate data between buckets within the same Region or across different Regions.
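A minimal boto3 (Python) sketch of the lifecycle transition described above, moving objects under a placeholder prefix to the Glacier storage class after 90 days (an illustrative value):

import boto3

s3 = boto3.client("s3")

# Transition objects under "logs/" to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-unique-bucket-name",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)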
Features:
● It integrates with AWS IAM to allow vaults to grant permissions to the users.
● It integrates with AWS CloudTrail to log and monitor API call activities for auditing.
● A vault is a place for storing archives, with functionality to create, delete, lock, list, retrieve, tag, and configure vaults.
● Vaults can be set with access policies for additional security by the users.
● Amazon S3 Glacier jobs are the select queries that execute to retrieve archived data.
● It uses Amazon SNS to notify when the jobs complete.
● It uses ‘S3 Glacier Select’ to query specific archive objects or bytes for analytics instead of
complete archives.
● S3 Glacier Select operates on uncompressed comma-separated values (CSV format) and
output results to Amazon S3.
● Amazon S3 Glacier Select uses SQL queries using SELECT, FROM, and WHERE.
● It offers only SSE-KMS and SSE-S3 encryption.
● Amazon S3 Glacier does not provide real-time data retrieval of the archives.
Use Cases:
● It helps to store and archive media data that can increase up to the petabyte level.
● Organizations that generate, analyze, and archive large data can make use of Amazon S3
Glacier and S3 Glacier Deep Archive storage classes.
● Amazon S3 Glacier replaces tape libraries for storage because it does not require high upfront costs or maintenance.
● Cached Volume Gateway: only hot/cached data is stored on-premises, and all other application data is stored in Amazon S3.
Use Cases:
● Cost-Effective Backups and Disaster Recovery Management
● Migration to/from Cloud
● Managed Cache: integration of local (on-premises) storage with cloud storage (hybrid cloud)
● To achieve low latency by storing data on-premises while still leveraging cloud benefits
Pricing:
● Charges are applied on what you use with the AWS Storage Gateway and based on the type
and amount of storage you use.
Features:
Launch Management Settings: Control recovery instance launches for source servers,
with options for default settings for new servers and bulk modifications for existing
ones.
AZ Modification: Modify the recovery Availability Zone for multiple source servers to
streamline cross-AZ recovery.
Post-launch Actions: Define automatic actions post-launch, including custom AWS SSM
commands or pre-defined actions like CloudWatch agent installation.
Network Components Replication: Replicate and recover network components (subnet
settings, security groups, etc.) to ensure readiness and security.
Automated Network Configuration: Automate VPC configuration replication for
smoother recovery, enhanced security, and resource efficiency.
Use Cases:
● Recovery into Existing Instances: Recover into pre-defined existing instances,
preserving metadata and security parameters.
● Implement AWS Elastic Disaster Recovery to ensure fast and reliable recovery of
on-premises applications in the event of a disaster.
● AWS Elastic Disaster Recovery enables organizations to recover cloud-based
applications swiftly and efficiently.
● Organizations can leverage AWS Elastic Disaster Recovery to perform point-in-time
recovery of applications. By capturing and replicating data at regular intervals,
organizations can restore applications to a specific point in time, reducing data loss and
maintaining data integrity.
● With AWS Elastic Disaster Recovery, organizations can conduct non-disruptive tests to
validate their disaster recovery strategies.
● AWS Elastic Disaster Recovery offers failback capabilities, allowing organizations to
seamlessly return applications to their primary environment once the disaster has been
resolved.
Features:
● It works with existing Git-based repositories, tools, and commands in addition to AWS CLI
commands and APIs.
● CodeCommit repositories support pull requests, version differencing, merge requests between
branches, and notifications through emails about any code changes.
● Compared to Amazon S3 versioning of individual files, AWS CodeCommit supports tracking batched changes across multiple files.
● It provides encryption at rest and in transit for the files in the repositories.
● It provides high availability, durability, and redundancy.
● It eliminates the need to back up and scale the source control servers.
Use Cases:
● AWS CodeCommit offers high availability, scalability, and durability for Git repositories.
● AWS CodeCommit provides built-in security features such as encryption, access
control, and integration with AWS Identity and Access Management (IAM).
● It enables teams to collaborate effectively on codebases regardless of their
geographical locations.
● It integrates seamlessly with other AWS services such as AWS CodePipeline and AWS
CodeBuild to automate the CI/CD process.
Features:
● AWS Code Services family consists of AWS CodeBuild, AWS CodeCommit, AWS
CodeDeploy, and AWS CodePipeline that provide complete and automated continuous
integration and delivery (CI/CD).
● It provides prepackaged and customized build environments for many programming
languages and tools.
● It scales automatically to process multiple separate builds concurrently.
● It can be used as a build or test stage of a pipeline in AWS CodePipeline.
● It requires VPC ID, VPC subnet IDs, and VPC security group IDs to access resources in
a VPC to perform build or test.
● Charges are applied based on the amount of time taken by AWS CodeBuild to
complete the build.
Use Cases:
● It integrates with AWS services like AWS Lambda, Amazon S3, Amazon ECR, and AWS CodeArtifact, enabling developers to deploy applications to AWS cloud services easily.
● It optimizes build performance by automatically provisioning and scaling build
resources based on workload demands.
● It offers pre-configured build environments with popular programming languages,
runtime versions, and build tools pre-installed.
Deployment types:
In-place deployment:
● All the instances in the deployment group are stopped, updated with the new revision, and started again after the deployment is complete.
● Useful for the EC2/on-premises compute platform.
Blue/green deployment:
● The instances in the deployment group of the original environment are replaced by a
new set of instances of the replacement environment.
● Using an Elastic Load Balancer, traffic is rerouted from the original environment to the replacement environment, and instances of the original environment are terminated after the deployment is complete.
● Useful for the EC2/on-premises, AWS Lambda, and Amazon ECS compute platforms (starting a deployment is sketched below).
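As referenced above, kicking off a deployment of a revision stored in S3 is one boto3 (Python) call; the application, deployment group, bucket, and key are placeholders:

import boto3

codedeploy = boto3.client("codedeploy")

# Deploy a zipped revision from S3 to the instances in a deployment group.
codedeploy.create_deployment(
    applicationName="example-app",
    deploymentGroupName="example-group",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-artifacts",
            "key": "app-v2.zip",
            "bundleType": "zip",
        },
    },
)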
Pricing:
● AWS CodePipeline has a flexible pay-as-you-go pricing model. It costs $1.00 per active
pipeline per month, and there are no upfront fees.
● You get the first 30 days for free to encourage experimentation. An active pipeline is
one that has been around for more than 30 days and had at least one code change go
through in a month.
● As part of the AWS Free Tier, you receive one free active pipeline monthly, which
applies across all AWS regions.
● Note: Additional charges may apply for storing and accessing pipeline artifacts in
Amazon S3, as well as for actions triggered by other AWS and third-party services
integrated into your pipeline.
Features:
● Cloud-Based IDE: AWS Cloud9 is entirely cloud-based, which means you can access it
from any device with an internet connection.
● Code Collaboration: AWS Cloud9 includes features for real-time collaboration among
developers. Multiple team members can work on the same codebase simultaneously,
making it easier to collaborate on projects.
● Built-In Code Editor: The IDE comes with a built-in code editor that supports popular
programming languages such as Python, JavaScript, Java, and many others. It also
provides code highlighting, autocompletion, and code formatting features.
● Terminal Access: Developers can access a fully functional terminal within the IDE,
enabling them to run commands and manage their AWS resources directly from the
same interface where they write code.
● Integrated Debugger: AWS Cloud9 includes debugging tools that help developers
identify and fix issues in their code. This includes features like breakpoints, step-through
debugging, and variable inspection.
● Version Control Integration: It supports integration with popular version control
systems like Git, allowing developers to easily manage and track changes to their code.
● Serverless Development: AWS Cloud9 is well-suited for serverless application
development. It includes AWS Lambda function support and can be used to build and
test serverless applications.
● Cloud Integration: As part of the AWS ecosystem, AWS Cloud9 can seamlessly
interact with other AWS services, making it easier to deploy and manage applications on
AWS infrastructure.
● Customization: Developers can customize the IDE to suit their preferences by
installing plugins and configuring settings.
● Cost Management: AWS Cloud9 offers cost-efficient pricing models, including a free
tier with limited resources and pay-as-you-go pricing for additional resources.
Best Practices:
● Resource Monitoring: Keep an eye on resource usage, especially if you're using an
EC2 instance for your AWS Cloud9 environment. Monitor CPU, memory, and storage to
ensure you're not over-provisioning or running into performance issues.
● Environment Cleanup: When you're done with a development environment, terminate
it to avoid incurring unnecessary charges. AWS CloudFormation can help automate
environment creation and cleanup.
Features:
● Centralized Artifact Repository: AWS CodeArtifact provides a centralized location for
storing and managing software artifacts.
● Support for Multiple Package Formats: AWS CodeArtifact supports multiple package
formats, including popular ones like npm (Node.js), Maven (Java), PyPI (Python), and
others.
● Security and Access Control: AWS CodeArtifact integrates with AWS Identity and
Access Management (IAM), allowing you to control who can access and publish
artifacts.
● Dependency Resolution: AWS CodeArtifact can be used to resolve dependencies for
your projects.
● Integration with Popular Tools: AWS CodeArtifact seamlessly integrates with popular
build and deployment tools like AWS CodePipeline, AWS CodeBuild, and AWS
CodeDeploy.
Features:
● Project Templates: AWS CodeStar offers pre-configured project templates for various
programming languages and application types. These templates provide a starting point
for developers, saving them time on initial setup and configuration.
● Integrated Development Tools: AWS CodeStar integrates with popular development
tools such as AWS Cloud9, Visual Studio Code, and others, making it easier for
developers to write code and collaborate on projects.
● Continuous Integration/Continuous Deployment (CI/CD): Developers can automate
the building, testing, and deployment of their applications, helping to maintain a reliable
and efficient development workflow all these can be achieved using AWS CodePipeline.
Use Cases:
● Rapid Project Initialization & Deployment: With AWS CodeStar, the startup can select
a pre-configured project template (e.g., a Python web app using Flask deployed on AWS
Elastic Beanstalk). CodeStar automatically provisions the necessary AWS services like
AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. The
startup can then immediately start coding and see their changes deployed in real time.
● Standardizing Development Across Multiple Projects: Using AWS CodeStar, the IT
department can create custom project templates that align with the company's best
practices and standards. Each team can then use these templates when starting a new
project, ensuring a consistent development and deployment process across the
enterprise.
Pricing:
● AWS CodeStar incurs no additional fees. You are exclusively charged for the AWS
resources you allocate within your AWS CodeStar projects, such as Amazon EC2
instances, AWS Lambda executions, Amazon Elastic Block Store volumes, or Amazon
S3 buckets.
● There are no obligatory minimum fees or upfront commitments.
Features:
● AWS CodeGuru offers several features to help developers improve code quality and
application performance.
● CodeGuru provides automated code reviews powered by machine learning algorithms.
It analyzes code for best practices, potential defects, and opportunities for optimization.
● Developers receive actionable recommendations to improve code quality and
maintainability.
● CodeGuru offers detailed insights into code quality metrics, including code
duplication, code complexity, and adherence to coding standards.
● Developers can identify areas for improvement and prioritize refactoring efforts based
on data-driven insights.
● CodeGuru helps optimize AWS resource usage and reduce costs by identifying
inefficient code patterns and resource-intensive operations.
● CodeGuru seamlessly integrates with popular development tools and IDEs, including
AWS CodeCommit, GitHub, and AWS CodePipeline.
Use Cases:
● It can also be used to perform automated code reviews on third-party libraries and
dependencies.
● It can be used to modernize legacy codebases by identifying outdated code patterns,
deprecated APIs, and performance bottlenecks
● It seamlessly integrates with CI/CD pipelines, enabling automated code reviews and
performance profiling as part of the development workflow.
● Its performance profiler helps developers optimize application performance by identifying resource-intensive code paths, memory leaks, and performance bottlenecks.
Features:
● User-Friendly Interface: Accessible via AWS Management Console, API, or SDKs,
Elastic Transcoder offers intuitive controls for starting transcoding tasks with system
presets for optimal settings.
● Scalability: Seamlessly handles large volumes of media files and varying sizes,
leveraging AWS services like S3, EC2, DynamoDB, SWF, and SNS for parallel processing
and reliability.
● Cost-Effective Pricing: Pay based on output media duration with no minimum volumes
or long-term commitments, ensuring affordability for transcoding needs.
● Managed Service: Elastic Transcoder manages transcoding tasks, including scaling
and codec updates, freeing users to focus on content creation.
● Secure Content Handling: User assets remain secure within their S3 buckets,
accessed through IAM roles, following best security practices.
● Seamless Content Delivery: Utilizes S3 and CloudFront for storing, transcoding, and
delivering content seamlessly, with simplified permissions for distribution.
● AWS Integration: Integrates with AWS services like Glacier for storage, CloudFront for
distribution, and CloudWatch for monitoring, enabling end-to-end media solutions.
Use Cases:
Transcoding Pipelines: Enable concurrent transcoding workflows, allowing for flexibility
in handling tasks like short or long content transcoding and allocation based on
resolutions or storage.
Transcoding Jobs: Convert media files, generating multiple output files with different
formats and bit rates. Jobs run within pipelines, facilitating simultaneous processing.
System Transcoding Presets: Simplify transcoding settings for various devices with
presets ensuring broad compatibility or optimized quality and size.
Custom Transcoding Presets: Customize presets for specific output targets, ensuring
consistency across pipelines.
A service by AWS built on blockchain, with reliable APIs and no specialized infrastructure, that powers your application with actionable, real-time blockchain data, allowing you to focus on innovation and speed to market with fully managed blockchain infrastructure.
Features:
Simplify Web3 Development with Amazon Managed Blockchain (AMB), streamlining
development for public and private blockchain networks.
Pricing:
● Pay for actual usage, scaling dynamically without infrastructure investment.
● Choose between dedicated service charging based on node instance, storage,
API requests, and data transfer sizes, or serverless service charging based on
API request count and complexity.
Links: https://aws.amazon.com/managed-blockchain/
https://aws.amazon.com/managed-blockchain/pricing/