AWS Cloud Practitioner Essentials Summary

The document provides an overview of AWS cloud computing concepts including: - The benefits of cloud computing such as variable expenses, stopping data center management, scaling capacity on demand, economies of scale, speed and agility, and global access. - Compute services such as Amazon EC2, AWS Lambda, ECS, EKS, and Fargate. - Networking concepts including VPC, subnets, route tables, security groups, and AWS Direct Connect. - Global infrastructure including regions, availability zones, edge locations, Route 53, and Outposts. - Management tools including the AWS Management Console, CLI, SDKs, CloudFormation, and Elastic Beanstalk.


AWS Cloud Practitioner Essentials

Module 1 : Introduction to AWS


Benefits of Cloud Computing

1. Trade upfront expense for variable expense


Upfront expense refers to data centers, physical servers, and other resources that you would
need to invest in before using them. Variable expense means you only pay for computing
resources you consume instead of investing heavily in data centers and servers before you
know how you’re going to use them.
By taking a cloud computing approach that offers the benefit of variable expense, companies
can implement innovative solutions while saving on costs.
2. Stop spending money to run and maintain data centers
Computing in data centers often requires you to spend more money and time managing
infrastructure and servers.
A benefit of cloud computing is the ability to focus less on these tasks and more on your
applications and customers.
3. Stop guessing capacity
With cloud computing, you don’t have to predict how much infrastructure capacity you will
need before deploying an application.
For example, you can launch Amazon EC2 instances when needed, and pay only for the
compute time you use. Instead of paying for unused resources or having to deal with limited
capacity, you can access only the capacity that you need. You can also scale in or scale out in
response to demand.
4. Benefit from massive economies of scale
By using cloud computing, you can achieve a lower variable cost than you can get on your
own.
Because usage from hundreds of thousands of customers can aggregate in the cloud,
providers, such as AWS, can achieve higher economies of scale. The economy of scale
translates into lower pay-as-you-go prices.
5. Increase speed and agility
The flexibility of cloud computing makes it easier for you to develop and deploy applications.
This flexibility provides you with more time to experiment and innovate. When computing in
data centers, it may take weeks to obtain new resources that you need. By comparison,
cloud computing enables you to access new resources within minutes.
6. Go global in minutes
The global footprint of the AWS Cloud enables you to deploy applications to customers
around the world quickly, while providing them with low latency. This means that even if
you are located in a different part of the world than your customers, customers are able to
access your applications with minimal delays.
Later in this course, you will explore the AWS global infrastructure in greater detail. You will
examine some of the services that you can use to deliver content to customers around the
world.
Module 2 : Compute in the Cloud
Amazon EC2

Instance family :

- General Purpose
- Compute Optimize
- Memory Optimize
- Storage Optimize
- Accelerated Computing

General purpose instances provide a balance of compute, memory, and networking resources.
Compute optimized instances are better suited for batch processing workloads than general
purpose instances.

Memory optimized instances are ideal for workloads that process large datasets in memory,
such as high-performance databases.

Storage optimized instances are designed for workloads that require high, sequential read and write
access to large datasets on local storage.

Batch processing involves processing data in groups; a compute optimized instance is ideal for this
type of workload, because it benefits from a high-performance processor.

Amazon EC2 supports both vertical and horizontal scaling.

For horizontal scaling, Amazon EC2 offers:

Amazon EC2 Auto Scaling (automated horizontal scaling)

To distribute requests across the pool of instances managed by Amazon EC2 Auto Scaling, we need:

Elastic Load Balancing (ELB)

Amazon EC2 has several billing options:

- On-Demand (most flexible, with no contract)
- Savings Plans (apply to AWS Lambda and AWS Fargate as well as EC2 instances)
- Reserved Instances (a 1- or 3-year contract for a certain level of usage)
- Spot Instances (use unused EC2 capacity at a discount)
- Dedicated Hosts
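As a rough illustration of how these options trade commitment for price, here is a sketch with hypothetical rates (the numbers are invented for the example, not actual AWS prices):

```python
# Illustrative cost comparison between On-Demand and Reserved pricing.
# The hourly rates below are hypothetical, not actual AWS prices.
ON_DEMAND_HOURLY = 0.10   # hypothetical On-Demand rate (USD/hour)
RESERVED_HOURLY = 0.06    # hypothetical 1-year Reserved effective rate

def monthly_cost(hourly_rate, hours=730):
    """Cost of running one instance around the clock for a month (~730 hours)."""
    return round(hourly_rate * hours, 2)

on_demand = monthly_cost(ON_DEMAND_HOURLY)          # always-on, no commitment
reserved = monthly_cost(RESERVED_HOURLY)            # cheaper, but contracted
savings = round((1 - reserved / on_demand) * 100)   # percent saved in this example
```

With these made-up rates, committing to a Reserved Instance saves about 40% for an always-on workload; On-Demand only wins when the instance runs a fraction of the time.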

Messaging and Queueing

Tightly coupled architecture = the frontend and backend (or similar components) communicate
directly. If one component enters a failure state, the other components are affected.

Loosely coupled architecture = a single failure won't cause cascading failures in other components,
because the component in the failure state is isolated. In this approach, a message queue sits in the
middle of the communication between components, queueing the requests/messages passed from
one to another.
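The loosely coupled pattern above can be sketched with an in-process queue, used here as a conceptual stand-in for Amazon SQS (this is not the AWS API):

```python
import queue

# Conceptual sketch: a message queue decouples producer and consumer,
# so the frontend can keep accepting orders even if the backend is
# temporarily unavailable.
order_queue = queue.Queue()

def frontend_place_order(order):
    # The frontend only talks to the queue, never to the backend directly.
    order_queue.put(order)

def backend_process_orders():
    # The backend drains whatever messages have accumulated.
    drained = []
    while not order_queue.empty():
        drained.append(order_queue.get())
    return drained

frontend_place_order("latte")
frontend_place_order("espresso")       # still queued even if the backend is down
processed = backend_process_orders()   # backend catches up when it recovers
```

The key property: neither side blocks on the other, so one component's failure does not cascade.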

Loosely Coupled Architecture in AWS Services :

- Amazon Simple Queue Service (SQS)


- Amazon Simple Notification Service (SNS)

Payload : The Data contained within a message

Amazon SQS queues : Where messages are placed until they are processed

Amazon SNS topic : A channel for messages to be delivered

Amazon SNS is a broadcaster that sends messages or notifications to services and even to end users.

Services and endpoints that can receive messages from Amazon SNS include SQS queues, AWS
Lambda functions, and HTTP/HTTPS webhooks. End users can receive them via mobile push, SMS, or email.

In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several
other options.

Additional Compute Services

AWS Lambda – Serverless Computation

AWS Lambda is a service that lets you run code without needing to provision or manage servers.

How AWS Lambda Works :

1. Upload code to Lambda


2. Set code to trigger from an event source (event source : AWS services, mobile app, or HTTP
endpoints)
3. Code runs only when triggered
4. Pay only for the compute time you use
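A minimal handler illustrates steps 1-3. The `handler(event, context)` signature is what Lambda invokes for Python functions; the event shape used here is hypothetical:

```python
import json

# A minimal Lambda-style handler, runnable locally for illustration.
# Lambda calls handler(event, context) when the configured trigger
# (an AWS service, mobile app, or HTTP endpoint) fires.
def handler(event, context=None):
    # The "name" field is a hypothetical event attribute for this example.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulate a trigger by invoking the handler directly with a sample event.
response = handler({"name": "AWS"})
```

Because the code runs only when triggered, you are billed only for the compute time each invocation consumes.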

Container Management :

Amazon ECS (Elastic Container Service)

Amazon EKS (Elastic Kubernetes Service)

Both of them support the Docker container management system

ECS and EKS can run on top of EC2

AWS Fargate – serverless compute platform for ECS and EKS


Compute Services

Amazon EC2

- Host traditional applications


- Full access to the OS

AWS Lambda

- Short running functions


- Service-oriented applications
- Event driven applications
- No provisioning or managing servers

Run Docker container-based workloads on AWS

Choose orchestration tools :

- Amazon ECS
- Amazon EKS

After choosing an orchestration tool, choose the platform :

- Run containers on Amazon EC2 that you manage


- AWS Fargate (Serverless Environment)

Serverless computing : your code runs on servers without you needing to provision or manage
those servers.
Module 3 : Global Infrastructure and Reliability
There are four factors to consider when picking a Region :

1. Compliance Requirements (Compliance with data governance and legal requirements)


2. Proximity. Speed of access: if most of your customers live in Singapore, you should pick the
Singapore Region. This affects latency
3. Feature availability, because sometimes the closest Region does not have the features you want
4. Pricing

Availability Zones: the AZs within a Region are located tens of miles apart from each other.

Amazon CloudFront

- Uses a CDN (Content Delivery Network)
- Amazon CloudFront uses Edge Locations to accelerate content delivery to customers all
around the world.

Edge Locations are separate from Regions

Amazon Route 53 for DNS

AWS Outposts lets you run AWS services locally, inside your own building (your own data
center)

AWS Outposts is a service that enables you to run infrastructure in a hybrid cloud approach.

Key Points :

1. Regions are geographically isolated areas


2. Regions contain Availability Zones
3. Edge Locations run Amazon CloudFront to deliver cached content near the customers.

Provisioning AWS Resources

To interact with the services at the AWS Global Infrastructure, we use an API.

Interacting with AWS services :

1. AWS Management Console


2. AWS Command Line Interface (CLI)
3. AWS Software Development Kits (SDKs)
4. Various other tools, for example :
a. AWS Elastic Beanstalk
b. AWS CloudFormation
AWS Management Console :

1. Test environments
2. View AWS bills
3. View monitoring
4. Work with non-technical resources

AWS Command Line : Make API calls using the terminal on your machine

AWS SDKs : Interact with AWS resources through various programming languages

SDKs enable you to use AWS services with your existing applications or create entirely new
applications that will run on AWS.

To help you get started with using SDKs, AWS provides documentation and sample code for each
supported programming language. Supported programming languages include C++, Java, .NET, and
more.

AWS Elastic Beanstalk

Service that helps you to provision Amazon EC2-based environments

Can save configuration and re-deploy easily

AWS Elastic Beanstalk helps you to focus on your business application, not the infrastructure

AWS Elastic Beanstalk performs the following tasks :

- Adjust capacity
- Load balancing
- Automatic scaling
- Application health monitoring

AWS CloudFormation

Service that helps you to create automated and repeatable deployments

AWS CloudFormation is an IaC (Infrastructure as Code) tool used to define a wide variety of AWS
resources using JSON or YAML text-based documents, called CloudFormation templates.

AWS CloudFormation support isn't limited to EC2-based solutions; it also supports :

- Storage
- Databases
- Analytics
- Machine Learning, more
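A minimal CloudFormation template illustrates the YAML template format described above (the bucket name is hypothetical and would need to be globally unique):

```yaml
# Illustrative CloudFormation template: declares a single S3 bucket.
# The bucket name below is an example; S3 bucket names must be
# globally unique, so you would choose your own.
AWSTemplateFormatVersion: '2010-09-09'
Description: Example template that creates one S3 bucket.
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-12345
```

Deploying the same template repeatedly produces the same resources, which is what makes deployments automated and repeatable.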
Module 4 : Networking
Amazon Virtual Private Cloud (VPC)

AWS Direct Connect : connect directly from your data center to AWS

AWS Direct Connect is a dedicated physical line to your AWS VPC. The private connection that AWS Direct
Connect provides helps you to reduce network costs and increase the amount of bandwidth that can
travel through your network.

Virtual Private Gateway : for VPN connections from your office to AWS

AWS has a wide range of tools to cover every layer of security :

- Network hardening
- Application security
- User identity
- Authentication and authorization
- Distributed denial of service prevention
- Data integrity
- Encryption, etc

Network ACL : checks each incoming packet and lets it in only if it matches an allow rule, and does
the same for outgoing packets.

By default, your account’s default network ACL allows all inbound and outbound traffic, but you can
modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is
denied until you add rules to specify which traffic should be allowed. Additionally, all network ACLs
have an explicit deny rule. This rule ensures that if a packet doesn’t match any of the other rules on
the list, the packet is denied.
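The rule-evaluation behavior described above can be sketched as follows; the rule shape `(number, port, action)` is a simplification for illustration, not the AWS data model:

```python
# Conceptual sketch of network ACL evaluation: rules are checked in
# ascending rule-number order, the first match wins, and a final
# deny-all rule rejects any packet that matched no other rule.
rules = [
    (100, 443, "allow"),   # allow HTTPS
    (200, 22, "deny"),     # deny SSH
]

def evaluate(port):
    for _number, rule_port, action in sorted(rules):  # lowest rule number first
        if rule_port == port:
            return action                              # first match wins
    return "deny"  # the explicit deny rule at the end of every ACL

https = evaluate(443)   # matches rule 100
ssh = evaluate(22)      # matches rule 200
other = evaluate(8080)  # matches nothing, so the final deny applies
```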

Security Groups : by default, deny all inbound traffic and allow all outbound traffic; you allow only
the specific inbound traffic you have configured. Security groups are stateful, so return traffic for an
allowed request is automatically allowed.

Amazon Route 53, DNS services.

Routing policies :

- Latency-based routing
- Geolocation routing
- Geoproximity routing
- Weighted round robin
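As a sketch of the weighted round robin idea, each endpoint receives a share of requests proportional to its weight (the endpoint names and weights here are hypothetical):

```python
import itertools

# Conceptual sketch of weighted round robin: an endpoint with weight 3
# receives three times as many requests as one with weight 1.
endpoints = {"us-east-1": 3, "eu-west-1": 1}

def weighted_cycle(weights):
    # Repeat each endpoint by its weight, then cycle through the list forever.
    expanded = [ep for ep, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

router = weighted_cycle(endpoints)
first_eight = [next(router) for _ in range(8)]
# Over 8 requests: 6 go to us-east-1 and 2 to eu-west-1.
```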
Module 5 : Storage and Databases
Instance Store and Amazon EBS

Instance stores are block-level storage volumes that behave like physical hard drives; their data is lost if the instance stops or terminates.

EBS, Elastic Block Store, is a service that provides block-level storage volumes that you can use with
Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the
attached EBS volume remains available.

An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies
all the data. For subsequent backups, only the blocks of data that have changed since the most
recent snapshot are saved.
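The incremental snapshot idea can be sketched as follows (an illustration only, not the actual EBS implementation):

```python
# Conceptual sketch of incremental snapshots: the first snapshot copies
# every block; later snapshots store only the blocks that changed since
# the previous snapshot.
def snapshot(volume, previous=None):
    if previous is None:
        return dict(volume)  # first backup: copy all the data
    # Subsequent backups: keep only blocks that differ from the last snapshot.
    return {block: data for block, data in volume.items()
            if previous.get(block) != data}

volume = {"block1": "aaa", "block2": "bbb", "block3": "ccc"}
first = snapshot(volume)            # full copy: all 3 blocks
volume["block2"] = "BBB"            # one block changes on the volume
second = snapshot(volume, first)    # incremental: only the changed block
```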

Incremental backups are different from full backups, in which all the data in a storage volume is copied
each time a backup occurs. The full backup includes data that has not changed since the most recent
backup.

Amazon S3 : store and retrieve data

The benefits of Amazon S3 :

- Store data as objects


- Store objects in buckets
- Upload a maximum object size of 5TB
- Version objects
- Create multiple buckets

In object storage, each object consists of data, metadata, and a key.

Amazon S3 offers unlimited storage space. The maximum file size for an object in Amazon S3 is 5 TB.

Amazon S3 storage classes

With Amazon S3, you pay only for what you use. You can choose from a range of storage classes to
select a fit for your business and cost needs. When selecting an Amazon S3 storage class, consider
these two factors:

- How often you plan to retrieve your data


- How available you need your data to be

Amazon S3 Glacier for archiving data

Amazon S3 tiering / classes :

- S3 Standard
o Designed for frequently accessed data
o Stores data in a minimum of three Availability Zones
S3 Standard provides high availability for objects. This makes it a good choice for a
wide range of use cases, such as websites, content distribution, and data analytics.
S3 Standard has a higher cost than other storage classes intended for infrequently
accessed data and archival storage.
- S3 Standard-Infrequent Access (S3 Standard-IA)
o Ideal for infrequently accessed data
o Similar to S3 Standard but has a lower storage price and higher retrieval price
S3 Standard-IA is ideal for data infrequently accessed but requires high availability
when needed. Both S3 Standard and S3 Standard-IA store data in a minimum of
three Availability Zones. S3 Standard-IA provides the same level of availability as S3
Standard but with a lower storage price and a higher retrieval price.
- S3 One Zone-Infrequent Access (S3 One Zone-IA)
o Stores data in a single Availability Zone
o Has a lower storage price than S3 Standard-IA
Compared to S3 Standard and S3 Standard-IA, which store data in a minimum of
three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This
makes it a good storage class to consider if the following conditions apply:
 You want to save costs on storage.
 You can easily reproduce your data in the event of an Availability Zone
failure.
- S3 Intelligent-Tiering
o Ideal for data with unknown or changing access patterns
o Requires a small monthly monitoring and automation fee per object
In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access
patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3
automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an
object in the infrequent access tier, Amazon S3 automatically moves it to the
frequent access tier, S3 Standard.
- S3 Glacier
o Low-cost storage designed for data archiving
o Able to retrieve objects within a few minutes to hours
S3 Glacier is ideal for archived data that you do not need to access right away, such
as records retained for compliance. You can choose a retrieval option that balances
cost against how quickly you need the data back.
- S3 Glacier Deep Archive
o Lowest-cost object storage class, ideal for long-term archiving
o Able to retrieve objects within 12 hours
S3 Glacier Deep Archive suits data that might be accessed only once or twice a year,
such as long-term backups and data retained for regulatory requirements.
Amazon EBS vs Amazon S3

Amazon EBS

- Sizes up to 16 TB
- Survive termination of their EC2 instance
- Solid state by default
- HDD options

Amazon S3

- Unlimited storage
- Individual objects up to 5TBs
- Write once/read many
- 99.999999999% (11 nines) durability
- Web enabled (every file have an url)
- Regionally distributed
- Offers cost savings
- Serverless

If you have a photo gallery website, use S3 for the many photo files.

If you have an 80 GB video file, use EBS: the video file is broken down into blocks, small component parts.

S3 > every new change or update requires the entire file to be uploaded again.

If you are using complete objects, or make only occasional changes, S3 is victorious.

EBS > every new change uploads just the recently updated blocks.

If you are doing complex read, write, change functions, EBS is the knockout winner.

Amazon EFS (Elastic File System)

Multiple instances can access the data in the EFS at the same time. Compared to block storage and
object storage, file storage is ideal for use cases in which a large number of services and resources
need to access the same data at the same time.

It scales automatically and is made redundant by AWS.


Amazon EBS vs Amazon EFS

Amazon EBS

- Volumes attach to EC2 instances


- Availability Zone level resource (same AZ with EC2)
- Need to be in the same AZ to attach EC2 instances
- Volumes do not automatically scale

Amazon EFS

- Multiple instances reading and writing simultaneously


- Linux file system
- Regional resources
- Automatically scales (without provisioning)

Amazon RDS (Relational Database Service)

- Automated patching
- Backups
- Redundancy
- Failover
- Disaster Recovery

You can integrate Amazon RDS with other services to fulfill your business and operational needs,
such as using AWS Lambda to query your database from a serverless application.

Amazon Aurora, for MySQL and PostgreSQL.

- Priced at 1/10th the cost of commercial databases
- Data replication: six copies of your data at any given time
- Up to 15 read replicas
- Continuous backups to S3
- Point-in-time recovery: recover data from a specific period

Amazon DynamoDB, a serverless database. A non-relational database (non-SQL database)

Is a key-value database service. It delivers single-digit millisecond performance at any scale.

- Tend to have simple flexible schemas, not complex rigid schemas


- Purpose built (specific use cases)
- Millisecond response time
- Fully managed
- Highly scalable

Non-relational database, you create tables. A table is a place where you can store and query data.
Nonrelational databases are sometimes referred to as “NoSQL databases” because they use
structures other than rows and columns to organize data. One type of structural approach for
nonrelational databases is key-value pairs. With key-value pairs, data is organized into items (keys),
and items have attributes (values). You can think of attributes as being different features of your
data.

In a key-value database, you can add or remove attributes from items in the table at any time.
Additionally, not every item in the table has to have the same attributes.

Example of data in a nonrelational database:

Key  Value
1    Name: John Doe; Address: 123 Any Street; Favorite drink: Medium latte
2    Name: Mary Major; Address: 100 Main Street; Birthday: July 5, 1994
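The items above can be modeled as plain key-value structures, showing that items in the same table need not share the same attributes (the attribute added below is hypothetical):

```python
# Illustrative key-value items mirroring the table above: each key maps
# to an item, and items can have different attributes (flexible schema).
items = {
    1: {"Name": "John Doe", "Address": "123 Any Street",
        "Favorite drink": "Medium latte"},
    2: {"Name": "Mary Major", "Address": "100 Main Street",
        "Birthday": "July 5, 1994"},
}

# Attributes can be added or removed from any item at any time.
items[1]["Loyalty points"] = 42   # hypothetical new attribute on item 1 only
del items[2]["Birthday"]          # remove an attribute from item 2
```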

Amazon RDS vs Amazon DynamoDB

Amazon RDS

- Automatic high availability; recovery provided


- Customer ownership of data
- Customer ownership of schema
- Customer control of network

Amazon DynamoDB

- Key-value
- Massive throughput capabilities
- PB size potential (PetaByte)
- Granular API access

RDS use case : You have sales supply chain management system that you have to analyze for weak
spots. Using RDS because you need complex relational joins.

DynamoDB use case : you have an employee contact list: names, phone numbers, emails, employee
IDs. Well, this is all single table territory. I could use a relational database for this, but the things that
make relational databases great, all of that complex functionality, creates overhead and lag and
expense if you're not actually using it. This is where non-relational databases, like DynamoDB, deliver
the knockout punch. By eliminating all the overhead, DynamoDB allows you to build powerful,
incredibly fast databases where you don't need complex join functionality.
Amazon Redshift

Data warehousing as a service for big data analytics. It offers the ability to collect data from many
sources and helps you to understand relationships and trends across your data.

AWS Database Migration Service (DMS)

Enables you to migrate relational databases, nonrelational databases, and other types of data stores.

- The source database remains fully operational during the migration


- Downtime is minimized for applications that rely on that database
- The source and target databases don’t have to be of the same type

From :

- EC2
- On Premises
- RDS

To :

- EC2
- On Premises
- RDS

AWS DMS will be in the middle for migration process.

Homogeneous databases : same type (ex: MySQL to RDS for MySQL)

Heterogeneous databases : different types; the schema is first converted with the AWS Schema
Conversion Tool to match the target database, and then DMS migrates the data.

AWS DMS use cases :

- Development and test database migrations (ex: copy the production db to a dev/test db)
- Database consolidation (ex : consolidate several dbs into one central db)
- Continuous database replication (ex : continuous db replication for disaster recovery)

Amazon DocumentDB (with MongoDB compatibility)

For use case content management system. Great for content management, catalog, user profiles.

Amazon Neptune (a graph database)

Amazon Neptune to build and run applications that work with highly connected datasets, such as
recommendation engines, fraud detection, and knowledge graphs.
Amazon Managed Blockchain

Amazon Managed Blockchain is a service that you can use to create and manage blockchain
networks with open-source frameworks.

Blockchain is a distributed ledger system that lets multiple parties run transactions and share data
without a central authority.

Amazon QLDB (Quantum Ledger Database)

is a ledger database service. You can use Amazon QLDB to review a complete history of all the
changes that have been made to your application data.

Database Accelerator

Amazon ElastiCache

is a service that adds caching layers on top of your databases to help improve the read times of
common requests. It supports two types of data stores: Redis and Memcached.
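The read-through caching idea behind ElastiCache can be sketched as follows (a conceptual illustration, not the ElastiCache or Redis API):

```python
# Conceptual sketch of a caching layer in front of a database: reads
# check the cache first and fall back to the (simulated) database only
# on a miss, so repeated reads of common data skip the slow path.
database = {"user:1": "Alice", "user:2": "Bob"}  # stands in for a real DB
cache = {}
db_reads = 0  # counts how often we had to hit the "database"

def get(key):
    global db_reads
    if key in cache:          # cache hit: fast path
        return cache[key]
    db_reads += 1             # cache miss: slow path to the database
    value = database[key]
    cache[key] = value        # populate the cache for next time
    return value

a = get("user:1")   # miss -> reads the database
b = get("user:1")   # hit  -> served from cache, no database read
```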

Amazon DynamoDB Accelerator (DAX)

is an in-memory cache for DynamoDB. It helps improve response times from single-digit milliseconds
to microseconds.
Module 6 : Security
Shared Responsibility Model

AWS Identity and Access Management (IAM)

User management for AWS services.

Principle of least privilege : A user is granted access only to what they need

IAM policies are created from JSON documents

In the JSON file :

- Effect : (allow or deny)


- Action : (what service and what action), listing the AWS API calls allowed or denied
- Resource : (the unique ID of the resource within the service), listing the AWS resources affected
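Putting the three fields together, a minimal illustrative policy granting read access to objects in a hypothetical bucket looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

The bucket name `example-bucket` is a placeholder; Effect, Action, and Resource map directly to the three fields listed above.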

IAM Groups. You can attach policy/permission to the group, so all members of the group will have
the permissions.

AWS IAM : By default, when you create a new IAM user in AWS, it has no permissions associated
with it.

- Root user
- Users
- Groups
- Policies
- Roles
- Identity Federation

If you have your own identities from your company, you can federate those users into AWS using
role-based access. Use one login for both your corporate systems and AWS.
MFA especially for root user.

AWS IAM Roles :

- Associated permissions
- Allow or deny specific actions
- Assumed for temporary amounts of time
- No username and password
- Access to temporary permissions
- Grant access to AWS resources (users, external identities, applications, other AWS services)

AWS IAM Roles : What if a coffee shop employee hasn’t switched jobs permanently, but instead,
rotates to different workstations throughout the day? This employee can get the access they need
through IAM roles.

When an identity assumes a role, it abandons all of the previous permissions that it has and it
assumes the permissions of that role.

You can actually avoid creating IAM users for every person in your organization by federating users
into your account. This means that they could use their regular corporate credentials to log into AWS
by mapping their corporate identities to IAM roles.

AWS Organizations

A central location to manage multiple AWS accounts.

- Centralized management
- Consolidated billing (for bulk discounts)
- Implement hierarchical groupings of accounts (grouping to Organizational Units (OU))
- AWS service and API actions access control (using SCPs (Service Control Policies)

SCP : Specify the maximum permissions for member accounts in the organization. In essence, with
SCPs you can restrict which AWS services, resources, and individual API actions the users and roles
in each member account can access.

Organizational Unit

Group accounts into organizational units (OUs) to make it easier to manage accounts with similar
business or security requirements. When you apply a policy to an OU, all the accounts in the OU
automatically inherit the permissions specified in the policy.

By organizing separate accounts into OUs, you can more easily isolate workloads or applications that
have specific security requirements. For instance, if your company has accounts that can access only
the AWS services that meet certain regulatory requirements, you can put these accounts into one
OU. Then, you can attach a policy to the OU that blocks access to all other AWS services that do not
meet the regulatory requirements.
Compliance

AWS Artifact, is a service that provides on-demand access to AWS security and compliance reports
and select online agreements. AWS Artifact consists of two main sections :
- AWS Artifact Agreements
Suppose that your company needs to sign an agreement with AWS regarding your use of
certain types of information throughout AWS services. You can do this through AWS Artifact
Agreements.
In AWS Artifact Agreements, you can review, accept, and manage agreements for an
individual account and for all your accounts in AWS Organizations. Different types of
agreements are offered to address the needs of customers who are subject to specific
regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
- AWS Artifact Reports
Suppose that a member of your company’s development team is building an application and
needs more information about their responsibility for complying with certain regulatory
standards. You can advise them to access this information in AWS Artifact Reports.
AWS Artifact Reports provide compliance reports from third-party auditors. These auditors
have tested and verified that AWS is compliant with a variety of global, regional, and
industry-specific security standards and regulations. AWS Artifact Reports remains up to
date with the latest reports released. You can provide the AWS audit artifacts to your
auditors or regulators as evidence of AWS security controls.

Customer Compliance Center

Contains resources to help you learn more about AWS compliance.

In the Customer Compliance Center, you can read customer compliance stories to discover how
companies in regulated industries have solved various compliance, governance, and audit
challenges.

You can also access compliance whitepapers and documentation on topics such as:

- AWS answers to key compliance questions


- An overview of AWS risk and compliance
- An auditing security checklist

Additionally, the Customer Compliance Center includes an auditor learning path. This learning path
is designed for individuals in auditing, compliance, and legal roles who want to learn more about
how their internal operations can demonstrate compliance using the AWS Cloud.

DDoS (Distributed Denial-of-Service)

UDP Flood (a low-level network attack): a bad actor sends requests to a third-party service (for
example, a weather service) with your server's address spoofed as the return address. The service
then sends its responses to your server, which is flooded with data it never asked for: overwhelming
traffic.
Solutions : Security Groups

HTTP-level attack: some attacks are much more sophisticated. They look like normal customers
asking for normal things, like complicated product searches over and over and over, all coming from
an army of zombified bot machines. They ask for so much attention that regular customers can't get
in.
Slowloris Attack

Imagine standing in line at the coffee shop, when someone in front of you takes seven minutes to
order their whatever it is they're ordering, and you don't get to order until they finish and get out of
your way. Well, Slowloris attack is the exact same thing. Instead of a normal connection, I would like
to place an order, the attacker pretends to have a terribly slow connection. You get the picture.
Meanwhile, your production servers are standing there waiting for the customer to finish their
request so they can dash off and return the result. But until they get the entire packet, they can't
move on to the next thread, the next customer. A few Slowloris attackers can exhaust the capacity of
your entire front end with almost no effort at all.

Solutions : Elastic Load Balancer (ELB)

AWS Shield with AWS WAF

AWS WAF : a Web Application Firewall that recognizes the signatures of bad actors.

AWS WAF lets you monitor network requests that come into your web applications, and blocks or
allows traffic by using a web access control list (ACL) to protect your AWS resources, somewhat like
an .htaccess file that blocks access based on a list.

AWS Shield : Minimize effect of DoS and DDoS

AWS Shield provides two levels of protection :

- Standard
AWS Shield Standard automatically protects all AWS customers at no cost. It protects your
AWS resources from the most common, frequently occurring types of DDoS attacks.
As network traffic comes into your applications, AWS Shield Standard uses a variety of
analysis techniques to detect malicious traffic in real time and automatically mitigates it.
- Advanced
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the
ability to detect and mitigate sophisticated DDoS attacks.
It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and
Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing
custom rules to mitigate complex DDoS attacks.

Additional Services :

Encryption at rest: protecting data while it is idle. For example, Amazon DynamoDB encrypts table
data at rest, integrating with AWS KMS (Key Management Service).

AWS KMS

AWS KMS enables you to perform encryption operations through the use of cryptographic keys. A
cryptographic key is a random string of digits used for locking (encrypting) and unlocking
(decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can
also control the use of keys across a wide range of services and in your applications.
With AWS KMS, you can choose the specific levels of access control that you need for your keys. For
example, you can specify which IAM users and roles are able to manage keys. Alternatively, you can
temporarily disable keys so that they are no longer in use by anyone. Your keys never leave AWS
KMS, and you are always in control of them.
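The locking/unlocking role of a key can be illustrated with a toy cipher. This is deliberately not real cryptography (XOR with a fixed key stands in for the strong algorithms KMS actually uses), and unlike this toy, KMS never exposes the key material to you:

```python
# Toy illustration of a cryptographic key locking (encrypting) and
# unlocking (decrypting) data. XOR with a fixed keystream stands in
# for a real cipher; do NOT use this for actual security.

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, key))

toy_decrypt = toy_encrypt  # XOR is its own inverse

key = bytes(range(1, 33))              # in KMS, the key stays inside the service
ciphertext = toy_encrypt(b"order #42", key)
assert ciphertext != b"order #42"      # unreadable without the key
assert toy_decrypt(ciphertext, key) == b"order #42"
```

The point the toy makes: the same key that locks the data is needed to unlock it, so controlling access to the key (which is what KMS does) controls access to the data.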

Encryption in transit applies when data moves from A to B.

Amazon Inspector

An automated security assessment service for applications (for example, EC2 instances). It checks applications for
security vulnerabilities and deviations from security best practices, such as open access to Amazon
EC2 instances and installations of vulnerable software versions.

Use case :

Suppose that the developers at the coffee shop are developing and testing a new ordering
application. They want to make sure that they are designing the application in accordance with
security best practices. However, they have several other applications to develop, so they cannot
spend much time conducting manual assessments. To perform automated security assessments,
they decide to use Amazon Inspector.

After Amazon Inspector has performed an assessment, it provides you with a list of security findings.
The list is prioritized by severity level and includes a detailed description of each security issue and a
recommendation for how to fix it. However, AWS does not guarantee that following the provided
recommendations resolves every potential security issue. Under the shared responsibility model,
customers are responsible for the security of their applications, processes, and tools that run on
AWS services.

Key points of Amazon Inspector :

- Network configuration reachability piece
- Amazon agent
- Security assessment service

Amazon GuardDuty

Is a service that provides intelligent threat detection for your AWS infrastructure and resources. It
identifies threats by continuously monitoring the network activity and account behavior within your
AWS environment.

GuardDuty then continuously analyzes data from multiple AWS sources, including VPC Flow Logs and
DNS logs. You can also configure AWS Lambda functions to take remediation steps automatically in
response to GuardDuty’s security findings.
Module 7 : Monitoring and Analytics
Monitoring

Monitoring : observing systems, collecting metrics, and then using data to make decisions.

Amazon CloudWatch

Monitoring AWS Infrastructure in real-time. CloudWatch is a web service that enables you to
monitor and manage various metrics and configure alarm actions based on data from those metrics.

Amazon CloudWatch alarms alert you and trigger actions. You can create custom metrics for your
needs. CloudWatch also integrates with Amazon SNS, to alert you via SMS. With CloudWatch, you
can create alarms that automatically perform actions if the value of your metric goes above or below
a predefined threshold.

Use case for an alarm :

For example, suppose that your company’s developers use Amazon EC2 instances for application
development or testing purposes. If the developers occasionally forget to stop the instances, the
instances will continue to run and incur charges.

In this scenario, you could create a CloudWatch alarm that automatically stops an Amazon EC2
instance when the CPU utilization percentage has remained below a certain threshold for a specified
period. When configuring the alarm, you can specify to receive a notification whenever this alarm is
triggered.
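The alarm logic from this scenario can be sketched as a simple threshold check over recent datapoints (a simplified model; real CloudWatch alarms support more states and options):

```python
# Simplified model of a CloudWatch alarm: go to ALARM only when the
# metric has stayed below the threshold for every evaluation period.

def alarm_state(cpu_datapoints, threshold=5.0, evaluation_periods=3):
    """cpu_datapoints: most recent CPU utilization samples, oldest first."""
    window = cpu_datapoints[-evaluation_periods:]
    if len(window) < evaluation_periods:
        return "INSUFFICIENT_DATA"
    return "ALARM" if all(v < threshold for v in window) else "OK"

# Developer forgot to stop a test instance: CPU idles near zero.
print(alarm_state([1.2, 0.8, 0.5]))    # ALARM -> stop the instance, notify via SNS
print(alarm_state([1.2, 45.0, 0.5]))   # OK    -> the instance is actually in use
```

Requiring the breach to persist for several periods is what keeps a brief dip in CPU from stopping an instance that someone is still using.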

Benefit using CloudWatch :

- Access all your metrics from a central location
- Gain visibility into your applications, infrastructure, and services
- Reduce MTTR (mean time to resolution) and improve TCO (total cost of ownership)
For example, if the MTTR for servicing the coffee shop's machines is shorter, the shop saves
on the total cost of owning them.
- Drive insights to optimize applications and operational resources

AWS CloudTrail

A comprehensive API auditing tool. Every request, to every AWS service, gets logged by the
CloudTrail engine.

AWS CloudTrail records API calls for your account. The recorded information includes the identity of
the API caller, the time of the API call, the source IP address of the API caller, and more. You can
think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind
them.

Recall that you can use API calls to provision, manage, and configure your AWS resources. With
CloudTrail, you can view a complete history of user activity and API calls for your applications and
resources. Events are typically updated in CloudTrail within 15 minutes after an API call.
Example: AWS CloudTrail event

Suppose that the coffee shop owner is browsing through the AWS Identity and Access Management
(IAM) section of the AWS Management Console. They discover that a new IAM user named Mary
was created, but they do not know who created the user, when, or by which method.

To answer these questions, the owner navigates to AWS CloudTrail.

In the CloudTrail Event History section, the owner applies a filter to display only the events for the
“CreateUser” API action in IAM. The owner locates the event for the API call that created an IAM
user for Mary. This event record provides complete details about what occurred:

On January 1, 2020 at 9:00 AM, IAM user John created a new IAM user (Mary) through the AWS
Management Console.
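CloudTrail event records are JSON, so the who/when/how questions are answered by reading fields from the record. A trimmed-down example record (real events carry many more fields):

```python
import json

# Trimmed-down CloudTrail event record; real events include many more fields.
event = json.loads("""
{
  "eventTime": "2020-01-01T09:00:00Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "CreateUser",
  "userIdentity": {"type": "IAMUser", "userName": "John"},
  "requestParameters": {"userName": "Mary"},
  "sourceIPAddress": "203.0.113.10"
}
""")

who = event["userIdentity"]["userName"]
when = event["eventTime"]
what = event["requestParameters"]["userName"]
print(f"{who} created IAM user {what} at {when}")
```

Filtering the Event History on `eventName = CreateUser`, as the owner does above, narrows the trail down to exactly this kind of record.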

AWS Trusted Advisor

An automated advisor : a web service that inspects your AWS environment and provides real-time
recommendations in accordance with AWS best practices.

AWS Trusted Advisor has 5 pillars :

- Cost optimization
- Performance
- Security
- Fault tolerance
- Service limits

For each category:

- The green check indicates the number of items for which it detected no problems
- The orange triangle represents the number of recommended investigations
- The red circle represents the number of recommended actions
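These three indicators amount to counting findings by status. A sketch with hypothetical finding data:

```python
from collections import Counter

# Sketch: summarize Trusted Advisor-style findings into the three
# dashboard indicators. The finding data here is made up.
findings = [
    {"check": "MFA on Root Account", "status": "action_recommended"},
    {"check": "Security Groups - Unrestricted Access", "status": "investigation_recommended"},
    {"check": "Service Limits - EC2 On-Demand", "status": "no_problems"},
    {"check": "Idle Load Balancers", "status": "investigation_recommended"},
]

summary = Counter(f["status"] for f in findings)
print("green checks:", summary["no_problems"])                    # 1
print("orange triangles:", summary["investigation_recommended"])  # 2
print("red circles:", summary["action_recommended"])              # 1
```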
Module 8 : Pricing and Support
AWS Free Tier

3 types of offers are available:

- Always free
These offers do not expire and are available to all AWS customers.
For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of
compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.
- 12 months free
These offers are free for 12 months following your initial sign-up date to AWS.
Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly
hours of Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.
- Trials
Short-term free trial offers start from the date you activate a particular service. The length of
each trial might vary by number of days or the amount of usage in the service.
For example, Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that
enables you to run virtual private servers) offers 750 free hours of usage over a 30-day
period.

How AWS pricing works :

- Pay for what you use
For each service, you pay for exactly the amount of resources that you actually use, without
requiring long-term contracts or complex licensing.
- Pay less when you reserve
Some services offer reservation options that provide a significant discount compared to On-
Demand Instance pricing.
For example, suppose that your company is using Amazon EC2 instances for a workload that
needs to run continuously. You might choose to run this workload on Amazon EC2 Instance
Savings Plans, because the plan allows you to save up to 72% over the equivalent On-
Demand Instance capacity.
- Pay less with volume-based discounts when you use more.
Some services offer tiered pricing, so the per-unit cost is incrementally lower with increased
usage.
For example, the more Amazon S3 storage space you use, the less you pay for it per GB.
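Tiered pricing can be computed by filling each tier in turn. The rates and tier boundaries below are placeholders for illustration, not actual S3 prices:

```python
# Volume-based tiered pricing: the per-GB rate drops as usage grows.
# Rates and tier boundaries are illustrative, not real S3 prices.

TIERS = [
    (50_000, 0.023),          # first 50 TB (in GB) at $0.023/GB
    (450_000, 0.022),         # next 450 TB at $0.022/GB
    (float("inf"), 0.021),    # everything beyond at $0.021/GB
]

def monthly_storage_cost(gb: float) -> float:
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

print(monthly_storage_cost(10_000))   # 10 TB, entirely in the first tier
print(monthly_storage_cost(60_000))   # 60 TB, spans the first two tiers
```

Note that only the usage above each boundary gets the cheaper rate, so the average per-GB cost falls gradually rather than all at once.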

AWS Pricing Calculator

Lets you explore AWS services, group them into an estimate, and calculate prices under the pricing models above.
AWS Pricing Examples

Amazon S3

For Amazon S3 pricing, consider the following cost components:

Storage - You pay for only the storage that you use. You are charged the rate to store objects in your
Amazon S3 buckets based on your objects’ sizes, storage classes, and how long you have stored each
object during the month.

Requests and data retrievals - You pay for requests made to your Amazon S3 objects and buckets.
For example, suppose that you are storing photo files in Amazon S3 buckets and hosting them on a
website. Every time a visitor requests the website that includes these photo files, this counts
towards requests you must pay for.

Data transfer - There is no cost to transfer data between different Amazon S3 buckets or from
Amazon S3 to other services within the same AWS Region. However, you pay for data that you
transfer into and out of Amazon S3, with a few exceptions. There is no cost for data transferred into
Amazon S3 from the internet or out to Amazon CloudFront. There is also no cost for data transferred
out to an Amazon EC2 instance in the same AWS Region as the Amazon S3 bucket.

Management and replication - You pay for the storage management features that you have enabled
on your account’s Amazon S3 buckets. These features include Amazon S3 inventory, analytics, and
object tagging.

Example :

The AWS account in this example has used Amazon S3 in two Regions: Northern Virginia and
Ohio. For each Region, itemized charges are based on the following factors:

o The number of requests to add or copy objects into a bucket
o The number of requests to retrieve objects from a bucket
o The amount of storage space used

All the usage for Amazon S3 in this example is under the AWS Free Tier limits, so the account
owner would not have to pay for any Amazon S3 usage in this month.
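An itemized per-Region estimate combines storage, request, and free-tier components. The unit rates and free-tier allowances in this sketch are hypothetical placeholders:

```python
# Sketch of an itemized S3 estimate for one Region. All rates and
# free-tier allowances below are hypothetical placeholders.

FREE_TIER = {"storage_gb": 5, "put_requests": 2_000, "get_requests": 20_000}
RATES = {
    "storage_gb": 0.023,              # $/GB-month
    "put_requests": 0.005 / 1_000,    # $ per PUT/COPY request
    "get_requests": 0.0004 / 1_000,   # $ per GET request
}

def estimate_region_cost(usage: dict) -> float:
    total = 0.0
    for item, amount in usage.items():
        billable = max(0, amount - FREE_TIER[item])  # free tier comes off first
        total += billable * RATES[item]
    return round(total, 2)

# Usage entirely under the free tier -> $0.00 for the month.
print(estimate_region_cost({"storage_gb": 2, "put_requests": 500, "get_requests": 9_000}))
# Usage above the free tier -> only the remainder is billable.
print(estimate_region_cost({"storage_gb": 105, "put_requests": 2_000, "get_requests": 20_000}))
```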
AWS Billing Dashboard

Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and
analyze and control your costs.

- Compare your current month-to-date balance with the previous month, and get a forecast
of the next month based on current usage.
- View month-to-date spend by service.
- View Free Tier usage by service.
- Access Cost Explorer and create budgets.
- Purchase and manage Savings Plans.
- Publish AWS Cost and Usage Reports.

Consolidated billing (for AWS Organization)

Features :

- Simplifies billing process
- Share savings across accounts
- Free feature

The default maximum number of accounts allowed for an organization is 4, but you can contact AWS
Support to increase your quota, if needed.

Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and
Reserved Instances across the accounts in your organization. For instance, one account might not
have enough monthly usage to qualify for discount pricing. However, when multiple accounts are
combined, their aggregated usage may result in a benefit that applies across all accounts in the
organization.

AWS Budgets

You can create budgets to plan your service usage, service costs, and instance reservations.

The information in AWS Budgets updates 3 times a day. You can also set custom alerts for when
your usage exceeds the amount that you budgeted.
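Such a custom alert boils down to comparing actual and forecasted spend against the budgeted amount. A sketch (all dollar figures are made up):

```python
# Sketch of an AWS Budgets-style alert: flag when actual spend crosses a
# percentage of the budget, or when forecasted spend exceeds it outright.

def budget_alerts(budget: float, actual: float, forecast: float,
                  threshold_pct: float = 80.0):
    alerts = []
    if actual >= budget * threshold_pct / 100:
        alerts.append(f"Actual spend ${actual:.2f} has crossed "
                      f"{threshold_pct:.0f}% of the ${budget:.2f} budget")
    if forecast > budget:
        alerts.append(f"Forecasted spend ${forecast:.2f} exceeds the budget")
    return alerts

for msg in budget_alerts(budget=100.0, actual=85.50, forecast=120.00):
    print(msg)
```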

AWS Cost Explorer

AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs
and usage over time.
AWS Support plans

Basic Support :

Free for all AWS customers

- 24/7 customer service
- Documentation
- Whitepapers
- Support forums
- AWS Trusted Advisor
- AWS Personal Health Dashboard

Developer Support :

- Basic Support
- Email access to customer support

Customers in the Developer Support plan have access to features such as:

- Best practice guidance
- Client-side diagnostic tools
- Building-block architecture support, which consists of guidance for how to use AWS
offerings, features, and services together

For example, suppose that your company is exploring AWS services. You’ve heard about a few
different AWS services. However, you’re unsure of how to potentially use them together to build
applications that can address your company’s needs. In this scenario, the building-block
architecture support that is included with the Developer Support plan could help you to identify
opportunities for combining specific services and features.

Business Support :

- Basic and Developer Support
- AWS Trusted Advisor provides the full set of best practice checks
- Direct phone access to cloud support engineers (4-hour response SLA if your production
system is impaired, and a 1-hour SLA for production systems down)
- Infrastructure event management

Customers with a Business Support plan have access to additional features, including:

- Use-case guidance to identify AWS offerings, features, and services that can best support
your specific needs
- All AWS Trusted Advisor checks
- Limited support for third-party software, such as common operating systems and application
stack components

Suppose that your company has the Business Support plan and wants to install a common third-
party operating system onto your Amazon EC2 instances. You could contact AWS Support for
assistance with installing, configuring, and troubleshooting the operating system. For advanced
topics such as optimizing performance, using custom scripts, or resolving security issues, you
may need to contact the third-party software provider directly.
Enterprise Support :

- Basic, Developer and Business Support
- 15-minute SLA for business-critical workloads
- Dedicated Technical Account Manager (TAM), who provides infrastructure event
management, Well-Architected reviews, and operations reviews. TAMs work with customers
to review architectures using the Well-Architected Framework

In addition to all the features included in the Basic, Developer, and Business Support plans,
customers with an Enterprise Support plan have access to features such as:

- Application architecture guidance, which is a consultative relationship to support your
company’s specific use cases and applications
- Infrastructure event management: A short-term engagement with AWS Support that helps
your company gain a better understanding of your use cases. This also provides your
company with architectural and scaling guidance.
- A Technical Account Manager (TAM)

Technical Account Manager (TAM)

Provide guidance, architectural reviews, and ongoing communication with your company as you
plan, deploy, and optimize your applications.

Your TAM provides expertise across the full range of AWS services. They help you design solutions
that efficiently use multiple services together through an integrated approach.

For example, suppose that you are interested in developing an application that uses several AWS
services together. Your TAM could provide insights into how to best use the services together. They
achieve this, while aligning with the specific needs that your company is hoping to address through
the new application.

Well-Architected reviews check architectures against the 5 pillars :

- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimization

Developer, Business and Enterprise Support Plans have pay-by-the-month pricing and require no
long-term contracts.

aws.amazon.com/premiumsupport (for further information)

AWS Marketplace

AWS Marketplace is a digital catalog that includes thousands of software listings from independent
software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS.

For each listing in AWS Marketplace, you can access detailed information on pricing options,
available support, and reviews from other AWS customers.
You can also explore software solutions by industry and use case. For example, suppose that your
company is in the healthcare industry. In AWS Marketplace, you can review use cases that software
helps you to address, such as implementing solutions to protect patient records or using machine
learning models to analyze a patient’s medical history and predict possible health risks.

Most vendors in the marketplace also offer on-demand pay-as-you-go options. Many vendors even
offer free trials or Quick Start plans to help you experiment and learn about their offerings.

AWS Marketplace offers a number of enterprise-focused features :

- Custom terms and pricing
- A private marketplace
- Integration into your procurement systems
- A range of cost management tools

AWS Marketplace categories :

- Business Applications
- Data & Analytics
- DevOps
- Infrastructure Software
- Internet of Things
- Machine Learning
- Migration
- Security

aws.amazon.com/marketplace (For further information)


Module 9 : Migration and Innovation
AWS Cloud Adoption Framework (CAF)

Helps you manage the process of migrating from on-premises (or another cloud) to AWS through
guidance. The Cloud Adoption Framework exists to provide advice to your company to enable a
quick and smooth migration to AWS.

AWS CAF Perspective :

Business Capabilities :
- Business
- People
- Governance
Technical Capabilities :
- Platform
- Security
- Operations

AWS CAF Action Plan : Helps guide your organization for cloud migration

Explanation for the perspective :

Business Perspective :

The Business Perspective ensures that IT aligns with business needs and that IT investments link to
key business results.

Use the Business Perspective to create a strong business case for cloud adoption and prioritize cloud
adoption initiatives. Ensure that your business strategies and goals align with your IT strategies and
goals.

Common roles in the Business Perspective include:

- Business managers
- Finance managers
- Budget owners
- Strategy stakeholders

People Perspective :

The People Perspective supports development of an organization-wide change management
strategy for successful cloud adoption.

Use the People Perspective to evaluate organizational structures and roles, new skill and process
requirements, and identify gaps. This helps prioritize training, staffing, and organizational changes.

Common roles in the People Perspective include:

- Human resources
- Staffing
- People managers
Governance Perspective :

The Governance Perspective focuses on the skills and processes to align IT strategy with business
strategy. This ensures that you maximize the business value and minimize risks.

Use the Governance Perspective to understand how to update the staff skills and processes
necessary to ensure business governance in the cloud. Manage and measure cloud investments to
evaluate business outcomes.

Common roles in the Governance Perspective include:

- Chief Information Officer (CIO)
- Program managers
- Enterprise architects
- Business analysts
- Portfolio managers

Platform Perspective :

The Platform Perspective includes principles and patterns for implementing new solutions on the
cloud, and migrating on-premises workloads to the cloud.

Use a variety of architectural models to understand and communicate the structure of IT systems
and their relationships. Describe the architecture of the target state environment in detail.

Common roles in the Platform Perspective include:

- Chief Technology Officer (CTO)
- IT managers
- Solutions architects

Security Perspective :

The Security Perspective ensures that the organization meets security objectives for visibility,
auditability, control, and agility.

Use the AWS CAF to structure the selection and implementation of security controls that meet the
organization’s needs.

Common roles in the Security Perspective include:

- Chief Information Security Officer (CISO)
- IT security managers
- IT security analysts

Operations Perspective :

The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the
level agreed upon with your business stakeholders.

Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with and
support the operations of the business. The AWS CAF helps these stakeholders define current
operating procedures and identify the process changes and training needed to implement successful
cloud adoption.
Common roles in the Operations Perspective include:

- IT operations managers
- IT support managers

The 6 R’s (6 strategies) for migration

- Rehosting
Rehosting, also known as “lift-and-shift,” involves moving applications without changes.
In the scenario of a large legacy migration, in which the company is looking to implement its
migration and scale quickly to meet a business case, the majority of applications are
rehosted.
- Replatforming
Also known as “lift, tinker, and shift,” this involves making a few cloud optimizations to
realize a tangible benefit. Optimization is achieved without changing the core architecture of
the application. This is what I did in my last freelance job.
- Refactoring/re-architecting
Refactoring (also known as re-architecting) involves reimagining how an application is architected and
developed by using cloud-native features. Refactoring is driven by a strong business need to
add features, scale, or performance that would otherwise be difficult to achieve in the
application’s existing environment.
- Repurchasing
Repurchasing involves moving from a traditional license to a software-as-a-service model.
For example, a business might choose to implement the repurchasing strategy by migrating
from a customer relationship management (CRM) system to Salesforce.com.
- Retaining
Retaining consists of keeping applications that are critical for the business in the source
environment. This might include applications that require major refactoring before they can
be migrated, or, work that can be postponed until a later time.
- Retiring
Is the process of removing applications that are no longer needed.

AWS Snow Family

AWS Snow Family, is a collection of physical devices that help to physically transport up to exabytes
of data into and out of AWS.

AWS Snow Family is composed of :

- AWS Snowcone
AWS Snowcone is a small, rugged, and secure edge computing and data transfer device.
It features 2 CPUs, 4 GB of memory, and 8 TB of usable storage.
- AWS Snowball
2 types of devices :
o Snowball Edge Storage Optimized, well suited for large-scale data migrations and
recurring transfer workflows, in addition to local computing with higher capacity
needs.
▪ Storage : 80 TB of hard disk drive (HDD) capacity for block volumes and
Amazon S3 compatible object storage, and 1 TB of SATA solid state drive
(SSD) for block volumes.
▪ Compute : 40 vCPUs and 80 GiB of memory to support Amazon EC2 sbe1
instances (equivalent to C5).
o Snowball Edge Compute Optimized, provides powerful computing resources for use
cases such as machine learning, full motion video analysis, analytics, and local
computing stacks.
▪ Storage : 42 TB of usable HDD capacity for Amazon S3 compatible object
storage or Amazon EBS compatible block volumes, and 7.68 TB of usable
NVMe SSD capacity for Amazon EBS compatible block volumes.
▪ Compute : 52 vCPUs, 208 GiB of memory, and an optional NVIDIA Tesla V100
GPU. Devices run Amazon EC2 sbe-c and sbe-g instances, which are
equivalent to C5, M5a, G3, and P3 instances.
- AWS Snowmobile
AWS Snowmobile is an exabyte-scale data transfer service used to move large amounts of
data to AWS. You can transfer up to 100 petabytes of data per Snowmobile, a 45-foot long
ruggedized shipping container, pulled by a semi trailer truck.
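Why ship data physically at all? A back-of-envelope comparison of network transfer time makes the case (assuming a sustained 10 Gbps link, which is an illustrative figure):

```python
# Back-of-envelope: moving 100 PB over a sustained 10 Gbps link vs.
# driving it away in a Snowmobile. Figures are rough illustrations.

PETABYTE_BITS = 8 * 10**15          # 1 PB in bits (decimal units)

def transfer_days(petabytes: float, gbps: float) -> float:
    seconds = petabytes * PETABYTE_BITS / (gbps * 10**9)
    return seconds / 86_400         # seconds per day

days = transfer_days(100, 10)
print(f"{days:.0f} days over the wire, about {days / 365:.1f} years")
# -> 926 days, about 2.5 years; a truck round-trip is weeks, not years.
```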

Innovation with AWS

When examining how to use AWS services, it is important to focus on the desired outcomes. You are
properly equipped to drive innovation in the cloud if you can clearly articulate the following
conditions:

- The current state
- The desired state
- The problems you are trying to solve

VMware Cloud on AWS

Amazon SageMaker : Quickly build, train, and deploy machine learning models at scale.

AWS DeepRacer : A chance for your developers to experiment with reinforcement learning. An
autonomous 1/18 scale race car that you can use to test reinforcement learning models

Amazon Textract : Extracting text and data from documents to make them more usable for your
enterprise instead of them just being locked away in a repository.

Amazon Augmented AI (A2I) : Provides built-in human review workflows for machine learning
predictions, so any business can add human review without building that capability in-house.

Amazon Lex : Helps you build interactive chat bots.

AWS Ground Station : Access to satellite ground stations, paying only for the time you actually use.
Path to Cloud Journey :

- Serverless applications
Serverless refers to applications that don’t require you to provision, maintain, or administer
servers. You don’t need to worry about fault tolerance or availability. AWS handles these
capabilities for you.
AWS Lambda is an example of a service that you can use to run serverless applications. If
you design your architecture to trigger Lambda functions to run your code, you can bypass
the need to manage a fleet of servers.
- Artificial Intelligence
You can perform the following tasks:
o Convert speech to text with Amazon Transcribe.
o Discover patterns in text with Amazon Comprehend.
o Identify potentially fraudulent online activities with Amazon Fraud Detector.
o Build voice and text chatbots with Amazon Lex.
- Machine Learning
Traditional machine learning (ML) development is complex, expensive, time consuming, and
error prone. AWS offers Amazon SageMaker to remove the difficult work from the process
and empower you to build, train, and deploy ML models quickly.
You can use ML to analyze data, solve complex problems, and predict outcomes before they
happen.
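The serverless model above centers on handlers like AWS Lambda's: the runtime invokes a function with an event and a context, and you never touch a server. A minimal sketch (the event shape here is made up, not a specific AWS event source):

```python
import json

# Minimal AWS Lambda-style handler: the runtime calls lambda_handler
# with an event dict and a context object. The event shape below is a
# made-up example, not a specific AWS event source.

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, we can invoke the handler directly (context is unused here).
print(lambda_handler({"name": "coffee shop"}, None))
```

In AWS, a trigger (an API call, an S3 upload, a schedule) would invoke this function on demand; you pay only while it runs.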
Module 10 : The Cloud Journey
AWS Well-Architected Framework

This is a tool you can use to evaluate the architectures you build for excellence in a few different
categories. Helps you understand how to design and operate reliable, secure, efficient, and cost-
effective systems in the AWS Cloud. It provides a way for you to consistently measure your
architecture against best practices and design principles and identify areas for improvement.

These are the pillars of the Well-Architected Framework :

- Operational Excellence
Focuses on running and monitoring systems to deliver business value, and with that,
continually improving processes and procedures. For example, automating changes with
deployment pipelines, or responding to events that are triggered.
Design principles for operational excellence in the cloud include performing operations as
code, annotating documentation, anticipating failure, and frequently making small,
reversible changes.
- Security
The ability to protect information, systems, and assets while delivering business value
through risk assessments and mitigation strategies. Checking integrity of data and, for
example, protecting systems by using encryption.
When considering the security of your architecture, apply these best practices:
o Automate security best practices when possible.
o Apply security at all layers.
o Protect data in transit and at rest.
- Reliability
Focuses on recovery planning, such as recovery from an Amazon DynamoDB disruption. Or
EC2 node failure, to how you handle change to meet business and customer demand.
The ability of a system to do the following:
o Recover from infrastructure or service disruptions
o Dynamically acquire computing resources to meet demand
o Mitigate disruptions such as misconfigurations or transient network issues

Reliability includes testing recovery procedures, scaling horizontally to increase aggregate
system availability, and automatically recovering from failure.

- Performance Efficiency
It entails using IT and computing resources efficiently: for example, choosing the right Amazon
EC2 instance type based on workload and memory requirements, and making informed decisions to
maintain efficiency as business needs evolve.
The ability to use computing resources efficiently to meet system requirements and to
maintain that efficiency as demand changes and technologies evolve.
Evaluating the performance efficiency of your architecture includes experimenting more
often, using serverless architectures, and designing systems to be able to go global in
minutes.
- Cost Optimization
The ability to run systems to deliver business value at the lowest price point. This looks at
optimizing the full cost and controlling where money is spent: for example, checking whether
you have overestimated your EC2 server size, then lowering cost by choosing a more
cost-effective size.

6 main benefits of using the AWS Cloud

1. Trade upfront expense for variable expense.

On-premises data center costs

- Physical space
- Hardware
- Staff for racking and stacking
- Overhead for running data center
- Fixed cost

Save money with AWS

- Turn off unused instances
- Delete old resources
- Optimize your applications
- Receive recommendations from AWS Trusted Advisor

2. Benefit from massive economies of scale

Achieve a lower variable cost than you could running a data center on your own

3. Stop guessing capacity

Guessing your capacity upfront can be problematic if you overestimate or underestimate

Scaling on AWS

- Provision the resources you need for now
- Scale up and down
- Scaling can take minutes, not weeks or months

4. Increase speed and agility

Experiment on AWS

- Spin up test environments
- Run experiments
- Delete resources
- Stop incurring costs

5. Stop spending money running and maintaining data centers

6. Go Global in minutes
Glossary
Access Keys are used for programmatic access to AWS, but not for controlling S3 bucket access. You
must provide your AWS access keys to make programmatic calls to AWS or to use the AWS
Command Line Interface or AWS Tools for PowerShell.

Identities are the IAM resource objects that are used to identify and group. You can attach a policy
to an IAM identity. These include users, groups, and roles.

A Principal is a person or application that uses the AWS account root user, an IAM user, or an IAM
role to sign in and make requests to AWS.

Entities are the IAM resource objects that AWS uses for authentication. These include IAM users,
federated users, and assumed IAM roles.

Resource Groups, you can use resource groups to organize your AWS resources. Resource groups
make it easier to manage and automate tasks on large numbers of resources at one time.

Amazon Macie is a fully managed data security and data privacy service that uses machine learning
and pattern matching to discover and protect your sensitive data in AWS.

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those
built using a microservices architecture. With X-Ray, you can understand how your application and
its underlying services are performing to identify and troubleshoot the root cause of performance
issues and errors. X-Ray provides an end-to-end view of requests as they travel through your
application, and shows a map of your application’s underlying components.

Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and
operates at both the request level and connection level. Classic Load Balancer is intended for
applications that were built within the EC2-Classic network.

Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic and provides
advanced request routing targeted at the delivery of modern application architectures, including
microservices and containers.

Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User
Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is
required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets
within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests
per second while maintaining ultra-low latencies.

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of
your managed instances. A managed instance is any EC2 instance or on-premises machine in your
hybrid environment that has been configured for Systems Manager.
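
As a sketch of what a Run Command request looks like, the dictionary below holds the parameters you would pass to such a call (for example `send_command` in an AWS SDK). It is only constructed and inspected here, since actually sending it requires AWS credentials; `AWS-RunShellScript` is an AWS-managed document, while the instance ID is a placeholder.

```python
# Parameters for a hypothetical Run Command request: run "uptime" on one
# managed instance using the AWS-managed AWS-RunShellScript document.
run_command_request = {
    "DocumentName": "AWS-RunShellScript",
    "InstanceIds": ["i-0123456789abcdef0"],  # placeholder managed instance ID
    "Parameters": {"commands": ["uptime"]},
}

print(run_command_request["DocumentName"])
```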

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a
variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises
servers. CodeDeploy does not host Git repositories.

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your
release pipelines for fast and reliable application and infrastructure updates. CodePipeline
automates the build, test, and deploy phases of your release process every time there is a code
change, based on the release model you define. This enables you to rapidly and reliably deliver
features and updates.

AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories.
It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.
CodeCommit eliminates the need to operate your own source control system or worry about scaling
its infrastructure. You can use CodeCommit to securely store anything from source code to binaries,
and it works seamlessly with your existing Git tools.

AWS CloudFormation simplifies provisioning and management on AWS. You can create templates
for the service or application architectures you want and have AWS CloudFormation use those
templates for quick and reliable provisioning of the services or applications (called “stacks”). You can
also easily update or replicate the stacks as needed.
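
A minimal sketch of such a template, built as a Python dictionary and printed as JSON: it declares a single S3 bucket. The logical ID "NotesBucket" is an arbitrary example name; a real stack would usually declare many more resources.

```python
import json

# Minimal CloudFormation template: one stack resource, an S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NotesBucket": {  # example logical ID
            "Type": "AWS::S3::Bucket",
        }
    },
}

print(json.dumps(template, indent=2))
```

Feeding this JSON to CloudFormation would provision the bucket as a stack that can later be updated or replicated from the same template.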

With an S3 Lifecycle configuration, you can add rules that tell Amazon S3 to transition
objects to another Amazon S3 storage class. For example:

When you know that objects are infrequently accessed, you might transition them to the S3
Standard-IA storage class.

You might want to archive objects that you don't need to access in real time to the S3 Glacier
storage class.
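
The two examples above can be sketched as a Lifecycle configuration. The prefixes and day counts here are illustrative choices, not requirements (though S3 does require at least 30 days before a Standard-IA transition); the structure is only built locally, not applied to a bucket.

```python
# Sketch of an S3 Lifecycle configuration with two transition rules.
lifecycle_config = {
    "Rules": [
        {
            "ID": "to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # example key prefix
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        },
        {
            "ID": "to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},
            "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
        },
    ]
}
```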

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud. Spot
Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot
Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized
workloads, CI/CD, web servers, high-performance computing (HPC), and other test and development
workloads. The key phrase in this kind of exam question is, "It is alright if there are interruptions in
the application". If the application cannot tolerate interruptions, then the best option would be
On-Demand. (In short: if interruptions are acceptable, think Spot Instances; if the process needs to
run uninterrupted from start to finish, Spot Instances are eliminated.)

You can use AWS Cost and Usage Reports (AWS CUR) to publish your AWS billing reports to an
Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that
break down your costs by the hour or day, by product or product resource, or by tags that you define
yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.
You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice
Calc, or access them from an application using the Amazon S3 API.
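
Because the report is plain CSV, it can be processed with any CSV tooling. The sketch below sums a cost column with Python's standard `csv` module; the column names and amounts are purely illustrative, not the real AWS CUR schema.

```python
import csv
import io

# Hypothetical two-line extract in a CUR-like CSV shape (made-up columns/values).
report = io.StringIO(
    "product,usage_hours,unblended_cost\n"
    "AmazonEC2,24,1.20\n"
    "AmazonS3,0,0.35\n"
)

# Total spend across the rows of the example report.
total = sum(float(row["unblended_cost"]) for row in csv.DictReader(report))
print(f"{total:.2f}")  # → 1.55
```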

AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is
experiencing events that may impact you. While the Service Health Dashboard displays the general
status of AWS services, Personal Health Dashboard gives you a personalized view into the
performance and availability of the AWS services underlying your AWS resources.

Amazon DynamoDB supports JSON data, making it easy to store JSON documents in a DynamoDB
table while preserving their complex and possibly nested shape. The AWS SDK for .NET also has
native JSON support, so you can use raw JSON data when working with DynamoDB. This is especially
helpful if your application needs to consume or produce JSON (for instance, if your application is
talking to a client-side component that uses JSON to send and receive data), as you no longer need
to manually parse or compose this data.
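
The kind of nested document DynamoDB can store natively can be sketched with a plain JSON round trip. The item shape here (an imaginary music record) is purely illustrative; no DynamoDB call is made.

```python
import json

# A nested JSON document of the kind DynamoDB stores while preserving its shape.
item = {
    "Artist": "Example Band",
    "SongTitle": "Example Song",
    "Details": {"LengthSeconds": 214, "Genres": ["rock", "indie"]},
}

# Round-trip through JSON text, as a client-side component might send it;
# nesting survives without any manual parsing or composing.
restored = json.loads(json.dumps(item))
print(restored["Details"]["Genres"][1])  # → indie
```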

Amazon Rekognition makes it easy to add image and video analysis to your applications using
proven, highly scalable, deep learning technology that requires no machine learning expertise to use.

Amazon WorkSpaces provides a Desktop as a Service (DaaS) solution.
