Aws Cpe

The document discusses cloud computing concepts including infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and deployment models. It outlines key AWS services for compute, storage, databases, security, and development and how they provide scalability, availability, and pay-as-you-go pricing.

Foundations of Cloud Computing

What is Cloud Computing?
Cloud computing is the delivery of computing services over the internet

Compute
● EC2
○ A virtual computer, very similar to a desktop/laptop computer
● Lambda
○ Serverless computing that will replace EC2 instances, for the most part

Networking
● VPC
○ A private subsection of AWS you control and in which you can place AWS resources
● Direct Connect
○ A network service that provides an alternative to using the internet to utilize AWS cloud services

Storage
● S3
○ Online bulk storage service you can access from almost any device
● EBS
○ Provides persistent block storage volumes for use with EC2 instances

Analytics
● Athena
○ Enables data analysts to perform interactive queries against the web-based cloud storage service, Amazon S3
● Redshift
○ A data warehouse product built by AWS, used for large-scale data storage and analysis, and frequently used to perform large database migrations

Development
● Cloud9
○ A cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser
● CodeCommit
○ A secure, highly scalable, managed source control service that hosts private Git repositories

Security
● IAM
○ The service where AWS user accounts and their access to various AWS services are managed
● Macie
○ A fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS

Databases
● RDS
○ SQL database service that provides a wide range of SQL database options to select from
● DynamoDB
○ NoSQL database service; unlike RDS, it does not offer a range of other NoSQL engine options

Virtual Machines
● Virtualization lets you divide hardware resources on a single physical server into smaller units
● Physical Server
○ The smaller units are called virtual machines (VMs)

Usage
● Your usage is placed on a meter
● You pay only when you access it and only for what you use
1. On Demand
a. No long-term commitments or upfront payments
2. Pay as You Go
a. Pay by the hour or the second for only what you use

Exploring the Advantages of Cloud Computing
6 Advantages of Cloud Computing
1. Go global in minutes
a. You can deploy your applications around the world at the click of a button
2. Stop spending money running and maintaining data centers
a. You can focus on building your applications instead of managing hardware
3. Benefit from massive economies of scale
a. Volume discounts are passed on to you, which provides lower pay-as-you-go prices
4. Increase speed and agility
a. The provided services allow you to innovate more quickly and deliver your applications faster
5. Stop guessing capacity
a. Your capacity is matched exactly to your demand
6. Trade capital expense for variable expense
a. You pay for what you use instead of making huge upfront investments

Benefits of Cloud Computing
● High Availability
○ Highly available systems are designed to operate continuously without failure for a long time
○ These systems avoid loss of service by reducing or managing failures
● Elasticity
○ You don't have to plan ahead of time how much capacity you need
○ You can provision only what you need and then grow and shrink based on demand
● Agility
○ The cloud gives you increased agility
○ All the services you have access to help you innovate faster, giving you speed to market
● Durability
○ Durability is all about long-term data protection
○ This means your data will remain intact without corruption

CapEx vs. OpEx
CapEx and OpEx have different implications for cost control
● Capital Expenditures -- CapEx
○ Upfront purchases toward fixed assets
● Operating Expenses -- OpEx
○ Funds used to run day-to-day operations

Cloud Computing Models
There are 3 common cloud computing models

Infrastructure as a Service (IaaS)
● Deals with EC2
● Building Blocks
○ Fundamental building blocks that can be rented
● Web Hosting
○ A monthly subscription to have a hosting company serve your website

Software as a Service (SaaS)
● Complete Application
○ Using a complete application, on demand, that someone offers to users
● Email Provider
○ The personal email that you access through a web browser is SaaS
● You are using a complete application or software suite hosted on someone else's servers

Platform as a Service (PaaS)
● Used by Developers
○ Develop software using web-based tools without worrying about the underlying infrastructure
● Storefront Website
○ Tools provided to build a storefront application that runs on another company's server

Cloud Deployment Models
There are 3 common cloud deployment models
● Private Cloud
○ Also called "on-premises"
○ Exists in your internal data center
○ Doesn't offer the advantages of cloud computing
● Public Cloud
○ Offered by AWS
○ You aren't responsible for the physical hardware
○ Provides all the advantages of cloud computing
● Hybrid Cloud -- a combination of public and private clouds
○ Sample architecture for a hybrid solution: highly sensitive data stored locally, while the web application runs on AWS infrastructure
○ AWS provides tools so the two environments can talk to each other

Edge Locations
Edge locations cache content for fast delivery to your users
● Edge locations reduce latency
● Latency
○ The time that passes between a user request and the resulting response
● Low Latency
○ A GOOD THING
○ Takes less time for websites to load
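The edge-location idea above can be put into numbers with a toy model: requests served from a nearby edge cache skip the long round-trip to the origin. The latency figures and hit rate here are hypothetical, purely for illustration.

```python
# A toy model of why edge caching lowers latency: requests served from a
# nearby edge location skip the long round-trip to the origin server.
# (All numbers are hypothetical assumptions, not AWS measurements.)

EDGE_MS = 20      # round-trip to a nearby edge location
ORIGIN_MS = 200   # round-trip to the distant origin server

def avg_latency_ms(cache_hit_rate: float) -> float:
    """Average latency when a fraction of requests hit the edge cache."""
    return cache_hit_rate * EDGE_MS + (1 - cache_hit_rate) * ORIGIN_MS

print(avg_latency_ms(0.0))   # no caching: every request pays the origin cost
print(avg_latency_ms(0.9))   # 90% of requests served at the edge
```

As the hit rate rises, average latency falls toward the edge round-trip time, which is exactly the "low latency is a GOOD THING" point made above.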

Leveraging the AWS Global Infrastructure

Regions
A Region is a physical location
● AWS logically groups its Regions into geographic locations, for example:
○ South America (São Paulo)
○ Africa (Cape Town)
○ Europe (Ireland)
● Fully Independent and Isolated
○ If one Region is impacted, the others will not be
● Resource and Service Specific
○ Regions are isolated, and resources aren't automatically replicated across them

Availability Zones
Availability Zones (AZs) consist of one or more physically separated data centers, each with redundant power, networking, and connectivity, housed in separate facilities.
● AZ Characteristics
○ AZs are connected among themselves within a single Region
■ Physically separated
■ Connected through low-latency links
■ Fault tolerant
■ Allows for high availability

Technology

EC2
EC2 allows you to rent and manage virtual servers in the cloud
● Elastic compute power
● Virtual servers

The bigger picture
1. Region
a. A single Region contains multiple AZs
2. Availability Zone
a. A single AZ contains multiple data centers
3. Data Center
a. A single data center contains multiple servers

Servers
● Physical compute hardware running in a data center

EC2 instances
● The virtual servers running on these physical servers

Instances
● Are not considered serverless

EC2 is a foundational service used for managing your virtual instances
1. You're able to provision an EC2 instance at the click of a button
2. You can use a preconfigured template called an Amazon Machine Image (AMI)
3. You can deploy your applications directly to EC2 instances
4. You receive 750 compute hours per month on the Free Tier plan

EC2 in the Real World
● Deploy a database
○ Deploying a database to EC2 gives you full control over the database
● Deploy a web application
○ Deploy across multiple AZs to make the web application highly available

Methods to Access an EC2 Instance
● AWS Management Console
○ You're able to configure and manage your instances via a web browser
● Secure Shell (SSH)
○ SSH allows you to establish a secure connection to your instance from your local laptop
● EC2 Instance Connect (EIC)
○ EIC allows you to use IAM policies to control SSH access to your instances, removing the need to manage SSH keys
● AWS Systems Manager
○ Systems Manager allows you to manage your EC2 instances via a web browser or the AWS CLI

EC2 Features
EC2 instances offer load balancing and Auto Scaling
● Elastic Load Balancing
○ Automatically distributes your incoming application traffic across multiple EC2 instances
● Application Load Balancer
○ Best suited for load balancing of HTTP and HTTPS traffic; provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
● Network Load Balancer
○ Best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon VPC and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.
● Classic Load Balancer
○ Provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.
● Gateway Load Balancer
○ Provides both Layer 3 gateway and Layer 4 load balancing capabilities. It is a transparent bump-in-the-wire device that does not change any part of the packet. It is architected to handle millions of requests per second and volatile traffic patterns, and introduces extremely low latency.

EC2 Auto Scaling
● Adds or replaces EC2 instances automatically across AZs based on changing demand
● Horizontal Scaling, or Scaling Out
○ Auto Scaling reduces the impact of system failures and improves the availability of your applications
● Do not confuse horizontal scaling with vertical scaling (or scaling up), which upgrades an EC2 instance by adding more power (CPU, RAM) to an existing server

The most common way to connect to a Linux EC2 instance is via Secure Shell (SSH)
1. Generate a key pair
a. A key pair, which consists of a private key and a public key, proves your identity when connecting to an EC2 instance
2. Connect via SSH
a. The user's SSH client on the laptop uses the private key; the EC2 instance holds the matching public key
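The horizontal-scaling idea from the EC2 Auto Scaling section above boils down to a capacity calculation: add identical instances until demand is covered, rather than making one instance bigger. A minimal sketch, with hypothetical numbers (real Auto Scaling reacts to CloudWatch metrics, not a single formula):

```python
import math

# Horizontal scaling (scaling out): grow the fleet of identical instances
# to cover demand. The per-instance capacity figure is a made-up assumption.

def instances_needed(requests_per_sec: float, per_instance_capacity: float) -> int:
    """Smallest fleet size (at least 1) that covers the current demand."""
    return max(1, math.ceil(requests_per_sec / per_instance_capacity))

print(instances_needed(900, 200))   # demand covered by 5 instances
print(instances_needed(1800, 200))  # demand doubles: scale OUT to 9 instances,
                                    # the instance type itself is unchanged
```

Vertical scaling, by contrast, would keep the count at 1 and raise `per_instance_capacity` by moving to a bigger instance type.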
EC2 Pricing

On-Demand
● Run down: A fixed price in which you are billed down to the second based on the instance type. No contract; pay only for what you use.
● When to use:
1. You want low cost without an upfront payment or long-term commitment
2. Applications have unpredictable workloads that can't be interrupted
3. Applications are under development
4. Workloads will not run longer than a year
● Fun fact: You can reserve capacity using On-Demand Capacity Reservations. The EC2 capacity is held for you whether or not you run the instance.

Spot
● Run down: Spot Instances let you take advantage of unused EC2 capacity. Your request is fulfilled only if capacity is available. The pricing construct adjusts its price based on supply and demand.
● When to use:
1. You're not concerned about the start or stop time of your application
2. Workloads can be interrupted
3. Your application is only feasible at very low compute prices
● Fun facts:
1. You can save up to 90% off On-Demand prices
2. You pay the Spot price that's in effect at the beginning of each hour

Reserved Instances (RI)
● Run down: RIs allow you to commit to a specific instance type in a particular Region for 1-3 years.
● When to use:
1. Your application has steady-state usage and you can commit to 1-3 years
2. You can pay money upfront in order to receive a discount on On-Demand prices
3. Your application requires a capacity reservation
● Fun facts:
1. Save up to 75% off On-Demand prices
2. Required to sign a contract
3. Reserve capacity in an AZ for any duration
4. Can pay All Upfront, Partial Upfront, or No Upfront; paying All Upfront for the maximum term earns the highest discount
5. Provides convertible types at a 54% discount

Dedicated Hosts
● Run down: Allows you to pay for a physical server that is fully dedicated to running your instances.
● When to use:
1. You want to bring your own server-bound software license from vendors like Microsoft or Oracle
● Fun facts:
1. Can save up to 70% off On-Demand prices
2. Bring your existing per-socket, per-core, or per-VM software licenses
3. There is no multi-tenancy; the server is not shared with other customers
4. A Dedicated Host is a physical server, whereas a Dedicated Instance runs on the host

Savings Plan
● Run down: Allows you to commit to compute usage (measured per hour) for 1-3 years.
● When to use:
1. You want to lower your bill across multiple compute services
2. You want the flexibility to change compute services, instance types, operating systems, or Regions
● Fun facts:
1. Save up to 72% off On-Demand prices
2. You're not making a commitment to a Dedicated Host
3. Savings can be shared across various compute services like EC2, Fargate, and Lambda
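The maximum discounts quoted above can be turned into a rough side-by-side comparison. The On-Demand hourly rate here is hypothetical, and real discounts vary by instance type, Region, and term, so treat these as upper bounds, not quotes:

```python
# Illustrative comparison of the EC2 purchasing options, using the maximum
# discounts quoted in the notes above. The base rate is a made-up assumption.

ON_DEMAND_HOURLY = 0.10  # hypothetical On-Demand rate, $/hour
MAX_DISCOUNT = {
    "on_demand": 0.00,     # baseline
    "savings_plan": 0.72,  # up to 72% off
    "reserved": 0.75,      # up to 75% off
    "spot": 0.90,          # up to 90% off
}

def effective_hourly(option: str) -> float:
    """Best-case hourly rate after applying the quoted maximum discount."""
    return ON_DEMAND_HOURLY * (1 - MAX_DISCOUNT[option])

for option in MAX_DISCOUNT:
    print(f"{option}: ${effective_hourly(option):.3f}/hour")
```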
Lambda
Lambda is a serverless compute service that lets you run code without managing servers
● You author application code, called functions, using many popular languages
● Scales automatically
● Serverless means you don't worry about managing servers like with EC2
Lambda allows developers to focus on core business logic for the apps they are developing instead of worrying about managing servers

Lambda in the Real World
Lambda is a building block for many serverless applications
1. Real-time file processing
2. Sending email notifications
3. Backend business logic

Features
1. Supports popular programming languages like Java, Go, PowerShell, Node.js, C#, Python, and Ruby
2. You author code using your favorite development environment or via the console
3. Lambda can execute your code in response to events
4. Lambda functions have a 15-minute timeout

You are charged based on the duration and number of requests
1. Compute time
a. Pay only for compute time used -- there is no charge if your code is not running
2. Request count
a. A request is counted each time it starts executing
b. Test invokes in the console count as well
3. Always free
a. The free usage tier includes 1 million free requests each month

Fargate
Fargate is a serverless compute engine for containers
● Fargate allows you to manage containers like Docker
● Scales automatically
● Serverless means you don't worry about provisioning, configuring, or scaling servers

Lightsail
Lightsail allows you to quickly launch all the resources you need for small projects, such as a WordPress application.
It is the service that helps you launch a website with a low, predictable monthly fee
● Deploy preconfigured applications like WordPress websites at the click of a button
● Simple screens for people with no cloud experience
● Includes a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP
● Provides a low, predictable monthly fee as low as $3.50

Outposts
Outposts allows you to run cloud services in your internal data center
● Supports workloads that need to remain on-premises due to latency or data sovereignty needs
● AWS delivers and installs servers in your internal data center
● Used for a hybrid experience
● Gives you access to cloud services and APIs to develop apps on-premises

Batch
Batch allows you to process large workloads in smaller chunks (or batches)
● Runs hundreds of thousands of smaller batch processing jobs
● Dynamically provisions instances based on volume

AWS Batch is a regional service that simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs.
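Batch's core idea, breaking a large workload into smaller jobs, can be sketched with a plain chunking helper. This illustrates the concept only; it is not the AWS Batch API:

```python
# Conceptual sketch of batch processing: split a large workload into
# fixed-size chunks that could each be submitted as a separate job.

def chunk(work_items: list, batch_size: int):
    """Yield successive fixed-size batches from a list of work items."""
    for i in range(0, len(work_items), batch_size):
        yield work_items[i:i + batch_size]

jobs = list(range(10))           # 10 work items
batches = list(chunk(jobs, 4))   # last batch may be smaller
print(batches)
```

In AWS Batch the analogous step is defining job definitions and submitting many small jobs to a queue; the service then provisions instances to drain the queue.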
Storage

S3
S3 is an object storage service for the cloud that is highly available
● Objects (or files) are stored in buckets (or directories)
● Essentially unlimited storage that can hold millions of objects per bucket
● Objects can be public or private
● You can upload objects via the console, the CLI, or programmatically from within code using an SDK (Software Development Kit)

A Closer Look
● You can set security at the bucket level or individual object level using Access Control Lists (ACLs), bucket policies, or access point policies
● You can enable versioning to create multiple versions of your file in order to protect against accidental deletion and to use previous versions
● You can use S3 Access Logs to track the access to your buckets and objects
● S3 is a regional service, but bucket names must be globally unique

Data Accessibility
Durability and availability are 2 very different aspects of data accessibility
1. Durability
a. Durability is important so your objects are never lost or compromised
b. Amazon S3 Standard is designed for 99.999999999% durability (11 9's)
2. Availability
a. Availability is important so you can access your data quickly when you need it
b. Amazon S3 Standard is designed for 99.99% availability

S3 Storage Classes
Amazon S3 offers several storage classes designed for different use cases

S3 Standard
● General-purpose storage
● Data stored across multiple Availability Zones
● Low latency and high throughput
● Recommended for:
○ Frequently accessed data
○ Durability = 99.999999999%
○ Availability = 99.99%

S3 Intelligent-Tiering
● Automatically moves your data to the most cost-efficient storage class
● Automatic cost savings
● No retrieval fees
● Data stored across multiple AZs
● Used for data with unknown or changing access patterns (new applications)
● Recommended for:
○ Data with unknown or changing access patterns
○ Durability = 99.999999999%
○ Availability = 99.9%

S3 Standard-Infrequent Access (IA)
● Data accessed less frequently but requiring rapid access when needed
● Data stored across multiple AZs
● Cheaper than S3 Standard
● Recommended for:
○ Long-lived data
○ Infrequently accessed data
○ Millisecond access when needed
○ Durability = 99.999999999%
○ Availability = 99.9%
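The "11 nines" durability figure quoted for these classes is easier to grasp as an expected-loss calculation. With annual durability d, the expected number of objects lost per year out of N stored objects is N × (1 − d):

```python
# What 99.999999999% (11 nines) durability means in practice.

durability = 0.99999999999      # 11 nines, the S3 design target quoted above
objects_stored = 10_000_000     # ten million objects

expected_losses_per_year = objects_stored * (1 - durability)
print(expected_losses_per_year)
```

The result is about 0.0001 objects per year, i.e. storing ten million objects you would, on average, expect to lose a single object once every 10,000 years.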
S3 One Zone-Infrequent Access (IA)
● Like S3 Standard-IA, but data is stored in a single Availability Zone
● Costs 20% less than S3 Standard-IA
● Data stored in this storage class can be lost
● Recommended for:
○ Re-creatable data
○ Infrequently accessed data with millisecond access
○ When availability and durability are not essential
○ Durability = 99.999999999%
○ Availability = 99.5%

S3 Glacier
● Long-term data storage and archival at lower costs
● Data retrieval takes longer
● 3 retrieval options:
○ 1-5 minutes
○ 3-5 hours
○ 5-12 hours
● Data stored across multiple AZs
● Recommended for:
○ Long-term backups
○ A cheaper storage option
○ Durability = 99.999999999%
○ Availability = 0

S3 Glacier Deep Archive
● Like S3 Glacier but with longer access times
● 2 retrieval options:
○ 12 hours
○ 48 hours
● Cheapest of all S3 options
● Data stored across multiple AZs
● Recommended for:
○ Long-term data archival accessed once or twice a year
○ Retaining data for regulatory compliance requirements
○ Durability = 99.999999999%
○ Availability = 0

S3 Outposts
● Provides object storage on-premises
● A single storage class
● Stores data across multiple devices and servers
● Recommended for:
○ Data that needs to be kept local
○ Demanding application performance needs
○ Durability = 0
○ Availability = 0

S3 in the Real World
1. Static Websites
a. Deploy static websites to S3 and use CloudFront for global distribution
2. Data Archive
a. Archive data using Amazon Glacier as a storage option for Amazon S3
3. Analytics Systems
a. Store data in Amazon S3 for use with analytics services like Redshift and Athena
4. Mobile Applications
a. Mobile application users can upload files to an Amazon S3 bucket

Bucket policy and user policy are two of the access policy options available for you to grant permission to your Amazon S3 resources. Both use the JSON-based access policy language. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. User policies are policies that allow an IAM user access to one of your buckets.
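A minimal bucket policy of the kind just described can be built as a Python dict and serialized to the JSON policy language. The bucket name and account ID below are hypothetical placeholders:

```python
import json

# Sketch of an S3 bucket policy granting another (hypothetical) AWS account
# read access to every object in a (hypothetical) bucket.

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadFromOtherAccount",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

In practice this JSON document is what you would attach to the bucket (e.g. in the console's bucket policy editor); a user policy uses the same grammar but is attached to the IAM user instead.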
Amazon Elastic Block Store (EBS)
EBS is a storage device (called a volume) that can be attached to (or removed from) your instance
● Data persists when the instance is not running
● Tied to one Availability Zone
● Can only be attached to one instance in the same AZ
● Recommended for:
○ Quickly accessible data
○ Running a database on an instance
○ Long-term data storage
● You can create point-in-time backups through EBS snapshots
● EBS backups are stored durably in Amazon S3

Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached. When you launch an Amazon EBS-backed instance, an Amazon EBS volume is created for each Amazon EBS snapshot referenced by the AMI you use. You can optionally use other Amazon EBS volumes or instance store volumes, depending on the instance type.

General Purpose SSD
● Recommended for most workloads; can be used as a system boot volume; best for development and test environments
Provisioned IOPS SSD (Input/Output Operations Per Second)
● Meant for critical business applications that require sustained IOPS performance; best used for large database workloads
Throughput Optimized HDD
● Meant for streaming workloads requiring consistent, fast throughput at a low price: big data, data warehouses, and log processing. It cannot be a boot volume
Cold HDD
● Meant for throughput-oriented storage for large volumes of data that is infrequently accessed, or scenarios where the lowest storage cost is important. It cannot be a boot volume

EC2 Instance Store
An Instance Store is local storage that is physically attached to the host computer and cannot be removed
● Storage on disks physically attached to an instance
● Faster, with higher I/O (input/output) speeds
● Storage is temporary, since data loss occurs when the EC2 instance is stopped
● Recommended for:
○ Temporary storage needs
○ Data replicated across multiple instances
● This AWS storage service offers faster disk read and write performance and provides temporary block-level storage for your instance

Amazon Elastic File System (EFS)
EFS is a serverless network file system for sharing files.
● Only supports the Linux file system
● Accessible across different AZs in the same Region
● Recommended for:
○ Main directories for business-critical apps
○ Lift-and-shift of existing enterprise apps

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Storage Gateway
Storage Gateway is a hybrid storage service
● Connects on-premises and cloud data
● Supports a hybrid model
● Recommended for:
○ Moving backups to the cloud
○ Reducing costs for hybrid cloud storage
○ Low-latency access to data

AWS Backup
AWS Backup helps you manage data backups across multiple AWS services
● Integrates with resources like EC2, EBS, EFS, and more
● Create a backup plan that includes frequency and retention

Instance metadata is data about your instance that you can use to configure or manage the running instance. You can get the instance ID, public keys, the public IP address, and much other information from the instance metadata by entering the instance metadata URL in your instance.

Content Delivery Network (CDN)

CloudFront
CloudFront is a CDN that delivers data and applications globally with low latency
● Makes content available globally or restricts it based on location
● Speeds up delivery of static and dynamic web content
● Uses edge locations to cache content

CloudFront in the Real World
1. S3 static websites
a. CloudFront is often used with S3 to deploy content globally
2. Prevent attacks
a. CloudFront can stop certain web attacks, like DDoS
3. IP address blocking
a. Geo-restriction prevents users in certain countries from accessing content

Amazon CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs). If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you've identified as the source of the definitive version of the content.

Global Accelerator
Global Accelerator sends your users through the AWS global network when accessing your content, speeding up delivery
● Improves latency and availability of single-Region applications
● Sends traffic through the AWS global network infrastructure
● Up to a 60% performance boost
● Automatically reroutes traffic to healthy available regional endpoints

S3 Transfer Acceleration
S3 Transfer Acceleration improves content uploads and downloads to and from S3 buckets
● Fast transfer of files over long distances
● Uses CloudFront's globally distributed edge locations
● Customers around the world can upload to a central bucket

Networking

Networking connects computers together and allows for the sharing of data and applications around the globe in a secure manner, using virtual routers, firewalls, and network management services.

Route 53
Route 53 is a DNS service that routes users to applications
● Domain name registration
● Performs health checks on AWS resources
● Supports hybrid cloud architectures
○ Route Table
○ Hosted Zone

Virtual Private Cloud (VPC)
VPC is a foundational service that allows you to create a secure private network in the AWS Cloud where you launch your resources
● Private virtual network
● Launch resources like EC2 instances inside the VPC
● Isolate and protect resources
● A VPC spans Availability Zones in a Region
○ Internet Gateway
○ Peering Connection

Amazon VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address ranges, creation of subnets, and configuration of route tables and network gateways.

VPC Peering
● VPC Peering allows you to connect 2 VPCs together
● Peering facilitates the transfer of data in a secure manner
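The VPC paragraph above mentions choosing your own IP address range and carving it into subnets. That CIDR arithmetic can be sketched with Python's standard `ipaddress` module; the address blocks here are hypothetical examples:

```python
import ipaddress

# Carving a (hypothetical) VPC address range into smaller subnets,
# the same arithmetic you do when planning public/private subnets.

vpc = ipaddress.ip_network("10.0.0.0/16")     # the VPC's address range
subnets = list(vpc.subnets(new_prefix=24))    # split into /24 subnets

print(vpc.num_addresses)       # total addresses in the VPC
print(len(subnets))            # how many /24 subnets fit
print(subnets[0], subnets[1])  # 10.0.0.0/24 10.0.1.0/24
```

Each /24 here could become one subnet in one Availability Zone (note that AWS additionally reserves a handful of addresses in every subnet for its own use).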

Subnet
A subnet allows you to split the network inside the VPC. This is where you launch resources like EC2 instances
● Private Subnet
○ Not accessible from the internet -- put resources here that you want to keep private (e.g., a database)
● Public Subnet
○ Accessible from the internet -- put resources here that you want to be public
○ Components in a Public Subnet
■ NACL (Network Access Control List)
● Ensures the proper traffic is allowed into the subnet
● Can be used to block traffic to a particular instance
■ Router and Route Table
● Defines where network traffic is routed/directed
■ Internet Gateway
● Allows public traffic to the internet from a VPC

Direct Connect
Direct Connect is a dedicated physical network connection from your on-premises data center to AWS
● Dedicated physical network connection
● Connects your on-premises data center to AWS
● Data travels over a private network
● Supports a hybrid environment

Direct Connect in the Real World
1. Large datasets
a. Transfer large datasets to AWS
2. Business-critical data
a. Transfer internal data directly to AWS, bypassing your internet service provider
3. Hybrid model
a. Build hybrid environments
AWS Virtual Private Network (VPN)
Site-to-Site VPN creates a secure connection between your internal networks and your AWS VPCs
● Similar to Direct Connect, but the data travels over the public internet
● Data is automatically encrypted
● Connects your on-premises data center to AWS
● Supports a hybrid environment

Site-to-Site VPN in the Real World
● Moving Applications
○ A Site-to-Site VPN makes moving applications to the cloud easier

API Gateway
API Gateway allows you to build and manage APIs
● Share data between systems
● Integrates with services like Lambda

API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure application programming interfaces at any scale. It acts as a "front door" for applications to access data, business logic, or functionality from your back-end services.

Databases

Databases allow us to collect, store, retrieve, sort, graph, and manipulate data. In the AWS ecosystem, there are many different types of databases that support different use cases.

Relational Database Service (RDS)
RDS is a service that makes it easy to launch and manage relational databases
● Supports popular database engines:
○ Amazon Aurora
○ PostgreSQL
○ MySQL
○ MariaDB
○ Oracle
○ Microsoft SQL Server
● Offers high availability and fault tolerance using the Multi-Availability Zone deployment option
● AWS manages the database with automatic software patching, automated backups, operating system maintenance, and more
● Launch read replicas across Regions in order to provide enhanced performance and durability
● Amazon RDS Read Replicas
○ Provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
● Simplifies the management of time-consuming database administration tasks
● Makes it easy to set up, operate, and scale a relational database

Amazon Aurora
Aurora is a relational database compatible with MySQL and PostgreSQL that was created by AWS
● Supports MySQL and PostgreSQL database engines
● 5x faster than normal MySQL and 3x faster than normal PostgreSQL
● Scales automatically while providing durability and high availability
● Managed by RDS
○ PostgreSQL
○ MySQL

Amazon DocumentDB
DocumentDB is a fully managed document database that supports MongoDB workloads
● Document database
● MongoDB compatible
● Fully managed and serverless
● Non-relational
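The read-replica idea from the RDS section above is, at its core, query routing: writes go to the primary, reads fan out across replicas. A minimal sketch in which plain strings stand in for real RDS endpoints (the endpoint names are hypothetical):

```python
import random

# Sketch of read-replica routing: the primary takes writes, replicas share
# the read load. Endpoint names are made-up placeholders, not real RDS hosts.

PRIMARY = "primary.db.example.internal"
REPLICAS = ["replica-1.db.example.internal", "replica-2.db.example.internal"]

def endpoint_for(query: str) -> str:
    """Route writes to the primary and reads to a (random) replica."""
    is_write = query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE"))
    return PRIMARY if is_write else random.choice(REPLICAS)

print(endpoint_for("INSERT INTO users VALUES (1)"))  # always the primary
print(endpoint_for("SELECT * FROM users"))           # one of the replicas
```

Real applications usually get this behavior from a driver or proxy rather than hand-rolled routing, but the division of labor is exactly what lets read-heavy workloads scale out beyond a single DB instance.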
b. Amazon Aurora
Amazon DynamoDB 3. Alleviate database load for data that is accessed often
DynamoDB is a fully managed NoSQL key-value and document database a. ElastiCache
● NoSQL key-value databases 4. Process large sets of user profiles and social interaction
● Fully managed and serverless a. Amazon Neptune
● Non-relational 5. NoSQL database fast enough to handle millions of request per second
● Scales automatically to massive workloads with fast performance a. Amazon DynamoDB
○ Table 6. Operate MongoDB workloads at scale
○ Item a. Amazon DocumentDB
○ Global Secondary Index
Migration and Transfer
Amazon ElastiCache
ElastiCache is a fully managed in-memory datastore compatible with A lot of companies are migrating to the cloud, and they need inexpensive, fast,
Redis or Memcached and secure ways to move their on-premise data to AWS
● In-memory datastore
● Compatible with Redis or Memcached engines Database Migration Service (DMS)
● Data can be lost DMS helps you migrate databases to or within AWS
● Offers high performance and low latency ● Migrate on-premises databases to AWS
○ ElastiCache for Memcached ● Continuous data replication
○ ElastiCache for Redis ● Supports homogeneous and heterogeneous migrations
Amazon Neptune ● Virtually no downtime
Neptune is a fully managed graph database that supports highly connected
datasets DMS in the Real World
● Graph database service 1. Oracle to Aurora MySQL
● Supports highly connected datasets like social media networks a. Migrate an on-premises Oracle database to Aurora MySQL
● Fully managed and serverless 2. Oracle to Oracle
● Fast and reliable a. Migrate an on-premises Oracle database to Oracle on EC2
3. RDS Oracle to Aurora MySQL
A closer look of Databases in the Real World a. Migrate an RDS Oracle database to Aurora MySQL
Although the databases on AWS support multiple use cases, let's look at
the BEST option for each use case Server Migration Service (SMS)
1. Migrate an on-premises Oracle database to the cloud SMS allows you to migrate on-premises servers to AWS
a. RDS ● Migrates on-premises servers to AWS
2. Migrate on-premises PostgreSQL database to the cloud ● Server saved as a new Amazon Machine Image (AMI)
a. RDS ● Use AMI to launch servers as EC2 instances
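To make the ElastiCache use case above (alleviating database load for data that is accessed often) concrete, here is an illustrative cache-aside sketch in plain Python. A dict with a TTL stands in for ElastiCache and a counter stands in for a slow RDS query; none of this is the actual ElastiCache API.

```python
import time

class CacheAside:
    """Illustrative in-memory cache with a TTL, standing in for ElastiCache.

    Entries can expire (or be evicted), which is why cached data is
    treated as disposable -- the database remains the source of truth.
    """

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None or time.monotonic() > entry[1]:
            return None  # miss: absent or expired ("data can be lost")
        return entry[0]

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

db_reads = 0

def query_database(user_id):
    global db_reads
    db_reads += 1  # stand-in for a slow relational database query
    return {"id": user_id, "name": "user-%d" % user_id}

cache = CacheAside(ttl_seconds=60)

def get_user(user_id):
    """Cache-aside: check the cache first, fall back to the database."""
    user = cache.get(user_id)
    if user is None:
        user = query_database(user_id)
        cache.put(user_id, user)
    return user

get_user(1)      # miss -> hits the database
get_user(1)      # hit  -> served from the cache
print(db_reads)  # only one database read for two requests
```

The pattern is the same whether the cache is this dict or a Redis/Memcached cluster: reads go to the cache first, and only misses reach the database.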
DataSync
DataSync allows for online data transfer from on-premises to AWS storage
services like S3 or EFS
● Migrates data from on-premises to AWS
● Copy data over Direct Connect or the internet
● Copy data between AWS storage services
● Replicate data cross-Region or cross-account

Snow Family
The Snow Family allows you to transfer large amounts of on-premises
data to AWS using a physical device

Snowcone
● Smallest member of the data transport devices
● 8 terabytes of usable storage
● Offline shipping
● Online with DataSync

Snowball
● Petabyte-scale data transport solution
● Transfers data in and out
● Cheaper than internet transfer

Snowball Edge
● Snowball Edge supports EC2 and Lambda
● Data transport solution that accelerates moving terabytes to petabytes of
data into and out of AWS using appliances with on-board storage and
compute capabilities
● Snowball Edge Storage Optimized provides both block storage and
Amazon S3-compatible object storage, and 24 vCPUs. It is well suited for
local storage and large-scale data transfer. Snowball Edge Compute
Optimized provides 52 vCPUs, block and object storage, and an optional
GPU for use cases such as advanced machine learning and full-motion
video analysis in disconnected environments.

Snowmobile
● Multi-petabyte or exabyte scale
● Data loaded to S3
● Securely transported

Machine Learning

Rekognition
● Rekognition allows you to automate your image and video analysis.

Comprehend
● Comprehend is a natural-language processing (NLP) service that
finds relationships in text.

SageMaker
● SageMaker helps you build, train, and deploy machine learning
models quickly.

Translate
● Translate provides language translation.

Lex
● Lex helps you build conversational interfaces like chatbots.

Developer Tools

Cloud9
● Cloud9 allows you to write code within an integrated development
environment (IDE) from within your web browser.
○ Integrated development environment (IDE)
○ Write and debug code
○ Supports popular programming languages
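The Snow Family sizing guidance above can be sketched as a simple decision helper. The cutoffs below are assumptions chosen for the sketch, loosely based on the capacities in these notes (Snowcone ~8 TB usable, Snowball for petabyte scale, Snowmobile for multi-petabyte/exabyte scale) — they are not an official sizing rule.

```python
def pick_transfer_option(data_tb, offline_ok=True):
    """Illustrative rule of thumb for choosing a transfer path.

    data_tb: approximate data size in terabytes.
    offline_ok: whether shipping a physical device is acceptable.
    Thresholds are assumptions for this sketch, not AWS guidance.
    """
    if not offline_ok:
        return "DataSync"       # online transfer over Direct Connect/internet
    if data_tb <= 8:
        return "Snowcone"       # fits on the smallest device
    if data_tb <= 10_000:       # up to petabyte scale: ship Snowball devices
        return "Snowball"
    return "Snowmobile"         # multi-petabyte to exabyte scale

print(pick_transfer_option(2))        # Snowcone
print(pick_transfer_option(500))      # Snowball
print(pick_transfer_option(50_000))   # Snowmobile
```

In practice the choice also depends on available bandwidth and timelines, but the size-based triage above captures the basic idea.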
Cloud9 in the Real World
● Build serverless applications
○ Cloud9 preconfigures the development environment with the needed
SDKs and libraries. You can easily write the code for your Lambda
function directly in your web browser

CodeCommit
● CodeCommit is a source control system for private Git repositories.
○ Create repositories to store code
○ Commit, branch, and merge code
○ Collaborate with other software developers

CodeCommit in the Real World
● Manage versions of source code for your applications
○ CodeCommit can be used to manage source code and the different
versions of application files.
○ CodeCommit is similar to GitHub

CodeBuild
● CodeBuild allows you to build and test your application source code
○ Compiles source code and runs tests
○ Enables CI/CD (continuous integration and delivery)
○ Produces build artifacts ready to be deployed

CodeBuild in the Real World
● Run tests before deploying a new version of an application to
production
○ Allows you to run as many parallel streams of tests as needed, allowing
you to deploy your changes to production more quickly

CodeDeploy
● CodeDeploy manages the deployment of code to compute services in
the cloud or on-premises
○ Deploys code to EC2, Fargate, Lambda, and on-premises
○ Maintains application uptime
● AWS CodeDeploy automates code deployments to any instance, including
Amazon EC2 instances and instances running on-premises.
● AWS CodeDeploy makes it easier to rapidly release new features, avoids
downtime during application deployment, and handles the complexity of
updating applications.

CodeDeploy in the Real World
● CodeDeploy eliminates the downtime of your application when deploying
a new version due to its rolling deployments

OpsWorks
● AWS OpsWorks is a configuration management service that helps
customers configure and operate applications, both on-premises and
in the AWS Cloud, using Chef and Puppet.

CodePipeline
● CodePipeline automates the software release process
○ Quickly deliver new features and updates
○ Integrates with CodeBuild to run builds and unit tests
○ Integrates with CodeCommit to retrieve source code
○ Integrates with CodeDeploy to deploy your changes

CodePipeline in the Real World
● Add automation to the building, testing, and deployment of your
application
○ DEV => TEST => PROD
○ When combined with other developer tools, CodePipeline helps
development teams implement DevOps practices that automate testing
and the movement of code to production
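The source => build => deploy flow that CodePipeline coordinates can be sketched as a chain of stages, where a failing stage stops the release. This is an illustration of the pipeline idea only — the function and stage names are invented, not the CodePipeline API.

```python
def fetch_source():
    """Stand-in for CodeCommit: retrieve the source to release."""
    return {"commit": "abc123", "files": ["app.py"]}

def build(source):
    """Stand-in for CodeBuild: compile/test and produce an artifact."""
    assert source["files"], "nothing to build"
    return {"artifact": "app-%s.zip" % source["commit"], "tests_passed": True}

def deploy(artifact):
    """Stand-in for CodeDeploy: push the artifact to an environment."""
    if not artifact["tests_passed"]:
        raise RuntimeError("refusing to deploy a failing build")
    return "deployed %s" % artifact["artifact"]

def run_pipeline(stages, payload=None):
    """Run each stage in order, feeding its output to the next one.

    If any stage raises, the release stops -- later stages never run,
    which is the point of gating DEV => TEST => PROD transitions.
    """
    for stage in stages:
        payload = stage(payload) if payload is not None else stage()
    return payload

result = run_pipeline([fetch_source, build, deploy])
print(result)  # deployed app-abc123.zip
```

The real services add the pieces this sketch leaves out — triggers on commits, approval gates between environments, and rollbacks — but the stage-chaining shape is the same.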
X-Ray
● X-Ray helps you debug production applications
○ Analyze and debug production applications
○ Map application components
○ View requests end to end

X-Ray in the Real World
● Trace calls to an RDS database
○ X-Ray can help you map requests made to your RDS database from
within your application. You can track information about the SQL
queries generated and more

CodeStar
● CodeStar helps developers collaboratively work on development
projects
○ Developers connect their development environment
○ Integrates with CodeCommit, CodeBuild, and CodeDeploy
○ Contains an issue tracking dashboard

CodeStar in the Real World
● CodeStar can manage the development pipeline

Deployment and Infrastructure Management

Infrastructure as Code (IaC)
● IaC allows you to write a script to provision AWS resources
● The benefit is that you provision in a reproducible manner that saves
time
● There is no need to use the S3 Management Console to create a bucket
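The X-Ray idea of breaking one request into timed segments can be sketched with a context manager. This is a toy recorder to show the concept, not the X-Ray SDK; the segment names are invented.

```python
import time
from contextlib import contextmanager

# Collected segments: (name, duration in seconds), in completion order.
segments = []

@contextmanager
def segment(name):
    """Record how long a named unit of work takes, X-Ray style."""
    start = time.perf_counter()
    try:
        yield
    finally:
        segments.append((name, time.perf_counter() - start))

# One end-to-end request broken into traced components.
with segment("handle_request"):
    with segment("sql_query"):
        time.sleep(0.01)       # stand-in for a call to RDS
    with segment("render"):
        pass                   # stand-in for building the response

for name, duration in segments:
    print(name, round(duration, 3))
```

With timings per component, you can see which piece of a slow request (for example, the SQL query) is the bottleneck — which is the "view requests end to end" benefit described above.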
CloudFormation
● CloudFormation allows you to provision AWS resources using
Infrastructure as Code (IaC)
● Template and Stack
○ Provides a repeatable process for provisioning resources
○ Works with most AWS services
○ Create templates for the resources you want to provision

CloudFormation in the Real World
● Automate the infrastructure-provisioning process for EC2 servers
○ You can use CloudFormation to automate the creation of EC2 instances
in your AWS account

AWS CloudFormation provides a common language for you to describe and provision
all the infrastructure resources in your cloud environment. CloudFormation allows
you to use programming languages or a simple text file to model and provision, in an
automated and secure manner, all the resources needed for your applications across
all regions and accounts. By turning your infrastructure into code, you can deploy the
code in your other regions.
AWS CloudFormation allows you to model your entire infrastructure with either a text
file or programming languages. This provides a single source of truth for your AWS
resources and helps you to standardize infrastructure components used across your
organization, enabling configuration compliance and faster troubleshooting.

Elastic Beanstalk
● Elastic Beanstalk allows you to deploy your web applications and web
services to AWS
○ Orchestration service that provisions resources
○ Automatically handles the deployment
○ Monitors application health via a health dashboard

Elastic Beanstalk in the Real World
● Quickly deploy a scalable Java-based application to AWS
○ After you upload your Java code, Elastic Beanstalk deploys it and
handles capacity provisioning, load balancing, and Auto Scaling
○ Elastic Beanstalk monitors the health of your application

OpsWorks
● OpsWorks allows you to use Chef or Puppet to automate the
configuration of your servers and deploy code.
○ Deploy code and manage applications
○ Manage on-premises servers or EC2 instances in the AWS Cloud
○ Works with the Chef and Puppet automation platforms

OpsWorks in the Real World
● Automate software configurations and infrastructure management for
your application
○ OpsWorks allows you to define software installation scripts and
automate configuration for your application servers

Messaging and Integration

Coupling defines the interdependencies or connections between components of
a system. Loose coupling helps reduce the risk of cascading failures between
components.
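Loose coupling through a message queue can be sketched in a few lines: the producer and consumer below never call each other directly, they only share a queue — the same shape SQS gives you between services. This is a local `queue.Queue` illustration, not the SQS API.

```python
import queue
import threading

# An SQS-like buffer between components: the producer and consumer only
# share the queue, never call each other directly (loose coupling).
messages = queue.Queue()
processed = []

def producer():
    for amount in (10, 25, 40):
        messages.put({"transfer": amount})  # send and move on; no waiting

def consumer():
    while True:
        msg = messages.get()
        if msg is None:                     # sentinel: no more work
            break
        processed.append(msg["transfer"])   # handled at the consumer's own pace

worker = threading.Thread(target=consumer)
worker.start()
producer()
messages.put(None)                          # signal shutdown
worker.join()
print(processed)  # [10, 25, 40]
```

If the consumer is slow or briefly down, messages simply wait in the queue instead of failing the producer — that buffering is what prevents a failure in one component from cascading into the other.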
Simple Queue Service (SQS)
● SQS is a message queuing service that allows you to build loosely
coupled systems
○ Allows component-to-component communication using messages
○ Multiple components (or producers) can add messages to the queue
○ Messages are processed in an asynchronous manner

Simple Queue Service (SQS) in the Real World
● Build a money transfer app that performs well under heavy load
○ SQS lets you build an app that is loosely coupled, allowing components
to send, store, and receive messages. The use of a messaging queue
helps to improve performance and scalability

Simple Notification Service (SNS)
● SNS allows you to send emails and text messages from your
applications
○ Send emails and texts
○ Publish messages to a topic
○ Subscribers receive messages

Simple Notification Service (SNS) in the Real World
● Send an email when CPU utilization of an EC2 instance goes above
80%
○ SNS works with CloudWatch: when an alarm's metric threshold is
breached, an email is sent

Simple Email Service (SES)
● SES is an email service that allows you to send richly formatted
HTML emails from your applications
○ Ideal choice for marketing campaigns or professional emails
○ Unlike SNS, SES sends HTML emails

Simple Email Service (SES) in the Real World
● Send marketing email and track open or click-through rates
○ SES allows you to send richly formatted HTML emails in bulk and gain
valuable insights about the effectiveness of your campaign

Auditing, Monitoring, and Logging

CloudWatch
● CloudWatch is a collection of services that help you monitor and
observe your cloud resources
○ Collects metrics, logs, and events
○ Detect anomalies in your environment
○ Set alarms
○ Visualize logs
● CloudWatch Alarms
○ Set high-resolution alarms
● CloudWatch Logs
○ Monitor application logs
● CloudWatch Metrics
○ Visualize time-series data
● CloudWatch Events
○ Trigger an event based on a condition

CloudWatch in the Real World
● Provide real-time monitoring on EC2 instances
○ CloudWatch Alarms can notify you if an EC2 instance goes into the
stopped state or usage goes above a certain utilization
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the
applications you run on AWS in real time. You can use CloudWatch to collect and
track metrics, which are variables you can measure for your resources and
applications.

CloudTrail
● CloudTrail tracks user activity and API calls within your account.
○ Log and retain account activity
○ Track activity through the console, SDKs, and CLI
○ Identify which user made changes
○ Detect unusual activity in your account

CloudTrail in the Real World
● Track the time a particular event occurred in your account
○ You can troubleshoot events over the past 90 days using the CloudTrail
event history log to find the specific time an event occurred on a
per-Region basis. You can create a custom trail to extend past 90 days
● Things you can track with CloudTrail
○ Username
○ Event time and name
○ Access key
○ Region
○ IP address
○ Error code
AWS CloudTrail is a service that enables governance, compliance, operational
auditing, and risk auditing of your AWS account. With CloudTrail, you can log,
continuously monitor, and retain account activity related to actions across your
AWS infrastructure. CloudTrail provides event history of your AWS account
activity, including actions taken through the AWS Management Console, AWS
SDKs, command line tools, and other AWS services. Creating a multi-region
trail will allow you to keep your activity records in an S3 bucket and prevent
them from getting overwritten automatically.

Security and Compliance

Shared Responsibility Model
● The shared responsibility model outlines your responsibilities vs AWS'
when it comes to security and compliance
● In the public cloud, there is a shared security responsibility between you
and AWS

AWS' Responsibility -- responsible for protecting and securing
infrastructure
● Security OF the cloud
● AWS Global Infrastructure
○ AWS is responsible for its global infrastructure elements:
■ Regions
■ Edge Locations
■ Availability Zones
● Building Security
○ AWS controls access to its data centers where your data resides
● Networking Components
○ AWS maintains networking components:
■ Generators
■ Uninterruptible power supply (UPS) systems
■ Computer room air conditioning (CRAC) units
■ Fire suppression systems, and more
● Software
○ AWS is responsible for any managed service:
■ RDS, S3, ECS, or Lambda
■ Patching of host operating systems
■ Data access endpoints

Your Responsibility -- responsible for how the services are implemented
and managing your application
● Security IN the cloud
● Application Data
○ Responsible for managing your application data, which includes
encryption options
● Patching
○ Responsible for the guest operating system (OS), which includes updates
and security patches
● Network Traffic
○ Responsible for network traffic protection, which includes security group
firewall configuration
● Security Configuration
○ Responsible for securing your account and API calls, rotating credentials,
restricting internet access from your VPCs, and more
● Identity and Access Management (IAM)
○ Responsible for application security and identity and access management
● Installed Software
○ Responsible for application code, installed software, and more
○ You should frequently scan for and patch vulnerabilities in your code
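The "things you can track with CloudTrail" list maps directly onto fields of a CloudTrail event record. The sketch below parses a trimmed, made-up record — the field names follow the CloudTrail event format (eventTime, eventName, awsRegion, sourceIPAddress, errorCode, userIdentity), but the values and the `summarize` helper are invented for the example.

```python
import json

# A trimmed, made-up CloudTrail-style record; field names follow the
# CloudTrail event format, values are invented for the example.
record = json.loads("""
{
  "eventTime": "2023-05-01T12:34:56Z",
  "eventName": "StopInstances",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "203.0.113.10",
  "errorCode": "UnauthorizedOperation",
  "userIdentity": {"userName": "alice", "accessKeyId": "EXAMPLEKEYID"}
}
""")

def summarize(event):
    """Pull out the fields the notes list: who, what, when, and where."""
    return {
        "username": event["userIdentity"]["userName"],
        "event": event["eventName"],
        "time": event["eventTime"],
        "access_key": event["userIdentity"]["accessKeyId"],
        "region": event["awsRegion"],
        "ip": event["sourceIPAddress"],
        "error": event.get("errorCode"),  # absent when the call succeeded
    }

summary = summarize(record)
print(summary["username"], summary["event"], summary["region"])
```

A record like this one (a failed StopInstances call from a specific user, key, and IP) is exactly the kind of trail entry you would use to detect unusual activity in your account.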
Who is responsible for what?
You (the customer) AWS Patch Configuration Awareness and
Management Management Training
Firewall configuration Data center security for the physical
building AWS Patching Configuring AWS employees
infrastructure infrastructure
Encryption of EBS volume Language versions of Lambda devices
Taking database backups in RDs Updating firmware on the underlying You Patching guest Configuring Your employees
EC2 hosts OS and databases and
application applications
Ensuring data is encrypted at rest Managing network infrastructure

Patching the guest operating system Physically destroying storage media


for EC2 at end of life Well-Architected Framework

EC2 Shared Responsibility Model The well-architected framework describes design principles and best practices
You AWS for running workloads in the cloud

Installed applications EC2 service Operational Excellence


Patching the guest operating system Patching the operating system ● This pillar focuses on create applications that effectively support
production workloads
Security controls Security of the physical server ○ Plan for and anticipate failure
○ Deploy smaller, reversible changes
Lambda Shared Responsibility Model ○ Script operations as code
○ Learn from failure and refine
You AWS ● You can use AWS CodeCommit for version control to enable tracking
Security of code ● Lambda service of code changes and to version-control CloudFormation templates of
● Upgrading Lambda languages your infrastructure.

Storage of sensitive data ● Lambda endpoints Security


● Operating system
● This pillar focuses on putting mechanisms in place that help protect
IAM for permissions ● Underlying infrastructure your systems and data
● Software dependencies ○ Automate security tasks
○ Assign only the least privileges required
Which security responsibilities are shared? ○ Encrypt data in transit and at rest
○ Track who did what and when
○ Ensure security at all application layers
● You can configure central logging of all actions performed in your
account using CloudTrail.

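"Assign only the least privileges required" is usually expressed as an IAM policy that grants a narrow set of actions on a narrow set of resources. The policy below follows the standard IAM policy grammar (Version, Statement, Effect, Action, Resource); the bucket name is a placeholder, and the `allows` helper is a deliberately naive illustration, not real IAM evaluation.

```python
import json

# A minimal identity-based policy following the standard IAM policy
# grammar. The bucket name is a placeholder for the example.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],  # read objects only, nothing else
            "Resource": "arn:aws:s3:::example-reports-bucket/*",
        }
    ],
}

def allows(policy, action):
    """Naive check: is the action explicitly allowed by some statement?

    Real IAM evaluation also considers explicit denies, conditions,
    resource policies, and wildcards -- this only illustrates the
    least-privilege idea.
    """
    return any(
        stmt["Effect"] == "Allow" and action in stmt["Action"]
        for stmt in policy["Statement"]
    )

print(allows(policy, "s3:GetObject"))     # True
print(allows(policy, "s3:DeleteObject"))  # False: never granted
print(json.dumps(policy)[:30])
```

The point of least privilege is visible in the second check: anything not explicitly granted is simply unavailable, so a compromised credential with this policy can read reports but cannot delete or modify them.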
Reliability
This pillar focuses on designing systems that work consistently and
recover quickly
Recover from failure
Scale horizontally for resilience
Reduce idle resources
Manage change through automation
Test recovery procedures
You can use Multi-AZ deployments for enhanced availability and
reliability of RDS databases.

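A common building block for "recover from failure" is retrying transient errors with exponential backoff and jitter. The sketch below computes (but does not sleep) the delays so it runs instantly; the flaky service is a stand-in invented for the example.

```python
import random

def call_with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry a flaky call with exponential backoff and jitter.

    Transient errors are retried with growing, randomized delays
    instead of failing the whole request. Delays are computed but not
    slept here so the sketch runs instantly.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
            print("attempt %d failed, backing off %.2fs" % (attempt + 1, delay))

calls = 0

def flaky_service():
    """Stand-in dependency that fails twice, then succeeds."""
    global calls
    calls += 1
    if calls < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky_service))  # ok (succeeds on the 3rd attempt)
```

The jitter matters: if many clients retry on the same fixed schedule, their retries arrive in synchronized waves and can knock a recovering service back over.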
Performance Efficiency
● This pillar focuses on the effective use of computing resources to meet
system and business requirements while removing bottlenecks
○ Use serverless architectures first
○ Use multi-Region deployments
○ Delegate tasks to a cloud vendor
○ Experiment with virtual resources
● You can use AWS Lambda to run code with zero administration.

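"Zero administration" with Lambda means your code is reduced to a handler function; there is no server in your code to manage. The handler below uses the standard Python Lambda shape (a function taking an event dict and a context object) and is invoked locally here with a fake event as a stand-in for the invocation Lambda would perform.

```python
# The standard Python Lambda handler shape: a function taking an event
# dict and a context object. The event contents are invented for the
# example.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "hello, %s" % name}

# Local stand-in for the invocation the Lambda service would perform.
response = lambda_handler({"name": "cloud"}, context=None)
print(response)  # {'statusCode': 200, 'body': 'hello, cloud'}
```

Everything else — provisioning, patching, and scaling the machines that run this function — is the "delegate tasks to a cloud vendor" part of the pillar.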
Cost Optimization
● This pillar focuses on delivering optimum and resilient solutions at the
least cost to the user
○ Utilize consumption-based pricing
○ Implement Cloud Financial Management
○ Measure overall efficiency
○ Pay only for the resources your application requires
● You can use S3 Intelligent-Tiering to automatically move your data between
access tiers based on your usage patterns.
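The Intelligent-Tiering behavior can be sketched as a rule on days since last access: unread objects drift to cheaper tiers, and any access moves them back to the frequent tier. The 30- and 90-day cutoffs mirror the tier transitions described for S3 Intelligent-Tiering, but the function itself is only an illustration of the idea.

```python
def access_tier(days_since_last_access):
    """Illustrative tiering rule: objects that go unread drift to
    cheaper storage tiers; accessing an object resets the clock."""
    if days_since_last_access >= 90:
        return "Archive Instant Access"
    if days_since_last_access >= 30:
        return "Infrequent Access"
    return "Frequent Access"

print(access_tier(3))    # Frequent Access
print(access_tier(45))   # Infrequent Access
print(access_tier(200))  # Archive Instant Access
```

Because the service does this movement automatically, you pay the lower tier price for cold data without having to predict access patterns yourself — which is the consumption-based-pricing point of the pillar.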
