AWS Cloud Essentials

AWS Cloud Essentials outlines the benefits of cloud computing, emphasizing its on-demand delivery of IT resources without the need for large upfront investments or hardware management. It highlights the flexibility, scalability, and security of AWS services, which allow businesses to customize their infrastructure and pay only for what they use. The document also details the extensive AWS global infrastructure and various service categories, including compute, storage, and analytics, that support diverse application needs.


AWS Cloud Essentials

With cloud computing, you can stop thinking of infrastructure as hardware and instead think of it (and use it) as software.
Cloud computing is the on-demand delivery of IT resources and applications through the internet.
•There are no large upfront investments.
•You won't need to spend time or resources on hardware management.
•You can provision exactly the right type and size when needed (dynamic abilities).
•You can have as many resources as you need and pay only for what you use.
There are three key benefits of using cloud computing: PROGRAMMABLE RESOURCES – DYNAMIC ABILITIES – PAY AS YOU GO

Cloud services provider


With cloud services, the infrastructure (physical hardware) is taken care of by the cloud vendor. Your resources are virtualized, and you can access them through the
internet. You determine when and how often you need them, so your interactions are customized to fit your business needs. This adds elasticity to your business.

On premises
In traditional on-premises environments, you figure out how much capacity you need, purchase the hardware, wait for it, set it up, and access the resources over your
network. The challenge with an on-premises environment is if there is not enough capacity, you must purchase more and wait for the servers to be delivered to your
site. Another challenge is if you have too much capacity, you are left with an over-provisioned environment.

Interacting with AWS


You provision what you need. You can interact with AWS and manage your resources using the AWS Management Console, the AWS Command Line Interface (AWS
CLI), or AWS SDKs.
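For instance, an interaction through the AWS SDK for Python (boto3) might look like the following sketch. The region default and credential setup are assumptions, and the call only lists running instances rather than changing anything:

```python
def running_state_filter(states=("running",)):
    """Build the EC2 `Filters` parameter for the given instance states."""
    return [{"Name": "instance-state-name", "Values": list(states)}]

def list_running_instance_ids(region="us-east-1"):
    """Return the IDs of running EC2 instances using the AWS SDK for Python (boto3).
    Assumes boto3 is installed and credentials are configured (e.g. via `aws configure`)."""
    import boto3  # imported lazily so the sketch can be read without AWS access
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(Filters=running_state_filter())
    return [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
```

The same listing could be done with `aws ec2 describe-instances` in the AWS CLI or point-and-click in the console; all three paths hit the same API.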

Customized cloud solutions


The result of using the cloud solution is you have the control to customize your storage, business applications, and configuration of your entire network.
The following is a list of the AWS categories of services:
•Analytics
•Cost Management
•Internet of Things
•Storage
•Application Integration
•Customer Engagement
•Machine Learning
•Robotics
•AR and VR
•Database
•Management and Governance
•Satellite
•Blockchain
•Developer Tools
•Media Services
•Networking and Content Delivery
•Business Applications
•End User Computing
•Migration and Transfer
•Security, Identity, and Compliance
•Compute
•Game Tech
•Mobile

Core service areas:
•Compute
•Storage
•Databases
•Networking
•Security
AWS Global Infrastructure
AWS Global Infrastructure Map
The AWS Cloud spans 105 Availability Zones within 33 geographic regions around the world, with announced plans for 12 more Availability Zones and 4 more AWS
Regions in Germany, Malaysia, New Zealand, and Thailand.

33 Launched Regions each with multiple Availability Zones (AZs) - 105 Availability Zones - 600+ Points of Presence and 13 Regional Edge Caches

36 Local Zones - 29 Wavelength Zones for ultralow latency applications - 245 Countries and Territories Served - 115 Direct Connect Locations
Benefits
Security
Security at AWS starts with our core infrastructure. Custom-built for the cloud and designed to meet the most stringent security requirements in the world, our
infrastructure is monitored 24/7 to help ensure the confidentiality, integrity, and availability of your data. All data flowing across the AWS global network that
interconnects our datacenters and Regions is automatically encrypted at the physical layer before it leaves our secured facilities. You can build on the most secure
global infrastructure, knowing you always control your data, including the ability to encrypt it, move it, and manage retention at any time.
Availability
AWS delivers the highest network availability of any cloud provider. Each Region is fully isolated and composed of multiple AZs, which are fully isolated partitions of our infrastructure. To better isolate any issues and achieve high availability, you can partition applications across multiple AZs in the same Region. In addition, AWS control planes and the AWS Management Console are distributed across Regions and include Regional API endpoints, which are designed to operate securely for at least 24 hours if isolated from global control plane functions, without requiring customers to access the Region or its API endpoints via external networks during any isolation.
Performance
The AWS Global Infrastructure is built for performance. AWS Regions offer low latency, low packet loss, and high overall network quality. This is achieved with a fully
redundant 400 GbE fiber network backbone, often providing many terabits of capacity between Regions. AWS Local Zones and AWS Wavelength, with our telco
providers, provide performance for applications that require single-digit millisecond latencies by delivering AWS infrastructure and services closer to end-users and 5G
connected devices. Whatever your application needs, you can quickly spin up resources as you need them, deploying hundreds or even thousands of servers in
minutes.
Scalability
The AWS Global Infrastructure enables companies to be extremely flexible and take advantage of the conceptually infinite scalability of the cloud. Customers used to
over provision to ensure they had enough capacity to handle their business operations at the peak level of activity. Now, they can provision the amount of resources
that they actually need, knowing they can instantly scale up or down along with the needs of their business, which also reduces cost and improves the customer’s
ability to meet their user’s demands. Companies can quickly spin up resources as they need them, deploying hundreds or even thousands of servers in minutes.
Flexibility
The AWS Global Infrastructure gives you the flexibility of choosing how and where you want to run your workloads, and wherever you do, you are using the same network, control plane, APIs, and AWS services. If you would like to run your applications globally, you can choose from any of the AWS Regions and AZs. If you need to run your applications with single-digit millisecond latencies to mobile devices and end users, you can choose AWS Local Zones or AWS Wavelength. Or if you would like to run your applications on premises, you can choose AWS Outposts. If you are in a public sector organization or highly regulated industry, you can read our plans to launch the AWS European Sovereign Cloud.
Global Footprint
AWS has the largest global infrastructure footprint of any provider, and this footprint is constantly increasing at a significant rate. When deploying your applications
and workloads to the cloud, you have the flexibility in selecting a technology infrastructure that is closest to your primary target of users. You can run your workloads
on the cloud that delivers the best support for the broadest set of applications, even those with the highest throughput and lowest latency requirements. And if your
data lives off this planet, you can use AWS Ground Station, which provides satellite antennas in close proximity to AWS infrastructure Regions.
Trade upfront costs for variable costs
AWS takes care of purchasing and handling the infrastructure, so you don’t have to worry about capacity. Heavy initial investments into hardware and facilities are no longer needed, so fixed
costs are traded for variable costs.

Increase speed and agility


Now that the infrastructure has been addressed, it will take minimal effort and time to increase your agility and give you more time to focus on your applications.

Move to a global presence


Cloud computing gives untethered access to launch your application in different geographical areas without leaving your office.

Stop spending money running and maintaining data centers


Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

Stop guessing capacity


Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or
dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few
minutes' notice.

Benefit from massive economies of scale


By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.
Amazon Elastic Compute Cloud

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. In the compute area, there are various options for the types of resources you might want to launch, such as the following:
•Virtual machines
•Containers
•Batch processing compute resources
•Serverless compute

Amazon EC2 is one of the first core services we often discuss on AWS. Amazon EC2 benefits include the following:
•Complete control of your computing resources
•Resizable compute capacity
•Reduced time required to obtain and boot new server instances
There is a broad variety of instance types, so you can pick one specific to what your workload needs.

Topics
•Compare AWS compute services
•Amazon EC2
•Amazon EC2 Auto Scaling
•Amazon EC2 Image Builder
•Amazon Lightsail
•Amazon Linux 2023
•AWS App Runner
•AWS Batch
•AWS Elastic Beanstalk
•AWS Fargate
•AWS Lambda
•AWS Serverless Application Repository
•AWS Outposts
•AWS Wavelength
•VMware Cloud on AWS

Compute Services by category

Instances (virtual machines)
•Amazon Elastic Compute Cloud (Amazon EC2) — Secure and resizable compute capacity (virtual servers) in the cloud
•Amazon EC2 Spot Instances — Run fault-tolerant workloads for up to 90% off
•Amazon EC2 Auto Scaling — Automatically add or remove compute capacity to meet changes in demand
•Amazon Lightsail — Easy-to-use cloud platform that offers you everything you need to build an application or website
•AWS Batch — Fully managed batch processing at any scale

Containers
•Amazon Elastic Container Service (Amazon ECS) — Highly secure, reliable, and scalable way to run containers
•Amazon ECS Anywhere — Run containers on customer-managed infrastructure
•Amazon Elastic Container Registry (Amazon ECR) — Easily store, manage, and deploy container images
•Amazon Elastic Kubernetes Service (Amazon EKS) — Fully managed Kubernetes service
•Amazon EKS Anywhere — Create and operate Kubernetes clusters on your own infrastructure
•AWS Fargate — Serverless compute for containers
•AWS App Runner — Build and run containerized applications on a fully managed service

Serverless
•AWS Lambda — Run code without thinking about servers. Pay only for the compute time you consume.

Edge and hybrid
•AWS Outposts — Run AWS infrastructure and services on premises for a truly consistent hybrid experience
•AWS Snow Family — Collect and process data in rugged or disconnected edge environments
•AWS Wavelength — Deliver ultra-low latency applications for 5G devices
•VMware Cloud on AWS — Preferred service for all vSphere workloads to rapidly extend and migrate to the cloud
•AWS Local Zones — Run latency-sensitive applications closer to end users

Cost and capacity management
•AWS Savings Plans — Flexible pricing model that provides savings of up to 72% on AWS compute usage
•AWS Compute Optimizer — Recommends optimal AWS compute resources for your workloads to reduce costs and improve performance
•AWS Elastic Beanstalk — Easy-to-use service for deploying and scaling web applications and services
•EC2 Image Builder — Build and maintain secure Linux or Windows Server images
•Elastic Load Balancing (ELB) — Automatically distribute incoming application traffic across multiple targets
Instance types

Amazon EC2 passes on to you the financial benefits of Amazon scale. You pay a very low rate for the compute capacity you actually consume. For a more detailed
description, refer to Amazon EC2 pricing.
•On-Demand Instances — With On-Demand Instances, you pay for compute capacity by the hour or the second, depending on which instances you run. No longer-term commitments or upfront payments are needed. You can increase or decrease your compute capacity depending on the demands of your application, and only pay the specified hourly rates for the instances you use. On-Demand Instances are recommended for:
•Users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
•Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
•Applications being developed or tested on Amazon EC2 for the first time

•Spot Instances —Spot Instances are available at up to a 90% discount compared to On-Demand prices and let you take advantage of unused Amazon EC2 capacity in
the AWS Cloud. You can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and
enable new types of cloud computing applications. Spot Instances are recommended for:
•Applications that have flexible start and end times
•Applications that are only feasible at very low compute prices
•Users with urgent computing needs for large amounts of additional capacity

•Reserved Instances — Reserved Instances provide you with a significant discount (up to 72%) compared to On-Demand Instance pricing. You have the flexibility to
change families, operating system types, and tenancies while benefiting from Reserved Instance pricing when you use Convertible Reserved
Instances.

•C7g Instances — C7g Instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for compute-intensive
workloads. C7g instances are ideal for high performance computing (HPC), batch processing, electronic design automation (EDA), gaming, video encoding, scientific
modeling, distributed analytics, CPU-based ML inference, and ad serving.
•Inf2 Instances — Inf2 Instances are purpose-built for deep learning inference. They deliver high performance at the lowest cost in Amazon EC2 for
generative AI models, including large language models (LLMs) and vision transformers. Inf2 instances are powered by AWS Inferentia2, the second-generation AWS
Inferentia accelerator.
•M7g Instances — M7g instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for general
purpose workloads. M7g instances are ideal for applications built on open-source software such as application servers, microservices, gaming servers, mid-
size data stores, and caching fleets.
•R7g Instances — R7g Instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for memory-
intensive workloads. R7g instances are ideal for memory-intensive workloads such as open-source databases, in-memory caches, and near real-time big data analytics.
•Trn1 Instances — Trn1 Instances, powered by AWS Trainium accelerators, are purpose-built for high-performance deep learning training of generative AI
models, including LLMs and latent diffusion models. Trn1 instances offer up to 50% cost-to-train savings over other comparable Amazon EC2 instances.

•Savings Plans — Savings Plans are a flexible pricing model that offer low prices on EC2 and Fargate usage, in exchange for a commitment to a consistent amount of
usage (measured in $/hour) for a one or three year term.

•Dedicated Hosts — A Dedicated Host is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses.
About Amazon EC2
Amazon EC2 is a virtual machine launched on AWS hardware.
AWS takes care of the hardware, whereas you focus on setting up Amazon EC2 to match your application needs.

What you launch


The applications you can launch on Amazon EC2 are similar to what you would do on a typical server:
Websites
Databases
Analytical applications and more

Flexibility and control


When you need compute capacity, you can do the following: select and configure the EC2 instance to match your application need, with complete control over this resource. Start and stop the instance when you need it, or terminate it when you do not need capacity anymore.
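As a hedged sketch, those start/stop/terminate controls map directly onto SDK calls (boto3 and configured AWS credentials are assumed; the instance ID would be your own):

```python
VALID_ACTIONS = ("start", "stop", "terminate")

def set_instance_state(instance_id, action):
    """Start, stop, or terminate a single EC2 instance by ID.
    Assumes boto3 is installed and AWS credentials are configured."""
    if action not in VALID_ACTIONS:
        raise ValueError(f"action must be one of {VALID_ACTIONS}, got {action!r}")
    import boto3  # imported lazily so the sketch can be read without AWS access
    ec2 = boto3.client("ec2")
    ops = {
        "start": "start_instances",
        "stop": "stop_instances",
        "terminate": "terminate_instances",
    }
    # e.g. ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
    return getattr(ec2, ops[action])(InstanceIds=[instance_id])
```

Stopping an instance halts the hourly charge for compute while keeping the attached EBS volume; terminating releases the resource entirely.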

Size
If, at some point, you realize you need more or fewer resources to support your app, you have the opportunity to scale your machine up or down by changing the EC2 instance type and size.

Instance types
There are many types of instances, each built to provide a specific set of resources, and they run on certain hardware (including Intel families and Graviton). There are a broad variety of instance types so that you
can do the following:
Pick the most suitable type of virtual machine for your specific application. Match your needs as precisely as possible.
What is Amazon EC2?
Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS)
Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to
launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add
capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic.
When usage decreases, you can reduce capacity (scale down) again.
The following diagram shows a basic architecture of an Amazon EC2 instance deployed within an Amazon Virtual Private Cloud (VPC).
In this example, the EC2 instance is within an Availability Zone in the Region. The EC2 instance is secured with a security group,
which is a virtual firewall that controls incoming and outgoing traffic. A private key is stored on the local computer and a public key is
stored on the instance. Both keys are specified as a key pair to prove the identity of the user. In this scenario, the instance is backed
by an Amazon EBS volume. The VPC communicates with the internet using an internet gateway. For more information about Amazon
VPC, see the Amazon VPC User Guide.
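The architecture just described (an instance in a VPC subnet, guarded by a security group, accessed with a key pair) can be sketched as the parameter set for the EC2 `run_instances` call. All IDs below are hypothetical placeholders:

```python
def build_run_instances_params(ami_id, key_name, security_group_id, subnet_id,
                               instance_type="t3.micro"):
    """Assemble ec2.run_instances parameters matching the diagram: an instance in
    a VPC subnet, guarded by a security group, accessed with a key pair."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "KeyName": key_name,                      # public key is placed on the instance
        "SecurityGroupIds": [security_group_id],  # virtual firewall for in/out traffic
        "SubnetId": subnet_id,                    # places the instance in the VPC
        "MinCount": 1,
        "MaxCount": 1,
    }

# Usage (requires boto3 and AWS credentials; IDs are hypothetical):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**build_run_instances_params(
#     "ami-0abcdef1234567890", "my-key-pair",
#     "sg-0123456789abcdef0", "subnet-0123456789abcdef0"))
```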
Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1.
Match the instance type family to your use case: General purpose - High performance - In-memory databases - Machine learning (ML) - Distributed file systems

Memory optimized instances are helpful when the focus is on memory.


When the most critical resource is RAM (memory), you will choose instances from within this family.
Open-source databases, in-memory caches, and real-time big data analytics could be run on such instances.

Accelerated computing instances are helpful when the focus is on the graphics processing unit (GPU).
Running ML models, computational fluid dynamics, graphical workloads, or other workloads needing GPU acceleration are use cases for this instance type.

Storage optimized instances are helpful when focusing on maximizing the number of transactions per second (TPS) for I/O-intensive and business-critical workloads.

General purpose instances provide a balance of compute, memory, and networking resources and can be used for a variety of workloads. These instances are ideal for applications that use these resources in equal proportions, such as web servers and code repositories.

Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high-performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers, ad server engines, machine learning inference, and other compute-intensive applications.

HPC optimized instances, or high performance computing (HPC) instances, are purpose-built to offer the best price performance for running HPC workloads at scale on AWS. HPC instances are ideal for applications that benefit from high-performance processors, such as large, complex simulations and deep learning workloads.

General purpose
Examples of the instance types: a1, m4, m5, t2, t3
Use case: Balanced compute, memory, and networking for diverse workloads

Compute optimized
Examples of the instance types: c4, c5
Use case: High performance

HPC optimized
Examples of the instance types: Hpc6a, Hpc6id
Use case: Best price performance for running HPC workloads
Amazon EC2 Auto Scaling helps maintain application availability and more
Amazon EC2 Auto Scaling helps you maintain application availability and automatically add or remove EC2 instances using scaling policies that you define. Dynamic or predictive scaling policies let you add or remove EC2 instance capacity to serve established or real-time demand patterns. In addition, the fleet management features of Amazon EC2 Auto Scaling help maintain the health and availability of your fleet.

•With dynamic scaling capabilities, the service can match demand in a live manner, understanding when resources are over-provisioned or under-provisioned based on CPU utilization or similar metrics.
•With Amazon EC2 Auto Scaling, you can add or remove EC2 instances automatically and replace unhealthy instances without intervention, so you do not need to manually monitor and adjust capacity continuously.
•Scale horizontally to precisely match the current demand and avoid over-provisioning or under-provisioning. The process is all done automatically.
•Amazon EC2 Auto Scaling detects impaired EC2 instances and unhealthy applications and replaces the instances without intervention.
•Amazon EC2 Auto Scaling provides several scaling options: manual, scheduled, dynamic or on-demand, and predictive. When you know that you will have significant (or not enough) traffic in a certain period, you can schedule the service to launch the resources in advance to be ready to serve the traffic.
With the elasticity of the cloud, you can also provision the amount of resources to match the demand as closely as possible. Also, because cloud
resources are disposable, you can be flexible when you launch or remove resources.
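A toy sketch of scheduled scaling: compute a desired capacity for each hour of the day, then (in a real setup) register the busy hours as scheduled actions. The group name and cron schedule in the comment are assumptions:

```python
def scheduled_capacity_plan(base, peak, peak_hours):
    """Map each hour of day (0-23) to a desired instance count: `peak` during the
    expected busy hours, `base` otherwise. A toy model of scheduled scaling."""
    busy = set(peak_hours)
    return {hour: (peak if hour in busy else base) for hour in range(24)}

# With the real service you would register the plan as scheduled actions, e.g.:
# import boto3
# autoscaling = boto3.client("autoscaling")
# autoscaling.put_scheduled_update_group_action(
#     AutoScalingGroupName="my-asg",   # hypothetical Auto Scaling group name
#     ScheduledActionName="scale-for-peak",
#     Recurrence="0 9 * * *",          # cron: every day at 09:00 UTC
#     DesiredCapacity=10)
```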

Serverless computing is building and running applications and services without managing servers

Serverless doesn't run idle resources


With serverless resources, you save the time of configuring servers, the service takes care of scaling capacity with usage, and availability and fault tolerance are built in. Serverless services do not run idle resources, so you pay only for what you need:
•No servers to provision or manage
•Scales with usage
•Never pay for idle servers
•Availability and fault tolerance built in

For example, if you are running an EC2 instance for 5 hours, you pay for 5 hours. But what if your workload is such that in those 5 hours, your application actively used the resources for only 2 hours? You still paid for all 5 hours. With serverless, the service (such as AWS Lambda) only actively uses, and bills for, the resources when the application needs them to do the job.
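The arithmetic behind this example, using a hypothetical $0.10/hour rate:

```python
def on_demand_cost(hours_running, hourly_rate):
    """An On-Demand EC2 instance bills for every hour it is running."""
    return hours_running * hourly_rate

def serverless_cost(active_hours, hourly_rate):
    """A serverless service bills only for the time the code actually executes."""
    return active_hours * hourly_rate

# Instance runs for 5 hours, but does real work for only 2 of them.
print(on_demand_cost(5, 0.10))   # 0.5 -> you pay for all 5 hours
print(serverless_cost(2, 0.10))  # 0.2 -> you pay only for the 2 active hours
```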
AWS Lambda is a fully managed serverless compute service
AWS Lambda benefits include the following:
•It supports multiple languages.
•It runs stateless code.
•When you upload the code in the language you prefer, Lambda can run your code on a schedule or in response to events, such as changes to data in an Amazon Simple Storage Service (Amazon
S3) bucket or Amazon DynamoDB table.
•It offers per-millisecond pricing of the code being run.
•It is a great solution to be used in event-driven architectures.
•Provisioning, scaling, and underlying resources are taken care of by Lambda itself.
•It provides high availability.
•Lambda is an on-demand compute service that runs custom code in response to events. Most AWS services generate events, and many can act as an event source for Lambda. Within Lambda,
your code is stored in a code deployment package. All interaction with the code occurs through the Lambda API and there is no direct invocation of functions from outside of the service. The
main purpose of Lambda functions is to process events.

Unlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an invocation. Lambda functions are purposefully limited to 15 minutes in duration, but on average, across all AWS customers, most invocations last less than a second. In some intensive compute operations, it may take several minutes to process a single event, but in the majority of cases the duration is brief.

An event triggering a Lambda function could be almost anything, from an HTTP request through API Gateway, a schedule managed by an EventBridge rule, an IoT event, or an S3 event. Even the smallest Lambda-based application uses at least one event.
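A minimal handler sketch illustrating this: the function receives the event as its first parameter. The S3 event shape is trimmed to the fields used, and the object key shown is hypothetical:

```python
def handler(event, context):
    """Minimal Lambda handler: `event` is the JSON-derived dict describing what
    happened. This sketch assumes an S3 ObjectCreated-style event shape."""
    keys = [record["s3"]["object"]["key"] for record in event.get("Records", [])]
    print("objects created:", keys)
    return {"statusCode": 200, "processed": len(keys)}

# A trimmed-down S3 event for local experimentation:
sample_event = {"Records": [{"s3": {"object": {"key": "uploads/photo.jpg"}}}]}
print(handler(sample_event, None))  # {'statusCode': 200, 'processed': 1}
```

In production this function would never be called directly; Lambda invokes it through its API whenever the configured event source fires.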
The event itself is a JSON object that contains information about what happened. Events are facts about a change in the system state, they are immutable, and the time when they happen is significant. The first parameter of every Lambda handler contains the event. An event could be custom-generated from another microservice, such as a new order generated in an ecommerce application.

Building Lambda-based applications follows many of the best practices of building any event-based architecture. A number of development approaches have emerged to help developers create event-driven systems. Event storming, which is an interactive approach to domain-driven design (DDD), is one popular methodology. As you explore the events in your workload, you can group these as bounded contexts to develop the boundaries of the microservices in your application. To learn more about event-driven architectures, read What is an Event-Driven Architecture? and What do you mean by Event-Driven?
Serverless application use cases
We will now explore serverless application use cases. Web applications, data processing applications, chatbots, and IT automation are all kinds of solutions that can be run on serverless technologies.

Web applications
•Static websites
•Complex web applications
•Packages for Flask and Express

Backends
•Applications and services
•Mobile
•Internet of Things (IoT)

Data processing
•Real time
•MapReduce
•Batch
•ML inference

Chatbots
•Powering chatbot logic

Amazon Alexa
•Powering voice-enabled applications
•Alexa Skills Kit

IT automation
•Policy engines
•Extending AWS services
•Infrastructure management

Containers orchestration
Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) are the container orchestration services that help you schedule, maintain, and scale the fleet of nodes running your containers. They also give you a centralized way of monitoring and controlling how you want your containers launched.
•Orchestrate the execution of containers
•Maintain and scale the fleet of nodes running your containers
•Remove the complexity of standing up the infrastructure

Amazon ECS
Amazon ECS is an AWS container orchestration tool giving you seamless control over your containerized application.

Amazon EKS
Amazon EKS is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
Which of these are benefits associated with serverless resources? (Select THREE.)
There are no physical servers to provision or manage.
Serverless resources can scale based on usage.
You have to pay for idle servers.
Serverless resources come with built-in availability.
Serverless resources do not come with built-in fault tolerance.
Serverless resources do not scale.

There are no physical servers to provision or manage. Serverless resources can scale based on usage. Serverless resources come with built-in availability. However, you do not have to pay for idle servers. Serverless resources do come with built-in fault tolerance.

What are the benefits of Amazon EC2 Auto Scaling? (Select THREE.)
With Amazon EC2 Auto Scaling, you can add or remove EC2 instances automatically and replace unhealthy instances without intervention.
You can scale horizontally to precisely match the current demand and avoid overprovisioning or underprovisioning.
You will need to physically monitor and adjust capacities continuously.
With Amazon EC2 Auto Scaling, you can add or remove EC2 instances and replace unhealthy instances only with manual intervention.
The service will be able to match the demand in a live manner.
Amazon EC2 Auto Scaling does not need to maintain and scale the fleet of nodes running your containers.

With Amazon EC2 Auto Scaling, you can add or remove EC2 instances automatically and replace unhealthy instances without intervention. You can scale horizontally to precisely match the current demand and avoid overprovisioning or underprovisioning. The service will be able to match the demand in a live manner. You do not need to physically monitor and adjust capacities continuously.

What are advantages of AWS Lambda? (Select THREE.)
Lambda offers per-millisecond pricing of the code being run.
Lambda supports two languages.
Lambda can run your code in response to events.
Lambda provides medium availability.
Lambda offers per-minute pricing of the code being run.
Lambda can run your code on a schedule.

Lambda offers per-millisecond pricing of the code being run. Lambda can run your code on a schedule. Lambda can run your code in response to events. However, Lambda supports multiple languages and provides high availability.
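As a hedged illustration of the two invocation styles mentioned above (running code in response to events and on a schedule), a minimal Lambda-style handler might look like the following sketch. The event shapes are simplified and the branching logic is an assumption for illustration, not the official Lambda programming model:

```python
def handler(event, context=None):
    """A minimal Lambda-style handler: the same function can be invoked
    by an event source (e.g. an S3 upload) or on a schedule (e.g. an
    Amazon EventBridge rule). Billing is per request and per millisecond
    of execution time."""
    if "Records" in event:
        # Event-driven invocation: react to each delivered record.
        return f"processed {len(event['Records'])} record(s)"
    if event.get("source") == "aws.events":
        # Scheduled invocation from an EventBridge (CloudWatch Events) rule.
        return "ran scheduled job at " + event.get("time", "")
    return "unknown trigger"
```

In both cases there is no server to manage; the code runs only when triggered, which is why there is no charge for idle capacity.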

What are three main benefits of Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS)? (Select THREE.)

Amazon ECS and Amazon EKS give you a centralized way of monitoring and controlling how you want your containers launched.
Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes
control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks. With Amazon EKS, you can take advantage of all the performance, scale,
reliability, and availability of AWS infrastructure and the integrations with AWS networking and security services.
Amazon ECS and Amazon EKS give you a decentralized way of monitoring and controlling how you want your containers launched.
Amazon EKS does not need to be fully managed.
Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. It deeply integrates with the rest of the AWS platform to provide a secure
and easy-to-use solution for running container workloads in the cloud and now on your infrastructure with Amazon ECS Anywhere.
Amazon ECS and Amazon EKS add to the complexity of standing up the infrastructure.

Amazon ECS and Amazon EKS give you a centralized way of monitoring and controlling how you want your containers launched. If you have in-house skills running Kubernetes, there is a fully managed
Amazon EKS for you. Amazon ECS and Amazon EKS are container orchestrating services that help you schedule the fleet of nodes running your containers. Amazon ECS and Amazon EKS remove the
complexity of standing up the infrastructure.
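To make the container-orchestration idea concrete, a minimal Amazon ECS task definition might look like the sketch below. The family name, image, and sizing values are hypothetical placeholders, not a recommended configuration:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/nginx/nginx:latest",
      "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
      "essential": true
    }
  ]
}
```

Registering a task definition like this and creating a service from it is what lets Amazon ECS schedule, monitor, and replace your containers centrally instead of you managing each host.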
Core Services Overview: Storage
Amazon Simple Storage Service overview
Scalable, highly durable object storage in the cloud.
Amazon S3 - Amazon Simple Storage Service (Amazon S3) is a fully managed, serverless, low-cost, object-level storage service. With Amazon S3, you can store unlimited amounts of data (in different formats) on AWS. Amazon S3 offers multiple storage options.

Amazon S3 storage classes


Amazon S3 offers a range of storage classes that you can choose from based on your workload's data access, resiliency, and cost requirements. For example, Amazon
S3 storage classes are purpose-built to provide the lowest cost storage for different access patterns.

Amazon S3 Standard
Amazon S3 Standard is appropriate for various use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big
data analytics.

Amazon S3 Intelligent-Tiering
Amazon S3 Intelligent-Tiering delivers milliseconds latency and high throughput performance for frequently, infrequently, and rarely accessed data in the Frequent, Infrequent, and Archive Instant Access tiers. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content.

Amazon S3 Standard-IA
Amazon S3 Standard-IA is for data accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low
latency of Amazon S3 Standard with a low per GB storage price and per GB retrieval charge. This combination of low cost and high performance makes S3 Standard-IA
ideal for long-term storage, backups, and data store for disaster recovery files.

Amazon S3 One Zone-IA


Amazon S3 One Zone-IA is for data accessed less frequently but requires rapid access when needed. Unlike the other S3 storage classes, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. It offers the same high durability, high throughput, and low latency as Amazon S3 Standard, with a low per-GB storage price and per-GB retrieval charge.
Amazon S3 storage classes are ideal for various use cases, including those with demanding performance needs, data residency requirements, unknown or changing access patterns, or archival
storage needs.

Amazon S3 Glacier Instant Retrieval


Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds.

Amazon S3 Glacier Flexible Retrieval


Amazon S3 Glacier Flexible Retrieval delivers the most flexible retrieval options that balance cost with access times ranging from minutes to hours and with free bulk retrievals. It is an ideal
solution for backup, disaster recovery, and offsite data storage needs. When some data occasionally needs to be retrieved in minutes, you don’t want to worry about costs.

Amazon S3 Glacier Deep Archive


Amazon S3 Glacier Deep Archive is the lowest-cost storage class in Amazon S3 and supports long-term retention and digital preservation for data that may be accessed once or twice a year. It is designed for customers that retain data sets for 7 to 10 years or longer to meet regulatory compliance requirements. S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases. It is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether on-premises libraries or off-premises services.
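One way to summarize the storage-class descriptions above is as a small selection helper. This is an illustrative sketch, not an official decision tool; real choices should also weigh retrieval fees, minimum storage durations, and resiliency requirements:

```python
def suggest_s3_storage_class(access, retrieval="milliseconds"):
    """Map a coarse access pattern to an S3 storage class, following
    the descriptions above. `access` is one of 'frequent', 'unknown',
    'infrequent', or 'archive'; `retrieval` is the acceptable
    retrieval time for archive data."""
    if access == "frequent":
        return "S3 Standard"
    if access == "unknown":
        # Intelligent-Tiering moves objects between tiers automatically.
        return "S3 Intelligent-Tiering"
    if access == "infrequent":
        return "S3 Standard-IA"
    if access == "archive":
        if retrieval == "milliseconds":
            return "S3 Glacier Instant Retrieval"
        if retrieval == "minutes-to-hours":
            return "S3 Glacier Flexible Retrieval"
        return "S3 Glacier Deep Archive"  # retrieval within hours
    raise ValueError(f"unknown access pattern: {access}")
```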

Amazon S3 Outposts
Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts environment. Using the S3 APIs and features available in AWS Regions today, S3 on Outposts makes it easy to store and retrieve data on your Outpost, as well as secure the data, control access, tag it, and report on it.

AWS Outposts rack comes in an industry-standard 42U form factor. It provides the same AWS infrastructure, services, APIs, and tools to virtually any data center or co-location space. Outposts rack provides AWS compute, storage, database, and other services locally, while still allowing you to access the full range of AWS services available in the Region for a truly consistent hybrid experience. You can scale from a single 42U rack to multiple-rack deployments of up to 96 racks to create pools of compute and storage capacity.

AWS Outposts servers come in a 1U or 2U form factor. They provide the same AWS infrastructure, services, APIs, and tools to on-premises and edge locations that have limited space or smaller capacity requirements, such as retail stores, branch offices, healthcare provider locations, or factory floors. Outposts servers provide local compute and networking services.

The S3 storage classes include S3 Intelligent-Tiering for automatic cost savings for data with unknown or changing access patterns, S3 Standard for frequently accessed data, S3 Express One Zone for your most frequently accessed data, S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for less frequently accessed data, S3 Glacier Instant Retrieval for archive data that needs immediate access, S3 Glacier Flexible Retrieval (formerly S3 Glacier) for rarely accessed long-term data that does not require immediate access, and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation with retrieval in hours at the lowest cost storage in the cloud.

Amazon S3 provides the most durable storage in the cloud. Based on its unique architecture, S3 is designed to exceed 99.999999999% (11 nines) data durability. Additionally, S3 stores data redundantly across a minimum of 3 Availability Zones by default, providing built-in resilience against widespread disaster. Customers can store data in a single AZ to minimize storage cost or latency, in multiple AZs for resilience against the permanent loss of an entire data center, or in multiple AWS Regions to meet geographic resilience requirements. If you have data residency requirements that can't be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on premises.
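The 11-nines figure can be made concrete with a little arithmetic. As a rough sketch (treating the design target as an annual per-object loss probability, which is a simplifying assumption):

```python
def expected_annual_loss(num_objects, durability=0.99999999999):
    """Expected number of objects lost per year at a given design
    durability (11 nines by default): objects * loss probability."""
    return num_objects * (1 - durability)

# Storing 10,000,000 objects at 11 nines, the expected loss is about
# 0.0001 objects per year -- roughly one object every 10,000 years.
```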
Feature comparison
Form factors
• AWS Outposts rack: The Outposts rack is 80 inches tall, 24 inches wide, and 48 inches deep. Inside are hosts, switches, a network patch panel, a power shelf, and blank panels.
• AWS Outposts servers: The Outposts rack-mountable servers fit inside 19-inch-wide, EIA-310 cabinets. The 1U-high server is 24 inches deep and uses AWS Graviton2 processors. The 2U-high server is 30 inches deep and uses 3rd generation Intel Xeon Scalable processors.

Installation
• AWS Outposts rack: AWS delivers Outposts racks fully assembled and ready to be rolled into final position. Racks are installed by AWS and simply need to be plugged into power and network.
• AWS Outposts servers: AWS delivers Outposts servers directly to you, installed by either onsite personnel or a third-party vendor. Once connected to your network, AWS will remotely provision compute and storage resources.

Locally supported services
• AWS Outposts rack: Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Block Store (EBS), Amazon EBS Snapshots, Amazon Simple Storage Service (S3), Amazon Relational Database Service (RDS), Amazon ElastiCache, Amazon EMR, Application Load Balancer (ALB), Amazon Route 53 Resolver, CloudEndure, and VMware Cloud. Seamlessly extend Amazon Virtual Private Cloud on premises and run select AWS services locally on Outposts rack, and connect to a broad range of services available in the AWS Region.
• AWS Outposts servers: Amazon EC2, Amazon ECS, AWS IoT Greengrass, and Amazon SageMaker Edge Manager. Seamlessly extend Amazon Virtual Private Cloud on premises and run select AWS services locally on Outposts servers, and connect to a broad range of services available in the AWS Region.

Networking
• AWS Outposts rack: Includes integrated networking gear. Supports Local Gateway, which requires Border Gateway Protocol (BGP) over a routed network.
• AWS Outposts servers: Does not include integrated networking gear. Supports a simplified network integration experience providing a local Layer 2 presence.

Power
• AWS Outposts rack: Supports three power configurations: 5 kVA, 10 kVA, or 15 kVA. The configuration of the power shelf depends on the total power draw of the Outpost capacity. A centralized redundant power conversion unit and a direct current (DC) distribution system in the backplane are handled by power line mate connectors.
• AWS Outposts servers: Requires 1-2 kVA of power. Supports standard alternating current (AC) and direct current (DC) power options.
Amazon S3 storage benefits
With Amazon S3 event notifications, you can build event-driven applications. For example, to create thumbnails of photos as they arrive in your S3 bucket:
You can create an event trigger on each object PUT request. The trigger invokes an AWS Lambda function that runs code to transform the image into a thumbnail, sending the result to another S3 bucket.
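As a hedged sketch of how such a thumbnail Lambda function might begin, the snippet below parses an S3 event notification payload to find which object was uploaded and derives a destination key for the thumbnail. The bucket and key names are hypothetical, and the actual image resizing and upload (e.g. with Pillow and boto3) are omitted:

```python
import os
from urllib.parse import unquote_plus

def parse_s3_put_event(event):
    """Extract (bucket, key) pairs from an S3 event notification.

    Each record in the documented S3 notification format carries the
    bucket name and the URL-encoded object key."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = unquote_plus(s3["object"]["key"])
        objects.append((bucket, key))
    return objects

def thumbnail_key(key):
    """Derive a destination key, e.g. photos/cat.jpg -> thumbnails/cat.jpg."""
    return f"thumbnails/{os.path.basename(key)}"

# A real handler would resize each image and upload the result to the
# destination bucket with boto3 -- omitted in this sketch.
```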
Content storage and distribution
Benefit from the unlimited storage capacity for big data workloads, using Amazon S3 as a data lake for large amounts of data.
Store various types of content, including media content.
Backup and archiving
Use Amazon S3 for durably storing backups (even from different AWS services). Amazon S3 Glacier is a great choice when:
You have to archive your data for long periods of time.
You need low-cost storage.
You must make sure your archives will not be deleted for a period of time (vault lock).
Build a data lake
Run big data analytics, artificial intelligence (AI), machine learning (ML), and high performance computing (HPC) applications to unlock data insights.
Backup and restore critical data
Meet Recovery Time Objectives (RTO), Recovery Point Objectives (RPO), and compliance requirements with S3's robust replication features.
Archive data
Move data archives to the Amazon S3 Glacier storage classes to lower costs, eliminate operational complexities, and gain new insights.
Run cloud-native apps
Build fast, powerful mobile and web-based cloud-native apps that scale automatically in a highly available configuration.
Other AWS Storage Services
Amazon Elastic File System (Amazon EFS) - Scalable network file storage for Amazon EC2 instances. When you need a serverless shared file system, you can use EFS. EFS provides serverless, fully elastic file storage so that you can share file data without provisioning or managing storage capacity and performance. With EFS, you can build high-performing and cost-optimized file systems on AWS, benefitting from the built-in elasticity, durability, and availability.

Amazon Elastic Block Store (Amazon EBS)

Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create
a file system on top of these volumes, run a database, or use them in any other way you would use block storage. Amazon EBS
volumes are placed in a specific Availability Zone where they are automatically replicated to protect you from the failure of a single
component. All EBS volume types offer durable snapshot capabilities and are designed for high availability.
Purpose focused volume types - Network-attached volumes that provide durable block-level storage for Amazon EC2 instances
Amazon EBS has the following benefits:
•Persistent network-attached block storage for instances that can persist even after the EC2 instance to which this storage is
attached is terminated
•Different drive types
•Scalable
•Pay only for what you provision
•Snapshot functionality
•Encryption available to enhance security
More information about Amazon EBS includes the following:
•Because data is very important, you can take incremental snapshots of the volume. You can keep them indefinitely and recover the volumes when needed.
•When you encrypt an Amazon EBS volume, all the data in the volume and the data traveling between the instance and the volume are encrypted.
•When you encrypt an EBS volume, snapshots taken from that volume are also encrypted.
•Just like with Amazon EC2, you have control over how much and what type (SSD or HDD) of storage you provision, and if you need to scale it, you can modify your volume.
https://aws.amazon.com/ebs/volume-types/
Amazon EBS volume types - Solid State Drives (SSD)

EBS Provisioned IOPS SSD (io2 Block Express)
•Short description: Highest performance SSD volume designed for business-critical, latency-sensitive transactional workloads
•Durability: 99.999%
•Use cases: Largest, most I/O-intensive, mission-critical deployments of NoSQL and relational databases such as Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics
•API name: io2
•Volume size: 4 GB - 64 TB
•Max IOPS/volume: 256,000
•Max throughput/volume: 4,000 MB/s
•Max IOPS/instance: 400,000
•Max throughput/instance: 12,500 MB/s
•Latency: sub-millisecond
•Price: $0.125/GB-month; $0.065/provisioned IOPS-month up to 32,000 IOPS; $0.046/provisioned IOPS-month from 32,001 to 64,000; $0.032/provisioned IOPS-month for greater than 64,000 IOPS
•Dominant performance attribute: IOPS, throughput, latency, capacity, and volume durability

EBS Provisioned IOPS SSD (io1)
•Short description: Highest performance SSD volume designed for latency-sensitive transactional workloads
•Durability: 99.8% - 99.9%
•Use cases: I/O-intensive NoSQL and relational databases
•API name: io1
•Volume size: 4 GB - 16 TB
•Max IOPS/volume: 64,000
•Max throughput/volume: 1,000 MB/s
•Max IOPS/instance: 400,000
•Max throughput/instance: 12,500 MB/s
•Latency: single-digit millisecond
•Price: $0.125/GB-month; $0.065/provisioned IOPS-month
•Dominant performance attribute: IOPS

EBS General Purpose SSD (gp3)
•Short description: Lowest cost SSD volume that balances price and performance for a wide variety of transactional workloads
•Durability: 99.8% - 99.9%
•Use cases: Virtual desktops, medium-sized single-instance databases such as Microsoft SQL Server and Oracle, latency-sensitive interactive applications, boot volumes, and dev/test environments
•API name: gp3
•Volume size: 1 GB - 16 TB
•Max IOPS/volume: 16,000
•Max throughput/volume: 1,000 MB/s
•Max IOPS/instance: 260,000
•Max throughput/instance: 12,500 MB/s
•Latency: single-digit millisecond
•Price: $0.08/GB-month; up to 3,000 IOPS free, then $0.005/provisioned IOPS-month over 3,000; 125 MB/s free, then $0.04/provisioned MB/s-month over 125
•Dominant performance attribute: IOPS

EBS General Purpose SSD (gp2)
•Short description: General Purpose SSD volume that balances price and performance for a wide variety of transactional workloads
•Durability: 99.8% - 99.9%
•Use cases: Virtual desktops, medium-sized single-instance databases such as Microsoft SQL Server and Oracle, latency-sensitive interactive applications, boot volumes, and dev/test environments
•API name: gp2
•Volume size: 1 GB - 16 TB
•Max IOPS/volume: 16,000
•Max throughput/volume: 250 MB/s
•Max IOPS/instance: 260,000
•Max throughput/instance: 7,500 MB/s
•Latency: single-digit millisecond
•Price: $0.10/GB-month
•Dominant performance attribute: IOPS
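Based on the gp3 prices listed above, the monthly cost arithmetic can be sketched as follows. These are the list prices quoted in this document; actual prices vary by Region, so treat the figures as illustrative:

```python
def gp3_monthly_cost(size_gb, iops=3000, throughput_mbps=125):
    """Estimate monthly gp3 cost from the list prices above:
    $0.08 per GB-month; the first 3,000 IOPS and 125 MB/s of
    throughput are free; extra IOPS cost $0.005/IOPS-month and
    extra throughput costs $0.04 per MB/s-month."""
    storage = size_gb * 0.08
    extra_iops = max(iops - 3000, 0) * 0.005
    extra_tput = max(throughput_mbps - 125, 0) * 0.04
    return round(storage + extra_iops + extra_tput, 2)

# 500 GB at baseline performance: 500 * 0.08 = $40.00/month
# 500 GB with 6,000 IOPS and 250 MB/s: 40 + 15 + 5 = $60.00/month
```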
Hard Disk Drives (HDD)

Throughput Optimized HDD (st1)
•Short description: Low cost HDD volume designed for frequently accessed, throughput-intensive workloads
•Durability: 99.8% - 99.9%
•Use cases: Big data, data warehouses, log processing
•API name: st1
•Volume size: 125 GB - 16 TB
•Max IOPS/volume: 500
•Max throughput/volume: 500 MB/s
•Max throughput/instance: 12,500 MB/s
•Price: $0.045/GB-month
•Dominant performance attribute: MB/s

Cold HDD (sc1)
•Short description: Lowest cost HDD volume designed for less frequently accessed workloads
•Durability: 99.8% - 99.9%
•Use cases: Colder data requiring fewer scans per day
•API name: sc1
•Volume size: 125 GB - 16 TB
•Max IOPS/volume: 250
•Max throughput/volume: 250 MB/s
•Max throughput/instance: 7,500 MB/s
•Price: $0.015/GB-month
•Dominant performance attribute: MB/s
Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud.
It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities.
Amazon FSx is built on the latest AWS compute, networking, and disk technologies to provide high performance and lower total cost of ownership (TCO). And as a fully managed service, it
handles hardware provisioning, patching, and backups—freeing you up to focus on your applications, your end users, and your business.
You can choose between four widely used file systems: NetApp ONTAP, OpenZFS, Windows File Server, and Lustre.

What are the two main benefits of Amazon Elastic Block Store (Amazon EBS)? (Select TWO.)

You use one main drive type.


Pay only for what you provision. As with Amazon EC2, you have control over how much and what type of solid-state drive (SSD) or hard disk drive (HDD) storage you provision. And if you need to
scale it, you can modify your volume.
Data is very important. So you have the opportunity to take incremental snapshots of the volume, and the snapshots are encrypted. You can keep them indefinitely and have the opportunity to
recover the volumes when needed.
Persistent network-attached block storage for instances does not persist after the EC2 instance to which this storage is attached is terminated.
When you need a serverless shared file system, you can use Amazon EBS.

Pay only for what you provision. As with Amazon EC2, you have control over how much and what type of solid-state drive (SSD) or hard disk drive (HDD) storage you provision. And if you
need to scale it, you can modify your volume.

Data is very important. So you can take incremental snapshots of the volume, and the snapshots are encrypted. You can keep them indefinitely and have the opportunity to recover the volumes
when needed. However, Amazon EBS uses different drive types. Persistent network-attached block storage for instances can persist even after the EC2 instance to which this storage is attached is
terminated. When you need a serverless shared file system, you can use Amazon Elastic File System (Amazon EFS), not Amazon EBS.

What are three Amazon S3 use cases? (Select THREE.)


Creating cloud non-native applications
Building a data lake
Content storage and distribution
Deleting backup
Releasing critical data
Archiving data

Building a data lake, archiving data, and content storage and distribution are Amazon S3 use cases. Backup and archiving is also an Amazon S3 use case, and with vault lock you can make sure your archives will not be deleted for a period of time. Restoring critical data and creating cloud-native applications are likewise Amazon S3 use cases, which is why "deleting backup," "releasing critical data," and "creating cloud non-native applications" are incorrect.


Which of these are reasons to use Amazon Elastic File System (Amazon EFS)? (Select TWO.)

When you need to build high-performing and cost-optimized file systems on AWS benefitting from the built-in-elasticity, durability, and availability
When you don't need storage classes
When full AWS compute integration is not a priority
When you need a serverless shared file system
Use fully managed EFS because it is not compatible with Network File System (NFS) or Server Message Block (SMB).

A reason for using Amazon EFS is when you need to build high-performing and cost-optimized file systems on AWS, benefitting from the built-in elasticity, durability, and availability.
A reason for using Amazon EFS is when you need a serverless shared file system.
However, Amazon EFS is used when you need four storage classes and when full AWS compute integration is a priority. Fully managed EFS is compatible with Network File System (NFS) and Server Message Block
(SMB).
