AWS Cloud Essentials
With cloud computing, you can stop thinking of infrastructure as hardware and instead think of it (and use it) as software.
Cloud computing is the on-demand delivery of IT resources and applications through the internet.
•There are no large upfront investments.
•You won't need to spend time or resources on hardware management.
•You can provision exactly the right type and size when needed (dynamic abilities).
•You can have as many resources as you need and pay for what you use.
There are three key benefits of using cloud computing: PROGRAMMABLE RESOURCES – DYNAMIC ABILITIES – PAY AS YOU GO
On premises
In traditional on-premises environments, you figure out how much capacity you need, purchase the hardware, wait for it, set it up, and access the resources over your
network. The challenge with an on-premises environment is that if there is not enough capacity, you must purchase more hardware and wait for the servers to be delivered to your
site. Another challenge is that if you have too much capacity, you are left with an over-provisioned environment.
33 Launched Regions, each with multiple Availability Zones (AZs) - 105 Availability Zones - 600+ Points of Presence and 13 Regional Edge Caches
36 Local Zones - 29 Wavelength Zones for ultra-low latency applications - 245 Countries and Territories Served - 115 Direct Connect Locations
Benefits
Security
Security at AWS starts with our core infrastructure. Custom-built for the cloud and designed to meet the most stringent security requirements in the world, our
infrastructure is monitored 24/7 to help ensure the confidentiality, integrity, and availability of your data. All data flowing across the AWS global network that
interconnects our datacenters and Regions is automatically encrypted at the physical layer before it leaves our secured facilities. You can build on the most secure
global infrastructure, knowing you always control your data, including the ability to encrypt it, move it, and manage retention at any time.
Availability
AWS delivers the highest network availability of any cloud provider. Each Region is fully isolated and composed of multiple AZs, which are fully isolated partitions of
our infrastructure. To better isolate any issues and achieve high availability, you can partition applications across multiple AZs in the same Region. In addition, AWS
control planes and the AWS Management Console are distributed across Regions and include regional API endpoints, which are designed to operate securely for at
least 24 hours if isolated from the global control plane functions, without requiring customers to access the Region or its API endpoints through external networks during
any isolation.
Performance
The AWS Global Infrastructure is built for performance. AWS Regions offer low latency, low packet loss, and high overall network quality. This is achieved with a fully
redundant 400 GbE fiber network backbone, often providing many terabits of capacity between Regions. AWS Local Zones and AWS Wavelength, with our telco
providers, provide performance for applications that require single-digit millisecond latencies by delivering AWS infrastructure and services closer to end-users and 5G
connected devices. Whatever your application needs, you can quickly spin up resources as you need them, deploying hundreds or even thousands of servers in
minutes.
Scalability
The AWS Global Infrastructure enables companies to be extremely flexible and take advantage of the conceptually infinite scalability of the cloud. Customers used to
over-provision to ensure they had enough capacity to handle their business operations at the peak level of activity. Now, they can provision the amount of resources
that they actually need, knowing they can instantly scale up or down along with the needs of their business. This reduces cost and improves their ability to meet
their users' demands. Companies can quickly spin up resources as they need them, deploying hundreds or even thousands of servers in minutes.
Flexibility
The AWS Global Infrastructure gives you the flexibility to choose how and where you want to run your workloads, and wherever you run them, you use the same network,
control plane, APIs, and AWS services. If you would like to run your applications globally, you can choose from any of the AWS Regions and AZs. If you need to run your
applications with single-digit millisecond latencies to mobile devices and end users, you can choose AWS Local Zones or AWS Wavelength. Or if you would like to run
your applications on premises, you can choose AWS Outposts. If you are in a public sector organization or highly regulated industry, you can read our plans to launch
the AWS European Sovereign Cloud.
Global Footprint
AWS has the largest global infrastructure footprint of any provider, and this footprint is constantly increasing at a significant rate. When deploying your applications
and workloads to the cloud, you have the flexibility of selecting a technology infrastructure that is closest to your primary target of users. You can run your workloads
on the cloud that delivers the best support for the broadest set of applications, even those with the highest throughput and lowest latency requirements. And if your
data lives off this planet, you can use AWS Ground Station, which provides satellite antennas in close proximity to AWS infrastructure Regions.
Trade upfront costs for variable costs
AWS takes care of purchasing and handling the infrastructure, so you don’t have to worry about capacity. Heavy initial investments into hardware and facilities are no longer needed, so fixed
costs are traded for variable costs.
Topics
•Compare AWS compute services
•Amazon EC2
•Amazon EC2 Auto Scaling
•Amazon EC2 Image Builder
•Amazon Lightsail
•Amazon Linux 2023
•AWS App Runner
•AWS Batch
•AWS Elastic Beanstalk
•AWS Fargate
•AWS Lambda
•AWS Serverless Application Repository
•AWS Outposts
•AWS Wavelength
•VMware Cloud on AWS

Serverless
•AWS Lambda — Run code without thinking about servers. Pay only for the compute time you consume.

Edge and hybrid
•AWS Outposts — Run AWS infrastructure and services on premises for a truly consistent hybrid experience
•AWS Snow Family — Collect and process data in rugged or disconnected edge environments
•AWS Wavelength — Deliver ultra-low latency applications for 5G devices
•VMware Cloud on AWS — Preferred service for all vSphere workloads to rapidly extend and migrate to the cloud
•AWS Local Zones — Run latency-sensitive applications closer to end users

Cost and capacity management
•AWS Savings Plans — Flexible pricing model that provides savings of up to 72% on AWS compute usage
•AWS Compute Optimizer — Recommends optimal AWS compute resources for your workloads to reduce costs and improve performance
•AWS Elastic Beanstalk — Easy-to-use service for deploying and scaling web applications and services
•EC2 Image Builder — Build and maintain secure Linux or Windows Server images
•Elastic Load Balancing (ELB) — Automatically distribute incoming application traffic across multiple targets
Instance types
Amazon EC2 passes on to you the financial benefits of Amazon's scale. You pay a very low rate for the compute capacity you actually consume. For a more detailed
description, refer to Amazon EC2 pricing.
•On-Demand Instances — With On-Demand Instances, you pay for compute capacity by the hour or the second, depending on which instances you run. No long-term
commitments or upfront payments are needed. You can increase or decrease your compute capacity depending on the demands of your application and only pay the
specified hourly rates for the instances you use. On-Demand Instances are recommended for:
•Users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
•Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
•Applications being developed or tested on Amazon EC2 for the first time
•Spot Instances — Spot Instances are available at up to a 90% discount compared to On-Demand prices and let you take advantage of unused Amazon EC2 capacity in
the AWS Cloud. You can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and
enable new types of cloud computing applications. Spot Instances are recommended for:
•Applications that have flexible start and end times
•Applications that are only feasible at very low compute prices
•Users with urgent computing needs for large amounts of additional capacity
•Reserved Instances — Reserved Instances provide you with a significant discount (up to 72%) compared to On-Demand Instance pricing. You have the flexibility to
change families, operating system types, and tenancies while benefiting from Reserved Instance pricing when you use Convertible Reserved
Instances.
•C7g Instances — C7g Instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for compute-intensive
workloads. C7g instances are ideal for high performance computing (HPC), batch processing, electronic design automation (EDA), gaming, video encoding, scientific
modeling, distributed analytics, CPU-based ML inference, and ad serving.
•Inf2 Instances — Inf2 Instances are purpose-built for deep learning inference. They deliver high performance at the lowest cost in Amazon EC2 for
generative AI models, including large language models (LLMs) and vision transformers. Inf2 instances are powered by AWS Inferentia2, the second-generation AWS
Inferentia accelerator.
•M7g Instances — M7g instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for general
purpose workloads. M7g instances are ideal for applications built on open-source software such as application servers, microservices, gaming servers, mid-
size data stores, and caching fleets.
•R7g Instances — R7g Instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for memory-
intensive workloads. R7g instances are ideal for memory-intensive workloads such as open-source databases, in-memory caches, and near real-time big data analytics.
•Trn1 Instances — Trn1 Instances, powered by AWS Trainium accelerators, are purpose-built for high-performance deep learning training of generative AI
models, including LLMs and latent diffusion models. Trn1 instances offer up to 50% cost-to-train savings over other comparable Amazon EC2 instances.
•Savings Plans — Savings Plans are a flexible pricing model that offers low prices on EC2 and Fargate usage, in exchange for a commitment to a consistent amount of
usage (measured in $/hour) for a one- or three-year term.
•Dedicated Hosts — A Dedicated Host is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses.
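To make the purchasing options above concrete, here is a small, purely illustrative calculation comparing a hypothetical On-Demand hourly rate against a discounted Reserved Instance or Savings Plans rate. None of the numbers below are published AWS prices; they are placeholders for the kind of comparison you would run with your own rates.

# Illustrative cost comparison for EC2 purchasing options (hypothetical rates).
HOURS_PER_MONTH = 730  # average hours in a month

on_demand_rate = 0.10   # $/hour, hypothetical On-Demand rate
committed_rate = 0.04   # $/hour, hypothetical Reserved/Savings Plans effective rate

on_demand_monthly = on_demand_rate * HOURS_PER_MONTH
committed_monthly = committed_rate * HOURS_PER_MONTH
savings_pct = (1 - committed_monthly / on_demand_monthly) * 100

print(f"On-Demand:          ${on_demand_monthly:,.2f}/month")
print(f"Reserved/Savings:   ${committed_monthly:,.2f}/month")
print(f"Savings:            {savings_pct:.0f}%")

The same arithmetic applies to Spot pricing: substitute the current Spot price for the committed rate, keeping in mind that Spot capacity can be interrupted.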
About Amazon EC2
An Amazon EC2 instance is a virtual machine launched on AWS hardware.
AWS takes care of the hardware, whereas you focus on setting up Amazon EC2 to match your application needs.
Size
If, at some point, you realize you need more or fewer resources to support your app, you have the opportunity to scale your machine up or down by changing the EC2 instance type and size.
Instance types
There are many types of instances, each built to provide a specific set of resources, and they run on specific hardware (including Intel and Graviton processor families). There is a broad variety of instance types so that you can do the following:
•Pick the most suitable type of virtual machine for your specific application.
•Match your needs as precisely as possible.
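One practical way to compare candidate instance types before picking one is to query their specifications with the AWS SDK for Python (boto3). The sketch below is illustrative; the instance type names and Region are examples, not recommendations.

import boto3

# Compare a few candidate instance types by vCPU and memory before choosing one.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instance_types(
    InstanceTypes=["t3.micro", "m7g.large", "c7g.xlarge"]
)

for itype in response["InstanceTypes"]:
    name = itype["InstanceType"]
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"{name}: {vcpus} vCPUs, {mem_gib:.1f} GiB memory")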
What is Amazon EC2?
Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS)
Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to
launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add
capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic.
When usage decreases, you can reduce capacity (scale down) again.
The following diagram shows a basic architecture of an Amazon EC2 instance deployed within an Amazon Virtual Private Cloud (VPC).
In this example, the EC2 instance is within an Availability Zone in the Region. The EC2 instance is secured with a security group,
which is a virtual firewall that controls incoming and outgoing traffic. A private key is stored on the local computer and a public key is
stored on the instance. Both keys are specified as a key pair to prove the identity of the user. In this scenario, the instance is backed
by an Amazon EBS volume. The VPC communicates with the internet using an internet gateway. For more information about Amazon
VPC, see the Amazon VPC User Guide.
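As a rough sketch of launching the architecture described above with boto3, assuming the VPC subnet, security group, key pair, and AMI already exist (all IDs and names below are placeholders):

import boto3

# Launch a single EBS-backed instance into an existing VPC subnet,
# protected by a security group and accessible with an existing key pair.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # placeholder AMI ID
    InstanceType="t3.micro",
    KeyName="my-key-pair",                        # public key stored on the instance
    SubnetId="subnet-0123456789abcdef0",          # subnet in an Availability Zone
    SecurityGroupIds=["sg-0123456789abcdef0"],    # virtual firewall for the instance
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"},  # root Amazon EBS volume
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)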
Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with the Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1.
Determine your use case, then choose the matching instance type category: General purpose - High performance - In-memory databases - Machine learning (ML) - Distributed file systems
Accelerated computing instances are helpful when the focus is on the graphics processing unit (GPU).
Running ML models, computational fluid dynamics, graphical workloads, or other workloads needing a GPU are use cases for this instance type.
Storage optimized instances are helpful when focusing on maximizing the number of transactions per second (TPS) for I/O-intensive and business-critical workloads.
General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a variety of workloads. These instances are ideal for applications that
use these resources in equal proportions, such as web servers and code repositories.
Compute optimized instances are ideal for compute-bound applications that benefit from high performance processors. Instances belonging to this family are well suited for batch processing
workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning
inference, and other compute intensive applications.
HPC optimized instances, or high performance computing (HPC) instances, are purpose-built to offer the best price performance for running HPC workloads at scale on AWS. HPC instances are ideal for
applications that benefit from high-performance processors, such as large, complex simulations and deep learning workloads.
General purpose
Examples of the instance types: a1, m4, m5, t2, t3
Use case: high performance file systems
Compute optimized
Examples of the instance types: c4, c5
Use cases: high performance, network intensive workloads
•With dynamic scaling capabilities, the service can match the demand in a live manner, recognizing when resources are over-provisioned
or under-provisioned based on metrics such as CPU utilization.
•With Amazon EC2 Auto Scaling, you can add or remove EC2 instances automatically and replace unhealthy instances without
intervention, so you do not need to physically monitor and adjust capacity continuously.
•Scale horizontally to precisely match the current demand and avoid over-provisioning or under-provisioning. The process is all done
automatically.
•Amazon EC2 Auto Scaling detects impaired EC2 instances and unhealthy applications and replaces the instances without intervention.
•Amazon EC2 Auto Scaling provides several scaling options: manual, scheduled, dynamic or on demand, and predictive. When you know that you
will have significant (or not enough) traffic in a certain period, you can schedule the service to launch the resources in advance to be ready to serve the
traffic.
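As an illustration of the dynamic and scheduled options above, the sketch below attaches a target tracking policy and a scheduled action to an existing Auto Scaling group using boto3. The group name, dates, and capacities are placeholders you would replace with your own.

import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Dynamic scaling: keep average CPU utilization across the group near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)

# Scheduled scaling: launch extra capacity ahead of a known traffic spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",
    ScheduledActionName="black-friday-prewarm",
    StartTime=datetime(2025, 11, 28, 6, 0, tzinfo=timezone.utc),
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)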
With the elasticity of the cloud, you can also provision the amount of resources to match the demand as closely as possible. Also, because cloud
resources are disposable, you can be flexible when you launch or remove resources.
Serverless computing is building and running applications and services without managing servers.
For example, if you are running an EC2 instance for 5 hours, you pay for 5 hours. But what if your workload is such that in those 5 hours, your application actively used the resources for only 2 hours?
You still paid for all 5 hours of provisioned capacity. With serverless, the service (such as AWS Lambda) only actively uses the resources when the application needs them to do the job, so you pay for what you actually use.
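To see how the two billing models differ, here is a small, purely illustrative calculation. All rates are placeholders rather than published AWS prices; the point is the shape of the comparison, not the figures.

# Illustrative comparison: hourly server billing vs. pay-per-use serverless billing.
instance_rate = 0.10               # $/hour (hypothetical)
hours_running = 5
ec2_cost = instance_rate * hours_running   # billed for every hour the instance runs

price_per_gb_second = 0.0000167    # hypothetical serverless compute rate
price_per_million_requests = 0.20  # hypothetical per-request rate
requests = 100_000
avg_duration_s = 0.2
memory_gb = 0.5

gb_seconds = requests * avg_duration_s * memory_gb
serverless_cost = (gb_seconds * price_per_gb_second
                   + requests / 1_000_000 * price_per_million_requests)

print(f"Server billed by the hour for {hours_running} hours: ${ec2_cost:.2f}")
print(f"Serverless billed only for actual usage:           ${serverless_cost:.2f}")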
AWS Lambda is a fully managed serverless compute service
AWS Lambda benefits include the following:
•It supports multiple languages.
•It runs stateless code.
•When you upload the code in the language you prefer, Lambda can run your code on a schedule or in response to events, such as changes to data in an Amazon Simple Storage Service (Amazon
S3) bucket or Amazon DynamoDB table.
•It offers per-millisecond pricing of the code being run.
•It is a great solution to be used in event-driven architectures.
•Provisioning, scaling, and underlying resources are taken care of by Lambda itself.
•It provides high availability.
•Lambda is an on-demand compute service that runs custom code in response to events. Most AWS services generate events, and many can act as an event source for Lambda. Within Lambda,
your code is stored in a code deployment package. All interaction with the code occurs through the Lambda API and there is no direct invocation of functions from outside of the service. The
main purpose of Lambda functions is to process events.
Unlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an invocation. Lambda functions are purposefully limited to 15
minutes in duration, but on average, across all AWS customers, most invocations only last for less than a second. In some intensive compute operations, it may take several minutes to process a
single event, but in the majority of cases the duration is brief.
An event triggering a Lambda function could be almost anything, from an HTTP request through API Gateway, a schedule managed by an EventBridge rule, an IoT event, or an S3 event. Even
the smallest Lambda-based application uses at least one event.
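Because all interaction with the code occurs through the Lambda API, invoking a function from another application typically goes through an SDK call such as the boto3 sketch below. The function name and event fields are hypothetical.

import json
import boto3

# Invoke a Lambda function through the Lambda API with a custom JSON event.
lambda_client = boto3.client("lambda", region_name="us-east-1")

event = {"orderId": "12345", "total": 49.99}  # hypothetical event payload

response = lambda_client.invoke(
    FunctionName="order-processor",           # placeholder function name
    InvocationType="RequestResponse",          # synchronous invocation
    Payload=json.dumps(event),
)

print(json.loads(response["Payload"].read()))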
Building Lambda-based applications follows many of the best practices of building any event-based architecture. A number of development approaches have emerged to help developers create event-driven systems. Event storming, which is an interactive approach to domain-driven design (DDD), is one popular methodology. As you explore the events in your workload, you can group these as bounded contexts to develop the boundaries of the microservices in your application. To learn more about event-driven architectures, read What is an Event-Driven Architecture? and What do you mean by Event-Driven?
The event itself is a JSON object that contains information about what happened. Events are facts about a change in the system state, they are immutable, and the time when they happen is significant. The first parameter of every Lambda handler contains the event. An event could be custom-generated from another microservice, such as a new order generated in an ecommerce application.
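Continuing the ecommerce example, here is a minimal handler sketch (written in Python, one of the languages Lambda supports) whose first parameter is such an order event. The field names are hypothetical.

import json

# Minimal Lambda handler: the first parameter is the event (a JSON-derived dict),
# here a hypothetical "new order" event from another microservice.
def lambda_handler(event, context):
    order_id = event.get("orderId")
    total = event.get("total", 0)

    # Process the fact described by the event (for example, record it or start a workflow).
    print(f"Processing order {order_id} with total {total}")

    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order_id, "status": "processed"}),
    }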
Serverless application use cases
We will now explore serverless application use cases. Web applications, data processing applications, chatbots, and IT automation are all kinds of solutions that can be run on serverless technologies.
•Web applications: static websites, complex web applications, packages for Flask and Express
•Backends: applications and services, mobile, Internet of Things (IoT)
•Data processing: real time, MapReduce, batch, ML inference
•Chatbots: powering chatbot logic
•Amazon Alexa: powering voice-enabled applications, Alexa Skills Kit
•IT automation: policy engines, extending AWS services, infrastructure management

Container orchestration
Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) are the container orchestration services that help you schedule, maintain, and scale the fleet of nodes running your containers. They also give you a centralized way of monitoring and controlling how you want your containers launched.
•Orchestrate the execution of containers
•Maintain and scale the fleet of nodes running your containers
•Remove the complexity of standing up the infrastructure

Amazon ECS
Amazon ECS is an AWS container orchestration tool giving you seamless control over your containerized applications.
Amazon EKS
Amazon EKS is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
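To make the orchestration idea concrete, here is a boto3 sketch that registers a simple task definition and runs it on AWS Fargate through Amazon ECS. The cluster name, subnet, security group, and container image are placeholders, and a real setup may also need a task execution role.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Describe the container to run: image, size, and port mapping.
ecs.register_task_definition(
    family="hello-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "web",
        "image": "public.ecr.aws/nginx/nginx:latest",  # placeholder public image
        "essential": True,
        "portMappings": [{"containerPort": 80}],
    }],
)

# Ask ECS to schedule and run the task on Fargate; no nodes to manage.
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="hello-web",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)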
Which of these are benefits associated with serverless resources? (Select THREE.)
•There are no physical servers to provision or manage.
•Serverless resources can scale based on usage.
•You have to pay for idle servers.
•Serverless resources come with built-in availability.
•Serverless resources do not come with built-in fault tolerance.
•Serverless resources do not scale.
There are no physical servers to provision or manage. Serverless resources can scale based on usage. Serverless resources come with built-in availability. However, you do not have to pay for idle servers. Serverless resources do come with built-in fault tolerance.

What are the benefits of Amazon EC2 Auto Scaling? (Select THREE.)
•With Amazon EC2 Auto Scaling, you can add or remove EC2 instances and unhealthy applications automatically and replace the instances without intervention.
•You can scale horizontally to precisely match the current demand and avoid overprovisioning or underprovisioning.
•You will need to physically monitor and adjust capacities continuously.
•With Amazon EC2, you can add or remove EC2 instances and unhealthy applications and replace the instances with intervention.
•The service will be able to match the demand in a live manner.
•Amazon EC2 does not need to maintain and scale the fleet of nodes running your containers.
With Amazon EC2, you can add or remove EC2 instances and unhealthy applications automatically and replace the instances without intervention. You can scale horizontally to precisely match the current demand and avoid overprovisioning or underprovisioning. The service will be able to match the demand in a live manner. However, Amazon EC2 does maintain and scale the fleet of nodes running your containers.

What are advantages of AWS Lambda? (Select THREE.)
•Lambda offers per-millisecond pricing of the code being run.
•Lambda supports two languages.
•Lambda can run your code in response to events.
•Lambda provides medium availability.
•Lambda offers per-minute pricing of the code being run.
•Lambda can run your code on a schedule.
Lambda offers per-millisecond pricing of the code being run. Lambda can run your code on a schedule. Lambda can run your code in response to events. However, Lambda supports multiple languages and provides high availability.
What are three main benefits of Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS)? (Select THREE.)
Amazon ECS and Amazon EKS give you a centralized way of monitoring and controlling how you want your containers launched.
Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes
control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks. With Amazon EKS, you can take advantage of all the performance, scale,
reliability, and availability of AWS infrastructure and the integrations with AWS networking and security services.
Amazon ECS and Amazon EKS give you a decentralized way of monitoring and controlling how you want your containers launched.
Amazon EKS does not need to be fully managed.
Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. It deeply integrates with the rest of the AWS platform to provide a secure
and easy-to-use solution for running container workloads in the cloud and now on your infrastructure with Amazon ECS Anywhere.
Amazon ECS and Amazon EKS add to the complexity of standing up the infrastructure.
Amazon ECS and Amazon EKS give you a centralized way of monitoring and controlling how you want your containers launched. If you have in-house skills running Kubernetes, there is a fully managed
Amazon EKS for you. Amazon ECS and Amazon EKS are container orchestrating services that help you schedule the fleet of nodes running your containers. Amazon ECS and Amazon EKS remove the
complexity of standing up the infrastructure.
Core Services Overview: Storage
Amazon Simple Storage Service overview
Scalable, highly durable object storage in the cloud.
Amazon S3 - Amazon Simple Storage Service (Amazon S3) is a fully managed, serverless, low-cost, object-level storage service. With Amazon S3, you store unlimited
amounts of data (with different formats) on AWS. Amazon S3 offers multiple storage options.
Amazon S3 Standard
Amazon S3 Standard is appropriate for various use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big
data analytics.
Amazon S3 Intelligent-Tiering
Amazon S3 Intelligent-Tiering delivers milliseconds latency and high throughput performance for frequently, infrequently, and rarely accessed data in the Frequent,
Infrequent, and Archive Instant Access tiers. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data
analytics, new applications, and user-generated content.
Amazon S3 Standard-IA
Amazon S3 Standard-IA is for data accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low
latency of Amazon S3 Standard with a low per GB storage price and per GB retrieval charge. This combination of low cost and high performance makes S3 Standard-IA
ideal for long-term storage, backups, and data store for disaster recovery files.
Amazon S3 Outposts
Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts environment. Using the S3 APIs and features available in AWS Regions today, S3 on Outposts makes it easy to store and retrieve data on your Outpost, secure the data, control access, tag it, and report on it.
AWS Outposts rack uses an industry-standard 42U form factor. It provides the same AWS infrastructure, services, APIs, and tools to virtually any data center or co-location space. Outposts rack provides AWS compute, storage, database, and other services locally, while still allowing you to access the full range of AWS services available in the Region for a truly consistent hybrid experience. Scale from a single 42U rack to multiple rack deployments of up to 96 racks to create pools of compute and storage capacity.
AWS Outposts servers come in a 1U or 2U form factor. They provide the same AWS infrastructure, services, APIs, and tools to on-premises and edge locations that have limited space or smaller capacity requirements, such as retail stores, branch offices, healthcare provider locations, or factory floors. Outposts servers provide local compute and networking services.
The S3 storage classes include S3 Intelligent-Tiering for automatic cost savings for data with unknown or changing access patterns, S3 Standard for frequently accessed data, S3 Express One Zone for your most frequently accessed data, S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for less frequently accessed data, S3 Glacier Instant Retrieval for archive data that needs immediate access, S3 Glacier Flexible Retrieval (formerly S3 Glacier) for rarely accessed long-term data that does not require immediate access, and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation with retrieval in hours at the lowest cost storage in the cloud.
Amazon S3 provides the most durable storage in the cloud. Based on its unique architecture, S3 is designed to exceed 99.999999999% (11 nines) data durability.
Additionally, S3 stores data redundantly across a minimum of 3 Availability Zones by default, providing built-in resilience against widespread disaster. Customers can
store data in a single AZ to minimize storage cost or latency, in multiple AZs for resilience against the permanent loss of an entire data center, or in multiple AWS
Regions to meet geographic resilience requirements. If you have data residency requirements that can’t be met by an existing AWS Region, you can use the S3
Outposts storage class to store your S3 data on premises.
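As an illustration of choosing a storage class at upload time, here is a boto3 sketch. The bucket name, object keys, and object bodies are placeholders; the storage class names map to the classes described above.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Frequently accessed data: the default S3 Standard storage class.
s3.put_object(Bucket="my-example-bucket", Key="reports/latest.csv", Body=b"col1,col2\n1,2\n")

# Data with unknown or changing access patterns: S3 Intelligent-Tiering.
s3.put_object(
    Bucket="my-example-bucket",
    Key="logs/2025/app.log",
    Body=b"log line\n",
    StorageClass="INTELLIGENT_TIERING",
)

# Long-lived, less frequently accessed data: S3 Standard-IA.
s3.put_object(
    Bucket="my-example-bucket",
    Key="backups/archive.tar.gz",
    Body=b"backup bytes",
    StorageClass="STANDARD_IA",
)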
Feature comparison
Form factors
•AWS Outposts rack: The Outposts rack is 80 inches tall, 24 inches wide, and 48 inches deep. Inside are hosts, switches, a network patch panel, a power shelf, and blank panels.
•AWS Outposts servers: The Outposts rack-mountable servers fit inside 19" width, EIA-310 cabinets. The 1U high server is 24" deep and uses AWS Graviton2 processors. The 2U high server is 30" deep and uses 3rd generation Intel Xeon Scalable processors.
Installation
•AWS Outposts rack: AWS delivers Outposts racks fully assembled and ready to be rolled into final position. Racks are installed by AWS and simply need to be plugged into power and network.
•AWS Outposts servers: AWS delivers Outposts servers directly to you, installed by either onsite personnel or a 3rd-party vendor. Once connected to your network, AWS will remotely provision compute and storage resources.
Networking
•AWS Outposts rack: Includes integrated networking gear. Supports Local Gateway, which requires Border Gateway Protocol (BGP) over a routed network.
•AWS Outposts servers: Does not include integrated networking gear. Supports a simplified network integration experience providing a local Layer 2 presence.
Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create
a file system on top of these volumes, run a database, or use them in any other way you would use block storage. Amazon EBS
volumes are placed in a specific Availability Zone where they are automatically replicated to protect you from the failure of a single
component. All EBS volume types offer durable snapshot capabilities and are designed for high availability.
Purpose-focused volume types - network-attached volumes that provide durable block-level storage for Amazon EC2 instances
Amazon EBS has the following benefits:
•Persistent network-attached block storage for instances that can persist even after the EC2 instance to which this storage is
attached is terminated
•Different drive types
•Scalable
•Pay only for what you provision
•Snapshot functionality
•Encryption available to enhance security
More information about Amazon EBS includes the following:
•As data is very important, you have the opportunity to take incremental snapshots of the volume. You can keep them indefinitely
while having the opportunity to recover the volumes when needed.
•When you encrypt an Amazon EBS volume, all data in the volume and all data traveling between the instance and the volume are encrypted.
•When you encrypt an EBS volume, snapshots taken from that volume are also encrypted.
•Just like with Amazon EC2, you have control over how much and what type of storage (SSD or HDD) you provision, and if you need to scale it,
you can modify your volume.
https://aws.amazon.com/ebs/volume-types/
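A short boto3 sketch tying these capabilities together (encrypted volume, incremental snapshot, and in-place volume modification); the Availability Zone and sizes are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an encrypted gp3 volume; data at rest, data in transit to the
# instance, and snapshots of this volume will all be encrypted.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,              # GiB
    VolumeType="gp3",
    Encrypted=True,
)
volume_id = volume["VolumeId"]

# Wait until the volume is available before working with it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# Take an incremental, point-in-time snapshot of the volume.
ec2.create_snapshot(VolumeId=volume_id, Description="nightly backup")

# Scale the volume later without recreating it (size, type, IOPS, or throughput).
ec2.modify_volume(VolumeId=volume_id, Size=200)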
Amazon EBS volume types - Solid State Drives (SSD)
EBS Provisioned IOPS SSD (io2 Block Express)
Short description: Highest performance SSD volume designed for business-critical, latency-sensitive transactional workloads
Durability: 99.999%
Use cases: Largest, most I/O intensive, mission critical deployments of NoSQL and relational databases such as Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics
Price: $0.125/GB-month; $0.065/provisioned IOPS-month up to 32,000 IOPS; $0.046/provisioned IOPS-month from 32,001 to 64,000; $0.032/provisioned IOPS-month for greater than 64,000 IOPS

EBS Provisioned IOPS SSD (io1)
Short description: Highest performance SSD volume designed for latency-sensitive transactional workloads
Durability: 99.8% - 99.9%
Use cases: I/O-intensive NoSQL and relational databases
Price: $0.125/GB-month; $0.065/provisioned IOPS-month

EBS General Purpose SSD (gp3)
Short description: Lowest cost SSD volume that balances price performance for a wide variety of transactional workloads
Durability: 99.8% - 99.9%
Use cases: Virtual desktops, medium sized single instance databases such as Microsoft SQL Server and Oracle, latency sensitive interactive applications, boot volumes, and dev/test environments
Price: $0.08/GB-month; 3,000 IOPS free and $0.005/provisioned IOPS-month over 3,000; 125 MB/s free and $0.04/provisioned MB/s-month over 125

EBS General Purpose SSD (gp2)
Short description: General Purpose SSD volume that balances price performance for a wide variety of transactional workloads
Durability: 99.8% - 99.9%
Use cases: Virtual desktops, medium sized single instance databases such as Microsoft SQL Server and Oracle, latency sensitive interactive applications, boot volumes, and dev/test environments
Price: $0.10/GB-month
What are the two main benefits of Amazon Elastic Block Store (Amazon EBS)? (Select TWO.)
Pay only for what you provision. As with Amazon EC2, you have control over how much and what type of solid-state drive (SSD) or hard disk drive (HDD) storage you provision. And if you
need to scale it, you can modify your volume.
Data is very important. So you can take incremental snapshots of the volume, and the snapshots are encrypted. You can keep them indefinitely and have the opportunity to recover the volumes
when needed. However, Amazon EBS uses different drive types. Persistent network-attached block storage for instances can persist even after the EC2 instance to which this storage is attached is
terminated. When you need a serverless shared file system, you can use Amazon Elastic File System (Amazon EFS), not Amazon EBS.
Building a data lake, archiving data, and content storage and distribution are Amazon S3 use cases. Backup and archiving is also an Amazon S3 use case, and you can make sure your archives will not
be deleted for a period of time (vault lock). Restoring critical data is an Amazon S3 use case.
Which of these are reasons for using Amazon EFS?
•When you need to build high-performing and cost-optimized file systems on AWS, benefitting from the built-in elasticity, durability, and availability
•When you don't need storage classes
•When full AWS compute integration is not a priority
•When you need a serverless shared file system
•Use fully managed EFS because it is not compatible with Network File System (NFS) or Server Message Block (SMB).
A reason for using Amazon EFS is when you need to build high-performing and cost-optimized file systems on AWS, benefitting from the built-in elasticity, durability, and availability.
A reason for using Amazon EFS is when you need a serverless shared file system.
However, Amazon EFS is used when you need four storage classes and when full AWS compute integration is a priority. Fully managed EFS is compatible with Network File System (NFS).