AWS Cloud Practitioner Essentials

EC2 Instances:

General purpose instances provide a balance of compute, memory, and networking resources. You can use them for a variety of workloads, such as:
● application servers
● gaming servers
● backend servers for enterprise applications
● small and medium databases

Suppose that you have an application in which the resource needs for compute, memory, and
networking are roughly equivalent. You might consider running it on a general purpose instance
because the application does not require optimization in any single resource area.
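If your resource needs are balanced like this, you might launch a general purpose instance. Below is a minimal boto3 sketch; the Region, AMI ID, key pair name, and the t3.medium instance type are placeholder assumptions, not values from this course.

```python
# Minimal sketch: launching a general purpose EC2 instance with boto3.
# The AMI ID, key pair name, and instance type are placeholders -- substitute
# values from your own account and Region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.medium",          # a general purpose instance type
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```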

Compute optimized instances are ideal for compute-bound applications that benefit from
high-performance processors. Like general purpose instances, you can use compute optimized
instances for workloads such as web, application, and gaming servers.

However, the difference is that compute optimized instances are ideal for high-performance web servers, compute-intensive application servers, and dedicated gaming servers. You can also use compute optimized instances for batch processing workloads that require processing many transactions in a single group.

Memory optimized instances are designed to deliver fast performance for workloads that
process large datasets in memory. In computing, memory is a temporary storage area. It holds
all the data and instructions that a central processing unit (CPU) needs to be able to complete
actions. Before a computer program or application is able to run, it is loaded from storage into
memory. This preloading process gives the CPU direct access to the computer program.

Suppose that you have a workload that requires large amounts of data to be preloaded before
running an application. This scenario might be a high-performance database or a workload that
involves performing real-time processing of a large amount of unstructured data. In these types
of use cases, consider using a memory optimized instance. Memory optimized instances enable
you to run workloads with high memory needs and receive great performance.

Accelerated computing instances use hardware accelerators, or coprocessors, to perform some functions more efficiently than is possible in software running on CPUs. Examples of these functions include floating-point number calculations, graphics processing, and data pattern matching.

In computing, a hardware accelerator is a component that can expedite data processing.


Accelerated computing instances are ideal for workloads such as graphics applications, game
streaming, and application streaming.

Storage optimized instances are designed for workloads that require high, sequential read
and write access to large datasets on local storage. Examples of workloads suitable for storage
optimized instances include distributed file systems, data warehousing applications, and high-
frequency online transaction processing (OLTP) systems.

In computing, the term input/output operations per second (IOPS) is a metric that measures
the performance of a storage device. It indicates how many different input or output operations
a device can perform in one second. Storage optimized instances are designed to deliver tens of
thousands of low-latency, random IOPS to applications.

You can think of input operations as data put into a system, such as records entered into a
database. An output operation is data generated by a server. An example of output might be
the analytics performed on the records in a database. If you have an application that has a high IOPS requirement, a storage optimized instance can provide better performance than other instance types that are not optimized for this kind of use case.
Scalability

Scalability involves beginning with only the resources you need and designing your
architecture to automatically respond to changing demand by scaling out or in. As a result, you
pay for only the resources you use. You don’t have to worry about a lack of computing capacity
to meet your customers’ needs.
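One way to get this behavior on AWS is an Amazon EC2 Auto Scaling group. Below is a minimal boto3 sketch, assuming an existing launch template named "web-template" and placeholder subnet IDs; the group and capacity values are illustrative.

```python
# Minimal sketch: creating an Auto Scaling group with boto3 so capacity can
# scale out and in with demand. The launch template name and subnet IDs are
# placeholder assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=1,              # start with only the capacity you need
    MaxSize=4,              # upper bound when demand spikes
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0123abcd,subnet-4567efgh",  # placeholder subnets
)
```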

Elastic Load Balancing

Elastic Load Balancing is the AWS service that automatically distributes incoming application
traffic across multiple resources, such as Amazon EC2 instances.

A load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling
group. This means that as you add or remove Amazon EC2 instances in response to the amount
of incoming traffic, these requests route to the load balancer first. Then, the requests spread
across multiple resources that will handle them. For example, if you have multiple Amazon EC2
instances, Elastic Load Balancing distributes the workload across the multiple instances so that
no single instance has to carry the bulk of it.

Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work
together to help ensure that applications running in Amazon EC2 can provide high performance
and availability.
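The sketch below shows one way the two services can be wired together with boto3: a target group is created and attached to the hypothetical "web-asg" group from the earlier example, so instances the group launches are registered with the load balancer automatically. The VPC ID and names are placeholders, and creating the load balancer and its listener is omitted for brevity.

```python
# Minimal sketch: attaching an Elastic Load Balancing target group to an
# Auto Scaling group so incoming traffic is spread across its instances.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target group that the load balancer routes requests to.
target_group = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    TargetType="instance",
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# Instances launched by the Auto Scaling group are now registered with the
# target group automatically.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[tg_arn],
)
```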
Serverless computing
Earlier in this module, you learned about Amazon EC2, a service that lets you run virtual servers
in the cloud. If you have applications that you want to run in Amazon EC2, you must do the
following:

● Provision instances (virtual servers).
● Upload your code.
● Continue to manage the instances while your application is running.

The term “serverless” means that your code runs on servers, but you do not need to provision
or manage these servers. With serverless computing, you can focus more on innovating new
products and features instead of maintaining servers.

An AWS service for serverless computing is AWS Lambda.

AWS Lambda is a service that lets you run code without needing to provision or manage servers.
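A Lambda function in Python is just a handler that the service invokes with an event and a context object; you never manage the server it runs on. The minimal sketch below assumes a hypothetical event with a "name" field.

```python
# Minimal sketch of an AWS Lambda handler in Python. Lambda invokes the
# handler with an event and a context object. The event shape is an assumption.
import json

def lambda_handler(event, context):
    # Read a hypothetical "name" field from the incoming event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```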

Containers

Containers provide you with a standard way to package your application's code and
dependencies into a single object. You can also use containers for processes and workflows in
which there are essential requirements for security, reliability, and scalability.
Docker is a software platform that enables you to build, test, and deploy applications quickly.
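As a small illustration, the sketch below uses the Docker SDK for Python (the "docker" package, installed separately) to run a containerized web server locally; the image choice and port mapping are just illustrative.

```python
# Minimal sketch: running a container locally with the Docker SDK for Python.
import docker

client = docker.from_env()

# Pull the image and start a container, mapping container port 80 to host 8080.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
)
print(container.short_id)
```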

AWS Fargate

AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS.
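The sketch below registers an Amazon ECS task definition that can run on Fargate, so no EC2 instances need to be provisioned for the container. The family name, image, and execution role ARN are placeholder assumptions.

```python
# Minimal sketch: registering an ECS task definition compatible with Fargate.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="web-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",          # required for Fargate tasks
    cpu="256",                     # 0.25 vCPU
    memory="512",                  # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
```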
Availability Zones

An Availability Zone is a single data center or a group of data centers within a Region. Availability Zones are located tens of miles apart from each other. This is close enough to have low latency (the time between when content is requested and when it is received) between Availability Zones. However, if a disaster occurs in one part of the Region, they are distant enough to reduce the chance that multiple Availability Zones are affected.
Edge locations

An edge location is a site that Amazon CloudFront uses to store cached copies of your content
closer to your customers for faster delivery.
AWS Direct Connect
AWS Direct Connect is a service that lets you establish a dedicated private connection between your data center and a VPC.

Security groups
A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon
EC2 instance.
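The minimal boto3 sketch below creates a security group and allows inbound HTTPS from anywhere; the VPC ID is a placeholder, and by default all outbound traffic is already allowed.

```python
# Minimal sketch: creating a security group and an inbound HTTPS rule.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```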
Instance stores and Amazon EBS

An instance store provides temporary block-level storage for an Amazon EC2 instance. An
instance store is disk storage that is physically attached to the host computer for an EC2
instance, and therefore has the same lifespan as the instance. When the instance is terminated,
you lose any data in the instance store.

Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage
volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2
instance, all the data on the attached EBS volume remains available.
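Below is a minimal boto3 sketch that creates an EBS volume and attaches it to a running instance. The Availability Zone, instance ID, and device name are placeholder assumptions; the volume must be created in the same Availability Zone as the instance.

```python
# Minimal sketch: creating an EBS volume and attaching it to an EC2 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,               # size in GiB
    VolumeType="gp3",
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```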

In object storage, each object consists of data, metadata, and a key.

The data might be an image, video, text document, or any other type of file. Metadata contains
information about what the data is, how it is used, the object size, and so on. An object’s key is
its unique identifier.

Amazon Simple Storage Service (Amazon S3) is a service that provides object-level storage. Amazon S3 stores data as objects in buckets.
You can upload any type of file to Amazon S3, such as images, videos, text files, and so on. For
example, you might use Amazon S3 to store backup files, media files for a website, or archived
documents. Amazon S3 offers unlimited storage space. The maximum file size for an object in
Amazon S3 is 5 TB.
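The minimal boto3 sketch below creates a bucket and stores an object with a key and metadata. Bucket names must be globally unique, so the name here is a placeholder, and the file contents are invented for illustration.

```python
# Minimal sketch: creating an S3 bucket and uploading an object.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="example-coffee-shop-backups")  # placeholder bucket name

# The object's key is its unique identifier; metadata describes the data.
s3.put_object(
    Bucket="example-coffee-shop-backups",
    Key="backups/2024/inventory.csv",
    Body=b"product,size,price\nespresso,small,2.50\n",
    Metadata={"source": "inventory-system"},
)
```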

In a relational database, data is stored in a way that relates it to other pieces of data.

An example of a relational database might be the coffee shop’s inventory management system.
Each record in the database would include data for a single item, such as product name, size,
price, and so on.

Relational databases use structured query language (SQL) to store and query data. This
approach allows data to be stored in an easily understandable, consistent, and scalable way.

Nonrelational databases are sometimes referred to as “NoSQL databases” because they use
structures other than rows and columns to organize data. One type of structural approach for
nonrelational databases is key-value pairs. With key-value pairs, data is organized into items
(keys), and items have attributes (values). You can think of attributes as being different
features of your data.

In a key-value database, you can add or remove attributes from items in the table at any time. Additionally, not every item in the table has to have the same attributes.
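The minimal boto3 sketch below writes two items with different attributes to an Amazon DynamoDB table. The table name, attributes, and the assumption that the table already exists with "product" as its partition key are all placeholders.

```python
# Minimal sketch: items in a DynamoDB table do not need the same attributes.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("CoffeeShopInventory")   # placeholder table name

table.put_item(Item={"product": "espresso", "size": "small", "price": "2.50"})
table.put_item(Item={"product": "mug", "color": "blue"})   # different attributes

response = table.get_item(Key={"product": "mug"})
print(response.get("Item"))
```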
