UCS531-Cloud Computing Amazon Web Services
A large enterprise spread across the globe can use AWS to
deliver training to its distributed workforce.
One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz
2007 Opteron or 2007 Xeon processor
EC2 High Performance Instances
▪ High-Memory Instances:
– High-Memory Extra Large Instance: 17.1 GiB of memory, 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform
– High-Memory Double Extra Large Instance: 34.2 GiB of memory, 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform
– High-Memory Quadruple Extra Large Instance: 68.4 GiB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform
▪ High-CPU Instances:
– High-CPU Medium Instance: 1.7 GiB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of local instance storage, 32-bit or 64-bit platform
– High-CPU Extra Large Instance: 7 GiB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform
▪ High Storage Instances:
– High Storage Eight Extra Large: 117 GiB of memory, 35 EC2 Compute Units, 24 x 2 TB of hard disk drive local instance storage, 64-bit platform, 10 Gigabit Ethernet
▪ High I/O Instances:
– High I/O Quadruple Extra Large: 60.5 GiB of memory, 35 EC2 Compute Units, 2 x 1024 GB of SSD-based local instance storage, 64-bit platform, 10 Gigabit Ethernet
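The specifications above can be captured as a small lookup table with a helper that picks the smallest instance meeting a memory requirement. The figures come from the list in this section; the classic API names (m2.*, c1.*) commonly associated with these families and the helper function itself are illustrative assumptions, not an AWS API.

```python
# Instance specs transcribed from the list above; the m2.*/c1.* names
# are the classic API identifiers assumed for these families.
INSTANCE_SPECS = {
    "m2.xlarge":  {"mem_gib": 17.1, "ecu": 6.5, "storage_gb": 420},
    "m2.2xlarge": {"mem_gib": 34.2, "ecu": 13,  "storage_gb": 850},
    "m2.4xlarge": {"mem_gib": 68.4, "ecu": 26,  "storage_gb": 1690},
    "c1.medium":  {"mem_gib": 1.7,  "ecu": 5,   "storage_gb": 350},
    "c1.xlarge":  {"mem_gib": 7.0,  "ecu": 20,  "storage_gb": 1690},
}

def smallest_with_memory(min_mem_gib):
    """Return the instance with the least memory that still offers
    at least min_mem_gib GiB, or None if nothing qualifies."""
    candidates = [(spec["mem_gib"], name)
                  for name, spec in INSTANCE_SPECS.items()
                  if spec["mem_gib"] >= min_mem_gib]
    return min(candidates)[1] if candidates else None
```

For example, a workload needing 30 GiB of memory would land on the High-Memory Double Extra Large (34.2 GiB) rather than the larger 68.4 GiB instance.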
EC2 Cluster Instances
▪ Cluster Compute Instances provide proportionally high CPU resources with
increased network performance and are well suited for High Performance
Compute (HPC) applications and other demanding network-bound applications.
– Cluster Compute Eight Extra Large 60.5 GiB memory, 88 EC2 Compute
Units, 3370 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet
▪ High Memory Cluster Instances provide proportionally high CPU and
memory resources with increased network performance, and are well suited for
memory-intensive applications including in-memory analytics, graph analysis,
and scientific computing.
– High Memory Cluster Eight Extra Large: 244 GiB of memory, 88 EC2 Compute Units, 240 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet
▪ Cluster GPU Instances provide general-purpose graphics processing units
(GPUs) with proportionally high CPU and increased network performance for
applications benefitting from highly parallelized processing, including HPC,
rendering and media processing applications.
– Cluster GPU Quadruple Extra Large 22 GiB memory, 33.5 EC2 Compute
Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance
storage, 64-bit platform, 10 Gigabit Ethernet.
EC2 Payment methods
[Figure: AWS storage services overview, showing Amazon EFS, Amazon Glacier, and database services]
Amazon Elastic Block Store (EBS)
▪ Provides block-level storage volumes (1 GB to 1 TB) for use with Amazon EC2 instances.
– Multiple volumes can be mounted to the same instance.
– EBS volumes are network-attached, and persist independently from the life of an instance.
– Storage volumes behave like raw, unformatted block devices, allowing users to create a file system on top of Amazon EBS volumes, or to use them in any other way a block device (like a hard drive) can be used.
▪ EBS volumes are placed in a specific Availability Zone, and can then be
attached to instances also in that same Availability Zone.
▪ Each storage volume is automatically replicated within the same
Availability Zone.
▪ EBS provides the ability to create point-in-time snapshots of volumes,
which are persisted to Amazon S3.
– These snapshots can be used as the starting point for new Amazon EBS volumes, and protect data for long-term durability.
– The same snapshot can be used to instantiate as many volumes as you wish.
– These snapshots can be copied across AWS regions.
EBS Volumes
▪ Standard volumes offer storage for applications with moderate or bursty I/O
requirements.
– Standard volumes deliver approximately 100 IOPS on average.
– They are well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
▪ Provisioned IOPS volumes are designed to deliver predictable, high
performance for I/O intensive workloads such as databases.
– You specify an IOPS rate when creating a volume, and EBS provisions that rate
for the lifetime of the volume.
– Amazon EBS currently supports up to 4000 IOPS per Provisioned IOPS
volume.
– You can stripe multiple volumes together to deliver thousands of IOPS per EC2
instance.
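The striping arithmetic above is simple to sketch: given the per-volume Provisioned IOPS cap stated in this section, the number of volumes to stripe together (e.g. in a RAID 0 set) is just a ceiling division. The function name is illustrative, not an AWS API.

```python
import math

PROVISIONED_IOPS_CAP = 4000  # per-volume limit stated in this section

def volumes_for_iops(target_iops, per_volume_cap=PROVISIONED_IOPS_CAP):
    """Number of Provisioned IOPS volumes that must be striped
    together to deliver target_iops, given the per-volume cap."""
    return math.ceil(target_iops / per_volume_cap)
```

So an EC2 instance needing 10,000 IOPS would stripe across three 4,000-IOPS volumes.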
▪ To enable your EC2 instances to fully utilize the IOPS provisioned on an EBS volume:
– Launch selected Amazon EC2 instance types as “EBS-Optimized” instances.
– EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used.
▪ EBS charges are based on per GB-month of provisioned storage AND per 1 million I/O requests.
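The two billed dimensions combine into a straightforward monthly estimate. The prices below are placeholders for illustration only, not real AWS rates.

```python
def ebs_monthly_cost(gb_provisioned, io_requests,
                     price_per_gb_month=0.10, price_per_million_io=0.10):
    """Estimate a standard EBS volume's monthly bill from the two
    billed dimensions: GB-months of provisioned storage and I/O
    requests (billed per 1 million). Prices are placeholder values."""
    storage_cost = gb_provisioned * price_per_gb_month
    io_cost = (io_requests / 1_000_000) * price_per_million_io
    return storage_cost + io_cost
```

A 100 GB volume serving 2 million I/O requests in a month would, at these placeholder rates, cost 100 x 0.10 + 2 x 0.10 = 10.20.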
Block storage vs Object storage
Unlimited number of objects with write, read, and delete facility and containing 1
byte to 5 terabytes of data.
Each object is stored in a bucket and retrieved via a unique, developer-assigned key.
Objects stored in a specific region never leave the Region unless transferred.
Authentication mechanisms are provided to ensure that data is kept secure from
unauthorized access.
Uses standards-based REST and SOAP interfaces to work with any Internet-development toolkit.
Open https://portal.aws.amazon.com/billing/signup.
Follow the online instructions.
Create an IAM user with administrative privileges.
Create a bucket
To create a bucket
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/
Choose Create bucket.
In Bucket name, enter a name for your bucket. The name must:
Be unique across all of Amazon S3.
Be between 3 and 63 characters long.
Not contain uppercase characters.
Start with a lowercase letter or number.
After you create the bucket, you can't change its name.
In Region, choose the AWS Region where you want the bucket to
reside.
In Bucket settings for Block Public Access, keep the values set to
the defaults.
Choose Create bucket.
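The naming constraints listed above can be checked locally before the console (or an API call) rejects the name. This validator is a sketch covering only the rules stated in this section; global uniqueness can only be checked by S3 itself.

```python
import re

def is_valid_bucket_name(name):
    """Check a proposed S3 bucket name against the constraints
    listed above: 3-63 characters long, no uppercase characters,
    and starting with a lowercase letter or number.
    (Uniqueness across all of Amazon S3 is not checkable locally.)"""
    if not 3 <= len(name) <= 63:
        return False
    if name != name.lower():          # no uppercase characters
        return False
    return bool(re.match(r"^[a-z0-9]", name))
```

For example, "my-training-bucket" passes, while "My-Bucket" (uppercase) and "ab" (too short) fail.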
Upload an object to a bucket
To upload an object to a bucket
In the Bucket list, select the bucket that you want to upload
your object to.
On the Overview tab for your bucket, choose Upload or Get
Started.
In the Upload dialog box, choose Add files.
Choose a file to upload, and then choose Open.
Choose Upload.
To download an object from a bucket
In the Buckets list, choose the name of the bucket that
you created.
In the Name list, choose the name of the object that
you uploaded.
For your selected object, the object overview panel
opens.
On the Overview tab, review information about your
object.
To view the object in your browser, choose Open.
To download the object to your computer,
choose Download.
Delete an object and Bucket
Delete an object.
In the Buckets list, choose the bucket that you want to
delete an object from.
In the Name list, select the check box for the object that
you want to delete.
Choose Actions, and then choose Delete.
In the Delete objects dialog box, verify the name of
the object, and then choose Delete.
• Newly created S3 buckets and objects are private and protected by default
• Using Amazon S3 Block Public Access settings. These settings override any other policies or object permissions. Enable Block
Public Access for all buckets that you don't want to be publicly accessible. This feature provides a straightforward
method for avoiding unintended exposure of Amazon S3 data.
• Writing AWS Identity and Access Management (IAM) policies that specify the users or roles that can access specific
buckets and objects.
• Writing bucket policies that define access to specific buckets or objects. This option is typically used when the user or
system cannot authenticate by using IAM. Bucket policies can be configured to grant access across AWS accounts or to
grant public or anonymous access to Amazon S3 data. If bucket policies are used, they should be written carefully and
tested fully. You can specify a deny statement in a bucket policy to restrict access. Access will be restricted even if the
users have permissions that are granted in an identity-based policy that is attached to the users.
• Creating S3 Access Points. Access points are unique hostnames that enforce distinct permissions and network controls
for requests that are made through them. Customers with shared datasets can scale access for many applications by creating
individualized access points with names and permissions that are customized for each application.
• Setting access control lists (ACLs) on your buckets and objects. ACLs are less commonly used (ACLs predate IAM). If
you use ACLs, do not set access that is too open or permissive.
• AWS Trusted Advisor provides a bucket permission check feature. It is a useful tool for discovering if any of the buckets
in your account have permissions that grant global access.
Three general approaches to configuring access
Configure the appropriate security settings for your use case on the bucket and objects.
[Figure: the three access configurations - Private, Controlled access, and Public - contrasting what User A, User B, and anyone else can reach]
Consider encrypting objects in Amazon S3
• Server-side encryption
• On the bucket, enable this feature by selecting the Default encryption option
• Amazon S3 encrypts objects before it saves the objects to disk, and decrypts the objects when
you download them
• Client-side encryption
• Encrypt data on the client side and upload the encrypted data to Amazon S3
• In this case, you manage the encryption process
Amazon S3 benefits
• Durability: It ensures data is not lost; S3 Standard storage provides 11 9s (or 99.999999999%) of durability.
• Availability: You can access your data when needed; the S3 Standard storage class is designed for four 9s (or 99.99%) availability.
• Scalability: It offers virtually unlimited capacity; any single object can be 5 TB or less.
• Security: It offers fine-grained access control.
• Performance: It is supported by many design patterns.
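The eleven-9s durability figure has a concrete reading: treating it as an annual per-object durability, the expected number of objects lost per year is the object count times (1 - durability). This interpretation of the figure is the common one AWS uses in its own examples.

```python
def expected_annual_loss(num_objects, durability=0.99999999999):
    """Expected number of objects lost per year, reading the 11-9s
    figure as an annual per-object durability: n * (1 - durability)."""
    return num_objects * (1 - durability)
```

Storing ten billion objects, you would expect to lose roughly 0.1 objects per year, i.e. a single object once every ten years on average.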
Amazon Virtual Private Cloud (VPC)
Delete a queue (Amazon SQS)
From the queue list, select the queue that you have
created.
From Queue Actions, select Delete Queue.
AWS with RESTful API management
Representational State Transfer (REST) is a software architecture
that imposes conditions on how an API should work. REST was
initially created as a guideline to manage communication on a
complex network like the internet.
Amazon API Gateway is a fully managed service that makes it easy for
developers to create, publish, maintain, monitor, and secure APIs at any
scale.
Using API Gateway, you can create RESTful APIs, as well as WebSocket
APIs for real-time two-way communication applications.
Using API Gateway, you can:
•Provide users with high-speed performance for both API requests and
responses.
•Authorize access to your APIs with AWS Identity and Access
Management (IAM).
•Run multiple versions of the same API simultaneously with API
Gateway to quickly iterate, test, and release new versions.
•Monitor performance metrics and information about API calls, data
latency, and error rates from the API Gateway.
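A client of an API Gateway-hosted REST API is just an ordinary HTTP client. The sketch below builds (but does not send) a GET request with the standard library; the `execute-api` hostname and `prod` stage in the URL are hypothetical placeholders, not a real endpoint.

```python
import urllib.request

# Hypothetical API Gateway endpoint: the "abc123" API ID, region,
# stage ("prod"), and resource path ("items") are all made up.
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/items"

# Build the REST request; sending it would require a real endpoint
# (and, for IAM-protected APIs, a SigV4-signed Authorization header).
req = urllib.request.Request(
    url,
    method="GET",
    headers={"Accept": "application/json"},
)
```

The same pattern extends to POST/PUT/DELETE by changing `method` and attaching a request body, which is how the resource methods configured in API Gateway map onto plain HTTP verbs.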
Getting started with API Gateway
http://aws.amazon.com/free/
https://docs.aws.amazon.com/index.html