
Cloud Computing Lab Manual

CS102691

Department of Computer Science & Engineering
Shri Shankaracharya Technical Campus
Junwani, Bhilai (C.G.) 490020

Experiment 1
Aim: Cloud computing overview; what is Infrastructure as a Service (IaaS), what is Platform as a Service (PaaS), and what is Software as a Service (SaaS).

Cloud Computing Overview

Cloud computing is the delivery of computing services such as storage, processing power, databases, networking, software, and more over the internet. Rather than owning and maintaining physical servers, users can rent or lease resources from cloud service providers. This allows for scalability, cost savings, and flexibility in managing IT resources. Cloud computing is commonly categorized into three main service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

1. Infrastructure as a Service (IaaS)

Definition: IaaS provides virtualized computing resources over the internet. It offers the
basic infrastructure components like virtual machines, storage, and networking that
users can scale up or down based on demand.

Key Features:

 Compute Power: Virtual machines with customizable specs.


 Storage: Scalable cloud storage solutions.
 Networking: Virtual networks, load balancers, and firewalls.

Examples:

 Amazon Web Services (AWS)


 Microsoft Azure
 Google Cloud Platform (GCP)

Use Case: IaaS is great for businesses that need flexible infrastructure to support
applications and data without investing in physical hardware. For instance, hosting
websites, running virtual machines, or building disaster recovery environments.

2. Platform as a Service (PaaS)

Definition: PaaS provides a platform allowing developers to build, deploy, and manage
applications without dealing with the underlying infrastructure (like virtual machines or
storage). It typically includes tools for development, testing, and deployment, all in one
package.

Key Features:

 Development Tools: Code editors, compilers, and debuggers.


 Application Hosting: Managed environments to run applications.
 Database Integration: Pre-configured databases that can be scaled and
managed.
 Security: Built-in security protocols like authentication.

Examples:

 Heroku
 Google App Engine
 Microsoft Azure App Services

Use Case: Ideal for developers who want to focus on writing code and developing
applications without managing hardware or operating systems. For example, building
web applications, APIs, or mobile backend services.

3. Software as a Service (SaaS)

Definition: SaaS delivers fully functional software applications over the internet,
eliminating the need for users to install, manage, or maintain software on their devices.
These applications are hosted, managed, and updated by the service provider.

Key Features:

 Access from Anywhere: Software is accessible via a web browser, making it easy to access from any device.
 Subscription Model: Often offered on a subscription basis.
 Automatic Updates: Providers handle all maintenance and updates.
 Multi-Tenant: One instance of the software serves multiple customers securely.

Examples:

 Google Workspace (Docs, Gmail)


 Microsoft Office 365
 Dropbox
 Salesforce

Use Case: SaaS is ideal for users who need ready-to-use software without the need for
installation or management. For example, email platforms, project management tools, or
CRM (Customer Relationship Management) systems.


Experiment 2

Aim: Cloud computing models, Advantages of using cloud, AWS Global Infrastructure,
AWS Shared Responsibility Model.

Cloud Computing Models

Cloud computing models refer to the different types of services and deployment
approaches that organizations can choose based on their specific needs. There are three
primary models: Public Cloud, Private Cloud, and Hybrid Cloud.

1. Public Cloud

In the public cloud model, cloud services are delivered over the internet and shared
across multiple organizations. These services are owned and operated by third-party
cloud service providers. Users can access resources on a pay-as-you-go basis.

Examples: AWS, Microsoft Azure, Google Cloud

Key Features:

 Managed by the cloud provider.


 Resources are shared among multiple customers (multi-tenancy).
 Ideal for applications with varying workloads or external-facing services.

2. Private Cloud

A private cloud refers to cloud resources used exclusively by one organization. It can be
hosted either on-premises or by a third-party provider. A private cloud gives more
control over resources and security.

Key Features:

 Dedicated resources for a single organization.


 Greater control over security, compliance, and data privacy.
 Often used by organizations with high-security needs or regulatory requirements.

3. Hybrid Cloud

A hybrid cloud combines both public and private cloud models. It allows businesses to
use private cloud for sensitive workloads and public cloud for less-sensitive workloads.
Hybrid cloud models offer flexibility and optimal resource utilization.

Key Features:

 Seamless integration between public and private clouds.


 Allows movement of workloads between environments.
 Useful for businesses with both high-security and cost-efficiency requirements.

Advantages of Using Cloud Computing

Cloud computing offers a range of benefits that make it an attractive choice for many
organizations:

1. Cost Efficiency:

 Pay-as-you-go model: Only pay for the resources you use.


 No upfront costs: No need for large investments in physical hardware and
infrastructure.
 Reduced maintenance costs: Cloud service providers manage infrastructure
maintenance.

2. Scalability and Flexibility:

 On-demand resources: Easily scale resources up or down based on your needs.


 Global reach: Cloud platforms offer services in multiple regions, allowing
businesses to scale globally with minimal effort.

3. Reliability:

 High uptime: Most cloud providers offer Service Level Agreements (SLAs)
guaranteeing high availability.
 Redundancy: Cloud providers often have backup systems in place to ensure
minimal service disruption.

4. Security:

 Data encryption: Most cloud providers offer data encryption for both data at rest
and in transit.
 Compliance: Cloud providers adhere to regulatory standards (e.g., GDPR, HIPAA),
making it easier for businesses to stay compliant.

5. Disaster Recovery and Backup:

 Automated backup: Cloud platforms offer automatic backup and disaster recovery options to ensure data availability in case of failure.

6. Collaboration and Accessibility:

 Access from anywhere: Cloud services can be accessed from any device with an
internet connection.
 Real-time collaboration: Multiple users can collaborate on documents, projects,
or applications in real time.

AWS Global Infrastructure

Amazon Web Services (AWS) offers a vast global infrastructure that allows businesses to
build and deploy applications globally with high availability, low latency, and fault
tolerance. AWS infrastructure is designed to be flexible, scalable, and reliable.

Key Components:

1. Regions: AWS data centers are located in geographically distinct regions around
the world. Each region consists of multiple Availability Zones (AZs).
o Example: US East (N. Virginia), Europe (Ireland), Asia Pacific (Sydney).

2. Availability Zones (AZs): These are isolated locations within a region, designed
to ensure high availability. They are linked through low-latency, high-throughput
networks.
o Example: A region may have 3 AZs for fault tolerance.

3. Edge Locations: AWS has a network of edge locations for content delivery
through services like Amazon CloudFront. These locations are spread across the
globe, allowing faster content delivery with lower latency.
4. Local Zones and Wavelength: AWS has specialized zones designed for low-
latency applications, such as gaming, 5G networks, and edge computing.

AWS's global infrastructure enables customers to deploy applications closer to end-users, ensuring fast and reliable service.
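You can see this layout for yourself from the AWS CLI. A small sketch, assuming the AWS CLI is already installed and configured with credentials:

# List the regions available to your account
aws ec2 describe-regions --output table

# List the Availability Zones inside one region (us-east-1 is just an example)
aws ec2 describe-availability-zones --region us-east-1 --output table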

AWS Shared Responsibility Model

The AWS Shared Responsibility Model defines the division of security responsibilities
between AWS and the customer. This model helps clarify who is responsible for securing
what within the cloud environment.

1. AWS Responsibility (Security "of" the Cloud): AWS is responsible for securing
the infrastructure that runs its cloud services. This includes:
o Physical security of data centers.
o Network infrastructure security.
o Hypervisor security (for virtual machines).
o Hardware and software patches and updates for AWS infrastructure.

2. Customer Responsibility (Security "in" the Cloud): Customers are responsible for securing their data, applications, and operating systems. This includes:
o Managing user access and identity (e.g., AWS Identity and Access
Management or IAM).
o Configuring firewalls and network settings (e.g., Virtual Private Cloud or
VPC).
o Ensuring data encryption (at rest and in transit).
o Patching and maintaining the operating system and applications running on
cloud instances.

The shared responsibility model highlights that AWS secures the foundational
infrastructure, while customers are responsible for securing the services and
applications they deploy within the cloud.


Experiment 3

Aim: To Study AWS Elastic Compute Cloud (EC2).

AWS Elastic Compute Cloud (EC2)

Amazon EC2 (Elastic Compute Cloud) is one of the core services offered by AWS and
provides scalable computing capacity in the cloud. It allows users to run virtual
machines (VMs) called instances on demand, giving them the flexibility to quickly scale
computing resources up or down based on their needs. EC2 is widely used for a variety
of tasks, from hosting websites to running applications and big data analysis.

Key Features of AWS EC2:

1. Scalability:
o Elasticity: EC2 allows users to scale their compute resources up or down
depending on demand. You can launch as many or as few instances as you
need and adjust capacity quickly.
o Auto Scaling: EC2 instances can be automatically scaled based on defined
criteria (like CPU usage or traffic), ensuring you have enough resources
when needed but also optimizing costs.
2. Flexible Instance Types:
o EC2 offers various instance types optimized for different use cases. Some of
the key instance families include:
 General Purpose: Balanced resources (e.g., T3, M5).
 Compute Optimized: High processing power (e.g., C5).
 Memory Optimized: High memory (e.g., R5, X1e).
 Storage Optimized: High I/O (e.g., I3, D2).
 Accelerated Computing: Instances with GPUs (e.g., P4, G4).
3. Customizable Configuration:
o AMI (Amazon Machine Images): You can choose pre-configured AMIs for
different operating systems (e.g., Linux, Windows) or create custom AMIs to
replicate specific environments.
o VPC Integration: EC2 instances can be placed in a Virtual Private Cloud
(VPC), giving users control over network settings and ensuring isolated and
secure networking.
4. Pay-as-You-Go Pricing:
o On-Demand Instances: Pay only for what you use. No upfront costs or
long-term commitments.
o Reserved Instances: Save money by committing to a specific instance
type for a one- or three-year term (offering significant discounts).
o Spot Instances: Take advantage of unused EC2 capacity at a lower price
(but instances can be terminated by AWS with little notice if demand
increases).
o Dedicated Hosts: Physical servers dedicated to your use, useful for
compliance needs or licensing restrictions.
5. Security and Compliance:
o IAM Integration: Integrate with AWS Identity and Access Management
(IAM) to define permissions and control access to EC2 instances.
o Key Pairs: Secure SSH access to instances using key pairs for
authentication.
o Security Groups & Network ACLs: Virtual firewalls to control inbound and
outbound traffic.
o Compliance: AWS EC2 meets numerous security and regulatory standards
(e.g., HIPAA, GDPR, SOC 2).
6. Storage Options:
o EBS (Elastic Block Store): Persistent block storage for EC2 instances. You
can use EBS to store your data that needs to persist even if the instance is
terminated.
o Instance Store: Temporary storage attached to an EC2 instance (data is
lost if the instance is stopped or terminated).
o Elastic File System (EFS): A scalable, managed file storage service that
can be mounted on multiple EC2 instances.
7. Monitoring and Management:
o CloudWatch: AWS CloudWatch provides real-time monitoring of your EC2
instances and helps you set alarms based on metrics like CPU utilization,
memory, and disk I/O.
o AWS Systems Manager: Allows you to automate and manage EC2
instances, including patch management and configuration tasks.
o Elastic Load Balancing (ELB): Distributes incoming traffic across multiple
EC2 instances to ensure high availability.

Use Cases for AWS EC2:

1. Web Hosting: EC2 is commonly used to host websites and web applications. You
can easily scale your instance count based on the amount of web traffic.
2. Big Data Processing: With its flexible configurations and high performance, EC2
can be used for running big data applications, including Hadoop or Spark clusters.
3. Development and Testing: EC2 provides an isolated environment for developers
to test and develop applications without affecting production systems.
4. High-Performance Computing (HPC): EC2 instances with specialized compute
capabilities (e.g., GPU instances) are used for computationally intense tasks like
scientific research or machine learning.
5. Disaster Recovery: EC2 can be part of a disaster recovery strategy, allowing you
to quickly spin up resources in the cloud if on-premises infrastructure fails.
6. Gaming: EC2 instances with GPUs are widely used for hosting gaming servers,
which require significant compute power for rendering and multiplayer interaction.


EC2 Instance Lifecycle:

1. Launch: You launch an EC2 instance from an AMI (Amazon Machine Image), which
defines the operating system and software environment.
2. Run: Once running, you can connect to your instance via SSH (for Linux) or RDP
(for Windows) to manage your application.
3. Monitor: Using AWS tools like CloudWatch, you can monitor metrics such as CPU
usage, disk I/O, and network activity to ensure optimal performance.
4. Scale: If your demand increases, you can either manually scale up by changing to
a larger instance type or automatically scale using Auto Scaling.
5. Terminate: When your work is done, you can stop or terminate the instance.
Stopping an instance saves costs, while terminating it ends all billing for the
instance.
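The same lifecycle can also be driven from the AWS CLI instead of the console. A minimal sketch; the AMI ID, key pair, security group, and instance ID below are placeholders, not values from this manual:

# Launch: start one t2.micro instance from a chosen AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro \
    --key-name MyKeyPair --security-group-ids sg-0123456789abcdef0

# Monitor: check the instance state and public IP address
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query "Reservations[].Instances[].[State.Name,PublicIpAddress]"

# Stop: compute billing pauses, but EBS storage charges continue
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Terminate: the instance and its billing end (instance-store data is lost)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0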

Key Advantages of AWS EC2:

1. Flexibility: Choose from a wide variety of instance types and storage options, or
even run containerized applications using EC2 with Amazon ECS (Elastic Container
Service).
2. Cost-Effective: Pay only for the compute power you need, and benefit from
options like Reserved Instances and Spot Instances to save money.
3. High Availability and Fault Tolerance: Run instances in multiple Availability
Zones for fault tolerance and minimize downtime.
4. Global Reach: Launch instances in multiple regions and Availability Zones
worldwide for low-latency access to users.
5. Security: EC2 integrates with AWS security services like IAM, VPC, and encryption,
ensuring that your infrastructure remains secure.
6. Integration: EC2 integrates seamlessly with other AWS services like S3, RDS,
Lambda, and more to build robust and scalable applications.

In Summary:

AWS EC2 provides scalable, flexible, and cost-effective computing resources that allow
users to run virtual machines (instances) in the cloud. Whether you're running a simple
web application or conducting high-performance computing tasks, EC2 gives you the
power and resources to meet your demands. With various instance types, pricing
options, and integration with other AWS services, EC2 offers a versatile platform to
support virtually any computing need.


Experiment 4

Aim: To Study AWS Pricing Calculator.

AWS Pricing Calculator

The AWS Pricing Calculator is an online tool provided by Amazon Web Services that
allows users to estimate the costs of using AWS services based on their specific use case
and configuration. It helps businesses and developers calculate and predict the cost of
running workloads on AWS before committing to any services.

The AWS Pricing Calculator is particularly useful for:

 Planning costs: Estimating expenses based on anticipated usage.


 Cost optimization: Identifying where savings can be made, such as switching to
Reserved Instances or using cheaper services.
 Budgeting: Providing cost visibility and assisting in financial planning for cloud
projects.

Key Features of AWS Pricing Calculator:

1. Cost Estimation for Multiple Services:


o The AWS Pricing Calculator covers a broad range of services, including EC2,
S3, RDS, Lambda, VPC, Elastic Load Balancing, AWS Direct Connect,
and many more.
o You can create an estimate for one or more services, either individually or in
combination.
2. Customizable Configurations:
o You can input specific usage patterns such as instance types, storage size,
network traffic, and more.
o Choose the type of AWS services you will use (e.g., On-Demand, Reserved,
or Spot Instances) and estimate the costs accordingly.
o For example, you can estimate EC2 instance costs by selecting instance
type, region, operating system, storage, etc.
3. Multi-Region Support:
o Pricing varies by AWS region, and the calculator allows you to select
different regions to estimate costs based on geographic location.
o This feature is especially useful as AWS pricing can vary significantly from
one region to another, and choosing the right region can help reduce costs.
4. Estimate Customization:

o You can fine-tune estimates by specifying things like data transfer
requirements, storage types, instance types, and more.
o The tool gives you flexibility to model both simple and complex
environments (e.g., single instance vs. a multi-tier architecture).
5. Cost Breakdown:
o After generating an estimate, the calculator provides a detailed cost
breakdown. It categorizes costs by service (e.g., compute, storage, data
transfer) and even provides a forecast for recurring costs.
o The output includes both monthly and annual cost estimates, helping users
understand long-term pricing implications.
6. AWS Free Tier:
o The tool can also help you identify when services are eligible for the AWS
Free Tier, which provides limited resources for free for new customers for the
first 12 months.
7. Cost Comparison:
o You can compare different pricing models (On-Demand vs. Reserved
Instances vs. Spot Instances) for services like EC2, helping you make an
informed decision about cost optimization.
8. Exporting and Sharing:
o Once you've created a cost estimate, you can export it to a CSV or PDF file
for sharing with your team or stakeholders.
o You can also share a link to the estimate for collaboration and further
refinement.

How to Use AWS Pricing Calculator:

1. Access the AWS Pricing Calculator:
o You can access the AWS Pricing Calculator on the AWS website at https://calculator.aws/.
2. Choose a Service:
o Select the AWS service(s) you want to estimate costs for (e.g., EC2, RDS, S3,
etc.).
o The tool allows you to build out your estimate by adding multiple services to
the same project.
3. Define Your Configuration:
o Input specifics such as instance types, storage capacity, data transfer, etc.
For EC2, for example, you’d need to select your desired instance type,
number of instances, region, and other settings.
o For other services like S3 or RDS, you’ll need to define parameters like
storage size, data transfer rates, or database configurations.
4. Review Your Estimate:
o The pricing calculator will provide a cost estimate based on your
configurations, broken down into categories (e.g., compute, storage, data
transfer).
o You can adjust configurations to see how different settings affect your
pricing.
5. Optimize and Fine-Tune:
o Once you have a basic estimate, you can explore different options for cost
optimization, such as using Reserved Instances or analyzing data transfer
costs.
6. Save, Share, or Export:

o After refining your estimate, you can save the estimate, share it with others,
or export it to a file format for documentation or reporting purposes.

Example: Estimating EC2 Costs with the AWS Pricing Calculator

Let’s walk through an example of estimating EC2 costs for a basic web application:

1. Select EC2 Service: Start by selecting Amazon EC2 from the list of services in
the AWS Pricing Calculator.
2. Choose Instance Type:
o Select the desired instance type (e.g., t3.medium for general-purpose
workloads).
3. Select Region:
o Choose the region where you’ll deploy your EC2 instances (e.g., US East (N.
Virginia)).
4. Define Number of Instances:
o Input the number of instances required for your workload (e.g., 2 instances
for redundancy).
5. Storage:
o Choose the type and amount of storage, for example, 50 GB of General
Purpose SSD (gp3) storage.
6. Additional Configurations:
o Specify data transfer requirements, such as monthly 5 GB of outbound
data.
7. Review Costs:
o The calculator will provide an estimate of monthly and yearly costs,
including compute, storage, and data transfer.
8. Optimize:
o Explore Reserved Instances for a longer commitment (1 or 3 years) to see
how costs might be reduced.
9. Finalize:
o Once satisfied, you can save the estimate for further use, sharing, or
integration into a larger cost plan.
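As a sanity check on what the calculator reports for this walkthrough, the arithmetic can be done by hand. The rates below are illustrative assumptions only; actual AWS prices vary by region and change over time:

Compute: 2 instances × ~$0.042/hour (t3.medium, US East, On-Demand) × 730 hours/month ≈ $61/month
Storage: 2 × 50 GB of gp3 × ~$0.08 per GB-month ≈ $8/month
Data transfer: 5 GB outbound × ~$0.09/GB ≈ $0.45/month (small volumes may fall within AWS's free data transfer allowance)
Rough total: about $70/month before any Reserved Instance or Savings Plans discount.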

Benefits of Using AWS Pricing Calculator:

1. Cost Transparency:
o The calculator provides a detailed breakdown of potential costs, helping
businesses predict their cloud expenses more accurately.
2. Cost Optimization:
o By experimenting with different instance types, pricing models, and services,
users can identify areas for cost savings.
3. Easy Planning:
o Provides a simple way to plan for AWS usage, particularly helpful when
migrating from on-premises infrastructure or estimating cloud project
budgets.
4. Flexibility and Accuracy:
o You can configure the tool to suit complex workloads or even simple
projects, giving you accurate cost forecasts for virtually any AWS service.

In Summary:

The AWS Pricing Calculator is a powerful and essential tool for anyone looking to
estimate and optimize their cloud infrastructure costs. It provides customizable
estimates for over 200 AWS services, helping users understand potential costs, optimize
resources, and make informed decisions. Whether you're just starting with AWS or
scaling a large enterprise, the Pricing Calculator helps manage costs and avoid surprises
on your cloud bill.

Experiment 5

Aim: Create a Linux instance, use PuTTY to connect to the Linux instance, and implement the Apache web server on the Linux instance.

Step-by-Step Guide: Creating a Linux EC2 Instance, Connecting via PuTTY, and
Installing Apache Web Server

Let's walk through the process of creating a Linux EC2 instance, connecting to it using
PuTTY, and setting up the Apache web server on the instance.

1. Creating a Linux EC2 Instance

Before you can connect to the instance and set up Apache, you need to create an EC2
instance running a Linux-based operating system. Here's how to do that:

Step 1: Log in to AWS Console

 Go to the AWS Management Console, and log in with your AWS account
credentials.

Step 2: Launch an EC2 Instance

1. Navigate to EC2: In the AWS Console, type "EC2" in the search bar and click on
EC2 under the Services tab.
2. Launch Instance: On the EC2 Dashboard, click on Launch Instance to start the
process of creating a new EC2 instance.
3. Choose an Amazon Machine Image (AMI):
o Select Amazon Linux 2 AMI (this is a commonly used Linux distribution for
EC2).
o You can also use other Linux distributions like Ubuntu, CentOS, or Red Hat if
desired.

4. Choose an Instance Type:


o Select an instance type like t2.micro (which is eligible for the AWS Free
Tier).

5. Configure Instance Details:


o Leave the default settings unless you need to customize networking and
other configurations.
o You can leave Auto-assign Public IP enabled to ensure that your instance
gets a public IP address, which is essential for SSH access.

6. Add Storage:
o The default settings are usually sufficient. You can adjust the size if
necessary.

7. Configure Security Group:


o Create a new security group or choose an existing one.
o Ensure that port 22 (SSH) is open for incoming traffic from your IP address
(you’ll need this to connect via PuTTY).
o Optionally, open port 80 (HTTP) for the Apache web server.

8. Key Pair:
o Create a new key pair or select an existing one.
o Download the key pair file (.pem), which you'll use to securely connect to the
instance via SSH.
o Make sure to save this file securely, as you won't be able to download it
again.

9. Review and Launch:


o Review your configuration and click Launch. Your EC2 instance will be
created, and it may take a few minutes to start up.

2. Connecting to the Linux EC2 Instance Using PuTTY

After your EC2 instance is up and running, you need to connect to it using PuTTY, a
popular SSH client for Windows.

Step 1: Convert PEM to PPK (PuTTY Private Key Format)

PuTTY does not support .pem files directly, so you must convert the .pem file to .ppk
using PuTTYgen.

1. Download PuTTYgen:
o If you don't have PuTTYgen, download it from the official PuTTY download page.

2. Convert the Key:


o Open PuTTYgen.
o Click on Load and select your .pem file that you downloaded during the EC2
instance creation.
o In the Load dialog, change the file type to All Files to see your .pem file.
o Click Open, and then click Save private key. When prompted, save it as a
.ppk file.

Step 2: Use PuTTY to Connect to the Instance

1. Open PuTTY: Open the PuTTY application on your computer.


2. Enter Instance Public IP:
o In PuTTY, under the Session category, enter your instance's Public IP
address (you can find this in the EC2 Console under the instance details).

3. Specify the Private Key:


o On the left sidebar, expand Connection > SSH > Auth.
o Click Browse and select the .ppk file you just created in PuTTYgen.

4. Start the SSH Session:


o Go back to the Session category and click Open to initiate the connection.
o When prompted to log in, use the username ec2-user (for Amazon Linux) or
ubuntu (for Ubuntu instances).

Example:

o Username: ec2-user
o Password: No password; the authentication is done via the private key.

Once logged in, you’ll have access to the command line of your Linux instance.

3. Installing Apache Web Server on the Linux Instance

Now that you're connected to your EC2 instance, you can proceed to install and
configure the Apache web server.

Step 1: Update the Package Repositories

Run the following command to ensure that all the package repositories are up to date:

sudo yum update -y

This command updates all installed packages and the package list, ensuring that you're
installing the latest version of software.

Step 2: Install Apache (httpd)

To install the Apache web server (known as httpd on Linux):

sudo yum install httpd -y

This command will download and install Apache on your instance.

Step 3: Start Apache Web Server

Once Apache is installed, you need to start the service:

sudo systemctl start httpd

To ensure that Apache starts automatically when the instance reboots:

sudo systemctl enable httpd

Step 4: Allow HTTP Traffic in the Security Group

If you didn't open port 80 (HTTP) in the security group earlier, you can do so now to
allow incoming traffic to the web server:

1. Go to the EC2 Dashboard in the AWS Management Console.


2. In the left menu, click Security Groups under Network & Security.
3. Select the security group associated with your EC2 instance.
4. In the Inbound rules tab, click Edit inbound rules.
5. Add a rule:
o Type: HTTP
o Port: 80
o Source: Anywhere (0.0.0.0/0)
6. Click Save rules.
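If you prefer the command line, the same rule can be added with the AWS CLI. A sketch; the security group ID is a placeholder for your own group's ID:

# Open port 80 (HTTP) to all addresses on the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0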

Step 5: Verify the Apache Web Server

Now, open a web browser and enter your instance's Public IP address in the URL bar:

http://<your-ec2-public-ip>

You should see the default Apache web page, which confirms that the Apache server is
running successfully.

4. Optional: Configure the Apache Web Server

You can modify the default Apache configuration file located at /etc/httpd/conf/httpd.conf if needed. You can also upload your website files to the /var/www/html directory to serve custom content.
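For a quick test of serving your own content, you can drop a simple page into /var/www/html and fetch it locally before browsing to the public IP. For example:

# Create a minimal test page (this overwrites any existing index.html)
echo "<h1>Hello from my EC2 Apache server</h1>" | sudo tee /var/www/html/index.html

# Fetch the page from the instance itself to confirm Apache is serving it
curl http://localhost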

In Summary:

 Launch a Linux EC2 Instance: Using Amazon Linux or another Linux distribution.
 Connect Using PuTTY: Convert your .pem file to .ppk and connect to your
instance.
 Install Apache: Use the yum package manager to install and start Apache
(httpd).
 Configure Security Group: Open port 80 for HTTP traffic.
 Verify: Access the Apache web server using the instance's public IP.

That’s it! You've successfully created a Linux EC2 instance, connected via SSH using
PuTTY, and installed Apache Web Server. Feel free to customize your server with your
own content.


Experiment 6

Aim: To Create a Windows Instance.

Step-by-Step Guide: Creating a Windows EC2 Instance

Creating a Windows EC2 instance in AWS follows a similar process to creating a Linux
instance, but with a few key differences for Windows. Here's a detailed guide to help you
through the steps:

1. Log in to AWS Console

1. Go to the AWS Management Console, and log in with your AWS account
credentials.

2. In the Services search bar, type EC2 and select EC2 under the Compute section
to open the EC2 Dashboard.

2. Launch a Windows EC2 Instance

Step 1: Launch Instance

1. In the EC2 Dashboard, click Launch Instance to start the process of creating a
new EC2 instance.

Step 2: Choose an Amazon Machine Image (AMI)

1. Select a Windows AMI:


o In the Choose an Amazon Machine Image (AMI) step, you’ll see several
options.
o Look under Microsoft Windows and select a version of Windows Server
that you want to use, such as Microsoft Windows Server 2019 Base or
Windows Server 2022 Base.
o There are also "Windows with SQL" options if you need SQL Server pre-
installed.

Step 3: Choose an Instance Type

1. Select Instance Type:


o Choose the appropriate instance type for your workload. For simple tasks or
testing, t2.micro (which is free-tier eligible) can be a good choice.
o For production workloads, you may need to choose a more powerful instance
(e.g., t3.medium, m5.large, etc.).
o After selecting an instance type, click Next: Configure Instance Details.

Step 4: Configure Instance Details

1. Configure Settings:
o Leave the default settings unless you need custom configurations (e.g.,
networking, IAM role).
o You can set Auto-assign Public IP to Enable so that your instance can be
accessed via RDP.
o If you want to place the instance in a specific Virtual Private Cloud (VPC),
configure that here.
o Click Next: Add Storage when you're ready to proceed.

Step 5: Add Storage

1. Configure Storage:
o By default, a Windows instance will have a root volume of 30 GB. You can
increase or decrease the storage as needed.
o You can add additional EBS volumes if your application requires extra
storage.
o Once you're happy with the storage configuration, click Next: Add Tags.
Step 6: Add Tags (Optional)

1. Tagging:
o You can add tags for easier identification, such as a Name tag, where you
could name your instance (e.g., Windows-Server-1).
o Tags are optional but can help manage resources efficiently, especially in
larger environments.

Step 7: Configure Security Group

1. Set Up Security Group:


o You need to configure your Security Group to allow inbound traffic to the
instance.
o For RDP access, add a rule to allow TCP port 3389 from your IP address
(you can also select Anywhere for wider access, but this is less secure).
 Type: RDP
 Port Range: 3389
 Source: My IP (or Anywhere, for broader access)
o Additionally, you may want to open port 80 (HTTP) or 443 (HTTPS) if you
plan to host web services on the instance.
2. After configuring the security group, click Review and Launch.

Step 8: Review and Launch

1. Review Configuration:
o Review all settings, including the instance type, storage, and security group.
2. Launch the Instance:
o Click Launch to start the instance creation.
3. Create a Key Pair:
o If you don’t already have a key pair, create a new one.
o Choose Create a new key pair, give it a name (e.g., WindowsKeyPair), and
then click Download Key Pair to save the .pem file. This file is crucial for
accessing the instance.
o Keep the .pem file safe—AWS will not allow you to download it again.
o If you already have a key pair, select Choose an existing key pair and
select the one you want to use.
o Acknowledge that you have the key pair and click Launch Instances.

3. Connect to the Windows EC2 Instance via RDP

After launching the instance, it may take a few minutes for it to start up. Once it's
running, you can connect to it via RDP (Remote Desktop Protocol).

Step 1: Get the Windows Administrator Password

1. Navigate to the EC2 Dashboard:


o In the Instances section, find your newly created Windows instance. It
should be listed in the Running Instances.
2. Get Password:
o Right-click on the instance and choose Get Windows Password.
o Click Browse and select the .pem key file you downloaded earlier.
o Click Decrypt Password. AWS will then show you the Administrator
password that you can use to log in.
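The same retrieval can be done from the AWS CLI, which is useful for scripting. A sketch; the instance ID is a placeholder, and the key file is the .pem you downloaded at launch:

# Retrieve and locally decrypt the Windows Administrator password
aws ec2 get-password-data --instance-id i-0123456789abcdef0 \
    --priv-launch-key WindowsKeyPair.pem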

Step 2: Connect via RDP

1. Get the Public IP:


o In the Instance Details section, note the Public IP of the instance. This will
be used to connect to your instance via RDP.

2. RDP Client:
o Open Remote Desktop Connection (or use an RDP client on macOS or
Linux).
o In the Computer field, enter the Public IP of the instance.

3. Enter Credentials:
o In the RDP client, when prompted for credentials, enter:
 Username: Administrator
 Password: The decrypted password from earlier.

4. Login: Click OK to connect, and you should be logged into your Windows instance.

4. Optional: Install IIS Web Server (Windows Server)

If you want to set up a web server on your Windows instance using IIS (Internet
Information Services), follow these steps:

Step 1: Install IIS

1. Open Server Manager on your Windows instance (this opens automatically when
you log in).
2. In Server Manager, click on Add roles and features.
3. Click Next until you reach the Select server roles screen.
4. On the Server Roles screen, check Web Server (IIS) and accept the prompt to add the required features.
5. Click Next and follow the prompts to complete the installation.

Step 2: Verify IIS

1. After IIS is installed, you can verify the installation by opening a web browser
within the Windows instance and navigating to http://localhost.
2. You should see the default IIS page indicating that the web server is running.

Step 3: Allow HTTP in Security Group

1. If you want to access the IIS server from outside the instance, you need to make
sure port 80 (HTTP) is open in your Security Group.
2. Go to the EC2 Dashboard and update the Security Group to allow inbound
traffic on port 80 from your IP or from anywhere.

In Summary:

1. Launch a Windows EC2 Instance:


o Use a Microsoft Windows AMI, select an instance type, configure storage and
security group, and create a key pair.

2. Connect via RDP:


o Get the instance’s public IP, retrieve the password, and use an RDP client to
connect.

3. Set Up IIS (Optional):


o Install and configure IIS for hosting web applications on your Windows
instance.

With these steps, you've successfully created a Windows EC2 instance, connected to it
using RDP, and optionally set up a web server. Let me know if you need more details on
any of the steps!

Experiment 7

Aim: To study Tools Used for Accessing the Cloud.

There are several tools and methods available for accessing and managing cloud
resources, depending on the cloud provider (like AWS, Azure, Google Cloud), the type of
access (e.g., command line, graphical interface, or programmatic access), and the
specific cloud services you're using. Below are the common tools used for accessing and
managing cloud environments:

1. Web-Based Console/Portal

Most cloud providers offer a web-based console or management portal that allows
users to manage resources through a graphical user interface (GUI).

 AWS Management Console:


o AWS provides a web-based console to interact with various AWS services like
EC2, S3, IAM, and more.
o You can launch instances, configure networks, manage storage, and monitor
usage from this console.
o URL: AWS Console
 Azure Portal:
o Microsoft's Azure also offers a web-based portal to manage Azure resources.
o Provides an intuitive GUI to manage services like Virtual Machines, App
Services, databases, etc.
o URL: Azure Portal
 Google Cloud Console:
o Google Cloud provides a web console for managing Google Cloud resources,
including Compute Engine, Cloud Storage, BigQuery, and more.
o URL: Google Cloud Console

2. Command Line Tools/CLI (Command Line Interface)

Cloud providers offer CLI tools that allow users to interact with cloud services directly
from their terminal or command prompt. These are especially useful for automating
tasks, writing scripts, or when you want to avoid using a graphical interface.

 AWS CLI:
o AWS CLI allows users to manage AWS services using the command line. It's
available for Windows, macOS, and Linux.
o You can perform tasks such as launching EC2 instances, managing S3
buckets, or deploying Lambda functions.
o Install it from: AWS CLI
 Azure CLI:
o Azure CLI allows you to manage Azure resources from the command line. It
supports Windows, macOS, and Linux environments.
o You can create, configure, and monitor Azure resources such as VMs,
databases, and networking.
o Install it from: Azure CLI
 Google Cloud SDK (gcloud):
o The gcloud CLI tool is used to manage Google Cloud resources, such as
creating and managing Compute Engine instances, configuring GCP services,
etc.
o Install it from: Google Cloud SDK
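A typical first session with the AWS CLI looks like the following; the access key you enter during configuration is your own, and the other commands are standard CLI calls:

# One-time setup: store credentials, default region, and output format
aws configure

# List your EC2 instances and their current states
aws ec2 describe-instances --query "Reservations[].Instances[].[InstanceId,State.Name]" --output table

# List your S3 buckets
aws s3 ls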

3. Cloud-Specific SDKs (Software Development Kits)

For developers building applications that interact with cloud services, SDKs provide
libraries and tools to make API calls more convenient and programmatically
manageable.

 AWS SDK:
o AWS provides SDKs for popular programming languages such as Python
(Boto3), Java, JavaScript, .NET, PHP, and others.
o These SDKs simplify interacting with AWS services programmatically (e.g.,
uploading files to S3, starting EC2 instances).
o AWS SDK docs: AWS SDK
 Azure SDK:
o Microsoft Azure offers SDKs for various programming languages like .NET,
Python, Java, Node.js, and Go to interact with Azure services.
o Azure SDK docs: Azure SDK
 Google Cloud SDK:
o Google provides SDKs for Python, Java, Go, Node.js, .NET, and other
languages to interact with Google Cloud services programmatically.
o Google Cloud SDK docs: Google Cloud SDK

4. Remote Access Tools

For accessing virtual machines (VMs) running in the cloud, users typically use remote
access protocols.

 SSH (Secure Shell):


o For Linux-based instances (like EC2 in AWS or Google Compute Engine),
SSH is commonly used to access the instance.
o PuTTY is a popular SSH client for Windows, while Linux and macOS have
built-in SSH capabilities.
 RDP (Remote Desktop Protocol):
o For Windows-based instances, users typically connect using RDP.
o Windows has a built-in RDP client, and for macOS/Linux, there are third-party
RDP clients like Microsoft Remote Desktop (for macOS) or rdesktop (for
Linux).

5. Cloud Storage Access Tools

If you're accessing cloud storage services (e.g., AWS S3, Google Cloud Storage, Azure
Blob Storage), there are specific tools that make the process easier.

 AWS S3 CLI:
o AWS CLI also includes commands specifically for interacting with Amazon
S3, like uploading or downloading files, listing objects, and managing
buckets.
o Example commands:
 aws s3 ls (List S3 buckets)
 aws s3 cp (Copy files to/from S3)
 Azure Storage Explorer:
o A desktop app for accessing and managing Azure storage resources (Blob
Storage, Tables, Queues, etc.) without needing to use the portal.
o Install it from: Azure Storage Explorer
 gsutil (Google Cloud Storage):
o A command-line tool that allows you to interact with Google Cloud Storage.
o Example commands:
 gsutil cp (Copy files to/from Google Cloud Storage)

 gsutil ls (List storage buckets)

6. Infrastructure as Code Tools

For managing cloud infrastructure programmatically, Infrastructure as Code (IaC) tools are commonly used to automate the creation and management of cloud resources.

 AWS CloudFormation:
o AWS CloudFormation allows you to define and provision AWS infrastructure
using templates written in JSON or YAML.
o You can use CloudFormation to automate the deployment of EC2 instances, RDS databases, VPCs, and more (a minimal example is sketched after this list).
 Azure Resource Manager (ARM):
o ARM templates allow you to define and deploy Azure resources in a
declarative manner. Similar to CloudFormation but for Azure.
 Terraform:
o Terraform is a cloud-agnostic IaC tool that supports multiple cloud providers,
including AWS, Azure, Google Cloud, and others. It allows you to define cloud
infrastructure in code (using HCL - HashiCorp Configuration Language) and
manage it across multiple providers.
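To make the CloudFormation item above concrete, here is a minimal sketch driven from the shell: it writes a tiny YAML template declaring a single S3 bucket (AWS generates the bucket name) and deploys it as a stack. It assumes the AWS CLI is configured; the stack and file names are arbitrary:

# Write a minimal CloudFormation template to a local file
cat > demo-template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF

# Create (or update) a stack from the template
aws cloudformation deploy --template-file demo-template.yaml --stack-name iac-demo-stack

# Tear the stack (and the bucket) down when finished
aws cloudformation delete-stack --stack-name iac-demo-stack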

7. Cloud Management and Monitoring Tools

Cloud management and monitoring tools allow you to keep track of performance, usage,
and cost in your cloud environment.

 AWS CloudWatch:
o AWS CloudWatch is a monitoring service for AWS cloud resources and the
applications you run on AWS. It provides metrics, logs, and alarms.
o You can track EC2 instance performance, monitor application logs, and set
up automated alerts for thresholds like CPU usage.
 Azure Monitor:
o Azure Monitor helps you collect, analyze, and act on telemetry data from
Azure resources and applications.
o It provides insights into resource health, performance, and usage.
 Google Cloud Operations (formerly Stackdriver):
o Google Cloud Operations Suite provides monitoring, logging, and diagnostics
for cloud applications, helping you understand and manage the health of
your GCP resources.

8. Third-Party Cloud Management Tools

There are several third-party tools available to help manage, monitor, and optimize
cloud resources across different providers.

 CloudBolt:
o Provides multi-cloud management, optimization, and cost governance across
AWS, Azure, and Google Cloud.
 CloudHealth:
o A cloud management platform that helps organizations optimize cloud costs,
manage security, and track performance across multiple cloud
environments.
 Datadog:
o A monitoring and analytics platform that helps you monitor cloud
infrastructure and applications in real time.

Summary of Tools for Accessing Cloud:

Type | Tools | Purpose
Web Console/Portal | AWS Console, Azure Portal, Google Cloud Console | GUI for managing cloud resources
CLI Tools | AWS CLI, Azure CLI, Google Cloud SDK | Command-line access to manage cloud resources
SDKs | AWS SDK, Azure SDK, Google Cloud SDK | Programmatic access to cloud services
Remote Access | SSH, PuTTY, RDP | Access cloud-based VMs (Linux/Windows)
Storage Access Tools | AWS S3 CLI, Azure Storage Explorer, gsutil | Manage cloud storage (S3, Blob, Google Cloud Storage)
IaC Tools | CloudFormation, ARM, Terraform | Automate cloud infrastructure provisioning
Monitoring/Management | CloudWatch, Azure Monitor, Google Cloud Operations | Monitor and manage cloud resources' health and usage
Third-Party Tools | CloudBolt, CloudHealth, Datadog | Multi-cloud management, cost optimization, and monitoring

These tools provide various ways to interact with, manage, monitor, and optimize your
cloud resources. Depending on your workflow, you may use a combination of these tools
to perform tasks efficiently.

Experiment 8

Aim: To Study Elastic Block Storage (EBS), Simple Storage Service (S3).

Elastic Block Storage (EBS) and Simple Storage Service (S3) are two of Amazon Web
Services' (AWS) core storage offerings, but they serve different purposes.

1. Elastic Block Storage (EBS):


o What it is: EBS is a block-level storage service that provides persistent
storage for Amazon EC2 (Elastic Compute Cloud) instances. It works like a
hard drive attached to your server.
o Use cases: EBS is great for storing data that needs to be accessed
frequently and requires low-latency and high throughput, like databases, file
systems, and boot volumes.
o Types: It offers different types of volumes optimized for different use cases,
such as SSD-based (general purpose or provisioned IOPS) and HDD-based
(cold or throughput optimized).
o Persistence: Data persists independently of the instance; volumes survive stops, and non-root volumes survive termination by default (the root volume is deleted on termination unless you disable that setting).
2. Simple Storage Service (S3):
o What it is: S3 is an object storage service that provides highly scalable,
durable, and low-cost storage. It's ideal for storing and retrieving any
amount of data at any time, from anywhere on the web.
o Use cases: S3 is used for backup, archiving, data lakes, static website
hosting, and storing large objects like media files, documents, or log files.
It’s not meant for real-time applications like databases.
o Types: There are different storage classes within S3, such as Standard,
Intelligent-Tiering, Glacier (for archival), etc., to suit various needs for cost
and performance.
o Persistence: Like EBS, S3 data is durable and can be retained as long as
you need, with features like versioning and lifecycle policies to manage
objects over time.

Key Differences:

 Storage Type: EBS is block storage, which is suited for file systems and
applications that need to interact with storage directly. S3 is object storage, better
for storing large amounts of unstructured data (like files, images, backups, etc.).
 Performance: EBS typically provides faster access with low-latency storage for
high-performance applications, while S3 provides high scalability but may have
higher access latency for large files.
 Cost: EBS tends to be more expensive per GB compared to S3, especially if you're
storing large amounts of data or using high-performance EBS volumes. S3's cost
varies depending on storage class and data retrieval patterns.
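The difference is easy to see from the AWS CLI: an EBS volume is created in one Availability Zone and attached to a single instance as a block device, while an S3 bucket is just a container you copy objects into. A sketch; the volume ID, instance ID, zone, bucket name, and local file are placeholders (bucket names must be globally unique):

# EBS: create a 10 GiB gp3 volume and attach it to an instance in the same AZ
aws ec2 create-volume --size 10 --volume-type gp3 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf

# S3: create a bucket and copy a local file into it as an object
aws s3 mb s3://my-example-bucket-12345
aws s3 cp backup.tar.gz s3://my-example-bucket-12345/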

Experiment 9

Aim: To study Elastic File System (EFS).

Elastic File System (EFS) is another storage service offered by AWS, and it’s distinct
from both EBS and S3. Here's a breakdown of what EFS is and how it fits into the AWS
ecosystem:

What is EFS?

Elastic File System (EFS) is a fully managed, scalable file storage service that is
designed to be used with AWS cloud services and on-premises resources. It provides a
file system interface and file system semantics, meaning you can use it in much
the same way you would use a traditional file system, but it operates in the cloud. It's
designed for applications that require a shared file system, so multiple Amazon EC2
instances can access the data simultaneously.

Key Features of EFS:

1. Scalability:
o EFS automatically scales as you add or remove data, so you don’t need to
worry about provisioning or managing storage capacity. It grows and shrinks
as needed, without manual intervention.
2. Shared Access:
o It provides multi-attach access, meaning multiple EC2 instances (and even
across different availability zones) can access the same file system
simultaneously. This makes EFS great for use cases where you need shared
access to data from multiple instances or applications.
3. Performance:
o EFS offers two performance modes:
 General Purpose: Optimized for latency-sensitive applications like
web servers or content management systems.
 Max I/O: Designed for high throughput and workloads that require
more than 10,000 concurrent connections (such as big data
applications).
o It provides low-latency access and can handle large amounts of data
transfer, but is typically not as fast as EBS for single-instance performance.
4. Managed Storage:
o EFS is fully managed, so AWS handles maintenance, patching, and scaling
for you, reducing the administrative burden compared to setting up your
own file server.
5. NFS Protocol:
o EFS uses the NFS (Network File System) protocol, meaning you can mount EFS as a network drive on EC2 instances and other resources that support NFS, similar to how you'd mount network drives in a traditional on-premises environment (a sample mount command is shown after this list).
6. Durability and Availability:
o EFS is designed for high availability and durability, automatically replicating
data across multiple availability zones within a region, making it resilient to
failures in a single zone.
7. Security:
o EFS integrates with AWS Identity and Access Management (IAM) for
access control. It also supports encryption at rest and in transit, ensuring
your data is secure.
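Because EFS speaks standard NFS (item 5 above), mounting it on a Linux EC2 instance looks like mounting any other network share. A minimal sketch; the file system ID and region are placeholders, the NFS client (nfs-utils) must be installed, and the file system's security group must allow NFS traffic on port 2049:

# Create a mount point and mount the EFS file system over NFSv4.1
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Verify that the file system is mounted
df -h /mnt/efs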

Use Cases for EFS:

 Content Management and Web Serving: Share files across multiple web
servers and EC2 instances.
 Big Data and Analytics: Applications that need to process large amounts of data
concurrently.

 Application Hosting: Applications that require shared file storage, like
development environments or custom applications.
 Media Processing: EFS is often used for storing and sharing media files, as many
EC2 instances might need to access the same data.
 Lift-and-Shift Applications: If you’re migrating a traditional on-premise
application that relies on a file system, EFS is a good fit because it supports the
same file system semantics that many applications require.

Key Differences Between EFS and Other AWS Storage Options:

 EBS vs EFS:
o EBS is a block-level storage solution designed for single-instance use,
whereas EFS is a file system that allows shared access by multiple instances
at once. EBS is better for individual EC2 instances, while EFS is ideal for
scenarios where multiple EC2 instances need access to the same data.
 EFS vs S3:
o EFS provides a file system interface and is ideal for applications requiring a
file structure and shared access. S3, on the other hand, is object storage
and is more suited for static files or unstructured data that doesn’t require
direct file system access or traditional file system features like directories
and file locking.

Cost:

 EFS pricing is based on the amount of data you store in the file system, and you
pay for the storage you use each month. It also offers an EFS Infrequent Access
(IA) storage class for lower-cost storage options for files that are infrequently
accessed.
 Generally, EFS is more expensive than S3 but can be more cost-effective than
using EBS for shared, scalable storage across multiple instances.

In Summary:

 Use EFS when you need a scalable, shared file system that multiple EC2 instances
or other services can access concurrently.
 Use EBS when you need block storage for a single EC2 instance or high-
performance needs (like databases).
 Use S3 for object storage needs when you want highly scalable, low-cost storage
for static files and don’t need direct file system semantics.

Experiment 10

Aim: To study Relational Database Service (RDS), Security and Compliance concepts
& DynamoDB.

Relational Database Service (RDS)

Amazon RDS is a fully managed relational database service that allows you to easily
set up, operate, and scale relational databases in the cloud. It supports several popular
database engines, including MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and
MariaDB.

Key Features of RDS:

 Automated Backups: RDS automatically backs up your database and provides point-in-time recovery, ensuring your data is safe and can be restored if needed.
 Scalability: You can scale the compute and storage resources up or down easily.
With features like Read Replicas, you can scale read-heavy workloads, and
Multi-AZ deployments provide high availability.
 Managed Service: Amazon takes care of routine database maintenance tasks
like patching, backups, and monitoring, letting you focus on application
development.
 Security: RDS integrates with AWS IAM, supports encryption (both at rest and in
transit), and allows you to control network access via VPC security groups.
 Performance: RDS offers various instance types, from general-purpose to
memory-optimized, ensuring that you can select the best option for your workload.
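Creating a database can also be scripted with the AWS CLI. A minimal sketch; the identifier, engine, instance class, and the obviously fake password are illustrative, and in practice the password should come from a secrets store rather than the command line:

# Launch a small MySQL instance with 20 GB of storage
aws rds create-db-instance \
    --db-instance-identifier mylab-db \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'ChangeMe-NotARealPassword1' \
    --allocated-storage 20

# Check provisioning status and the endpoint address once it is available
aws rds describe-db-instances --db-instance-identifier mylab-db \
    --query "DBInstances[].[DBInstanceStatus,Endpoint.Address]"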

RDS Security and Compliance Concepts:

 Encryption:
o At Rest: RDS supports encryption at rest using AWS Key Management
Service (KMS). This ensures that your data is encrypted while stored on disk.
o In Transit: You can enable SSL/TLS encryption for connections to your RDS
instance to ensure data is encrypted during transmission.

 Access Control:
o IAM Roles: With IAM, you can control who can access your RDS instances
and manage the permissions on who can perform specific actions.
o Security Groups: Security groups act as a virtual firewall, controlling
inbound and outbound traffic to your RDS instances. You can configure rules
based on IP, port, and protocol.

 Multi-AZ Deployments: This feature provides high availability and automatic failover. In the event of a failure in the primary database, RDS automatically switches to a standby instance in a different availability zone.
 Compliance:
o AWS RDS is compliant with various regulatory standards, including GDPR,
HIPAA, PCI-DSS, SOC 1, 2, and 3, and others. You can leverage these
compliance features to ensure that your application meets industry-specific
regulatory requirements.

 Audit Logging:
o Database Logs: You can enable database logging for audit purposes, which
records changes and access to sensitive data. Integration with services like
CloudWatch and CloudTrail can help you monitor access patterns and
other events.

DynamoDB

Amazon DynamoDB is a fully managed, serverless NoSQL database service designed to provide high-performance, low-latency data storage at any scale. It's ideal for applications that require a flexible schema, fast access to large amounts of data, and the ability to scale quickly.

Key Features of DynamoDB:

 Fully Managed: DynamoDB handles operational tasks like hardware provisioning, patching, and scaling, letting you focus on building your application.
 Scalability: It scales automatically, handling large amounts of data and high
request loads without needing manual intervention. It can handle millions of
requests per second for any application.
 Performance: DynamoDB provides single-digit millisecond latency at any scale,
making it suitable for high-performance applications such as gaming, IoT, and
mobile apps.
 Flexible Schema: You don’t need to define a rigid schema for DynamoDB tables.
It uses a key-value store model, allowing you to store various types of data,
including JSON, without needing to define relationships between tables like in
relational databases.
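
As a quick illustration of the flexible, key-value model, here is a minimal boto3 sketch that creates a table and reads/writes an item. The table and attribute names are illustrative only.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Create a table with a simple partition key; PAY_PER_REQUEST means no
# capacity planning is needed and the table scales with traffic.
table = dynamodb.create_table(
    TableName="GameScores",
    KeySchema=[{"AttributeName": "player_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "player_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Items are schema-less: only the key attribute is fixed, other attributes can
# differ from item to item.
table.put_item(Item={"player_id": "p-100", "score": 9001, "level": "gold"})
item = table.get_item(Key={"player_id": "p-100"})["Item"]
print(item)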

DynamoDB Security and Compliance Concepts:

 Encryption:
o At Rest: Data is automatically encrypted at rest using AWS Key
Management Service (KMS). This ensures that all data stored in DynamoDB
is protected.
o In Transit: DynamoDB supports encryption in transit using SSL/TLS to
protect data while it’s being transferred to and from your application.

 Access Control:
o IAM: You can control access to DynamoDB through AWS Identity and Access
Management (IAM). You can define policies that allow or deny actions like
reading, writing, or modifying tables.
o VPC Endpoints: You can access DynamoDB securely from your Virtual
Private Cloud (VPC) via VPC endpoints, which eliminate the need for traffic to
traverse the public internet.

 Backup and Restore:


o DynamoDB provides point-in-time recovery (PITR) to restore your table to
any point in the past 35 days.
o On-Demand Backups allow you to create backups of DynamoDB tables
without impacting performance.

 Compliance:
o DynamoDB complies with multiple standards, such as GDPR, HIPAA, PCI
DSS, and others, making it suitable for regulated industries.
o DynamoDB integrates with services like CloudTrail for logging and
CloudWatch for monitoring, which is essential for security auditing and
compliance reporting.
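
Assuming the illustrative GameScores table from the earlier sketch, point-in-time recovery and an on-demand backup could be enabled as follows (boto3); the backup name is a placeholder.

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Enable point-in-time recovery (restore to any second within the last 35 days).
ddb.update_continuous_backups(
    TableName="GameScores",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Take an on-demand backup without impacting table performance.
backup = ddb.create_backup(
    TableName="GameScores",
    BackupName="GameScores-before-migration",
)
print(backup["BackupDetails"]["BackupArn"])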

When to Use DynamoDB:

 Low Latency: When you need real-time, low-latency performance for high-volume
data (e.g., gaming leaderboards, mobile apps, or IoT).
 Scalable Web Apps: For web apps that need a flexible schema and auto-scaling
to handle variable traffic.
 Event-Driven Applications: Use cases that require high throughput and fast
data processing, like streaming data or real-time analytics.

DynamoDB vs RDS:

 RDS is a relational database, so if your application needs structured data, complex queries, or transactions, RDS is the better choice. It's also useful if you need SQL-based querying.
 DynamoDB is a NoSQL database designed for high throughput, low-latency
access, and highly scalable data storage. It’s ideal for applications that need a
schema-less design or need to scale quickly based on traffic patterns.

Key Takeaways:

 RDS is great for applications needing structured data, complex querying, and
transactional support (like SQL databases).
 DynamoDB is ideal for applications needing fast, scalable, and flexible NoSQL
data storage, such as high-performance apps with massive, dynamic data sets.


Experiment 11

Aim: To Create and Use a Custom AMI, Virtual Private Cloud (VPC) & Deploy an Application in a Custom VPC using best practices.

Create and Use Custom AMI (Amazon Machine Image)

A Custom AMI is a pre-configured virtual machine image that you create and use to
launch EC2 instances with a specific configuration. This can save time when launching
multiple instances with the same operating system, software, or settings, ensuring
consistency across deployments.

Steps to Create a Custom AMI:

1. Launch and Configure an EC2 Instance:


o First, launch an EC2 instance with the desired operating system (e.g.,
Amazon Linux, Ubuntu, Windows).
o SSH (for Linux) or RDP (for Windows) into the instance and install/configure
any software or settings you want to include in your custom image. This
might include web servers, database software, custom scripts, or security
configurations.

2. Create an AMI from the Instance:


o Once your EC2 instance is configured and ready, go to the EC2 dashboard in
the AWS Management Console.
o In the Instances section, right-click the instance you want to use and select
Create Image.
o Give the image a name and optional description, and decide whether to
include any additional EBS volumes.
o Click Create Image, and AWS will create an AMI from your instance. This
process will take a few minutes.

3. Use the Custom AMI to Launch New Instances:


o After the AMI is created, you can launch new EC2 instances from the AMI.
o Go to AMIs in the EC2 dashboard, select your custom AMI, and click
Launch.
o You can now configure the instance type, network settings, storage, and
security settings as you would with any other EC2 instance.

4. Manage the Custom AMI:


o You can modify, share, or delete your AMI as needed. For example, you can
share the AMI with other AWS accounts or make it public if you want to allow
others to use it.
o Keep in mind that AMIs are region-specific, so you’ll need to copy the AMI to
other regions if you want to use it elsewhere.
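
The console steps above can also be scripted. Below is a minimal boto3 sketch of steps 2 and 3; the instance ID, AMI name, and instance type are placeholders for whatever you configured in step 1.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 2: create an AMI from an already-configured instance (placeholder ID).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-server-baseline-v1",
    Description="Amazon Linux with web stack pre-installed",
    NoReboot=False,   # allow a reboot so the snapshot is file-system consistent
)
ami_id = image["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[ami_id])

# Step 3: launch new, identically configured instances from the custom AMI.
ec2.run_instances(ImageId=ami_id, InstanceType="t3.micro", MinCount=1, MaxCount=2)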

Why Use a Custom AMI?

 Consistency: Ensures consistency in environments across EC2 instances.


 Automation: Useful in auto-scaling groups where new instances need to be spun
up quickly and with the same configuration.
 Efficiency: Reduces the time spent on manual configuration by pre-installing and
configuring software on instances.

Virtual Private Cloud (VPC)

A VPC is a virtual network in AWS that allows you to define a logically isolated network
within the AWS cloud. You can control aspects like IP address ranges, subnets, route
tables, and network gateways, giving you complete control over your network
environment.

Key Components of a VPC:

1. Subnets:
o A VPC is divided into subnets, each belonging to a specific Availability Zone
(AZ). You can create public subnets for resources that need to be accessed
from the internet and private subnets for resources that should not be
directly accessible.

2. Route Tables:
o Route tables control the routing of traffic between subnets and external
networks, such as the internet or other VPCs. A default route table is created
when the VPC is created, but you can customize it.

3. Internet Gateway (IGW):


o This is used to allow communication between resources in your VPC and the
internet. A VPC must have an attached internet gateway to access the
internet.

4. NAT Gateway:
o In a private subnet, you can use a NAT Gateway or NAT Instance to allow
outbound internet traffic for private resources (like EC2 instances in private
subnets), without exposing them to incoming traffic from the internet.

5. Security Groups & Network ACLs:


o Security Groups: These are stateful firewalls that control inbound and
outbound traffic to instances. They are attached to EC2 instances and
control access at the instance level.
o Network ACLs: These are stateless firewalls that control inbound and
outbound traffic at the subnet level. They are often used for additional
security.

6. VPC Peering:
o VPC peering allows communication between instances in different VPCs
within the same region or across different regions, while maintaining
network isolation.

7. VPN Connection:
o A VPN connection allows you to securely connect your on-premises network
to your VPC.

Deploy Application in Custom VPC Using Best Practices

When deploying an application in a custom VPC, there are several best practices to
follow to ensure security, scalability, and high availability. Below are some
recommended steps and guidelines for deploying an application in a custom VPC.

Best Practices for VPC Design:

1. Separate Public and Private Subnets:


o Create public subnets for resources that need to be exposed to the
internet, such as load balancers or web servers.
o Use private subnets for resources that do not need direct internet access,
like application servers, databases, and backend services.

2. Use a Multi-AZ Architecture for High Availability:


o For fault tolerance, deploy your application across multiple Availability
Zones. Distribute your instances between different subnets in different AZs
to ensure that your application remains available even if one AZ experiences
issues.

3. Configure a Bastion Host (Jump Box):


o For secure administrative access to your EC2 instances in private subnets,
use a bastion host in a public subnet. This allows you to SSH/RDP into the
private instances securely by connecting through the bastion host.

4. Use Elastic Load Balancers (ELB):


o Place an Application Load Balancer (ALB) or Network Load Balancer
(NLB) in your public subnet to distribute traffic across multiple EC2
instances. This ensures that traffic is balanced evenly and improves fault
tolerance by rerouting traffic in case of instance failure.

5. Use Security Groups and Network ACLs:


o Security Groups: Use security groups to control inbound and outbound
traffic to your instances. For example, allow only HTTP/HTTPS traffic to your
web servers and restrict access to your database servers.
o Network ACLs: Use network ACLs to provide an additional layer of security
at the subnet level. Define rules to allow or deny traffic based on IP address
or port.

6. Implement Auto Scaling:


o Set up Auto Scaling Groups (ASG) to automatically scale your EC2
instances based on demand. This is particularly useful for handling traffic
spikes or reducing costs during low-demand periods.

7. Enable VPC Flow Logs:


o Enable VPC Flow Logs to capture and analyze traffic patterns in your VPC.
This helps in troubleshooting connectivity issues and provides visibility into
network security.

8. Use a NAT Gateway for Private Subnets:

o Ensure that your private instances can still reach the internet (e.g., for
updates or third-party services) without exposing them directly to the
internet by configuring a NAT Gateway in a public subnet.

9. Control Access Using IAM:


o Use IAM roles and policies to control access to resources within the VPC. For
instance, restrict which users or services can launch or terminate EC2
instances in the VPC.

10. Backup and Disaster Recovery:


o Use AWS Backup or Snapshots to back up critical data and configurations.
o Ensure that your VPC architecture is designed to quickly recover from failure
by replicating resources and data across Availability Zones.

Example VPC Deployment for a Web Application:

1. VPC Creation: Create a VPC with CIDR block 10.0.0.0/16.


2. Public Subnet: Create a public subnet (10.0.1.0/24) in AZ1 for the web servers
and load balancers.
3. Private Subnet: Create a private subnet (10.0.2.0/24) in AZ2 for application
servers.
4. Internet Gateway: Attach an Internet Gateway (IGW) to the VPC.
5. NAT Gateway: Place a NAT Gateway in the public subnet to enable internet
access for resources in the private subnet.
6. Load Balancer: Set up an Application Load Balancer in the public subnet to
distribute traffic to EC2 instances in the private subnet.
7. EC2 Instances: Launch EC2 instances in the private subnet with your application
stack.
8. Security: Use security groups to control inbound traffic to the load balancer and
application instances.
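
Steps 1-5 of this plan could be started with a few API calls. The following boto3 sketch uses the CIDR blocks listed above; the Region, Availability Zone names, and all IDs are placeholders, and the load balancer, EC2, and security group steps are omitted for brevity.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. VPC with the 10.0.0.0/16 block.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2-3. Public subnet in one AZ, private subnet in another.
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)["Subnet"]["SubnetId"]

# 4. Internet Gateway plus a default route so the public subnet can reach the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=public_rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=public_rt, SubnetId=public_subnet)

# 5. NAT Gateway in the public subnet gives private instances outbound-only internet access.
allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]
nat = ec2.create_nat_gateway(SubnetId=public_subnet, AllocationId=allocation_id)
print(vpc_id, public_subnet, private_subnet, nat["NatGateway"]["NatGatewayId"])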

In Summary:

 Custom AMIs: Allow you to create reusable and consistent environments for
deploying EC2 instances, saving time on instance setup and configuration.
 VPC: Provides a highly customizable and secure network environment where you
can isolate your application resources.
 Best Practices: Follow security and scalability best practices when deploying
applications, including using separate subnets, auto-scaling, and proper security
configurations.

This architecture allows you to deploy a scalable, secure, and high-performance application within AWS using a custom VPC.


Experiment 12

Aim: To study Elastic Load Balancer (ELB), Application Load Balancer (ALB) &
Network Load Balancer (NLB).

Elastic Load Balancer (ELB) Overview

Elastic Load Balancer (ELB) is a fully managed service by AWS that automatically
distributes incoming application traffic across multiple targets, such as EC2 instances,
containers, or IP addresses. ELB improves the availability and fault tolerance of your
application by ensuring that traffic is evenly distributed and that no single resource is
overwhelmed by too much traffic. There are three primary types of load balancers within
ELB, each designed to handle specific use cases: Application Load Balancer (ALB),
Network Load Balancer (NLB), and the older Classic Load Balancer (CLB)
(though it is now being phased out in favor of ALB and NLB).

Application Load Balancer (ALB)

Application Load Balancer (ALB) operates at the application layer (Layer 7) of the OSI model, making it ideal for HTTP and HTTPS traffic. ALB is designed for more complex routing decisions based on content within the request, making it perfect for web applications, microservices, and containers.

Key Features of ALB:

1. Content-Based Routing:
o ALB can route traffic based on URL paths (e.g., /api/* or /images/*) or host
headers (e.g., www.example.com vs. api.example.com), enabling you to
direct traffic to different application services or containers based on request
content.

2. Support for WebSockets and HTTP/2:


o ALB supports WebSockets for real-time communication and HTTP/2 for
improved performance with multiplexed requests.

3. SSL/TLS Termination:
o ALB can handle SSL/TLS termination, offloading the encryption/decryption
work from backend instances, improving performance and simplifying
certificate management.

4. Routing to Multiple Targets:


o You can route traffic to different targets based on various conditions,
including path-based or host-based routing. For example, traffic to /api could
be routed to one set of servers, while traffic to /admin goes to a different set.

5. Container and Microservices Support:


o ALB works well with containerized applications, particularly those using
Amazon ECS (Elastic Container Service) or EKS (Elastic Kubernetes Service),
by automatically registering and deregistering containers in response to
changes in the container environment.

6. Health Checks:
o ALB regularly performs health checks on the registered targets and ensures
that traffic is only routed to healthy instances, improving reliability.

Use Cases for ALB:

 Web Applications: Ideal for complex web applications requiring content-based routing or routing to microservices.
 Microservices Architectures: Perfect for directing traffic to multiple
microservices or containerized applications.
 Mobile App Backends: Useful for routing traffic to backends based on specific
conditions (like APIs).
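
To make the content-based routing idea concrete, here is a minimal boto3 sketch that forwards /api/* requests to a dedicated target group while all other traffic keeps using the listener's default action. The listener and target group ARNs are placeholders for resources created separately.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Path-based rule on an existing ALB listener (ARNs are placeholders).
elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "listener/app/my-alb/abcdef1234567890/1234567890abcdef"
    ),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/api-servers/0123456789abcdef"
        ),
    }],
)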

Network Load Balancer (NLB)

Network Load Balancer (NLB) operates at the transport layer (Layer 4) of the OSI model, handling TCP and TLS/SSL traffic. NLB is designed for extremely high performance and low latency, making it suitable for applications that require fast, low-latency traffic routing, such as gaming, IoT, or real-time communications.

Key Features of NLB:

1. High Throughput and Low Latency:


o NLB is optimized for handling millions of requests per second, providing low-
latency and high-throughput performance even under heavy traffic loads.

2. TCP and TLS/SSL Termination:


o While ALB handles HTTP/HTTPS traffic, NLB supports TCP and TLS/SSL
termination, making it suitable for a broader range of applications, especially
those that require secure and fast transmission.

3. Static IP and Elastic IP Support:


o NLB allows you to assign static IP addresses to your load balancer, making
it useful for scenarios where IP address stability is required. You can also
associate Elastic IPs to your NLB.

4. Health Checks:
o NLB also performs health checks on backend targets and will route traffic
only to healthy targets, improving fault tolerance.

5. High Availability:
o NLB automatically scales to handle traffic spikes, and it’s designed to be
highly available, capable of maintaining consistent performance under large-
scale, high-traffic conditions.

6. Supports IP Targeting:
o NLB can route traffic directly to any IP address, meaning it can be used with
instances, containers, or even resources outside of AWS if necessary.

Use Cases for NLB:

 Real-Time Applications: Suitable for applications that require extremely low latency, like financial trading platforms, gaming servers, or IoT systems.
 TCP/UDP Traffic: Ideal for applications that rely on raw TCP/UDP protocols (like
email or database services) rather than HTTP-based traffic.
 Hybrid Environments: If you need to route traffic to non-AWS resources or on-
premises servers using IP addresses, NLB can be used.
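
A minimal boto3 sketch of creating an internet-facing NLB with a TCP listener is shown below; the subnet and VPC IDs are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Network Load Balancer operating at Layer 4 (subnet IDs are placeholders).
nlb = elbv2.create_load_balancer(
    Name="lab-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# TCP target group; TargetType="ip" allows registering IP addresses directly.
tg = elbv2.create_target_group(
    Name="tcp-443-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)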

Comparison of ALB vs NLB

Feature | Application Load Balancer (ALB) | Network Load Balancer (NLB)
Layer | Application Layer (Layer 7) | Transport Layer (Layer 4)
Protocols | HTTP, HTTPS, WebSockets, HTTP/2 | TCP, TLS (SSL), UDP
Routing | Content-based routing (path, host, query string) | Route based on IP protocol and port
Use Case | Web applications, microservices, HTTP APIs | Low-latency applications, TCP/UDP traffic
SSL Termination | Yes, SSL/TLS termination supported | Yes, SSL/TLS termination supported
Performance | Moderate (HTTP/S traffic) | High (low-latency, high throughput)
Target Types | EC2 instances, containers, Lambda functions | EC2 instances, IP addresses (external)
Health Checks | Yes, at the application level (HTTP) | Yes, at the transport level (TCP)

Classic Load Balancer (CLB) (Deprecated)

The Classic Load Balancer (CLB) was the original ELB service offered by AWS. While
still available for backward compatibility, it's now considered legacy, and AWS
recommends using ALB and NLB for new deployments due to their advanced features,
such as content-based routing, SSL termination, and better scalability.

Key Features of CLB:

 Supports both HTTP/HTTPS and TCP traffic, but with fewer advanced features than
ALB and NLB.
 Does not support advanced routing options like content-based routing,
WebSockets, or HTTP/2.
 Limited to EC2 instances as targets, unlike ALB and NLB, which can handle
containers, IP addresses, and Lambda functions.

When to Use Which Load Balancer?

1. Use ALB when:


o You need to perform content-based routing (e.g., route requests based on
URL path or host headers).
o You are deploying web applications or microservices that rely on
HTTP/HTTPS.
o You need SSL/TLS termination at the load balancer level to offload the SSL
processing from your backend servers.
o You are working with containers (ECS/EKS) and need dynamic scaling.

2. Use NLB when:


o You need high performance and low latency for TCP/UDP applications.
o You are working with applications that require static IPs or Elastic IPs.

o You need to handle millions of requests per second with very low latency,
such as IoT, real-time communications, or gaming apps.
o You are dealing with applications that don’t rely on HTTP/HTTPS traffic (e.g.,
databases or email servers).

3. Use CLB when:


o You are supporting legacy applications that require basic load balancing
but don’t need the advanced features offered by ALB or NLB.

Conclusion:

Choosing between ALB and NLB depends largely on your application's needs:

 ALB is perfect for HTTP/HTTPS traffic with more complex routing requirements and
supports modern web apps, APIs, and microservices.
 NLB is the go-to for ultra-low-latency, high-performance applications that deal
with TCP, TLS, or UDP traffic, and where maintaining a static IP is critical.

Experiment 13

Aim: To study Identity and Access Management (IAM), Well-Architected Framework & AWS CloudWatch and SNS.

Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) is a service that helps you securely
control access to AWS services and resources. It allows you to manage users, groups,
roles, and policies to enforce granular access control within your AWS environment.

Key Features of IAM:

1. Users:
o IAM Users are entities that you create in your AWS account to represent
individual people or services. You can assign permissions to users to allow or
deny access to specific AWS resources.

2. Groups:
o IAM Groups are collections of IAM users. You can assign permissions to a
group, and those permissions are automatically granted to all members of
that group.

3. Roles:
o IAM Roles are similar to IAM users but are meant to be assumed by trusted
entities like users, applications, or AWS services. Roles are ideal for
delegating permissions to AWS services or EC2 instances.

4. Policies:
o IAM Policies define permissions (what actions are allowed or denied) on
AWS resources. Policies are written in JSON format and can be attached to
users, groups, or roles to determine what AWS resources and actions they
can access.

5. Multi-Factor Authentication (MFA):


o To enhance security, IAM supports Multi-Factor Authentication (MFA),
which requires users to provide an additional authentication factor (like a
smartphone or hardware device) in addition to their username and
password.

6. Temporary Security Credentials:


o IAM roles allow the use of temporary security credentials, which are often
used for cross-account access or when AWS services assume roles (e.g., EC2
instances needing access to S3).

7. Granular Access Control:


o IAM enables you to control access at a very granular level, specifying which
API actions a user can perform, what resources they can access, and what
actions they can perform on those resources.

8. IAM Federation:
o IAM allows you to integrate with existing identity providers (e.g., Active
Directory, Google Apps) for Single Sign-On (SSO), enabling you to use
corporate credentials for AWS access.

Best Practices for IAM:

 Least Privilege: Always assign the minimum permissions required for users,
groups, and roles to perform their tasks.
 Use Groups: Assign permissions at the group level rather than individually to
users for easier management.
 Enable MFA: Use MFA for all IAM users, especially for users with elevated
privileges.
 Rotate Keys Regularly: Regularly rotate access keys and secrets, and avoid
embedding them in application code.
 Use IAM Roles for EC2: Instead of embedding AWS credentials in EC2 instances,
use IAM roles to grant temporary access to AWS resources.
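
The least-privilege idea can be expressed directly in a policy document. The sketch below (boto3) creates a read-only policy scoped to a single S3 bucket and attaches it to a group; the policy, group, and bucket names are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Least privilege: read-only access to one bucket and nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::lab-reports-bucket",
            "arn:aws:s3:::lab-reports-bucket/*",
        ],
    }],
}

policy = iam.create_policy(
    PolicyName="LabReportsReadOnly",
    PolicyDocument=json.dumps(policy_document),
)

# Attach at the group level so every member inherits the same permissions.
iam.attach_group_policy(
    GroupName="lab-students",
    PolicyArn=policy["Policy"]["Arn"],
)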

Well-Architected Framework

The AWS Well-Architected Framework is a set of best practices and guidelines designed to help you build secure, high-performing, resilient, and efficient infrastructure for your applications. It provides a consistent approach to evaluating architectures and implementing cloud-based solutions that meet AWS best practices.

The Five Pillars of the Well-Architected Framework:

1. Operational Excellence:
o Focuses on operations in the cloud, covering monitoring, automation,
incident response, and evolving procedures over time to improve the
reliability of applications.
o Key practices: Continuous monitoring, automation, feedback loops, incident
management.

2. Security:
o Ensures that your data, systems, and assets are protected against
unauthorized access or compromise.
o Key practices: Identity and access management, data protection, threat
detection, incident response.

3. Reliability:
o Ensures a system can recover from failures and meet customer expectations
in terms of uptime and performance.
o Key practices: Fault tolerance, recovery planning, scaling, availability, and
monitoring.

4. Performance Efficiency:
o Focuses on using cloud resources efficiently and adapting to changes in
demand or technological advancements over time.
o Key practices: Right-sizing resources, managing capacity, and improving
performance over time.

5. Cost Optimization:

o Ensures that you are not overspending on cloud resources and that you're
maximizing the value from your AWS services.
o Key practices: Cost monitoring, resource optimization, scaling efficiently, and
using appropriate pricing models (e.g., reserved instances).

How to Apply the Well-Architected Framework:

 Use the AWS Well-Architected Tool: AWS provides a tool to assess your
workloads against these best practices, helping you understand the gaps in your
architecture.
 Adopt Continuous Improvement: The framework emphasizes a continuous
improvement process, encouraging you to regularly review and enhance your
cloud architecture based on evolving best practices and business needs.

AWS CloudWatch and SNS

AWS CloudWatch

Amazon CloudWatch is a monitoring and observability service that provides real-time insights into your AWS resources and applications. It allows you to collect and track metrics, collect and monitor log files, and set alarms to take action based on metrics or logs.

Key Features of AWS CloudWatch:

1. Metrics:
o CloudWatch collects metrics from various AWS services (like EC2, RDS,
Lambda, etc.), including CPU utilization, network traffic, disk I/O, and other
performance indicators.
o You can create custom metrics for applications or systems that are not AWS-
native.

2. Logs:
o CloudWatch Logs allow you to store and monitor log files, including logs from
EC2 instances, Lambda functions, or any custom logs from applications.
o You can set up log groups and log streams to organize and track logs
efficiently.

3. Alarms:
o You can set CloudWatch Alarms to notify you when a metric crosses a
specified threshold (e.g., when CPU utilization exceeds 80% for an EC2
instance).
o Alarms can trigger automated actions, like scaling up EC2 instances, sending
notifications, or invoking AWS Lambda functions.

4. Dashboards:
o CloudWatch Dashboards allow you to visualize metrics from different AWS
services in a single, customizable view. Dashboards provide insights into the
health and performance of your infrastructure in real time.

5. Events:
o CloudWatch Events allows you to monitor and respond to state changes in
your AWS resources. You can set up rules that react to changes such as EC2
instance state changes or autoscaling events.
o CloudWatch Events is often used to automate responses to various changes
in your AWS environment.

6. CloudWatch Insights:
o CloudWatch Logs Insights is a fully integrated, interactive, and fast query
engine for CloudWatch Logs that allows you to search, analyze, and visualize
log data in real time.

Amazon Simple Notification Service (SNS)

Amazon SNS is a fully managed service that allows you to send messages or
notifications to subscribers or other systems via multiple protocols like email, SMS, or
HTTP. It's a highly scalable messaging service for distributed systems, mobile
applications, and microservices.

Key Features of SNS:

1. Topics:
o A topic is a communication channel that allows publishers to send messages
to multiple subscribers. You can configure SNS topics to send messages to
different types of subscribers (e.g., email, Lambda, SQS, HTTP/S endpoints).

2. Multiple Protocols:
o SNS supports several notification protocols, such as:
 Email: Send notifications to one or more email addresses.
 SMS: Send text message notifications to users’ mobile phones.
 Lambda: Invoke AWS Lambda functions to process the message.
 HTTP/HTTPS: Send messages to web servers via HTTP/S endpoints.
 SQS: Send messages to Simple Queue Service (SQS) queues for
further processing.

3. Message Filtering:
o SNS supports message filtering, allowing subscribers to only receive
specific types of messages based on filtering criteria (e.g., you could filter
notifications for certain types of events).

4. Durability and Scalability:


o SNS guarantees high availability and durability, with messages replicated
across multiple AWS availability zones. It also automatically scales to handle
millions of messages.

5. Push Notifications:
o SNS can be used for mobile push notifications to apps on devices like
Android, iOS, and Fire OS.

Integrating CloudWatch and SNS:

CloudWatch and SNS are often used together for alerting and automated responses.
For example:

 You can create a CloudWatch Alarm that monitors EC2 instance health (e.g.,
high CPU usage).
 When the alarm triggers, it can send a notification via SNS to inform
administrators via email, or it can invoke a Lambda function for automated
remediation (e.g., restarting the EC2 instance).

This integration helps you automate monitoring, alerting, and troubleshooting, ensuring
a more proactive approach to managing your AWS resources.
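
The alarm-to-notification flow just described could look like the following boto3 sketch; the instance ID and e-mail address are placeholders (the address must confirm the SNS subscription before it receives alerts).

import boto3

sns = boto3.client("sns", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# SNS topic plus an email subscriber.
topic_arn = sns.create_topic(Name="ec2-cpu-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="admin@example.com")

# Alarm: average CPU above 80% for two consecutive 5-minute periods
# publishes a message to the topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-on-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)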

Summary:

 IAM: Manage access to AWS resources by defining users, roles, and policies with
fine-grained permissions.
 Well-Architected Framework: AWS best practices across five pillars
(Operational Excellence, Security, Reliability, Performance Efficiency, Cost
Optimization) to ensure well-architected, efficient, and resilient applications.
 CloudWatch: Monitor AWS resources and applications with metrics, logs, alarms,
and dashboards to ensure everything is functioning correctly.
 SNS: A messaging service to send notifications or messages to multiple endpoints
(email, SMS, Lambda, etc.) for system alerts or communication.


Experiment 14

Aim: To study AWS CloudFront, Auto Scaling & AWS Route 53.

AWS CloudFront

Amazon CloudFront is AWS's content delivery network (CDN) service that helps you
deliver content (like websites, videos, APIs, or other web assets) to users with low
latency and high transfer speeds. CloudFront caches content at edge locations around
the world, which are strategically placed near your users to minimize latency.

Key Features of AWS CloudFront:

1. Global Distribution:
o CloudFront has a vast network of edge locations around the world. When a
user makes a request, CloudFront serves the content from the nearest edge
location, reducing latency and improving user experience.

2. Caching and Performance Optimization:


o CloudFront caches content at edge locations, reducing the load on your
origin servers. Cached content can be static assets (images, videos, CSS,
JavaScript) or dynamic content (e.g., personalized pages or API responses).
o It supports TTL (Time To Live) settings to control how long cached content
remains at the edge before being refreshed.

3. Customizable Content Delivery:


o You can create multiple cache behaviors to route different types of content
to different origins (e.g., an S3 bucket, EC2 instance, or custom origin).

o CloudFront allows for URL query string forwarding, cookie handling, and
more granular caching strategies based on request headers or other
variables.

4. Secure Content Delivery:


o CloudFront integrates with AWS Shield for DDoS protection and AWS WAF
(Web Application Firewall) to protect against common web attacks like SQL
injection or cross-site scripting (XSS).
o You can use SSL/TLS to encrypt content delivered through CloudFront,
ensuring secure data transfer.
o Field-level encryption is available to protect sensitive data within your
application.

5. Edge Computing with Lambda@Edge:


o Lambda@Edge allows you to run code closer to users by executing Lambda
functions at CloudFront edge locations. This can be used for content
customization, security enhancements, or even complex transformations.

6. Origin Failover:
o CloudFront supports automatic origin failover. If one origin (e.g., an S3
bucket) becomes unavailable, CloudFront can automatically switch to a
secondary origin, ensuring continuous content delivery.

7. Real-Time Analytics:
o CloudFront provides detailed real-time reports and analytics on cache
hit/miss rates, bandwidth usage, request/response performance, and other
performance metrics. This helps you optimize delivery and troubleshoot
issues.

Use Cases for AWS CloudFront:

 Website and Application Acceleration: Improve the performance and speed of your websites and applications by caching content at the edge.
 Video Streaming: Deliver high-quality streaming videos to users with minimal
latency.
 API Gateway and Dynamic Content: Accelerate API requests and dynamic
content delivery by caching API responses at the edge.
 Security: Protect your applications with DDoS protection, SSL encryption, and
Web Application Firewall (WAF).
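
As a rough sketch, a distribution in front of an S3 origin could be created with boto3 as below; the bucket domain name and comment are placeholders, and a real deployment would usually add WAF, logging, and certificate settings on top of this minimal configuration.

import time
import boto3

cloudfront = boto3.client("cloudfront")

config = {
    "CallerReference": str(time.time()),   # any unique string per request
    "Comment": "Lab CDN distribution",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "lab-s3-origin",
            "DomainName": "lab-static-site.s3.amazonaws.com",   # placeholder bucket
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "lab-s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",   # force HTTPS for viewers
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "MinTTL": 0,
    },
}

dist = cloudfront.create_distribution(DistributionConfig=config)
print(dist["Distribution"]["DomainName"])   # e.g. d1234abcd.cloudfront.net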

Auto Scaling

AWS Auto Scaling is a service that allows you to automatically adjust the number of
computing resources (e.g., EC2 instances) to meet changing demand. With Auto Scaling,
you can ensure your application always has the right amount of resources without over-
provisioning or under-provisioning.

Key Features of AWS Auto Scaling:

1. Dynamic Scaling:
o Auto Scaling can dynamically scale your resources based on real-time
demand. For example, if your web server traffic spikes, Auto Scaling can
automatically launch new EC2 instances to handle the increased load.
Conversely, it can terminate instances when demand decreases.

2. Scheduled Scaling:
o You can schedule scaling actions in advance based on known patterns of
demand (e.g., scale up during business hours and scale down during off-
hours).

3. Elastic Load Balancer (ELB) Integration:


o Auto Scaling works seamlessly with Elastic Load Balancer (ELB). When
Auto Scaling launches or terminates EC2 instances, the ELB automatically
adjusts to route traffic to healthy instances.

4. Health Checks:
o Auto Scaling regularly performs health checks on your EC2 instances. If an
instance is deemed unhealthy, it can be automatically replaced with a new
one, ensuring that only healthy instances are serving traffic.

5. Scaling Policies:
o You can create scaling policies to define when to scale in or out. Policies
can be based on various metrics, such as CPU utilization, memory usage,
network traffic, or custom CloudWatch metrics.

6. Multiple Resource Types:


o You can use Auto Scaling with multiple AWS resource types, such as EC2
instances, EBS volumes, and even DynamoDB tables. For example, you
can scale both your EC2 instances and DynamoDB read/write capacity to
match demand.

7. Predictive Scaling:
o AWS Auto Scaling also includes predictive scaling, which uses machine
learning to predict demand and scale your resources in advance. This can be
particularly helpful for workloads with fluctuating or cyclical usage patterns.

Benefits of AWS Auto Scaling:

 Cost Efficiency: By scaling resources only when needed, Auto Scaling helps you
avoid paying for unused capacity.
 High Availability: Ensures that your application always has sufficient capacity to
handle traffic spikes, improving availability.
 Simplified Management: Auto Scaling automatically handles scaling, reducing
the need for manual intervention and ensuring your infrastructure can adapt to
changes in traffic patterns.

Use Cases for Auto Scaling:

 Web and Application Servers: Automatically scale the number of EC2 instances
based on website traffic.
 Batch Processing: Scale EC2 instances or container clusters up and down for
processing batch jobs based on workload.
 Databases: Adjust the read/write capacity of DynamoDB based on workload
demands.
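
A scaling policy is often easiest to express as target tracking. The boto3 sketch below keeps average CPU near 50% for an existing Auto Scaling group; the group and policy names are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: the group adds or removes instances so that average
# CPU utilization stays close to the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="keep-cpu-around-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)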

AWS Route 53

Amazon Route 53 is a scalable and highly available Domain Name System (DNS)
web service designed to route end-user requests to endpoints (such as websites,
applications, or other AWS services). It provides DNS services, domain registration, and
routing policies to manage how your users access your resources.

Key Features of AWS Route 53:

1. Domain Registration:
o Route 53 allows you to register domain names directly through AWS. It
integrates with other AWS services and provides easy management for your
domains.

2. DNS Management:
o Route 53 provides reliable DNS resolution by mapping domain names to IP
addresses of servers. It offers features like:
 A Records (Address Records): Map domain names to IPv4
addresses.
 CNAME Records (Canonical Name Records): Map subdomains to
other domain names.
 MX Records (Mail Exchange Records): Route email traffic.
 TTL (Time To Live): Define how long DNS records are cached by
resolvers.

3. Health Checks and Monitoring:


o Route 53 can monitor the health of your resources (e.g., web servers or EC2
instances) through health checks. If a resource becomes unavailable, Route
53 can reroute traffic to healthy endpoints to ensure availability.

4. Routing Policies:
o Route 53 offers multiple routing policies to control how DNS queries are
answered:
 Simple Routing: Route traffic to a single resource.
 Weighted Routing: Route traffic to different resources based on
assigned weights.
 Latency-Based Routing: Route traffic to the resource with the lowest
latency based on the user’s location.
 Geo DNS Routing: Route traffic based on geographic location.
 Failover Routing: Route traffic to a secondary resource in case the
primary one becomes unavailable.

5. Traffic Flow:
o Route 53 provides a visual interface for setting up routing rules with its
Traffic Flow feature. It simplifies complex routing configurations for
different types of traffic.

6. DNS Failover:
o Route 53 can automatically reroute traffic to a backup resource in case the
primary resource becomes unavailable, ensuring high availability for your
applications.

7. Integration with AWS Services:


o Route 53 integrates with other AWS services like Elastic Load Balancer
(ELB), CloudFront, and S3, making it easier to manage DNS for your AWS-
based resources.

Benefits of AWS Route 53:

 High Availability and Scalability: Route 53 uses a highly reliable and scalable
infrastructure to ensure consistent and fast DNS resolution.
 Traffic Routing Flexibility: Route 53’s advanced routing policies allow you to
optimize traffic routing based on performance, geographical location, or health of
your resources.
 Easy Management: As a fully managed service, Route 53 eliminates the need for
managing DNS infrastructure manually.

Use Cases for AWS Route 53:

 Website Hosting: Manage the DNS for your website by mapping domain names
to your web servers or CloudFront distributions.
 Failover and Disaster Recovery: Set up DNS failover to reroute traffic to a
backup resource if the primary one goes down.
 Global Applications: Use latency-based or geo-based routing to serve users from
the closest data center, improving performance.
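
As an example of a routing policy, the boto3 sketch below upserts two weighted A records that split traffic roughly 80/20 between two endpoints; the hosted zone ID, domain name, and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, weight, ip):
    # One weighted A record; Route 53 answers queries in proportion to Weight.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # placeholder hosted zone
    ChangeBatch={"Changes": [
        weighted_record("primary-fleet", 80, "203.0.113.10"),
        weighted_record("canary-fleet", 20, "203.0.113.20"),
    ]},
)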

Summary:

 AWS CloudFront: A content delivery network (CDN) that caches content at edge
locations around the world to reduce latency and improve performance.
 Auto Scaling: Automatically adjusts the number of EC2 instances or other
resources based on demand to ensure optimal performance and cost-efficiency.
 AWS Route 53: A scalable DNS and domain management service that provides
advanced routing, health checks, and failover capabilities to ensure high
availability and low-latency access for your applications.

These services together allow you to build highly available, scalable, and performant
applications that can automatically adjust to changing traffic patterns while ensuring
your users have a seamless experience.


