Lab Manual: Cloud Computing
CS102691
Experiment 1
Aim: Cloud computing overview - What is Infrastructure as a Service (IaaS), Platform as
a Service (PaaS), and Software as a Service (SaaS)?
1. Infrastructure as a Service (IaaS)
Definition: IaaS provides virtualized computing resources over the internet. It offers the
basic infrastructure components like virtual machines, storage, and networking that
users can scale up or down based on demand.
Key Features:
On-demand, self-service provisioning of compute, storage, and networking
Pay-as-you-go pricing with elastic scaling
The user manages the OS and applications; the provider manages the hardware
Examples:
Amazon EC2
Microsoft Azure Virtual Machines
Google Compute Engine
Use Case: IaaS is great for businesses that need flexible infrastructure to support
applications and data without investing in physical hardware. For instance, hosting
websites, running virtual machines, or building disaster recovery environments.
2. Platform as a Service (PaaS)
Definition: PaaS provides a platform allowing developers to build, deploy, and manage
applications without dealing with the underlying infrastructure (like virtual machines or
storage). It typically includes tools for development, testing, and deployment, all in one
package.
Key Features:
Managed runtime, middleware, and development tools
Built-in deployment, testing, and scaling capabilities
Developers focus on code; the provider manages servers and operating systems
Examples:
Heroku
Google App Engine
Microsoft Azure App Services
Use Case: Ideal for developers who want to focus on writing code and developing
applications without managing hardware or operating systems. For example, building
web applications, APIs, or mobile backend services.
3. Software as a Service (SaaS)
Definition: SaaS delivers fully functional software applications over the internet,
eliminating the need for users to install, manage, or maintain software on their devices.
These applications are hosted, managed, and updated by the service provider.
Key Features:
Accessible from any device with a browser and an internet connection
The provider handles hosting, updates, and maintenance
Typically offered on a subscription basis
Examples:
Gmail
Salesforce
Microsoft 365
Use Case: SaaS is ideal for users who need ready-to-use software without the need for
installation or management. For example, email platforms, project management tools, or
CRM (Customer Relationship Management) systems.
Experiment 2
Aim: Cloud computing models, Advantages of using cloud, AWS Global Infrastructure,
AWS Shared Responsibility Model.
Cloud computing models refer to the different types of services and deployment
approaches that organizations can choose based on their specific needs. There are three
primary models: Public Cloud, Private Cloud, and Hybrid Cloud.
1. Public Cloud
In the public cloud model, cloud services are delivered over the internet and shared
across multiple organizations. These services are owned and operated by third-party
cloud service providers. Users can access resources on a pay-as-you-go basis.
Key Features:
Shared, multi-tenant infrastructure owned and operated by a third-party provider
Pay-as-you-go pricing with virtually unlimited scalability
No hardware purchase or maintenance for the customer
2. Private Cloud
A private cloud refers to cloud resources used exclusively by one organization. It can be
hosted either on-premises or by a third-party provider. A private cloud gives more
control over resources and security.
Key Features:
Dedicated, single-tenant infrastructure
Greater control over security, compliance, and customization
Can be hosted on-premises or by a third-party provider
3. Hybrid Cloud
A hybrid cloud combines both public and private cloud models. It allows businesses to
use private cloud for sensitive workloads and public cloud for less-sensitive workloads.
Hybrid cloud models offer flexibility and optimal resource utilization.
Key Features:
Combines private and public cloud resources under one architecture
Sensitive workloads stay private while elastic workloads use the public cloud
Flexibility and optimized resource utilization
Cloud computing offers a range of benefits that make it an attractive choice for many
organizations:
1. Cost Efficiency:
Pay-as-you-go: You pay only for the resources you actually use, avoiding large
upfront investments in hardware.
2. Scalability:
Elastic capacity: Resources can be scaled up or down quickly to match demand.
3. Reliability:
High uptime: Most cloud providers offer Service Level Agreements (SLAs)
guaranteeing high availability.
Redundancy: Cloud providers often have backup systems in place to ensure
minimal service disruption.
4. Security:
Data encryption: Most cloud providers offer data encryption for both data at rest
and in transit.
Compliance: Cloud providers adhere to regulatory standards (e.g., GDPR, HIPAA),
making it easier for businesses to stay compliant.
5. Accessibility and Collaboration:
Access from anywhere: Cloud services can be accessed from any device with an
internet connection.
Real-time collaboration: Multiple users can collaborate on documents, projects,
or applications in real time.
Amazon Web Services (AWS) offers a vast global infrastructure that allows businesses to
build and deploy applications globally with high availability, low latency, and fault
tolerance. AWS infrastructure is designed to be flexible, scalable, and reliable.
Key Components:
1. Regions: AWS data centers are located in geographically distinct regions around
the world. Each region consists of multiple Availability Zones (AZs).
o Example: US East (N. Virginia), Europe (Ireland), Asia Pacific (Sydney).
2. Availability Zones (AZs): These are isolated locations within a region, designed
to ensure high availability. They are linked through low-latency, high-throughput
networks.
o Example: A region may have 3 AZs for fault tolerance.
3. Edge Locations: AWS has a network of edge locations for content delivery
through services like Amazon CloudFront. These locations are spread across the
globe, allowing faster content delivery with lower latency.
4. Local Zones and Wavelength: AWS has specialized zones designed for low-
latency applications, such as gaming, 5G networks, and edge computing.
The AWS Shared Responsibility Model defines the division of security responsibilities
between AWS and the customer. This model helps clarify who is responsible for securing
what within the cloud environment.
1. AWS Responsibility (Security "of" the Cloud): AWS is responsible for securing
the infrastructure that runs its cloud services. This includes:
o Physical security of data centers.
o Network infrastructure security.
o Hypervisor security (for virtual machines).
o Hardware and software patches and updates for AWS infrastructure.
The shared responsibility model highlights that AWS secures the foundational
infrastructure, while customers are responsible for securing the services and
applications they deploy within the cloud.
Experiment 3
Aim: To study Amazon EC2 (Elastic Compute Cloud).
Amazon EC2 (Elastic Compute Cloud) is one of the core services offered by AWS and
provides scalable computing capacity in the cloud. It allows users to run virtual
machines (VMs) called instances on demand, giving them the flexibility to quickly scale
computing resources up or down based on their needs. EC2 is widely used for a variety
of tasks, from hosting websites to running applications and big data analysis.
1. Scalability:
o Elasticity: EC2 allows users to scale their compute resources up or down
depending on demand. You can launch as many or as few instances as you
need and adjust capacity quickly.
o Auto Scaling: EC2 instances can be automatically scaled based on defined
criteria (like CPU usage or traffic), ensuring you have enough resources
when needed but also optimizing costs.
2. Flexible Instance Types:
o EC2 offers various instance types optimized for different use cases. Some of
the key instance families include:
General Purpose: Balanced resources (e.g., T3, M5).
Compute Optimized: High processing power (e.g., C5).
Memory Optimized: High memory (e.g., R5, X1e).
Storage Optimized: High I/O (e.g., I3, D2).
Accelerated Computing: Instances with GPUs (e.g., P4, G4).
3. Customizable Configuration:
o AMI (Amazon Machine Images): You can choose pre-configured AMIs for
different operating systems (e.g., Linux, Windows) or create custom AMIs to
replicate specific environments.
o VPC Integration: EC2 instances can be placed in a Virtual Private Cloud
(VPC), giving users control over network settings and ensuring isolated and
secure networking.
4. Pay-as-You-Go Pricing:
o On-Demand Instances: Pay only for what you use. No upfront costs or
long-term commitments.
o Reserved Instances: Save money by committing to a specific instance
type for a one- or three-year term (offering significant discounts).
o Spot Instances: Take advantage of unused EC2 capacity at a lower price
(but instances can be terminated by AWS with little notice if demand
increases).
o Dedicated Hosts: Physical servers dedicated to your use, useful for
compliance needs or licensing restrictions.
5. Security and Compliance:
o IAM Integration: Integrate with AWS Identity and Access Management
(IAM) to define permissions and control access to EC2 instances.
o Key Pairs: Secure SSH access to instances using key pairs for
authentication.
o Security Groups & Network ACLs: Virtual firewalls to control inbound and
outbound traffic.
o Compliance: AWS EC2 meets numerous security and regulatory standards
(e.g., HIPAA, GDPR, SOC 2).
6. Storage Options:
o EBS (Elastic Block Store): Persistent block storage for EC2 instances. You
can use EBS to store your data that needs to persist even if the instance is
terminated.
o Instance Store: Temporary storage attached to an EC2 instance (data is
lost if the instance is stopped or terminated).
o Elastic File System (EFS): A scalable, managed file storage service that
can be mounted on multiple EC2 instances.
7. Monitoring and Management:
o CloudWatch: AWS CloudWatch provides real-time monitoring of your EC2
instances and helps you set alarms based on metrics like CPU utilization,
memory, and disk I/O.
o AWS Systems Manager: Allows you to automate and manage EC2
instances, including patch management and configuration tasks.
o Elastic Load Balancing (ELB): Distributes incoming traffic across multiple
EC2 instances to ensure high availability.
Common Use Cases of EC2:
1. Web Hosting: EC2 is commonly used to host websites and web applications. You
can easily scale your instance count based on the amount of web traffic.
2. Big Data Processing: With its flexible configurations and high performance, EC2
can be used for running big data applications, including Hadoop or Spark clusters.
3. Development and Testing: EC2 provides an isolated environment for developers
to test and develop applications without affecting production systems.
4. High-Performance Computing (HPC): EC2 instances with specialized compute
capabilities (e.g., GPU instances) are used for computationally intense tasks like
scientific research or machine learning.
5. Disaster Recovery: EC2 can be part of a disaster recovery strategy, allowing you
to quickly spin up resources in the cloud if on-premises infrastructure fails.
6. Gaming: EC2 instances with GPUs are widely used for hosting gaming servers,
which require significant compute power for rendering and multiplayer interaction.
How EC2 Works:
1. Launch: You launch an EC2 instance from an AMI (Amazon Machine Image), which
defines the operating system and software environment.
2. Run: Once running, you can connect to your instance via SSH (for Linux) or RDP
(for Windows) to manage your application.
3. Monitor: Using AWS tools like CloudWatch, you can monitor metrics such as CPU
usage, disk I/O, and network activity to ensure optimal performance.
4. Scale: If your demand increases, you can either manually scale up by changing to
a larger instance type or automatically scale using Auto Scaling.
5. Terminate: When your work is done, you can stop or terminate the instance.
Stopping an instance saves costs, while terminating it ends all billing for the
instance.
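The same launch/terminate lifecycle can be driven from the AWS CLI. Below is a minimal
sketch; the AMI ID, key pair name, and instance ID are placeholders you would replace
with your own values:
# Launch one t3.micro instance from a placeholder AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --key-name MyKeyPair --count 1
# Check the instance state
aws ec2 describe-instances --instance-ids i-0123456789abcdef0
# Terminate it when done (billing for the instance stops)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0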
Benefits of EC2:
1. Flexibility: Choose from a wide variety of instance types and storage options, or
even run containerized applications using EC2 with Amazon ECS (Elastic Container
Service).
2. Cost-Effective: Pay only for the compute power you need, and benefit from
options like Reserved Instances and Spot Instances to save money.
3. High Availability and Fault Tolerance: Run instances in multiple Availability
Zones for fault tolerance and minimize downtime.
4. Global Reach: Launch instances in multiple regions and Availability Zones
worldwide for low-latency access to users.
5. Security: EC2 integrates with AWS security services like IAM, VPC, and encryption,
ensuring that your infrastructure remains secure.
6. Integration: EC2 integrates seamlessly with other AWS services like S3, RDS,
Lambda, and more to build robust and scalable applications.
In Summary:
AWS EC2 provides scalable, flexible, and cost-effective computing resources that allow
users to run virtual machines (instances) in the cloud. Whether you're running a simple
web application or conducting high-performance computing tasks, EC2 gives you the
power and resources to meet your demands. With various instance types, pricing
options, and integration with other AWS services, EC2 offers a versatile platform to
support virtually any computing need.
Experiment 4
Aim: To study the AWS Pricing Calculator.
The AWS Pricing Calculator is an online tool provided by Amazon Web Services that
allows users to estimate the costs of using AWS services based on their specific use case
and configuration. It helps businesses and developers calculate and predict the cost of
running workloads on AWS before committing to any services.
4. Customizable Configurations:
o You can fine-tune estimates by specifying things like data transfer
requirements, storage types, instance types, and more.
o The tool gives you the flexibility to model both simple and complex
environments (e.g., a single instance vs. a multi-tier architecture).
5. Cost Breakdown:
o After generating an estimate, the calculator provides a detailed cost
breakdown. It categorizes costs by service (e.g., compute, storage, data
transfer) and even provides a forecast for recurring costs.
o The output includes both monthly and annual cost estimates, helping users
understand long-term pricing implications.
6. AWS Free Tier:
o The tool can also help you identify when services are eligible for the AWS
Free Tier, which provides limited resources for free for new customers for the
first 12 months.
7. Cost Comparison:
o You can compare different pricing models (On-Demand vs. Reserved
Instances vs. Spot Instances) for services like EC2, helping you make an
informed decision about cost optimization.
8. Exporting and Sharing:
o Once you've created a cost estimate, you can export it to a CSV or PDF file
for sharing with your team or stakeholders.
o You can also share a link to the estimate for collaboration and further
refinement.
After refining your estimate, you can save it, share it with others, or export it to a
file format for documentation or reporting purposes.
Let’s walk through an example of estimating EC2 costs for a basic web application:
1. Select EC2 Service: Start by selecting Amazon EC2 from the list of services in
the AWS Pricing Calculator.
2. Choose Instance Type:
o Select the desired instance type (e.g., t3.medium for general-purpose
workloads).
3. Select Region:
o Choose the region where you’ll deploy your EC2 instances (e.g., US East (N.
Virginia)).
4. Define Number of Instances:
o Input the number of instances required for your workload (e.g., 2 instances
for redundancy).
5. Storage:
o Choose the type and amount of storage, for example, 50 GB of General
Purpose SSD (gp3) storage.
6. Additional Configurations:
o Specify data transfer requirements, such as 5 GB of outbound data per
month.
7. Review Costs:
o The calculator will provide an estimate of monthly and yearly costs,
including compute, storage, and data transfer.
8. Optimize:
o Explore Reserved Instances for a longer commitment (1 or 3 years) to see
how costs might be reduced.
9. Finalize:
o Once satisfied, you can save the estimate for further use, sharing, or
integration into a larger cost plan.
1. Cost Transparency:
o The calculator provides a detailed breakdown of potential costs, helping
businesses predict their cloud expenses more accurately.
2. Cost Optimization:
o By experimenting with different instance types, pricing models, and services,
users can identify areas for cost savings.
3. Easy Planning:
o Provides a simple way to plan for AWS usage, particularly helpful when
migrating from on-premises infrastructure or estimating cloud project
budgets.
4. Flexibility and Accuracy:
o You can configure the tool to suit complex workloads or even simple
projects, giving you accurate cost forecasts for virtually any AWS service.
In Summary:
The AWS Pricing Calculator is a powerful and essential tool for anyone looking to
estimate and optimize their cloud infrastructure costs. It provides customizable
estimates for over 200 AWS services, helping users understand potential costs, optimize
resources, and make informed decisions. Whether you're just starting with AWS or
scaling a large enterprise, the Pricing Calculator helps manage costs and avoid surprises
on your cloud bill.
Experiment 5
Aim: To create a Linux instance, connect to it using PuTTY, and implement an Apache
web server on the Linux instance.
Step-by-Step Guide: Creating a Linux EC2 Instance, Connecting via PuTTY, and
Installing Apache Web Server
Let's walk through the process of creating a Linux EC2 instance, connecting to it using
PuTTY, and setting up the Apache web server on the instance.
Before you can connect to the instance and set up Apache, you need to create an EC2
instance running a Linux-based operating system. Here's how to do that:
Go to the AWS Management Console, and log in with your AWS account
credentials.
1. Navigate to EC2: In the AWS Console, type "EC2" in the search bar and click on
EC2 under the Services tab.
2. Launch Instance: On the EC2 Dashboard, click on Launch Instance to start the
process of creating a new EC2 instance.
3. Choose an Amazon Machine Image (AMI):
o Select Amazon Linux 2 AMI (this is a commonly used Linux distribution for
EC2).
o You can also use other Linux distributions like Ubuntu, CentOS, or Red Hat if
desired.
4. Choose an Instance Type:
o Select an instance type such as t2.micro, which is eligible for the AWS Free
Tier.
5. Configure Instance Details:
o The default settings are usually sufficient for this experiment.
6. Add Storage:
o The default settings are usually sufficient. You can adjust the size if
necessary.
7. Configure Security Group:
o Add a rule allowing SSH (port 22) from your IP address, and a rule allowing
HTTP (port 80) so the web server can be reached.
8. Key Pair:
o Create a new key pair or select an existing one.
o Download the key pair file (.pem), which you'll use to securely connect to the
instance via SSH.
o Make sure to save this file securely, as you won't be able to download it
again.
After your EC2 instance is up and running, you need to connect to it using PuTTY, a
popular SSH client for Windows.
Step 1: Convert the .pem Key to .ppk with PuTTYgen
PuTTY does not support .pem files directly, so you must convert the .pem file to .ppk
using PuTTYgen.
1. Download PuTTYgen:
o If you don't have PuTTYgen, download it from the official PuTTY download
page.
2. Convert the Key:
o Open PuTTYgen, click Load, select your .pem file, and then click Save
private key to generate a .ppk file.
Step 2: Use PuTTY to Connect to the Instance
1. Open PuTTY and, in the Host Name field, enter the instance's public IP address.
2. Under Connection > SSH > Auth, browse to the .ppk file you generated.
3. Click Open and accept the security alert; log in when prompted.
Example:
o Username: ec2-user
o Password: No password; the authentication is done via the private key.
Once logged in, you'll have access to the command line of your Linux instance.
Now that you're connected to your EC2 instance, you can proceed to install and
configure the Apache web server.
Run the following command to ensure that all the package repositories are up to date:
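On Amazon Linux 2, this is typically:
sudo yum update -y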
This command updates all installed packages and the package list, ensuring that you're
installing the latest version of software.
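Next, install and start Apache (the httpd package, per the summary at the end of this
experiment):
# Install the Apache web server
sudo yum install -y httpd
# Start the service
sudo systemctl start httpd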
To ensure that Apache starts automatically when the instance reboots:
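On systemd-based distributions such as Amazon Linux 2, run:
sudo systemctl enable httpd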
If you didn't open port 80 (HTTP) in the security group earlier, you can do so now to
allow incoming traffic to the web server:
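You can do this in the console (Security Groups > Edit inbound rules) or with the AWS
CLI; in the sketch below the security group ID is a placeholder:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0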
Now, open a web browser and enter your instance's Public IP address in the URL bar:
http://<your-ec2-public-ip>
You should see the default Apache web page, which confirms that the Apache server is
running successfully.
In Summary:
Launch a Linux EC2 Instance: Using Amazon Linux or another Linux distribution.
Connect Using PuTTY: Convert your .pem file to .ppk and connect to your
instance.
Install Apache: Use the yum package manager to install and start Apache
(httpd).
Configure Security Group: Open port 80 for HTTP traffic.
Verify: Access the Apache web server using the instance's public IP.
That’s it! You've successfully created a Linux EC2 instance, connected via SSH using
PuTTY, and installed Apache Web Server. Feel free to customize your server with your
own content.
Experiment 6
Aim: To create a Windows instance and implement an IIS web server on it.
Creating a Windows EC2 instance in AWS follows a similar process to creating a Linux
instance, but with a few key differences for Windows. Here's a detailed guide to help you
through the steps:
1. Go to the AWS Management Console, and log in with your AWS account
credentials.
2. In the Services search bar, type EC2 and select EC2 under the Compute section
to open the EC2 Dashboard.
1. In the EC2 Dashboard, click Launch Instance to start the process of creating a
new EC2 instance.
2. Choose a Windows AMI (e.g., Microsoft Windows Server 2019 Base) and an
instance type (e.g., t3.medium), then proceed to configuration.
1. Configure Settings:
o Leave the default settings unless you need custom configurations (e.g.,
networking, IAM role).
o You can set Auto-assign Public IP to Enable so that your instance can be
accessed via RDP.
o If you want to place the instance in a specific Virtual Private Cloud (VPC),
configure that here.
o Click Next: Add Storage when you're ready to proceed.
1. Configure Storage:
o By default, a Windows instance will have a root volume of 30 GB. You can
increase or decrease the storage as needed.
o You can add additional EBS volumes if your application requires extra
storage.
o Once you're happy with the storage configuration, click Next: Add Tags.
Step 6: Add Tags (Optional)
1. Tagging:
o You can add tags for easier identification, such as a Name tag, where you
could name your instance (e.g., Windows-Server-1).
o Tags are optional but can help manage resources efficiently, especially in
larger environments.
1. Review Configuration:
o Review all settings, including the instance type, storage, and security group.
2. Launch the Instance:
o Click Launch to start the instance creation.
3. Create a Key Pair:
o If you don’t already have a key pair, create a new one.
o Choose Create a new key pair, give it a name (e.g., WindowsKeyPair), and
then click Download Key Pair to save the .pem file. This file is crucial for
accessing the instance.
o Keep the .pem file safe—AWS will not allow you to download it again.
o If you already have a key pair, select Choose an existing key pair and
select the one you want to use.
o Acknowledge that you have the key pair and click Launch Instances.
After launching the instance, it may take a few minutes for it to start up. Once it's
running, you can connect to it via RDP (Remote Desktop Protocol).
1. Retrieve the Administrator Password:
o In the EC2 console, select the instance, click Connect, choose the RDP
client tab, and click Get password. Upload your .pem key file to decrypt
the Administrator password.
2. RDP Client:
o Open Remote Desktop Connection (or use an RDP client on macOS or
Linux).
o In the Computer field, enter the Public IP of the instance.
3. Enter Credentials:
o In the RDP client, when prompted for credentials, enter:
Username: Administrator
Password: The decrypted password from earlier.
4. Login: Click OK to connect, and you should be logged into your Windows instance.
If you want to set up a web server on your Windows instance using IIS (Internet
Information Services), follow these steps:
1. Open Server Manager on your Windows instance (this opens automatically when
you log in).
2. In Server Manager, click on Add roles and features.
3. Click Next until you reach the Select features screen.
4. On the Select Features screen, check Web Server (IIS).
5. Click Next and follow the prompts to complete the installation.
1. After IIS is installed, you can verify the installation by opening a web browser
within the Windows instance and navigating to http://localhost.
2. You should see the default IIS page indicating that the web server is running.
1. If you want to access the IIS server from outside the instance, you need to make
sure port 80 (HTTP) is open in your Security Group.
2. Go to the EC2 Dashboard and update the Security Group to allow inbound
traffic on port 80 from your IP or from anywhere.
In Summary:
With these steps, you've successfully created a Windows EC2 instance, connected to it
using RDP, and optionally set up an IIS web server.
Experiment 7
Aim: To study the tools used for accessing and managing cloud resources.
There are several tools and methods available for accessing and managing cloud
resources, depending on the cloud provider (like AWS, Azure, Google Cloud), the type of
access (e.g., command line, graphical interface, or programmatic access), and the
specific cloud services you're using. Below are the common tools used for accessing and
managing cloud environments:
1. Web-Based Console/Portal
Most cloud providers offer a web-based console or management portal that allows
users to manage resources through a graphical user interface (GUI).
2. Command-Line Interface (CLI) Tools
Cloud providers offer CLI tools that allow users to interact with cloud services directly
from their terminal or command prompt. These are especially useful for automating
tasks, writing scripts, or when you want to avoid using a graphical interface.
AWS CLI:
o AWS CLI allows users to manage AWS services using the command line. It's
available for Windows, macOS, and Linux.
o You can perform tasks such as launching EC2 instances, managing S3
buckets, or deploying Lambda functions (see the example after this list).
o Install it from: AWS CLI
Azure CLI:
o Azure CLI allows you to manage Azure resources from the command line. It
supports Windows, macOS, and Linux environments.
o You can create, configure, and monitor Azure resources such as VMs,
databases, and networking.
o Install it from: Azure CLI
Google Cloud SDK (gcloud):
o The gcloud CLI tool is used to manage Google Cloud resources, such as
creating and managing Compute Engine instances, configuring GCP services,
etc.
o Install it from: Google Cloud SDK
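As a quick illustration of the CLI workflow (using the AWS CLI; the other CLIs follow the
same pattern), the minimal sketch below assumes the tool is installed and configured
with valid credentials:
# One-time setup: store your access key, secret key, and default region
aws configure
# List your EC2 instances
aws ec2 describe-instances
# List your S3 buckets
aws s3 ls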
3. Software Development Kits (SDKs)
For developers building applications that interact with cloud services, SDKs provide
libraries and tools to make API calls more convenient and programmatically
manageable.
AWS SDK:
o AWS provides SDKs for popular programming languages such as Python
(Boto3), Java, JavaScript, .NET, PHP, and others.
o These SDKs simplify interacting with AWS services programmatically (e.g.,
uploading files to S3, starting EC2 instances).
o AWS SDK docs: AWS SDK
Azure SDK:
o Microsoft Azure offers SDKs for various programming languages like .NET,
Python, Java, Node.js, and Go to interact with Azure services.
o Azure SDK docs: Azure SDK
Google Cloud SDK:
o Google provides SDKs for Python, Java, Go, Node.js, .NET, and other
languages to interact with Google Cloud services programmatically.
o Google Cloud SDK docs: Google Cloud SDK
4. Remote Access Tools (SSH and RDP)
For accessing virtual machines (VMs) running in the cloud, users typically use remote
access protocols.
5. Cloud Storage Access Tools
If you're accessing cloud storage services (e.g., AWS S3, Google Cloud Storage, Azure
Blob Storage), there are specific tools that make the process easier.
AWS S3 CLI:
o AWS CLI also includes commands specifically for interacting with Amazon
S3, like uploading or downloading files, listing objects, and managing
buckets.
o Example commands:
aws s3 ls (List S3 buckets)
aws s3 cp (Copy files to/from S3)
Azure Storage Explorer:
o A desktop app for accessing and managing Azure storage resources (Blob
Storage, Tables, Queues, etc.) without needing to use the portal.
o Install it from: Azure Storage Explorer
gsutil (Google Cloud Storage):
o A command-line tool that allows you to interact with Google Cloud Storage.
o Example commands:
gsutil cp (Copy files to/from Google Cloud Storage)
gsutil ls (List storage buckets)
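Putting the commands above together, here is a minimal sketch of copying a file to each
provider's storage (bucket names are placeholders):
# AWS S3: create a bucket and upload a file
aws s3 mb s3://my-demo-bucket-12345
aws s3 cp report.csv s3://my-demo-bucket-12345/
# Google Cloud Storage: upload the same file with gsutil
gsutil cp report.csv gs://my-demo-bucket/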
6. Infrastructure as Code (IaC) Tools
AWS CloudFormation:
o AWS CloudFormation allows you to define and provision AWS infrastructure
using templates written in JSON or YAML.
o You can use CloudFormation to automate the deployment of EC2 instances,
RDS databases, VPCs, and more (a minimal example follows this list).
Azure Resource Manager (ARM):
o ARM templates allow you to define and deploy Azure resources in a
declarative manner. Similar to CloudFormation but for Azure.
Terraform:
o Terraform is a cloud-agnostic IaC tool that supports multiple cloud providers,
including AWS, Azure, Google Cloud, and others. It allows you to define cloud
infrastructure in code (using HCL - HashiCorp Configuration Language) and
manage it across multiple providers.
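As a minimal illustration of the CloudFormation workflow mentioned above, the sketch
below defines a single S3 bucket in a YAML template and deploys it with the AWS CLI
(stack and bucket names are arbitrary):
# Write a one-resource template
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF
# Create or update the stack from the template
aws cloudformation deploy --template-file template.yaml --stack-name demo-stack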
7. Management and Monitoring Tools
Cloud management and monitoring tools allow you to keep track of performance, usage,
and cost in your cloud environment.
AWS CloudWatch:
o AWS CloudWatch is a monitoring service for AWS cloud resources and the
applications you run on AWS. It provides metrics, logs, and alarms.
o You can track EC2 instance performance, monitor application logs, and set
up automated alerts for thresholds like CPU usage.
Azure Monitor:
o Azure Monitor helps you collect, analyze, and act on telemetry data from
Azure resources and applications.
o It provides insights into resource health, performance, and usage.
Google Cloud Operations (formerly Stackdriver):
o Google Cloud Operations Suite provides monitoring, logging, and diagnostics
for cloud applications, helping you understand and manage the health of
your GCP resources.
8. Third-Party Management Tools
There are several third-party tools available to help manage, monitor, and optimize
cloud resources across different providers.
CloudBolt:
o Provides multi-cloud management, optimization, and cost governance across
AWS, Azure, and Google Cloud.
CloudHealth:
o A cloud management platform that helps organizations optimize cloud costs,
manage security, and track performance across multiple cloud
environments.
Datadog:
o A monitoring and analytics platform that helps you monitor cloud
infrastructure and applications in real time.
These tools provide various ways to interact with, manage, monitor, and optimize your
cloud resources. Depending on your workflow, you may use a combination of these tools
to perform tasks efficiently.
Experiment 8
Aim: To study Elastic Block Store (EBS) and Simple Storage Service (S3).
Elastic Block Store (EBS) and Simple Storage Service (S3) are two of Amazon Web
Services' (AWS) core storage offerings, but they serve different purposes.
Key Differences:
Storage Type: EBS is block storage, which is suited for file systems and
applications that need to interact with storage directly. S3 is object storage, better
for storing large amounts of unstructured data (like files, images, backups, etc.).
Performance: EBS typically provides faster access with low-latency storage for
high-performance applications, while S3 provides high scalability but may have
higher access latency for large files.
Cost: EBS tends to be more expensive per GB compared to S3, especially if you're
storing large amounts of data or using high-performance EBS volumes. S3's cost
varies depending on storage class and data retrieval patterns.
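The difference is visible even in how the two are provisioned. In the minimal AWS CLI
sketch below (IDs and names are placeholders), an EBS volume is created in a specific
Availability Zone and attached to one instance, while an S3 bucket is simply created and
written to:
# Block storage: create a 10 GiB gp3 volume and attach it to an instance
aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp3
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
# Object storage: create a bucket and upload an object
aws s3 mb s3://my-demo-bucket-12345
aws s3 cp backup.tar.gz s3://my-demo-bucket-12345/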
Experiment 9
Aim: To study Elastic File System (EFS).
Elastic File System (EFS) is another storage service offered by AWS, and it’s distinct
from both EBS and S3. Here's a breakdown of what EFS is and how it fits into the AWS
ecosystem:
What is EFS?
Elastic File System (EFS) is a fully managed, scalable file storage service that is
designed to be used with AWS cloud services and on-premises resources. It provides a
file system interface and file system semantics, meaning you can use it in much
the same way you would use a traditional file system, but it operates in the cloud. It's
designed for applications that require a shared file system, so multiple Amazon EC2
instances can access the data simultaneously.
1. Scalability:
o EFS automatically scales as you add or remove data, so you don’t need to
worry about provisioning or managing storage capacity. It grows and shrinks
as needed, without manual intervention.
2. Shared Access:
o It provides multi-attach access, meaning multiple EC2 instances (and even
across different availability zones) can access the same file system
simultaneously. This makes EFS great for use cases where you need shared
access to data from multiple instances or applications.
3. Performance:
o EFS offers two performance modes:
General Purpose: Optimized for latency-sensitive applications like
web servers or content management systems.
Max I/O: Designed for high throughput and workloads that require
more than 10,000 concurrent connections (such as big data
applications).
o It provides low-latency access and can handle large amounts of data
transfer, but is typically not as fast as EBS for single-instance performance.
4. Managed Storage:
o EFS is fully managed, so AWS handles maintenance, patching, and scaling
for you, reducing the administrative burden compared to setting up your
own file server.
5. NFS Protocol:
o EFS uses the NFS (Network File System) protocol, meaning you can
mount EFS as a network drive on EC2 instances and other resources that
support NFS, similar to how you’d mount network drives in a traditional on-
premise environment.
6. Durability and Availability:
o EFS is designed for high availability and durability, automatically replicating
data across multiple availability zones within a region, making it resilient to
failures in a single zone.
7. Security:
o EFS integrates with AWS Identity and Access Management (IAM) for
access control. It also supports encryption at rest and in transit, ensuring
your data is secure.
Content Management and Web Serving: Share files across multiple web
servers and EC2 instances.
Big Data and Analytics: Applications that need to process large amounts of data
concurrently.
Application Hosting: Applications that require shared file storage, like
development environments or custom applications.
Media Processing: EFS is often used for storing and sharing media files, as many
EC2 instances might need to access the same data.
Lift-and-Shift Applications: If you’re migrating a traditional on-premise
application that relies on a file system, EFS is a good fit because it supports the
same file system semantics that many applications require.
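Because EFS is exposed over NFS (feature 5 above), mounting it from an Amazon Linux 2
instance is a short exercise. A minimal sketch, assuming a hypothetical file system ID
fs-12345678 and a security group that allows NFS traffic (port 2049):
# Install the EFS mount helper
sudo yum install -y amazon-efs-utils
# Create a mount point and mount the file system
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs
The same file system can be mounted on many instances at once, which is what gives
EFS its shared-access behavior.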
EBS vs EFS:
o EBS is a block-level storage solution designed for single-instance use,
whereas EFS is a file system that allows shared access by multiple instances
at once. EBS is better for individual EC2 instances, while EFS is ideal for
scenarios where multiple EC2 instances need access to the same data.
EFS vs S3:
o EFS provides a file system interface and is ideal for applications requiring a
file structure and shared access. S3, on the other hand, is object storage
and is more suited for static files or unstructured data that doesn’t require
direct file system access or traditional file system features like directories
and file locking.
Cost:
EFS pricing is based on the amount of data you store in the file system, and you
pay for the storage you use each month. It also offers an EFS Infrequent Access
(IA) storage class for lower-cost storage options for files that are infrequently
accessed.
Generally, EFS is more expensive than S3 but can be more cost-effective than
using EBS for shared, scalable storage across multiple instances.
In Summary:
Use EFS when you need a scalable, shared file system that multiple EC2 instances
or other services can access concurrently.
Use EBS when you need block storage for a single EC2 instance or high-
performance needs (like databases).
Use S3 for object storage needs when you want highly scalable, low-cost storage
for static files and don’t need direct file system semantics.
Experiment 10
Aim: To study Relational Database Service (RDS), Security and Compliance concepts
& DynamoDB.
Relational Database Service (RDS)
Amazon RDS is a fully managed relational database service that allows you to easily
set up, operate, and scale relational databases in the cloud. It supports several popular
database engines, including MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and
MariaDB.
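As an illustration, a small MySQL instance can be provisioned with a single AWS CLI call.
A minimal sketch with a placeholder identifier and password (in practice, pass
credentials securely rather than on the command line):
aws rds create-db-instance --db-instance-identifier demo-db --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password 'ChangeMe123!' --allocated-storage 20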
Security and Compliance Features:
Encryption:
o At Rest: RDS supports encryption at rest using AWS Key Management
Service (KMS). This ensures that your data is encrypted while stored on disk.
o In Transit: You can enable SSL/TLS encryption for connections to your RDS
instance to ensure data is encrypted during transmission.
Access Control:
o IAM Roles: With IAM, you can control who can access your RDS instances
and manage the permissions on who can perform specific actions.
o Security Groups: Security groups act as a virtual firewall, controlling
inbound and outbound traffic to your RDS instances. You can configure rules
based on IP, port, and protocol.
Audit Logging:
o Database Logs: You can enable database logging for audit purposes, which
records changes and access to sensitive data. Integration with services like
CloudWatch and CloudTrail can help you monitor access patterns and
other events.
DynamoDB
Amazon DynamoDB is a fully managed NoSQL key-value and document database
designed for high performance at any scale. Its security and compliance features
include:
Encryption:
o At Rest: Data is automatically encrypted at rest using AWS Key
Management Service (KMS). This ensures that all data stored in DynamoDB
is protected.
o In Transit: DynamoDB supports encryption in transit using SSL/TLS to
protect data while it’s being transferred to and from your application.
Access Control:
o IAM: You can control access to DynamoDB through AWS Identity and Access
Management (IAM). You can define policies that allow or deny actions like
reading, writing, or modifying tables.
o VPC Endpoints: You can access DynamoDB securely from your Virtual
Private Cloud (VPC) via VPC endpoints, which eliminate the need for traffic to
traverse the public internet.
Compliance:
o DynamoDB complies with multiple standards, such as GDPR, HIPAA, PCI
DSS, and others, making it suitable for regulated industries.
o DynamoDB integrates with services like CloudTrail for logging and
CloudWatch for monitoring, which is essential for security auditing and
compliance reporting.
When to Use DynamoDB:
Low Latency: When you need real-time, low-latency performance for high-volume
data (e.g., gaming leaderboards, mobile apps, or IoT).
Scalable Web Apps: For web apps that need a flexible schema and auto-scaling
to handle variable traffic.
Event-Driven Applications: Use cases that require high throughput and fast
data processing, like streaming data or real-time analytics.
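A minimal AWS CLI sketch of the NoSQL model described above: create an on-demand
table keyed by a partition key, then write and read one item (table and attribute names
are arbitrary):
# Create a table with a single string partition key
aws dynamodb create-table --table-name Scores --attribute-definitions AttributeName=PlayerId,AttributeType=S --key-schema AttributeName=PlayerId,KeyType=HASH --billing-mode PAY_PER_REQUEST
# Write and read an item
aws dynamodb put-item --table-name Scores --item '{"PlayerId": {"S": "p1"}, "Score": {"N": "42"}}'
aws dynamodb get-item --table-name Scores --key '{"PlayerId": {"S": "p1"}}'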
DynamoDB vs RDS - Key Takeaways:
RDS is great for applications needing structured data, complex querying, and
transactional support (like SQL databases).
DynamoDB is ideal for applications needing fast, scalable, and flexible NoSQL
data storage, such as high-performance apps with massive, dynamic data sets.
Experiment 11
Aim: To create and use a Custom AMI and a Virtual Private Cloud (VPC), and to deploy
an application in a custom VPC using best practices.
A Custom AMI is a pre-configured virtual machine image that you create and use to
launch EC2 instances with a specific configuration. This can save time when launching
multiple instances with the same operating system, software, or settings, ensuring
consistency across deployments.
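A configured instance can be turned into a custom AMI with one AWS CLI call (the
instance ID is a placeholder); new instances can then be launched from the resulting
image ID:
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "web-server-baseline" --description "Amazon Linux 2 with Apache pre-installed"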
A VPC is a virtual network in AWS that allows you to define a logically isolated network
within the AWS cloud. You can control aspects like IP address ranges, subnets, route
tables, and network gateways, giving you complete control over your network
environment.
1. Subnets:
o A VPC is divided into subnets, each belonging to a specific Availability Zone
(AZ). You can create public subnets for resources that need to be accessed
from the internet and private subnets for resources that should not be
directly accessible.
2. Route Tables:
o Route tables control the routing of traffic between subnets and external
networks, such as the internet or other VPCs. A default route table is created
when the VPC is created, but you can customize it.
3. Internet Gateway:
o An Internet Gateway attached to the VPC allows resources in public subnets
to send traffic to and receive traffic from the internet.
4. NAT Gateway:
o In a private subnet, you can use a NAT Gateway or NAT Instance to allow
outbound internet traffic for private resources (like EC2 instances in private
subnets), without exposing them to incoming traffic from the internet.
5. Security Groups and Network ACLs:
o Security groups act as instance-level firewalls, while network ACLs filter
traffic at the subnet level.
6. VPC Peering:
o VPC peering allows communication between instances in different VPCs
within the same region or across different regions, while maintaining
network isolation.
7. VPN Connection:
o A VPN connection allows you to securely connect your on-premises network
to your VPC.
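The components above map directly onto AWS CLI calls. A minimal sketch of a VPC with
one public subnet (the resource IDs returned by each call are placeholders here):
# Create the VPC and a subnet inside it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
# Attach an internet gateway so the subnet can be made public
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0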
Deploy Application in Custom VPC Using Best Practices
When deploying an application in a custom VPC, there are several best practices to
follow to ensure security, scalability, and high availability. Below are some
recommended steps and guidelines for deploying an application in a custom VPC.
o Ensure that your private instances can still reach the internet (e.g., for
updates or third-party services) without exposing them directly to the
internet by configuring a NAT Gateway in a public subnet.
In Summary:
Custom AMIs: Allow you to create reusable and consistent environments for
deploying EC2 instances, saving time on instance setup and configuration.
VPC: Provides a highly customizable and secure network environment where you
can isolate your application resources.
Best Practices: Follow security and scalability best practices when deploying
applications, including using separate subnets, auto-scaling, and proper security
configurations.
Experiment 12
Aim: To study Elastic Load Balancer (ELB), Application Load Balancer (ALB) &
Network Load Balancer (NLB).
Elastic Load Balancer (ELB) is a fully managed service by AWS that automatically
distributes incoming application traffic across multiple targets, such as EC2 instances,
containers, or IP addresses. ELB improves the availability and fault tolerance of your
application by ensuring that traffic is evenly distributed and that no single resource is
overwhelmed by too much traffic. There are three primary types of load balancers within
ELB, each designed to handle specific use cases: Application Load Balancer (ALB),
Network Load Balancer (NLB), and the older Classic Load Balancer (CLB)
(though it is now being phased out in favor of ALB and NLB).
Application Load Balancer (ALB) operates at the application layer (Layer 7) of the
OSI model, making it ideal for HTTP and HTTPS traffic. ALB is designed for more complex
routing decisions based on content within the request, making it perfect for web
applications, microservices, and containers.
1. Content-Based Routing:
o ALB can route traffic based on URL paths (e.g., /api/* or /images/*) or host
headers (e.g., www.example.com vs. api.example.com), enabling you to
direct traffic to different application services or containers based on request
content.
3. SSL/TLS Termination:
o ALB can handle SSL/TLS termination, offloading the encryption/decryption
work from backend instances, improving performance and simplifying
certificate management.
6. Health Checks:
o ALB regularly performs health checks on the registered targets and ensures
that traffic is only routed to healthy instances, improving reliability.
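A minimal AWS CLI sketch of standing up the ALB described above (subnet, security
group, and VPC IDs are placeholders): create the load balancer, a target group for the
backend instances, and an HTTP listener that forwards to it:
aws elbv2 create-load-balancer --name demo-alb --subnets subnet-0aaa1111 subnet-0bbb2222 --security-groups sg-0123456789abcdef0
aws elbv2 create-target-group --name demo-targets --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=<target-group-arn>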
Network Load Balancer (NLB) operates at the transport layer (Layer 4) of the OSI
model, handling TCP and TLS/SSL traffic. NLB is designed for extremely high
performance and low latency, making it suitable for applications that require fast, low-
latency traffic routing, such as gaming, IoT, or real-time communications.
4. Health Checks:
o NLB also performs health checks on backend targets and will route traffic
only to healthy targets, improving fault tolerance.
5. High Availability:
o NLB automatically scales to handle traffic spikes, and it’s designed to be
highly available, capable of maintaining consistent performance under large-
scale, high-traffic conditions.
6. Supports IP Targeting:
o NLB can route traffic directly to any IP address, meaning it can be used with
instances, containers, or even resources outside of AWS if necessary.
Feature        | Application Load Balancer (ALB)      | Network Load Balancer (NLB)
Health Checks  | Yes, at the application level (HTTP) | Yes, at the transport level (TCP)
The Classic Load Balancer (CLB) was the original ELB service offered by AWS. While
still available for backward compatibility, it's now considered legacy, and AWS
recommends using ALB and NLB for new deployments due to their advanced features,
such as content-based routing, SSL termination, and better scalability.
Supports both HTTP/HTTPS and TCP traffic, but with fewer advanced features than
ALB and NLB.
Does not support advanced routing options like content-based routing,
WebSockets, or HTTP/2.
Limited to EC2 instances as targets, unlike ALB and NLB, which can handle
containers, IP addresses, and Lambda functions.
Use NLB when:
o You need to handle millions of requests per second with very low latency,
such as IoT, real-time communications, or gaming apps.
o You are dealing with applications that don't rely on HTTP/HTTPS traffic (e.g.,
databases or email servers).
Conclusion:
Choosing between ALB and NLB depends largely on your application's needs:
ALB is perfect for HTTP/HTTPS traffic with more complex routing requirements and
supports modern web apps, APIs, and microservices.
NLB is the go-to for ultra-low-latency, high-performance applications that deal
with TCP, TLS, or UDP traffic, and where maintaining a static IP is critical.
Experiment 13
Aim: To study Identity and Access Management (IAM), the Well-Architected Framework,
AWS CloudWatch, and SNS.
Identity and Access Management (IAM)
AWS Identity and Access Management (IAM) is a service that helps you securely
control access to AWS services and resources. It allows you to manage users, groups,
roles, and policies to enforce granular access control within your AWS environment.
1. Users:
o IAM Users are entities that you create in your AWS account to represent
individual people or services. You can assign permissions to users to allow or
deny access to specific AWS resources.
2. Groups:
o IAM Groups are collections of IAM users. You can assign permissions to a
group, and those permissions are automatically granted to all members of
that group.
3. Roles:
o IAM Roles are similar to IAM users but are meant to be assumed by trusted
entities like users, applications, or AWS services. Roles are ideal for
delegating permissions to AWS services or EC2 instances.
4. Policies:
o IAM Policies define permissions (what actions are allowed or denied) on
AWS resources. Policies are written in JSON format and can be attached to
users, groups, or roles to determine what AWS resources and actions they
can access.
8. IAM Federation:
o IAM allows you to integrate with existing identity providers (e.g., Active
Directory, Google Apps) for Single Sign-On (SSO), enabling you to use
corporate credentials for AWS access.
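Since policies are JSON documents (item 4 above), here is a minimal sketch of creating
a read-only S3 policy with the AWS CLI; the policy name is arbitrary and, in real use,
Resource should be narrowed to specific buckets:
cat > s3-read-only.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
EOF
aws iam create-policy --policy-name S3ReadOnlyDemo --policy-document file://s3-read-only.json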
Best Practices for IAM:
Least Privilege: Always assign the minimum permissions required for users,
groups, and roles to perform their tasks.
Use Groups: Assign permissions at the group level rather than individually to
users for easier management.
Enable MFA: Use MFA for all IAM users, especially for users with elevated
privileges.
Rotate Keys Regularly: Regularly rotate access keys and secrets, and avoid
embedding them in application code.
Use IAM Roles for EC2: Instead of embedding AWS credentials in EC2 instances,
use IAM roles to grant temporary access to AWS resources.
Well-Architected Framework
The AWS Well-Architected Framework describes architectural best practices for
designing and operating workloads in the cloud, organized into five pillars:
1. Operational Excellence:
o Focuses on operations in the cloud, covering monitoring, automation,
incident response, and evolving procedures over time to improve the
reliability of applications.
o Key practices: Continuous monitoring, automation, feedback loops, incident
management.
2. Security:
o Ensures that your data, systems, and assets are protected against
unauthorized access or compromise.
o Key practices: Identity and access management, data protection, threat
detection, incident response.
3. Reliability:
o Ensures a system can recover from failures and meet customer expectations
in terms of uptime and performance.
o Key practices: Fault tolerance, recovery planning, scaling, availability, and
monitoring.
4. Performance Efficiency:
o Focuses on using cloud resources efficiently and adapting to changes in
demand or technological advancements over time.
o Key practices: Right-sizing resources, managing capacity, and improving
performance over time.
5. Cost Optimization:
o Ensures that you are not overspending on cloud resources and that you're
maximizing the value from your AWS services.
o Key practices: Cost monitoring, resource optimization, scaling efficiently, and
using appropriate pricing models (e.g., reserved instances).
Use the AWS Well-Architected Tool: AWS provides a tool to assess your
workloads against these best practices, helping you understand the gaps in your
architecture.
Adopt Continuous Improvement: The framework emphasizes a continuous
improvement process, encouraging you to regularly review and enhance your
cloud architecture based on evolving best practices and business needs.
AWS CloudWatch
Amazon CloudWatch is a monitoring and observability service that collects metrics,
logs, and events from AWS resources and the applications you run on AWS. Key
features include:
1. Metrics:
o CloudWatch collects metrics from various AWS services (like EC2, RDS,
Lambda, etc.), including CPU utilization, network traffic, disk I/O, and other
performance indicators.
o You can create custom metrics for applications or systems that are not AWS-
native.
2. Logs:
o CloudWatch Logs allow you to store and monitor log files, including logs from
EC2 instances, Lambda functions, or any custom logs from applications.
o You can set up log groups and log streams to organize and track logs
efficiently.
3. Alarms:
o You can set CloudWatch Alarms to notify you when a metric crosses a
specified threshold (e.g., when CPU utilization exceeds 80% for an EC2
instance).
o Alarms can trigger automated actions, like scaling up EC2 instances, sending
notifications, or invoking AWS Lambda functions.
4. Dashboards:
o CloudWatch Dashboards allow you to visualize metrics from different AWS
services in a single, customizable view. Dashboards provide insights into the
health and performance of your infrastructure in real time.
5. Events:
o CloudWatch Events allows you to monitor and respond to state changes in
your AWS resources. You can set up rules that react to changes such as EC2
instance state changes or autoscaling events.
o CloudWatch Events is often used to automate responses to various changes
in your AWS environment.
6. CloudWatch Insights:
o CloudWatch Logs Insights is a fully integrated, interactive, and fast query
engine for CloudWatch Logs that allows you to search, analyze, and visualize
log data in real time.
Amazon Simple Notification Service (SNS)
Amazon SNS is a fully managed service that allows you to send messages or
notifications to subscribers or other systems via multiple protocols like email, SMS, or
HTTP. It's a highly scalable messaging service for distributed systems, mobile
applications, and microservices.
1. Topics:
o A topic is a communication channel that allows publishers to send messages
to multiple subscribers. You can configure SNS topics to send messages to
different types of subscribers (e.g., email, Lambda, SQS, HTTP/S endpoints).
2. Multiple Protocols:
o SNS supports several notification protocols, such as:
Email: Send notifications to one or more email addresses.
SMS: Send text message notifications to users’ mobile phones.
Lambda: Invoke AWS Lambda functions to process the message.
HTTP/HTTPS: Send messages to web servers via HTTP/S endpoints.
SQS: Send messages to Simple Queue Service (SQS) queues for
further processing.
3. Message Filtering:
o SNS supports message filtering, allowing subscribers to only receive
specific types of messages based on filtering criteria (e.g., you could filter
notifications for certain types of events).
5. Push Notifications:
o SNS can be used for mobile push notifications to apps on devices like
Android, iOS, and Fire OS.
Integrating CloudWatch and SNS:
CloudWatch and SNS are often used together for alerting and automated responses.
For example:
You can create a CloudWatch Alarm that monitors EC2 instance health (e.g.,
high CPU usage).
When the alarm triggers, it can send a notification via SNS to inform
administrators via email, or it can invoke a Lambda function for automated
remediation (e.g., restarting the EC2 instance).
This integration helps you automate monitoring, alerting, and troubleshooting, ensuring
a more proactive approach to managing your AWS resources.
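A minimal AWS CLI sketch of the alarm-to-notification flow just described (the account
ID, email address, and instance ID are placeholders):
# Create a topic and subscribe an email address to it
aws sns create-topic --name ops-alerts
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:ops-alerts --protocol email --notification-endpoint ops@example.com
# Alarm when average CPU of one instance stays at or above 80% for two 5-minute periods
aws cloudwatch put-metric-alarm --alarm-name high-cpu --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average --period 300 --evaluation-periods 2 --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts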
Summary:
IAM: Manage access to AWS resources by defining users, roles, and policies with
fine-grained permissions.
Well-Architected Framework: AWS best practices across five pillars
(Operational Excellence, Security, Reliability, Performance Efficiency, Cost
Optimization) to ensure well-architected, efficient, and resilient applications.
CloudWatch: Monitor AWS resources and applications with metrics, logs, alarms,
and dashboards to ensure everything is functioning correctly.
SNS: A messaging service to send notifications or messages to multiple endpoints
(email, SMS, Lambda, etc.) for system alerts or communication.
Experiment 14
Aim: To study AWS CloudFront, Auto Scaling & AWS Route 53.
AWS CloudFront
Amazon CloudFront is AWS's content delivery network (CDN) service that helps you
deliver content (like websites, videos, APIs, or other web assets) to users with low
latency and high transfer speeds. CloudFront caches content at edge locations around
the world, which are strategically placed near your users to minimize latency.
1. Global Distribution:
o CloudFront has a vast network of edge locations around the world. When a
user makes a request, CloudFront serves the content from the nearest edge
location, reducing latency and improving user experience.
o CloudFront allows for URL query string forwarding, cookie handling, and
more granular caching strategies based on request headers or other
variables.
6. Origin Failover:
o CloudFront supports automatic origin failover. If one origin (e.g., an S3
bucket) becomes unavailable, CloudFront can automatically switch to a
secondary origin, ensuring continuous content delivery.
7. Real-Time Analytics:
o CloudFront provides detailed real-time reports and analytics on cache
hit/miss rates, bandwidth usage, request/response performance, and other
performance metrics. This helps you optimize delivery and troubleshoot
issues.
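One common operational task is forcing the edge caches to refresh after you update
content at the origin. A minimal sketch with a placeholder distribution ID:
# Invalidate every cached object in the distribution
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"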
Auto Scaling
AWS Auto Scaling is a service that allows you to automatically adjust the number of
computing resources (e.g., EC2 instances) to meet changing demand. With Auto Scaling,
you can ensure your application always has the right amount of resources without over-
provisioning or under-provisioning.
Key Features of AWS Auto Scaling:
1. Dynamic Scaling:
o Auto Scaling can dynamically scale your resources based on real-time
demand. For example, if your web server traffic spikes, Auto Scaling can
automatically launch new EC2 instances to handle the increased load.
Conversely, it can terminate instances when demand decreases.
2. Scheduled Scaling:
o You can schedule scaling actions in advance based on known patterns of
demand (e.g., scale up during business hours and scale down during off-
hours).
4. Health Checks:
o Auto Scaling regularly performs health checks on your EC2 instances. If an
instance is deemed unhealthy, it can be automatically replaced with a new
one, ensuring that only healthy instances are serving traffic.
5. Scaling Policies:
o You can create scaling policies to define when to scale in or out. Policies
can be based on various metrics, such as CPU utilization, memory usage,
network traffic, or custom CloudWatch metrics.
7. Predictive Scaling:
o AWS Auto Scaling also includes predictive scaling, which uses machine
learning to predict demand and scale your resources in advance. This can be
particularly helpful for workloads with fluctuating or cyclical usage patterns.
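As an example of the scaling policies described above, a target-tracking policy that
keeps average CPU at 50% can be attached to an existing Auto Scaling group (the group
name is a placeholder):
aws autoscaling put-scaling-policy --auto-scaling-group-name demo-asg --policy-name cpu-target-50 --policy-type TargetTrackingScaling --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 50.0}'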
Cost Efficiency: By scaling resources only when needed, Auto Scaling helps you
avoid paying for unused capacity.
High Availability: Ensures that your application always has sufficient capacity to
handle traffic spikes, improving availability.
Simplified Management: Auto Scaling automatically handles scaling, reducing
the need for manual intervention and ensuring your infrastructure can adapt to
changes in traffic patterns.
Use Cases for Auto Scaling:
Web and Application Servers: Automatically scale the number of EC2 instances
based on website traffic.
Batch Processing: Scale EC2 instances or container clusters up and down for
processing batch jobs based on workload.
Databases: Adjust the read/write capacity of DynamoDB based on workload
demands.
AWS Route 53
Amazon Route 53 is a scalable and highly available Domain Name System (DNS)
web service designed to route end-user requests to endpoints (such as websites,
applications, or other AWS services). It provides DNS services, domain registration, and
routing policies to manage how your users access your resources.
1. Domain Registration:
o Route 53 allows you to register domain names directly through AWS. It
integrates with other AWS services and provides easy management for your
domains.
2. DNS Management:
o Route 53 provides reliable DNS resolution by mapping domain names to IP
addresses of servers. It offers features like:
A Records (Address Records): Map domain names to IPv4
addresses.
CNAME Records (Canonical Name Records): Map subdomains to
other domain names.
MX Records (Mail Exchange Records): Route email traffic.
TTL (Time To Live): Define how long DNS records are cached by
resolvers.
3. Health Checks:
o Route 53 can monitor the health of endpoints and stop returning records
that point to unhealthy resources; this also underpins failover routing.
4. Routing Policies:
o Route 53 offers multiple routing policies to control how DNS queries are
answered:
Simple Routing: Route traffic to a single resource.
Weighted Routing: Route traffic to different resources based on
assigned weights.
Latency-Based Routing: Route traffic to the resource with the lowest
latency based on the user’s location.
Geolocation Routing: Route traffic based on the user's geographic
location.
Failover Routing: Route traffic to a secondary resource in case the
primary one becomes unavailable.
5. Traffic Flow:
o Route 53 provides a visual interface for setting up routing rules with its
Traffic Flow feature. It simplifies complex routing configurations for
different types of traffic.
6. DNS Failover:
o Route 53 can automatically reroute traffic to a backup resource in case the
primary resource becomes unavailable, ensuring high availability for your
applications.
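A minimal sketch of creating (or updating) an A record with the AWS CLI; the hosted
zone ID, domain, and IP address are placeholders:
cat > change.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "203.0.113.10"}]
    }
  }]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABCDEFGHIJ --change-batch file://change.json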
High Availability and Scalability: Route 53 uses a highly reliable and scalable
infrastructure to ensure consistent and fast DNS resolution.
Traffic Routing Flexibility: Route 53’s advanced routing policies allow you to
optimize traffic routing based on performance, geographical location, or health of
your resources.
Easy Management: As a fully managed service, Route 53 eliminates the need for
managing DNS infrastructure manually.
Website Hosting: Manage the DNS for your website by mapping domain names
to your web servers or CloudFront distributions.
Failover and Disaster Recovery: Set up DNS failover to reroute traffic to a
backup resource if the primary one goes down.
Global Applications: Use latency-based or geo-based routing to serve users from
the closest data center, improving performance.
Summary:
AWS CloudFront: A content delivery network (CDN) that caches content at edge
locations around the world to reduce latency and improve performance.
Auto Scaling: Automatically adjusts the number of EC2 instances or other
resources based on demand to ensure optimal performance and cost-efficiency.
AWS Route 53: A scalable DNS and domain management service that provides
advanced routing, health checks, and failover capabilities to ensure high
availability and low-latency access for your applications.
These services together allow you to build highly available, scalable, and performant
applications that can automatically adjust to changing traffic patterns while ensuring
your users have a seamless experience.