Cloud Foundation Report
Accredited by NBA & NAAC with “A” Grade | Recognized by UGC under Section 2(f) & 12(B) | Approved by AICTE, New Delhi | Permanently Affiliated to JNTUK, SBTET | Ranked as “A” Grade by Govt. of A.P.
Internship Report
on Cloud Foundations
to July 2023 (10 Weeks)
List of Contents

Cloud Foundations
Chapter 1: Introduction to Cloud Computing
1.1 Introduction
2.1 Introduction
3.1 Introduction
Chapter 4: AWS Cloud Security
5.3 Amazon VPC
5.4 VPC Networking
5.6 Route 53
5.7 CloudFront
Chapter 6: Compute
6.2 Amazon EC2
6.3 AWS Lambda
Chapter 7: Storage
7.1 AWS EBS
7.2 AWS S3
7.4 Amazon S3 Glacier
Chapter 8: Databases
8.1 Amazon RDS
8.2 Amazon Redshift
8.3 Amazon Aurora

Cloud Architecture
Chapter 1: Introduction to Cloud Architecture
1.1 Introduction
2.1 Cloud Storage
3.1 Introduction
4.1 Introduction
5.1 Introduction
Chapter 6: Connecting Networks
6.1 Introduction

List of Figures
Fig 1.1.1: Cloud Computing
Fig 3.1.2: Region
Fig 5.3.1: Amazon VPC
Fig 6.3.1: AWS Lambda
Fig 3.1: Adding a Compute Layer
Fig 5.1: Creating a Networking Environment
Fig 6.1.1: Connecting Networks
CLOUD FOUNDATIONS
CHAPTER 1
Introduction to Cloud Computing
1.1 Introduction: Cloud Computing is the delivery of computing services such
as servers, storage, databases, networking, software, analytics, intelligence, and
more, over the internet (“the cloud”).
The cloud environment provides an easily accessible online portal that makes it
convenient for users to manage compute, storage, network, and application
resources. Well-known cloud service providers include Amazon Web Services (AWS),
Microsoft Azure, and Google Cloud.
• AWS allows you to easily scale your resources up or down as your needs change,
helping you to save money and ensure that your application always has the
resources it needs.
• AWS provides a highly reliable and secure infrastructure, with multiple data
centers and a commitment to 99.99% availability for many of its services.
• AWS offers a wide range of services and tools that can be easily combined to
build and deploy a variety of applications, making it highly flexible.
• AWS offers a pay-as-you-go pricing model, allowing you to only pay for the
resources you actually use and avoid upfront costs and long-term
commitments.
CHAPTER 2
2.1 Introduction :
Cloud computing provides organizations with numerous benefits. These
include additional security of resources, scalable infrastructure, agility, and more.
However, these benefits come at a cost.
Since cloud economics helps businesses determine whether cloud computing is right for
them, it is essential to evaluate it before getting on with migration.
TCO (total cost of ownership) defines all the direct and indirect costs involved. These include data centers,
maintenance and support, development, business continuity and disaster recovery,
network, and more. This analysis compares the cost of on-premises infrastructure
with the cost of cloud computing, enabling a business to make the right decision.
Businesses also learn about opportunity costs through TCO. The main aim is to
attain a lower TCO than when operating on-premises. A business can either pause
migration efforts, pay the extra costs if it wants to achieve other goals, or migrate
in phases.
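As a rough, purely illustrative sketch of the TCO comparison described above (plain Python; every cost figure below is hypothetical), a business might total three years of on-premises spend against the equivalent cloud spend:

```python
# Hypothetical 3-year TCO comparison; all figures are illustrative only.
YEARS = 3

on_premises = {
    "hardware_purchase": 120_000,      # servers, storage, network gear (one-time)
    "data_center_per_year": 30_000,    # space, power, cooling
    "maintenance_per_year": 25_000,    # support contracts, admin staff share
}

cloud = {
    "compute_per_year": 40_000,        # on-demand / reserved instances
    "storage_per_year": 12_000,        # object and block storage
    "support_per_year": 8_000,         # support plan, cost-management tooling
}

on_prem_tco = on_premises["hardware_purchase"] + YEARS * (
    on_premises["data_center_per_year"] + on_premises["maintenance_per_year"]
)
cloud_tco = YEARS * sum(cloud.values())

print(f"On-premises 3-year TCO: ${on_prem_tco:,}")
print(f"Cloud 3-year TCO:       ${cloud_tco:,}")
print("Cloud is cheaper" if cloud_tco < on_prem_tco else "On-premises is cheaper")
```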
On-demand :
On-demand pricing is a major factor to consider when planning a cloud
migration. With on-premises computing, you buy a fixed capacity that you own.
Those fixed charges change when you migrate to the cloud and
choose on-demand pricing: costs become elastic and can quickly spiral out of
control if you don't regularly monitor and control them.
Cost fluctuations resulting from the pay-as-you-go model can cost you a lot of
money, so you need a cost management tool to help you detect any
anomalies.
Case study :
Amazon.com is the world’s largest online retailer. In 2011, Amazon.com switched
from tape backup to using Amazon Simple Storage Service (Amazon S3) for
backing up the majority of its Oracle databases. This strategy reduces complexity
and capital expenditures, provides faster backup and restore performance,
eliminates tape capacity planning for backup and archive, and frees up
administrative staff for higher value operations. The company was able to replace
their backup tape infrastructure with cloud-based Amazon S3 storage, eliminate
backup software, and experienced a 12X performance improvement, reducing
restore time from around 15 hours to 2.5 hours in select scenarios.
The AWS Cost Management console has features that you can use for budgeting
and forecasting costs and methods for you to optimize your pricing to reduce your
overall AWS bill.
The AWS Cost Management console is integrated closely with the Billing
console. Using both together, you can manage your costs in a holistic manner.
You can use Billing console resources to manage your ongoing payments, and
AWS Cost Management console resources to optimize your future costs. For
information about AWS resources to understand, pay, or organize your AWS bills,
see the AWS Billing User Guide.
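One way to work with this cost data programmatically is the Cost Explorer API. The sketch below (Python with boto3, assuming Cost Explorer has been enabled on the account and credentials are configured; the dates are placeholders) retrieves one month of unblended cost grouped by service:

```python
import boto3

# Cost Explorer client (the service code is "ce")
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-06-01", "End": "2023-07-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print the cost of each AWS service for the period
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: {float(amount):.2f} USD")
```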
You can choose between chart types and time periods on the top of the
section. You can adjust additional preferences using the gear icon.
CHAPTER 3
AWS Global Infrastructure
3.1 Introduction:
o AWS is a globally available cloud computing platform.
o The AWS Global Infrastructure is made up of Regions and Availability Zones
located around the world, on top of which AWS delivers its high-level IT services.
o As of December 2018, AWS was available in 19 Regions and 57 Availability Zones,
with 5 more Regions and 15 more Availability Zones announced for 2019.
The following are the components that make up the AWS infrastructure:
Region
Fig 3.1.2: Region
o Availability zones are connected through redundant and isolated metro
fibers.
Edge Locations
o Edge locations are the endpoints for AWS used for caching content.
o Edge locations consist of CloudFront, Amazon's Content Delivery
Network (CDN).
o There are more edge locations than Regions; currently, there are over 150 edge
locations.
o An edge location is not a Region but a small site that AWS operates. It is used
for caching content.
o Edge locations are mainly located in major cities to distribute content to end
users with reduced latency.
o For example, if a user accesses your website from Singapore, the request is
redirected to the edge location closest to Singapore, where cached data can be
read.
If the requested content is not cached at an edge location, it can be fetched from
the Regional edge cache instead of the origin servers that have high latency.
CHAPTER 4
AWS Cloud Security
4.1 AWS Shared Responsibility Model :
Security and Compliance is a shared responsibility between AWS and the
customer. This shared model can help relieve the customer’s operational burden as
AWS operates, manages and controls the components from the host operating
system and virtualization layer down to the physical security of the facilities in
which the service operates. The customer assumes responsibility and management
of the guest operating system (including updates and security patches), other
associated application software as well as the configuration of the AWS provided
security group firewall. Customers should carefully consider the services they
choose as their responsibilities vary depending on the services used, the
integration of those services into their IT environment, and applicable laws and
regulations. The nature of this shared responsibility also provides the flexibility
and customer control that permits the deployment. As shown in the chart below,
this differentiation of responsibility is commonly referred to as Security “of” the
Cloud versus Security “in” the Cloud.AWS responsibility “Security of the Cloud”
- AWS is responsible for protecting the infrastructure that runs all of the services
offered in the AWS Cloud. This infrastructure is composed of the hardware,
software, networking, and facilities that run AWS Cloud services.
For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve
data. Customers are responsible for managing their data (including encryption
options), classifying their assets, and using IAM tools to apply the appropriate
permissions.
When you create an AWS account, you begin with one sign-in identity that has
complete access to all AWS services and resources in the account. This identity is
called the AWS account root user and is accessed by signing in with the email
address and password that you used to create the account. We strongly
recommend that you don't use the root user for your everyday tasks. Safeguard
your root user credentials and use them to perform the tasks that only the root user
can perform. For the complete list of tasks that require you to sign in as the root
user, see Tasks that require root user credentials in the AWS Account Management
Reference Guide.
Granular permissions
You can grant different permissions to different people for different resources. For
example, you might allow some users complete access to Amazon Elastic Compute
Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon
DynamoDB, Amazon Redshift, and other AWS services. For other users, you can
allow read-only access to just some S3 buckets, or permission to administer just
some EC2 instances, or to access your billing information but nothing else.
Secure access to AWS resources for applications that run on Amazon EC2
You can use IAM features to securely provide credentials for
applications that run on EC2 instances. These credentials provide
permissions for your application to access other AWS resources.
Examples include S3 buckets and DynamoDB tables.
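As a minimal sketch of granular permissions in practice (Python with boto3; the user name is hypothetical and the caller is assumed to have IAM administration rights), the code below creates a user and attaches the AWS managed read-only policy for Amazon S3:

```python
import boto3

iam = boto3.client("iam")

# Create a new IAM user (name is illustrative)
iam.create_user(UserName="report-reader")

# Grant read-only access to Amazon S3 using an AWS managed policy
iam.attach_user_policy(
    UserName="report-reader",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# List the policies now attached to the user
attached = iam.list_attached_user_policies(UserName="report-reader")
for policy in attached["AttachedPolicies"]:
    print(policy["PolicyName"])
```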
• Security of the cloud– AWS is responsible for protecting the infrastructure that
runs AWS services in the AWS Cloud. AWS also provides you with services that
you can use securely. Third-party auditors regularly test and verify the
effectiveness of our security as part of the AWS Compliance Programs. To learn
about the compliance programs that apply to Account Management, see AWS
services in scope by compliance program.
• Security in the cloud– Your responsibility is determined by the AWS service that
you use. You are also responsible for other factors including the sensitivity of
your data, your company’s requirements, and applicable laws and regulations.
This documentation helps you understand how to apply the shared responsibility
model when using AWS Account Management. It shows you how to configure
Account Management to meet your security and compliance objectives. You also
learn how to use other AWS services that help you to monitor and secure your
Account Management resources.
CHAPTER 5
5.1 Networking Basics :
Starting your cloud networking journey can seem overwhelming, especially if you
are accustomed to the traditional on-premises way of provisioning hardware and
managing and configuring networks. Having a good understanding of core
networking concepts like IP addressing, TCP communication, IP routing, security,
and virtualization will help you as you begin gaining familiarity with cloud
networking on AWS. In the following sections, we answer common questions
about cloud networking and explore best practices for building infrastructure on
AWS.
The following diagram shows an example VPC. The VPC has one subnet in each
of the Availability Zones in the Region, EC2 instances in each subnet, and an
internet gateway to allow communication between the resources in your VPC and
the internet.
1. Virtual Private Cloud (VPC): When you create a VPC, it represents your
private virtual network in the AWS cloud. You can think of it as your own data
center in the cloud.
2. Subnets: Within a VPC, you can create one or more subnets, each associated
with a specific Availability Zone in a region. Subnets help you logically segment
your resources and provide high availability and fault tolerance.
3. IP Addressing: You can define the IP address range for your VPC using
CIDR (Classless Inter-Domain Routing) notation. For example, you can choose a
range like 10.0.0.0/16, which allows for up to 65,536 IP addresses.
5. Route Tables: Each subnet in a VPC is associated with a route table, which
defines the rules for routing traffic in and out of the subnet. By default, the main
route table allows communication within the VPC, but you can create custom
route tables to control specific traffic patterns.
Security Groups: Security groups act as virtual firewalls for your EC2 instances
within a VPC. You can specify inbound and outbound traffic rules for each
security group, allowing you to control what traffic is allowed to reach your
instances. They operate at the instance level and can be associated with one or
more instances.
Network ACLs (Access Control Lists): Network ACLs are another layer of
security that operate at the subnet level. They control inbound and outbound
traffic at the subnet level and provide additional control over traffic flow between
subnets. Unlike security groups, network ACLs are stateless, meaning that you
must define rules for both inbound and outbound traffic.
Public and Private Subnets: By carefully designing your VPC with public and
private subnets, you can control which resources are exposed to the internet and
which remain private. Public subnets typically have a route to an Internet
Gateway, allowing instances within them to communicate with the internet, while
private subnets do not have direct internet access.
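The concepts above can be tied together in a short boto3 sketch (Python; the CIDR blocks, Region, and Availability Zone are illustrative and assume sufficient EC2 permissions) that creates a VPC, a subnet, an internet gateway, and a default route:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is illustrative

# 1. Create the VPC with a /16 CIDR block
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. Create a subnet in one Availability Zone
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# 3. Create and attach an internet gateway so the subnet can be made public
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. Add a default route to the internet gateway in the VPC's main route table
route_table = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
)["RouteTables"][0]
ec2.create_route(
    RouteTableId=route_table["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
print("Created VPC", vpc_id, "with public subnet", subnet_id)
```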
5.6 Route 53 :
Amazon Route 53 is a highly scalable and reliable Domain Name System (DNS)
web service provided by Amazon Web Services (AWS). It helps you manage the
domain names (e.g., example.com) and route incoming requests to the appropriate
AWS resources, such as EC2 instances, load balancers, or S3 buckets. Here's an
overview of Amazon Route 53:
Routing Policies: Route 53 offers several routing policies that allow you to
control how incoming traffic is distributed among multiple resources. Some of the
routing policies include:
Latency-Based Routing: Routes traffic to the resource with the lowest latency
for the user.
Health Checks: Route 53 enables you to set up health checks for your resources,
such as EC2 instances or load balancers. Health checks monitor the health and
availability of resources, and Route 53 can automatically reroute traffic away from
unhealthy resources.
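A minimal sketch of managing DNS records programmatically (Python with boto3; the hosted zone ID, record name, and IP address are placeholders) creates or updates an A record in a hosted zone:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000EXAMPLE"   # placeholder hosted zone ID

# Create or update (UPSERT) an A record pointing www.example.com at an IP address
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Point www at the web server",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ],
    },
)
```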
5.7 CloudFront:
Amazon CloudFront is a web service that speeds up distribution of your static and
dynamic web content, such as .html, .css, .js, and image files, to your users.
CloudFront delivers your content through a worldwide network of data centers
called edge locations. When a user requests content that you're serving with
CloudFront, the request is routed to the edge location that provides the lowest
latency (time delay), so that content is delivered with the best possible
performance.
• If the content is already in the edge location with the lowest latency, CloudFront
delivers it immediately.
• If the content is not in that edge location, CloudFront retrieves it from an origin
that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or
an HTTP server (for example, a web server) that you have identified as the source
for the definitive version of your content.
Users can easily navigate to a URL for your content and see it, but they probably
don't know that their request is routed from one network to another, through the
complex collection of interconnected networks that comprise the internet, until
the content is found.
CloudFront speeds up the distribution of your content by routing each user request
through the AWS backbone network to the edge location that can best serve your
content. Typically, this is a CloudFront edge server that provides the fastest
delivery to the viewer. Using the AWS network dramatically reduces the number
of networks that your users' requests must pass through, which improves
performance. Users get lower latency—the time it takes to load the first byte of
the file—and higher data transfer rates.
You also get increased reliability and availability because copies of your files (also
known as objects) are now held (or cached) in multiple edge locations around the
world.
CHAPTER 6
COMPUTE
Amazon EC2 (Elastic Compute Cloud): Amazon EC2 is a web service that
provides resizable compute capacity in the cloud. It allows you to launch virtual
machines, known as instances, with various operating systems and configurations.
EC2 offers flexibility in terms of instance types, storage options, and networking
capabilities.
AWS Lambda: AWS Lambda is a serverless compute service that lets you run
code without provisioning or managing servers. You can upload your code and
specify the triggering events, and Lambda automatically scales and executes the
code in response to those events.
AWS Batch: AWS Batch enables you to run batch computing workloads at scale.
It dynamically provisions the optimal amount of compute resources based on the
job's requirements.
The simple web interface of Amazon EC2 allows you to obtain and configure
capacity with minimal friction. It provides you with complete control of your
computing resources and lets you run on Amazon’s proven computing
environment. Amazon EC2 reduces the time required to obtain and boot new
server instances (called Amazon EC2 instances) to minutes, allowing you to
quickly scale capacity, both up and down, as your computing requirements
change. Amazon EC2 changes the economics of computing by allowing you to
pay only for capacity that you actually use. Amazon EC2 provides developers and
system administrators the tools to build failure resilient applications and isolate
themselves from common failure scenarios.
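A minimal sketch of launching and terminating an instance with boto3 follows; the AMI ID, key pair, and tag values are placeholders you would replace with values valid in your own account and Region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is illustrative

# Launch a single small instance (AMI ID and key name are placeholders)
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {"ResourceType": "instance", "Tags": [{"Key": "Name", "Value": "demo-web"}]}
    ],
)
instance_id = reservation["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Stop paying for capacity as soon as the instance is no longer needed
ec2.terminate_instances(InstanceIds=[instance_id])
```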
Instance types
Amazon EC2 passes on to you the financial benefits of Amazon scale. You pay a
very low rate for the compute capacity you actually consume. Refer to
Amazon EC2 Instance Purchasing Options for a more detailed description.
• On-Demand Instances are recommended for:
o Users that prefer the low cost and flexibility of Amazon EC2 without any up-front
payment or long-term commitment
o Applications with short-term, spiky, or unpredictable workloads that cannot be
interrupted
o Applications being developed or tested on Amazon EC2 for the first time
• Spot Instances—Spot Instances are available at up to a 90% discount compared to
On-Demand prices and let you take advantage of unused Amazon EC2 capacity in
the AWS Cloud. You can significantly reduce the cost of running your
applications, grow your application’s compute capacity and throughput for the
same budget, and enable new types of cloud computing applications. Spot
Instances are recommended for:
o Applications that have flexible start and end times
o Applications that are only feasible at very low compute prices
o Users with urgent computing needs for large amounts of additional capacity
Cost optimization:
Cost optimization in Amazon EC2 is crucial to ensure that you are getting the
most value out of your cloud infrastructure while keeping your expenses under
control. Here are some strategies and best practices to optimize costs with
Amazon EC2:
Right-Sizing Instances: Choose the instance type that best matches your
workload requirements. If your workload is not resource-intensive, consider using
smaller or lower-cost instance types to avoid overprovisioning.
Reserved Instances (RIs): Utilize Reserved Instances for stable workloads with
predictable usage. RIs offer significant cost savings compared to On-Demand
Instances when you commit to a one- or three-year term.
6.3 AWS Lambda:
In AWS Lambda, code is executed in response to events in AWS services, such as
adding or deleting files in an S3 bucket or an HTTP request from Amazon API
Gateway. However, AWS Lambda can only be used to execute background tasks.
AWS Lambda function helps you to focus on your core product and business logic
instead of managing operating system (OS) access control, OS patching, right-
sizing, provisioning, scaling, etc.
The following AWS Lambda example with block diagram explains the working of
AWS Lambda in a few easy steps:
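As a minimal sketch of such a function (Python; the S3 event trigger and bucket are assumed to be configured separately), the handler below is invoked by Lambda whenever an object is uploaded to a bucket:

```python
import json

def lambda_handler(event, context):
    """Invoked by AWS Lambda for each S3 "object created" notification."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Background task: here we simply log the upload; a real function
        # might resize an image, index a document, or update a database.
        print(f"New object uploaded: s3://{bucket}/{key}")

    return {"statusCode": 200, "body": json.dumps("Upload processed")}
```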
CHAPTER 7
STORAGE
7.1 AWS EBS:
AWS EBS (Amazon Elastic Block Store) is a scalable block storage service
provided by Amazon Web Services (AWS). It offers persistent block-level storage
volumes that can be attached to Amazon EC2 instances, providing durable storage
for your applications and data. Here are the key features and characteristics of
AWS Elastic Block Store:
Persistent Storage: EBS volumes provide durable and persistent block storage
that persists independently from the lifecycle of the EC2 instance. This means that
data stored in EBS volumes remains intact even if the associated EC2 instance is
stopped, terminated, or fails.
Multiple Volume Types: AWS offers different EBS volume types to cater to
various use cases and performance requirements:
General Purpose SSD (gp2): Provides a balance of price and performance for
most workloads.
Cold HDD (sc1): Optimized for low-cost, infrequently accessed workloads with
throughput-oriented performance.
EBS Snapshots: You can create point-in-time snapshots of EBS volumes, which
are stored in Amazon S3. These snapshots serve as backups and can be used to
restore volumes or create new volumes with the same data.
EBS Encryption: EBS volumes support encryption using AWS Key Management
Service (KMS) keys. Encryption provides an additional layer of data security,
especially for sensitive workloads.
EBS Volume Resizing: You can dynamically resize EBS volumes without
disrupting the associated EC2 instance. This allows you to adjust storage capacity
as per your evolving application needs.
EBS Multi-Attach: Some EBS volume types, like io1 and io2, support multi-
attach. This enables attaching a single EBS volume to multiple EC2 instances in
the same Availability Zone, allowing for shared storage for clustered or high-
availability applications.
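A short boto3 sketch of these EBS operations (the Availability Zone, instance ID, and device name are placeholders) creates a gp2 volume, attaches it to an instance, and takes a point-in-time snapshot:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is illustrative

# Create a 20 GiB General Purpose SSD (gp2) volume
volume_id = ec2.create_volume(
    AvailabilityZone="us-east-1a", Size=20, VolumeType="gp2"
)["VolumeId"]

# Wait until the volume is available, then attach it to an instance (placeholder ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId="i-0123456789abcdef0", Device="/dev/xvdf")

# Take a point-in-time snapshot, which is stored in Amazon S3
snapshot = ec2.create_snapshot(VolumeId=volume_id, Description="Nightly backup")
print("Created snapshot", snapshot["SnapshotId"])
```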
7.2 AWS S3:
Amazon Simple Storage Service (Amazon S3) is an object storage service offering
scalability, data availability, security, and performance. Data is stored as objects
inside buckets, and each object is addressed by a unique key. S3 is designed for
very high durability and is commonly used for backup and restore, data lakes,
content distribution, and static website hosting. A range of storage classes (such as
S3 Standard, S3 Standard-IA, and S3 Glacier) lets you match storage cost to how
frequently the data is accessed.
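A minimal boto3 sketch of basic S3 usage follows (the bucket name and file paths are placeholders; bucket names must be globally unique, and Regions other than us-east-1 require a location constraint):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "example-internship-report-bucket"  # placeholder; must be globally unique

# Create the bucket (in us-east-1 no CreateBucketConfiguration is required)
s3.create_bucket(Bucket=BUCKET)

# Upload a local file as an object under the "reports/" prefix
s3.upload_file("report.pdf", BUCKET, "reports/report.pdf")

# List the objects stored under that prefix
for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix="reports/").get("Contents", []):
    print(obj["Key"], obj["Size"], "bytes")

# Download the object back to disk
s3.download_file(BUCKET, "reports/report.pdf", "report-copy.pdf")
```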
7.3 Amazon EFS (Elastic File System):
Shared File System: Amazon EFS allows you to create a scalable and shared file
system that can be mounted simultaneously by multiple EC2 instances. This
enables multiple instances to read and write data to the file system concurrently,
making it suitable for applications with shared workloads.
Elastic and Scalable: EFS automatically scales its file systems as data storage
needs grow or shrink. It can accommodate an almost unlimited number of files
and data, and there is no need to pre-provision storage capacity.
Data Durability and Availability: EFS is designed for high durability and
availability. It automatically replicates data across multiple Availability Zones
(AZs) within a region, ensuring that data is protected against hardware failures
and provides 99.99% availability.
Max I/O Mode: Designed for applications with higher levels of aggregate
throughput and higher performance at the cost of slightly higher latency.
7.4 Amazon S3 Glacier:
Archival Storage: Glacier is primarily used for data archiving rather than
frequently accessed data storage. It is an excellent option for data that needs to be
retained for long periods without the need for real-time retrieval.
Data Retrieval Options: Glacier offers three retrieval options, each with different
costs and retrieval times:
Expedited Retrieval: Provides real-time access to your data but comes with
higher costs.
Standard Retrieval: The default option, which provides data retrieval within a
few hours.
Bulk Retrieval: Designed for large data retrieval, typically taking 5-12 hours.
Data Lifecycle Policies: You can create data lifecycle policies to automatically
transition data from S3 to Glacier based on specific criteria, such as data age or
access frequency. This helps optimize storage costs by moving infrequently
accessed data to Glacier.
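Such a lifecycle rule can be configured with boto3 roughly as follows (the bucket name, prefix, and day counts are illustrative):

```python
import boto3

s3 = boto3.client("s3")

# Move objects under "logs/" to Glacier after 90 days, delete them after ~5 years
s3.put_bucket_lifecycle_configuration(
    Bucket="example-internship-report-bucket",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1825},
            }
        ]
    },
)
```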
Security and Encryption: Glacier provides data security through SSL (Secure
Sockets Layer) for data in transit and server-side encryption at rest. You can also
use AWS Key Management Service (KMS) to manage encryption keys for added
security.
CHAPTER 8
DATABASES
8.1 Amazon RDS:
Easy Scalability: RDS allows you to scale your database instance vertically (by
increasing its compute and memory resources) or horizontally (by creating Read
Replicas for read-heavy workloads).
Security Features: RDS provides security features such as encryption at rest and
in transit, IAM database authentication, and network isolation within a VPC
(Virtual Private Cloud).
Read Replicas: For read-intensive workloads, you can create Read Replicas of
your primary database to offload read traffic and improve performance.
Amazon DynamoDB:
Amazon DynamoDB is a fully managed, NoSQL database service provided by
Amazon Web Services (AWS). It is designed to provide fast and scalable
performance for both read and write operations while maintaining low-latency
responses. DynamoDB is suitable for a wide range of applications, from small-scale
web applications to large-scale enterprise solutions.
Fully Managed: With DynamoDB, AWS takes care of the database management
tasks, such as hardware provisioning, setup, configuration, scaling, backups, and
maintenance. This allows developers to focus on building applications without
worrying about database administration.
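A minimal boto3 sketch of DynamoDB usage (the table and attribute names are illustrative; on-demand billing avoids provisioning throughput) creates a table, writes one item, and reads it back:

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Create a table keyed by a single partition key, billed per request
table = dynamodb.create_table(
    TableName="Sessions",
    KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Write and read a single item
table.put_item(Item={"session_id": "abc123", "user": "intern", "active": True})
item = table.get_item(Key={"session_id": "abc123"})["Item"]
print(item)
```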
8.2 Amazon Redshift:
Columnar Storage: Redshift stores data in columns rather than rows, which
allows for high compression rates and improved query performance for analytical
workloads. This columnar storage reduces I/O and improves query execution
times.
Scalability: Amazon Redshift is highly scalable and can easily scale up or down
based on your data volume and performance requirements. You can add or remove
nodes to handle changing workloads.
Fully Managed: Redshift is a fully managed service, meaning AWS takes care of
the underlying infrastructure, backups, patching, and other administrative tasks.
This allows you to focus on analyzing your data without worrying about managing
the database.
Integration with Other AWS Services: Redshift seamlessly integrates with other
AWS services, such as Amazon S3 for data loading, AWS Data Pipeline for data
ETL (Extract, Transform, Load), and AWS Glue for data cataloging and
transformation.
8.3 Amazon Aurora:
Performance: Aurora is designed for high performance and can deliver up to five
times the throughput of standard MySQL and up to three times the throughput of
standard PostgreSQL.
Scalability: Aurora can automatically scale both compute and storage resources to
handle increasing workloads. It can also create up to 15 read replicas, providing
high read scalability for read-heavy applications.
High Availability: Aurora offers high availability through Multi-AZ
deployments. In a Multi-AZ configuration, Aurora automatically replicates data to
a standby instance in a different Availability Zone, providing automatic failover in
case of a primary instance failure.
Chapter 9
If a target becomes unhealthy, the load balancer does not route traffic to that
unhealthy target, thereby ensuring your application is highly available and fault
tolerant. To know more about load balancing, refer to Load Balancing in Cloud Computing.
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and
the applications you run on AWS in real time. You can use CloudWatch to collect
and track metrics, which are variables you can measure for your resources and
applications.
The CloudWatch home page automatically displays metrics about every AWS
service you use. You can additionally create custom dashboards to display metrics
about your custom applications, and display custom collections of metrics that you
choose.
You can create alarms that watch metrics and send notifications or automatically
make changes to the resources you are monitoring when a threshold is breached.
For example, you can monitor the CPU usage and disk reads and writes of your
Amazon EC2 instances and then use that data to determine whether you should
launch additional instances to handle increased load. You can also use this data to
stop underused instances to save money.
With CloudWatch, you gain system-wide visibility into resource utilization,
application performance, and operational health.
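A minimal sketch of such an alarm with boto3 (the instance ID and SNS topic ARN are placeholders) alerts when average CPU utilization stays above 80% for two consecutive five-minute periods:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                      # 5-minute periods
    EvaluationPeriods=2,             # two consecutive periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```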
Accessing CloudWatch
You can access CloudWatch using the CloudWatch console, the AWS CLI, the CloudWatch API, or the AWS SDKs.
For example, the following Auto Scaling group has a minimum size of one
instance, a desired capacity of two instances, and a maximum size of four
instances. The scaling
policies that you define adjust the number of instances, within your minimum and
maximum number of instances, based on the criteria that you specify.
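A group like the one described above could be created with boto3 roughly as follows; the launch template name, subnet IDs, and target CPU value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create the Auto Scaling group: minimum 1, desired 2, maximum 4 instances
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=1,
    DesiredCapacity=2,
    MaxSize=4,
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-11111111,subnet-22222222",   # placeholder subnets
)

# Scaling policy: keep average CPU utilization around 50%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="target-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```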
CLOUD ARCHITECTURE
CHAPTER 1
1.1 Introduction to Cloud Architecture:
Migrating to the cloud can offer many business benefits compared to on-premises
environments, from improved agility and scalability to cost efficiency. While
many organizations may start with a “lift-and-shift” approach, where on-premises
applications are moved over with minimal modifications, ultimately it will be
necessary to construct and deploy applications according to the needs and
requirements of cloud environments.
Cloud architecture dictates how components are integrated so that you can pool,
share, and scale resources over a network. Think of it as a building blueprint for
running and deploying applications in cloud environments.
Explore how Google Cloud helps you design cloud architecture to match your
business needs. Use our Architecture Framework for guidance, recommendations,
and best practices to build and migrate your workloads to the cloud. Use our
Architecture Diagramming Tool for pre-built reference architectures and
customizing them to your use cases.
In cloud computing, there are various roles and responsibilities that individuals or
teams can take on to manage and utilize cloud resources effectively. The specific
roles may vary depending on the cloud service provider and the organization's
structure. Here are some common roles in cloud computing:
Cloud Architect: Responsible for designing and implementing the overall cloud
infrastructure, including selecting appropriate services, security protocols, and
integration with existing systems.
Cloud Engineer: Works on the technical aspects of the cloud infrastructure, such
as setting up and configuring cloud services, managing virtual machines, and
implementing networking solutions.
Cloud Security Specialist: Ensures the security and compliance of the cloud
infrastructure, including implementing security protocols, monitoring for
vulnerabilities, and responding to incidents.
Data Engineer: Works with big data and analytics solutions in the cloud,
designing and maintaining data pipelines, databases, and data storage solutions.
1. Operational Excellence
The Operational Excellence pillar includes the ability to support development and
run workloads effectively, gain insight into their operation, and continuously
improve supporting processes and procedures to deliver business value. You can
find prescriptive guidance on implementation in the Operational Excellence
Pillar whitepaper.
Design Principles
There are five design principles for operational excellence in the cloud:
• Anticipate failure
2. Security
The Security pillar includes the ability to protect data, systems, and assets to take
advantage of cloud technologies to improve your security. You can find
prescriptive guidance on implementation in the Security Pillar whitepaper.
Design Principles
There are seven design principles for security in the cloud:
• Enable traceability
3. Reliability
The Reliability pillar encompasses the ability of a workload to perform its
intended function correctly and consistently when it’s expected to. This includes
the ability to operate and test the workload through its total lifecycle. You can find
prescriptive guidance on implementation in the Reliability Pillar whitepaper.
Design Principles
There are five design principles for reliability in the cloud:
4. Performance Efficiency
The Performance Efficiency pillar includes the ability to use computing resources
efficiently to meet system requirements, and to maintain that efficiency as demand
changes and technologies evolve.
Design Principles
There are five design principles for performance efficiency in the cloud:
• Go global in minutes
5. Cost Optimization
The Cost Optimization pillar includes the ability to run systems to deliver business
value at the lowest price point. You can find prescriptive guidance on
implementation in the Cost Optimization Pillar whitepaper.
Design Principles
There are five design principles for cost optimization in the cloud:
6. Sustainability
The discipline of sustainability addresses the long-term environmental, economic,
and societal impact of your business activities. You can find prescriptive guidance
on implementation in the Sustainability Pillar whitepaper.
Design Principles
There are six design principles for sustainability in the cloud:
• Anticipate and adopt new, more efficient hardware and software offerings
Chapter 2
Now, let’s have a look at the different types of storage services offered by AWS.
2.3 Before AWS S3: Organizations had a difficult time finding, storing, and
managing all of their data. Not only that, running applications, delivering content to
customers, hosting high-traffic websites, or backing up emails and other files required a
lot of storage. Maintaining the organization’s repository was also expensive and time-
consuming for several reasons.
AWS S3 Benefits:
Low cost: S3 lets you store data in a range of “storage classes.” These classes are
based on the frequency and immediacy you require in accessing files.
Scalability: S3 charges you only for the resources you actually use, and there are no
hidden fees or overage charges. You can scale your storage resources to easily meet
your organization’s ever-changing demands.
Flexibility: S3 is ideal for a wide range of uses like data storage, data backup,
software delivery, data archiving, disaster recovery, website hosting, mobile
applications, IoT devices, and much more.
For example, a cross-account transfer with AWS DataSync involves two accounts:
Account A: The AWS account that you use for managing network resources. The
service endpoint that you activate the DataSync agent with also belongs to this
account.
Account B: The AWS account for the S3 bucket that you want to copy data to.
Chapter 3
Adding a Compute Layer
3.1 Introduction:
Adding a compute layer in AWS cloud architecture involves provisioning and
configuring the necessary resources to run your applications and workloads. AWS
offers several compute services, such as Amazon EC2, AWS Lambda, and AWS Batch,
that cater to different needs and use cases.
Chapter 4
4.1 Introduction:
Adding a database layer in AWS cloud architecture involves designing and deploying the
appropriate database solutions to store, manage, and access your application's data. Here's
how you can add a database layer to your architecture:
1. Selecting the Right Database Service:
• Amazon RDS (Relational Database Service): Managed relational database service
that supports popular database engines like MySQL, PostgreSQL, Oracle, and SQL
Server.
• Amazon DynamoDB: Managed NoSQL database service that offers seamless
scalability and low-latency performance.
• Amazon Aurora: High-performance, MySQL and PostgreSQL-compatible relational
database engine.
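As a sketch of provisioning the relational option with boto3 (the identifier, instance class, and credentials are placeholders; in practice the password should come from a secrets store rather than source code), the call below creates a Multi-AZ MySQL instance:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",          # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME_123!",    # placeholder; use a secrets store in practice
    MultiAZ=True,                           # standby replica in another AZ for failover
    BackupRetentionPeriod=7,                # daily automated backups kept for 7 days
)
print("Database instance creation started")
```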
Chapter 5
Creating a Networking Environment
5.1 Introduction:
Create two new AWS accounts for testing purposes in the same Region. When you create an
AWS account, it automatically creates a dedicated virtual private cloud (VPC) in each
account.
Configure a VPC peering connection between the directory owner and the directory
consumer account
The VPC peering connection you will create is between the directory consumer and directory
owner VPCs. Follow these steps to configure a VPC peering connection for connectivity with
the directory consumer account.
To create a VPC peering connection between the directory owner and directory consumer
account
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. Make sure to
sign in as a user with administrator credentials in the directory owner account.
2. In the navigation pane, choose Peering Connections. Then choose Create Peering
Connection.
3. Configure the following information:
Peering connection name tag: Provide a name that clearly identifies this
connection with the VPC in the directory consumer account.
VPC (Requester): Select the VPC ID for the directory owner account.
Under Select another VPC to peer with, ensure that My account and This region
are selected.
VPC (Accepter): Select the VPC ID for the directory consumer account.
4. Choose Create Peering Connection. In the confirmation dialog box, choose OK.
Since both VPCs are in the same Region, the administrator of the directory owner account
who sent the VPC peering request can also accept the peering request on behalf of the
directory consumer account.
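The same peering workflow can be scripted with boto3, as sketched below (the VPC IDs, consumer account ID, route table ID, and CIDR block are placeholders); because both VPCs are in the same Region, the request can be created and then accepted:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from the owner VPC to the consumer VPC
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-owner000000000000",          # directory owner VPC (placeholder)
    PeerVpcId="vpc-consumer00000000",       # directory consumer VPC (placeholder)
    PeerOwnerId="111122223333",             # consumer account ID (placeholder)
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (run with credentials allowed to accept on the consumer side)
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Route traffic destined for the peer VPC's CIDR through the peering connection
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # placeholder route table
    DestinationCidrBlock="10.1.0.0/16",     # peer VPC CIDR (placeholder)
    VpcPeeringConnectionId=peering_id,
)
```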
CHAPTER 6
Connecting Networks
6.1 Introduction:
1. VPC Peering: VPC peering allows you to connect two VPCs together. To set up VPC
peering, both VPCs must have non-overlapping CIDR blocks, and you need to create a
peering connection between them.
2. VPN (Virtual Private Network): AWS provides the option to set up a hardware VPN
connection or a software VPN using AWS VPN CloudHub. AWS VPN CloudHub enables you
to connect multiple on-premises VPN connections to your VPC over a secure, encrypted connection.
4.Transit Gateway: AWS Transit Gateway acts as a hub that simplifies network connectivity
and management for multiple VPCs and on-premises networks.
Diagram:
VPC-A and VPC-B are in different AWS regions (us-east-1 and us-west-2, respectively).
Each VPC has a NAT Gateway to allow instances in the private subnet to access the
internet.
EC2 instances, RDS databases, and other resources are placed in the respective VPCs.
Chapter 7
Securing User and Application Access
7.1 Introduction:
• Securing user and application access in AWS (Amazon Web Services) cloud is
crucial to protect your resources and data from unauthorized access and potential
security breaches. AWS provides various tools and best practices to help you
achieve this. Below are some key strategies for securing user and application
access in AWS:
1. IAM allows you to manage users, groups, and roles to control access to AWS
resources.
2. Create individual IAM users for each person needing access and assign appropriate
permissions through IAM policies.
3. Use IAM roles for AWS services and applications to access resources securely
without using long-term access keys.
Multi-Factor Authentication (MFA):
1. Enable MFA for IAM users to add an extra layer of security to their login process.
2. AWS supports various MFA options, such as virtual MFA devices, hardware MFA
devices, or SMS-based MFA.
Security Groups and Network Access Control Lists (NACLs):
1. Utilize security groups to control inbound and outbound traffic for EC2 instances and
other AWS resources.
2. Network Access Control Lists (NACLs) help control traffic at the subnet level.
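A short boto3 sketch of these controls (the VPC ID and CIDR ranges are placeholders) creates a security group that allows inbound HTTPS from anywhere and SSH only from one network range:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group inside an existing VPC (placeholder VPC ID)
sg_id = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS from anywhere and SSH from the office network",
    VpcId="vpc-0123456789abcdef0",
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {   # HTTPS open to the internet
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        {   # SSH restricted to a single network range (placeholder CIDR)
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        },
    ],
)
print("Created security group", sg_id)
```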
Encryption:
1. Encrypt data at rest with AWS Key Management Service (KMS) managed keys, and use SSL/TLS to protect data in transit.
Monitoring and Logging:
1. Enable AWS CloudTrail to log all API calls and monitor activities within your AWS
account.
2. Use Amazon CloudWatch to monitor and receive alerts for unusual activities.
1. Follow the least privilege principle while assigning permissions to users and
applications.
2. Grant the minimum required permissions to perform specific tasks and regularly
review and update access as needed.
1. AWS provides a range of security services like AWS Identity and Access
Management (IAM), AWS Shield, AWS WAF, AWS Firewall Manager, and others.
Conclusion:
Remember that security is an ongoing process, and it's essential to stay updated with the latest
security best practices and implement them as necessary to keep your AWS environment
secure.
Chapter 8
Implementing Elasticity, High Availability, and Monitoring
8.1 Introduction
• Implementing elasticity, high availability, and monitoring in AWS cloud architecture can help
ensure your applications are scalable, resilient, and efficiently managed. Here are some key
components and best practices to achieve these goals:
1. Elasticity:
• Auto Scaling Groups (ASG): Use ASGs to automatically adjust the number of
instances in response to changes in demand. ASGs can be based on CPU
utilization, network traffic, or custom metrics.
• Elastic Load Balancer (ELB): Distribute incoming traffic across multiple
instances to ensure even workload distribution and to achieve fault tolerance.
• Amazon RDS Read Replicas: If using Amazon RDS for databases, implement
read replicas to scale read-heavy workloads.
• AWS Lambda: Use serverless computing with AWS Lambda to automatically
scale compute resources based on event-driven triggers.
2. High Availability:
• Multi-Availability Zone (AZ) Deployment: Distribute your application across
multiple AZs to ensure redundancy and fault tolerance. If one AZ goes down, the
others can continue to handle requests.
• Load Balancing: Employ Elastic Load Balancing to distribute traffic across
multiple instances in different AZs.
• Amazon RDS Multi-AZ: For critical databases, enable Multi-AZ deployment to
have a standby replica in a different AZ for failover.
3. Monitoring:
• Amazon CloudWatch: Use CloudWatch to monitor AWS resources and
applications. Set up alarms to notify you of important events or performance
thresholds.
• AWS CloudTrail: Enable CloudTrail to record all API activity in your AWS
account, providing an audit trail for security and compliance purposes.
• AWS Config: Use AWS Config to track changes to your AWS resources and
maintain a history of resource configurations.
Conclusion:
Remember that the architecture's specific implementation may vary depending on your
application requirements and use case. Regularly review and test your architecture to ensure
it meets the desired performance, scalability, and availability goals.
Chapter 9
Automating Your Architecture
9.1 Introduction:
❖ Automating your architecture in AWS cloud can significantly improve operational
efficiency, reduce human errors, and facilitate seamless scaling. There are several AWS
services and tools that you can use to automate various aspects of your architecture.
1. Infrastructure as Code (IaC):
• Use IaC tools like AWS CloudFormation or AWS CDK (Cloud Development
Kit) to define and provision your infrastructure resources in a declarative
manner.
• Infrastructure as Code allows you to version control your infrastructure and
replicate environments easily.
2. Configuration Management:
• Utilize configuration management tools like AWS Systems Manager (SSM) or
third-party tools (e.g., Ansible, Chef, Puppet) to automate the configuration and
management of instances and applications.
• SSM Parameter Store can help centralize and manage configuration data
securely.
3. Continuous Integration and Continuous Deployment (CI/CD):
• Implement CI/CD pipelines using AWS CodePipeline, AWS CodeCommit,
AWS CodeBuild, and AWS CodeDeploy to automate the build, testing, and
deployment of your applications.
• CI/CD pipelines enable you to quickly and reliably release new features and
updates to your applications.
4. Auto Scaling and Elasticity:
• Set up Auto Scaling Groups (ASGs) to automatically scale resources based on
defined criteria (e.g., CPU utilization, network traffic).
• Use AWS Lambda to trigger automatic scaling based on event-driven triggers.
5. Serverless Architectures:
• Embrace serverless computing using AWS Lambda to automate event-driven
functions without the need to manage servers.
• Use AWS Step Functions to coordinate complex workflows involving multiple
Lambda functions.
6. Monitoring and Alerts:
• Leverage AWS CloudWatch for monitoring and set up alarms to trigger
automated actions based on predefined thresholds or patterns.
• Integrate AWS CloudWatch with AWS Lambda to automate responses to
specific events.
7. Backup and Recovery:
• Automate data backups using AWS Backup or custom scripts to schedule regular
backups and retention policies.
• Implement disaster recovery automation using services like AWS CloudEndure
or AWS Backup.
8. Automated Testing:
• Use AWS Device Farm for automated mobile application testing across different
devices and platforms.
• Implement automated testing for web applications using services like AWS
CodePipeline and CodeBuild.
9. Event-Driven Automation:
• Utilize AWS EventBridge (formerly known as Amazon CloudWatch Events) to
create rules and automate responses to events within your AWS environment.
10. Third-Party Integrations:
• Consider using third-party automation tools that integrate with AWS services to
enhance automation capabilities.
Conclusion:
Remember to thoroughly test and validate your automation scripts and workflows before
deploying them into production environments. Additionally, continuous monitoring and
periodic updates to your automation processes will ensure your architecture remains efficient
and secure over time.
Chapter 10
Caching Content
10.1 Introduction:
• Caching content is an important aspect of optimizing the performance and scalability of
applications, especially when hosted on cloud platforms like AWS (Amazon Web
Services). AWS provides several services and tools that you can use to implement
caching for your content. Here's a general approach:
1. Amazon CloudFront: Amazon CloudFront is a content delivery network (CDN)
service that distributes your content globally to reduce latency and deliver content
faster to users. CloudFront can be used to cache static and dynamic content, such as
images, videos, web pages, and API responses.
2. Amazon ElastiCache: Amazon ElastiCache is a managed in-memory caching service
that supports popular caching engines like Redis and Memcached. This is particularly
useful for applications that require low-latency access to data.
6. API Gateway Caching: If you're serving APIs, AWS API Gateway provides built-in
caching mechanisms that can cache the responses from your APIs, reducing the need
to repeatedly execute backend processes.
Conclusion:
• Remember that the choice of caching strategy will depend on your specific use case,
requirements, and the architecture of your application.
• Always monitor and fine-tune your caching setup to ensure that it's effectively improving
performance without causing data staleness or other issues.
Chapter 11: Building Decoupled Architectures
11.1 Introduction:
➢ Building decoupled architectures is a fundamental principle in designing modern,
scalable, and maintainable software systems. The key idea is that components
communicate through well-defined interfaces or asynchronous messaging rather than
depending on one another directly, so each part of the system can scale, change, and
fail independently.
Conclusion:
Remember that building a decoupled architecture requires careful design,
planning, and ongoing maintenance. While it offers benefits like scalability, flexibility, and
resilience, it also introduces complexities that need to be managed effectively.
Chapter 12: Planning for Disaster Recovery
12.1 Introduction:
Planning for disaster recovery in AWS cloud architecture is crucial to ensure the
availability, resilience, and continuity of your applications and data, even in the
face of unexpected events. Here's a step-by-step guide to help you plan for
disaster recovery in the AWS cloud:
Identify Critical Assets and Services: Determine which applications, data, and services are
critical for your business operations. This includes identifying dependencies between
components and understanding their interconnections.
• Define Recovery Objectives: Establish Recovery Time Objective (RTO) and
Recovery Point Objective (RPO) metrics. RTO defines the maximum acceptable
downtime, while RPO defines the maximum data loss that your business can tolerate.
• Choose a Region and Availability Zones: AWS offers multiple regions and
Availability Zones (AZs) worldwide. Design your architecture to span multiple AZs
within a region for high availability.
• Backup and Restore: Implement regular backups of your data and configurations
using services like Amazon S3, Amazon EBS snapshots, or database backups.
• Use Multi-Region Replication: For critical workloads, replicate data and services
across multiple regions using services like Amazon S3 cross-region replication,
Amazon RDS Multi-AZ, or third-party solutions.
• Disaster Recovery as Code: Use Infrastructure as Code (IaC) tools like AWS
CloudFormation or AWS CDK to define your infrastructure. This enables you to
recreate your environment quickly in case of a disaster.
• Automate Deployment and Scaling: Leverage AWS services like Amazon EC2
Auto Scaling and Amazon RDS Read Replicas to automatically scale and distribute
traffic during normal and peak loads.
• Implement High Availability Patterns: Use AWS services like Elastic Load
Balancing (ELB), Amazon Route 53 DNS failover, and Auto Scaling to distribute
traffic and ensure continuous availability.
Chapter 13: Bridging to Certification
❖ Becoming a certified AWS Cloud Architect is a great way to validate your skills and
expertise in designing and implementing scalable, reliable, and secure applications on
the Amazon Web Services platform. Here's a step-by-step guide on how to bridge the
gap and prepare for an AWS Cloud Architect certification:
• Take the Exam: On the exam day, stay calm, read the questions thoroughly, and
answer to the best of your knowledge. Remember that you have the option to mark
questions for review and return to them later.
Conclusion:
After successfully passing the exam, your AWS Cloud Architect certification will validate your
skills and enhance your credibility in the field. Keep in mind that AWS services and best
practices evolve, so continue to stay updated with the latest developments in AWS.