
Internship Report on Cloud Foundations

Accredited by NBA & NAAC with “A” Grade, Recognized by UGC under section 2(f) & 12(B), Approved by
AICTE – New Delhi, Permanently Affiliated to JNTUK, SBTET, Ranked as “A” Grade by Govt. of A.P.

Internship Report

On

CLOUD FOUNDATIONS AND ARCHITECTING

ADAPA LAKSHMI PRASANNA


(21K61A0403)
BACHELOR OF TECHNOLOGY

In

ELECTRONICS AND COMMUNICATION ENGINEERING

AICTE-EDUSKILLS supported by AWS CLOUD FOUNDATIONS


#806, DLF Cyber City, Technology Corridor, Bhubaneswar, Odisha 751024, eduskillsfoundation.org

Cloud Foundations – Chandler, USA

FROM MAY 2023

TO JULY 2023

(10 Weeks)


List of Contents
Topic Pg. No
Cloud Foundation
List of Figures
Chapter 1:Introduction to Cloud Computing 10

1.1: Introduction
Fig 1.1.1: Cloud Computing

Fig 1.1.2:Cloud Computing

1.2: Advantages of Cloud Computing 11

1.3: Introduction to AWS 12


1.3.1: Advantages of AWS

1.4 AWS Cloud Computing Models 12

Chapter 2:Cloud Economics and Billing

2.1: Introduction 13

2.2: Key aspects of cloud economics


 Total Cost of Ownership (TCO)
 On-demand
 Case study

2.3: AWS Organizations

2.4 Billing dashboard


2.5 AWS summary

Chapter 3:AWS Global Infrastructure


3.1: Introduction
Fig 3.1.1:Components of Global Infrastructure

Fig 3.1.2:Region
Chapter 4:AWS Cloud Security


4.1: AWS Shared Responsibility Model


Fig 4.1.1:AWS Shared Responsibility Model

4.2: AWS IAM


4.2.1: IAM gives you the following features

Fig 4.1.2:AWS IAM


4.3: Security in AWS Account Management

Chapter5:Networking and Content Delivery

5.1: Networking Basics

5.2: Cloud Networking

5.3:Amazon VPC
Fig 5.3.1:Amazon VPC

5.4:VPC Networking

5.5: VPC Security

5.6: Route 53

5.7:CloudFront

Chapter6:Compute

6.1:Cloud Compute Services

6.2:Amazon EC2

6.3:AWS Lambda
Fig 6.3.1: AWS Lambda

6.4:AWS Elastic Beanstalk

Chapter7:Storage

7.1:AWS EBS

7.2: AWS S3

7.3: AWS EFS

7.4:Amazon S3 Glacier


Chapter8:Databases

8.1: Amazon RDS

8.2:Amazon Redshift

8.3:Amazon Aurora

Chapter9:Autoscaling and Monitoring

9.1:Elastic Load Balancing

9.2:Types of Load Balancer

9.3: Amazon CloudWatch

Cloud Architecture
Chapter1:Introduction to Cloud Architecture

1.1:Introduction

1.2:Roles of Cloud Computing

1.3:AWS Well-Architected Framework

Chapter2: Adding a Storage Layer

2.1:Cloud Storage

2.2: Types of AWS Storage


Fig 2.21:Types of AWS storage

2.3: Before AWS S3

2.4 What Is AWS S3


Fig 2.41:AWS S3 Benefits
Chapter3: Adding a Compute Layer

3.1:Introduction
Fig 3.1:Adding a compute layer

Chapter4: Adding a Database Layer


4.1: Introduction

Chapter5:Creating a Networking Environment

5.1: Introduction
Fig 5.1: Creating a networking environment

Chapter6:Connecting Networks
6.1: Introduction
Fig 6.11: Connecting Networks

Chapter7:Security User and Application Access


7.1: Introduction
7.2: Identity and Access Management (IAM)
Chapter 8: Implementing Elasticity, High Availability & Monitoring
8.1: Introduction
Chapter9: Automating Your Architecture
9.1: Introduction
Chapter10: Caching Content
10.1: Introduction
Chapter11: Building Decoupled Architectures
11.1: Introduction
Chapter12: Planning for Disaster and Bridging to
Certification
12.1: Introduction


List of Figures
SI. No Fig. No Name of the figure Page No

1. 1.1 Introduction to Cloud Computing 10

2. 1.2 Advantages of Cloud Computing 11

3. 1.3 Introduction to AWS 12

4. 1.4 AWS Computing Models 12

5. 2.1 Cloud Economics and Billing 13

6. 2.2 Key aspects of cloud economics 13

7. 2.3 AWS Organization 15

8. 2.4 Billing Dashboard 15

9. 3.1 Introduction 16

10. 4.1 AWS Shared Responsibility Model 18

11. 4.2 AWS IAM 20

12. 4.3 Security in AWS Account Management 21

13. 5.1 Networking Basics 22

14. 5.2 Cloud Networking 22

15. 5.3 Amazon VPC 23

16. 5.4 VPC Security 23

17. 6.1 Cloud Compute Services 26

18. 6.2 Amazon EC2 27

19. 6.3 AWS Lambda 28

20. 6.4 AWS Elastic Beanstalk 29

21. 7.1 AWS EBS 30

22. 7.4 AMAZON S3 Glacier 32

23. 8.1 Amazon RDS 34

24. 8.3 Amazon Aurora 36

25. 9.1 Elastic Load Balancing 37


26. 9.3 Amazon CloudWatch 37

27. 1.2 Roles of Cloud Computing 40

28. 1.3 AWS Well-Architected Framework 41

29. 2.1 Cloud Storage 44

30. 2.2 Types of AWS Storage 44

31. 2.3 Before AWS S3 45

32. 2.4 What Is AWS S3 45

33. 5.1 Creating a Networking Environment 50

34. 6.1 Introduction to Connecting Networks 52

35. 7.2 Identity and Access Management (IAM) 54

36. 8.1 Introduction to Implementing Elasticity, High Availability & Monitoring 56

37. 9.1 Introduction to Caching Content 57

38. 11.1 Introduction to Building Decoupled Architectures 61

39. 12.1 Introduction to Planning for Disaster and Bridging to Certification 63


CLOUD FOUNDATION

CHAPTER 1
Introduction to Cloud Computing
1.1 Introduction: Cloud Computing is the delivery of computing services such
as servers, storage, databases, networking, software, analytics, intelligence, and
more over the internet (“the cloud”).

Fig 1.1.1:- Cloud Computing

Cloud Computing provides an alternative to the on-premises datacentre. With an
on-premises datacentre, we have to manage everything ourselves: purchasing and
installing hardware, virtualization, installing the operating system and any other
required applications, setting up the network, configuring the firewall, and setting
up storage for data. After doing all the setup, we become responsible for
maintaining it through its entire lifecycle.

But if we choose Cloud Computing, the cloud vendor is responsible for
hardware purchase and maintenance. The vendor also provides a wide variety of software
and platforms as a service. We can rent whatever services we require, and the cloud
computing services are charged based on usage.


Fig 1.1.2:-Cloud Computing

The cloud environment provides an easily accessible online portal that makes it
handy for the user to manage compute, storage, network, and application
resources. Some cloud service providers are shown in the following figure.

1.2 Advantages of cloud computing :


o Cost: It reduces the huge capital costs of buying hardware and software.
o Speed: Resources can be accessed in minutes, typically within a few
clicks.
o Scalability: We can increase or decrease the resources we use
according to the business requirements.
o Productivity: While using cloud computing, we put in less operational
effort. We do not need to apply patching, nor maintain
hardware and software. In this way, the IT team can be more
productive and focus on achieving business goals.
o Reliability: Backup and recovery of data are less expensive and very fast,
supporting business continuity.
o Security: Many cloud vendors offer a broad set of policies, technologies,
and controls that strengthen our data security.


1.3 Introduction to AWS :


Amazon Web Services (AWS), a subsidiary of Amazon.com, has invested billions
of dollars in IT resources distributed across the globe. These resources are shared
among all AWS account holders across the globe, yet the accounts themselves
are entirely isolated from each other. AWS provides on-demand IT resources to its
account holders on a pay-as-you-go pricing model with no upfront cost. Amazon
Web Services offers flexibility because you pay only for the services you use or
need. Enterprises use AWS to reduce the capital expenditure of building their
own private IT infrastructure (which can be expensive depending upon the
enterprise’s size and nature). AWS has its own physical fiber network that
connects Availability Zones, Regions, and edge locations. All the maintenance
cost is also borne by AWS, which saves a fortune for enterprises.
Security of the cloud is the responsibility of AWS, but security in the cloud is the customer’s
responsibility. Performance efficiency in the cloud has four main areas:
• Selection
• Review
• Monitoring
• Tradeoffs

1.3.1 Advantages of AWS :

• AWS allows you to easily scale your resources up or down as your needs change,
helping you to save money and ensuring that your application always has the
resources it needs.
• AWS provides a highly reliable and secure infrastructure, with multiple data
centers and a commitment to 99.99% availability for many of its services.
• AWS offers a wide range of services and tools that can be easily combined to
build and deploy a variety of applications, making it highly flexible.
• AWS offers a pay-as-you-go pricing model, allowing you to pay only for the
resources you actually use and avoid upfront costs and long-term
commitments.

1.4 AWS Cloud Computing Models :

There are three cloud computing models available on AWS.

1. Infrastructure as a Service (IaaS): It is the basic building block of cloud IT.
It generally provides access to data storage space, networking features, and
computer hardware (virtual or dedicated). It is highly flexible and
gives the developer management control over the IT resources. For
example: VPC, EC2, EBS.


2. Platform as a Service (PaaS): This is a type of service where AWS manages
the underlying infrastructure (usually the operating system and hardware). This
helps developers be more efficient, as they do not have to worry about the
undifferentiated heavy lifting required for running applications, such as
capacity planning, software maintenance, resource procurement, and patching,
and can focus more on deployment and management of the applications. For
example: RDS, EMR, Elasticsearch.
3. Software as a Service (SaaS): It is a complete product that usually runs in a
browser. It primarily refers to end-user applications. It is run and managed by
the service provider. The end user only has to worry about how to use the
software for their needs. For example: Salesforce.com, web-based
email, Office 365.

Cloud Economics and Billing

CHAPTER 2
2.1 Introduction :
Cloud computing provides organizations with numerous benefits. These
include additional security of resources, scalable infrastructure, agility, and more.
However, these benefits come at a cost.

Cloud economics establishes the cost-benefit situation of an organization upon


building resources on the cloud. You pay for storage, backup, networking, load
balancing, security, and more with the cloud. In addition, you need the IT
capability to architect the cloud properly. By analyzing these facets, IT leaders can
know whether the organization stands to leverage the advantages of cloud
computing.

Since cloud economics helps businesses determine whether cloud computing is right for
them, it is essential to perform this analysis before getting on with migration.

2.2 Key aspects of cloud economics :


Cloud economics deals with financial-related aspects such as returns and
costs. Some of the critical aspects of cloud economics include:
 Total Cost of Ownership (TCO): The total cost of ownership (TCO) is the cost
incurred in cloud planning, migration, architecting, and operating the cloud
infrastructure. It helps you understand how much your business will incur
after adopting a cloud model.


TCO defines all the direct and indirect costs involved. These include data centers,
maintenance and support, development, business continuity and disaster recovery,
networking, and more. This analysis compares the cost of on-premises infrastructure
with the cost of cloud computing, enabling a business to make the right decision.

Businesses also learn about opportunity costs through TCO. The main aim is to
attain a lower TCO than when operating on-premise. A business can either pause
migration efforts, pay the extra costs if it wants to achieve other goals, or migrate
in phases.

 On-demand :
On-demand pricing is a major factor to consider when planning cloud
migration. With on-premises computing, you buy a fixed capacity that you own.

The fixed capacity charges, however, change when you migrate to the cloud and
choose on-demand pricing. Costs become elastic and can quickly spiral out of
control if you don’t regularly monitor and control them.

Cost fluctuations resulting from the pay-as-you-go model can cost you a lot of
money. Therefore, you need a cost management tool to help you detect any
anomalies.

 Case study :
Amazon.com is the world’s largest online retailer. In 2011, Amazon.com switched
from tape backup to using Amazon Simple Storage Service (Amazon S3) for
backing up the majority of its Oracle databases. This strategy reduces complexity
and capital expenditures, provides faster backup and restore performance,
eliminates tape capacity planning for backup and archive, and frees up
administrative staff for higher-value operations. The company was able to replace
its backup tape infrastructure with cloud-based Amazon S3 storage, eliminate
backup software, and experience a 12X performance improvement, reducing
restore time from around 15 hours to 2.5 hours in select scenarios.

2.3 AWS Organizations :

AWS Organizations is an account management service that enables you to
consolidate multiple AWS accounts into an organization that you create and
centrally manage. AWS Organizations includes account management and
consolidated billing capabilities that enable you to better meet the budgetary,
security, and compliance needs of your business. As an administrator of an
organization, you can create accounts in your organization and invite existing
accounts to join the organization.


The AWS Organizations user guide defines key concepts, provides tutorials, and
explains how to create and manage an organization.
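
As a small illustration of how an organization's accounts can be inspected programmatically, the following sketch uses the boto3 SDK for Python; it assumes credentials for the organization's management account are already configured.

import boto3

org = boto3.client("organizations")

# List every member account in the organization (the API is paginated).
paginator = org.get_paginator("list_accounts")
for page in paginator.paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])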

AWS Billing and Cost Management :


The AWS Billing console contains features to pay your AWS bills and report your
AWS cost and usage. You can also use the AWS Billing console to manage your
consolidated billing if you're part of AWS Organizations. Amazon Web Services
automatically charges the credit card that you provided when you signed up for an
AWS account. You can view or update your credit card information at any time,
including designating a different credit card for AWS to charge. You can do this
on the Payment Methods page in the Billing console. For more details on the Billing
features available, see Features of the AWS Billing console.

The AWS Cost Management console has features that you can use for budgeting
and forecasting costs and methods for you to optimize your pricing to reduce your
overall AWS bill.

The AWS Cost Management console is integrated closely with the Billing
console. Using both together, you can manage your costs in a holistic manner.
You can use Billing console resources to manage your ongoing payments, and
AWS Cost Management console resources to optimize your future costs. For
information about AWS resources to understand, pay, or organize your AWS bills,
see the AWS Billing User Guide.

2.4 Billing dashboard :


You can use the dashboard page of the AWS Billing console to gain a general
view of your AWS spending. You can also use it to identify your highest cost
service or Region and view trends in your spending over the past few months. You
can use the dashboard page to see various breakdowns of your AWS usage. This is
especially useful if you're a Free Tier user. To view more details about your AWS
costs and invoices, choose Billing details in the left navigation pane. You can
customize your dashboard layout at any time by choosing the gear icon at the top
of the page to match your use case.
Your AWS Billing console dashboard contains the following sections. To create
your preferred layout, drag and drop sections of the Dashboard page. To
customize the visible sections and layout, choose the gear icon at the top of the
page. These preferences are stored for ongoing visits to the Dashboard page. To
temporarily remove sections from your view, choose the x icon for each section.
To make all sections visible, choose refresh at the top of the page.
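
Besides the dashboard, cost data can also be pulled programmatically. The sketch below, assuming the boto3 SDK and Cost Explorer enabled on the account, groups the last 30 days of unblended cost by service.

import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # Cost Explorer

end = date.today()
start = end - timedelta(days=30)

# Monthly unblended cost for the last 30 days, grouped by service.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])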

2.5 AWS summary:


This section is an overview of your AWS costs across all accounts,
AWS Regions, service providers, and services, and other KPIs. Total
compared to prior period displays your total AWS costs for the most


recent closed month. It also provides a comparison to your total


forecasted costs for the current month. Choose the gear icon on the card
to decide which KPIs you want to display.

Highest cost and usage details


This section shows your top service, account, or AWS Region by
estimated month-to-date (MTD) spend. To choose which to view,
choose the gear icon on the top right.
Cost trend by top five services
In this section, you can see the cost trend for your top five services for the
most recent three to six closed billing periods.

You can choose between chart types and time periods on the top of the
section. You can adjust additional preferences using the gear icon.

The columns provide the following information:

• Average: The average cost over the trailing three months.

• Total: The total for the most recent closed month.


• Trend: Compares the Total column with the Average column.
Account cost trend
This section shows the cost trend for your account for the most recent
three to six closed billing periods. If you're a management account of
AWS Organizations, the cost trend by top five section shows your top
five AWS accounts for the most recent three to six closed billing
periods. If invoices weren't already issued, the data isn't visible in this
section.

You can choose between chart types and time periods on the top of the
section. Adjust additional preferences using the gear icon.

The columns provide the following information:

• Average: The average cost over the trailing three months.


• Total: The total for the most recent closed month.
• Trend: Compares the Total column with the Average column.

AWS Global Infrastructure

CHAPTER 3


3.1 Introduction:
o AWS is a cloud computing platform which is globally available.
o The global infrastructure is the set of regions around the world in which AWS
operates. The global infrastructure supports a range of high-level IT services, which are shown
below.
o As of December 2018, AWS was available in 19 regions and 57 availability zones,
with 5 more regions and 15 more availability zones announced for 2019.

The following are the components that make up the AWS infrastructure:

o Availability Zones
o Regions
o Edge locations
o Regional Edge Caches

Fig 3.1.1:-Components of Global Infrastructure

Availability zone as a Data Center

o An availability zone is a facility that can be located somewhere in a country or
in a city. Inside this facility, i.e., the data centre, there can be multiple servers,
switches, load balancers, and firewalls. The things which interact with the
cloud sit inside the data centers.
o An availability zone can consist of several data centers, but if they are close
together, they are counted as one availability zone.

Region

o A region is a geographical area. Each region consists of two or more availability
zones.
o A region is a collection of data centers which are completely isolated from
other regions.
o A region consists of two or more availability zones connected to each
other through links.

Fig 3.1.2:-Region
o Availability zones are connected through redundant and isolated metro
fibers.
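
The Regions and Availability Zones visible to an account can be listed with a few API calls. A minimal sketch using the boto3 SDK for Python, assuming credentials are already configured:

import boto3

ec2 = boto3.client("ec2")

# Regions enabled for this account.
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print(regions)

# Availability Zones in the currently configured Region.
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]
print(zones)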

Edge Locations

o Edge locations are the endpoints of AWS used for caching content.
o Edge locations host CloudFront, Amazon's Content Delivery
Network (CDN).
o There are more edge locations than regions. Currently, there are over 150 edge
locations.
o An edge location is not a region but a small location that AWS maintains. It is used
for caching content.
o Edge locations are mainly located in most of the major cities to distribute
content to end users with reduced latency.
o For example, if a user accesses your website from Singapore, the
request is redirected to the edge location closest to Singapore,
where cached data can be read.

Regional Edge Cache

o AWS announced a new type of edge location in November 2016, known
as a Regional Edge Cache.
o Regional Edge Caches lie between the CloudFront origin servers
and the edge locations.
o A Regional Edge Cache has a larger cache than an individual edge location.
o Data is removed from the cache at the edge location while the data is
retained at the Regional Edge Caches.
o When a user requests data that is no longer available at the
edge location, the edge location retrieves the cached data from
the Regional Edge Cache instead of the origin servers, which have higher
latency.

AWS Cloud Security

CHAPTER 4
4.1 AWS Shared Responsibility Model :
Security and Compliance is a shared responsibility between AWS and the
customer. This shared model can help relieve the customer’s operational burden as
AWS operates, manages and controls the components from the host operating
system and virtualization layer down to the physical security of the facilities in
which the service operates. The customer assumes responsibility and management
of the guest operating system (including updates and security patches), other
associated application software as well as the configuration of the AWS provided
security group firewall. Customers should carefully consider the services they
choose as their responsibilities vary depending on the services used, the
integration of those services into their IT environment, and applicable laws and
regulations. The nature of this shared responsibility also provides the flexibility
and customer control that permits the deployment of customer solutions. As shown in the chart below,
this differentiation of responsibility is commonly referred to as Security “of” the
Cloud versus Security “in” the Cloud. AWS responsibility “Security of the Cloud”
- AWS is responsible for protecting the infrastructure that runs all of the services
offered in the AWS Cloud. This infrastructure is composed of the hardware,
software, networking, and facilities that run AWS Cloud services.

Customer responsibility “Security in the Cloud”– Customer responsibility will be


determined by the AWS Cloud services that a customer selects. This determines
the amount of configuration work the customer must perform as part of their
security responsibilities. For example, a service such as Amazon Elastic Compute
Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as
such, requires the customer to perform all of the necessary security configuration
and management tasks. Customers that deploy an Amazon EC2 instance are
responsible for management of the guest operating system (including updates and
security patches), any application software or utilities installed by the customer on
the instances, and the configuration of the AWS-provided firewall (called a
security group) on each instance. For abstracted services, such as Amazon S3 and
Amazon DynamoDB, AWS operates the infrastructure layer, the operating


system, and platforms, and customers access the endpoints to store and retrieve
data. Customers are responsible for managing their data (including encryption
options), classifying their assets, and using IAM tools to apply the appropriate
permissions.

Fig 4.1.1:-AWS Shared Responsibility Model

4.2 AWS IAM :


AWS Identity and Access Management (IAM) is a web service that helps you
securely control access to AWS resources. With IAM, you can centrally manage
permissions that control which AWS resources users can access. You use IAM to
control who is authenticated (signed in) and authorized (has permissions) to use
resources.

When you create an AWS account, you begin with one sign-in identity that has
complete access to all AWS services and resources in the account. This identity is
called the AWS account root user and is accessed by signing in with the email
address and password that you used to create the account. We strongly
recommend that you don't use the root user for your everyday tasks. Safeguard
your root user credentials and use them to perform the tasks that only the root user
can perform. For the complete list of tasks that require you to sign in as the root
user, see Tasks that require root user credentials in the AWS Account Management
Reference Guide.

4.2.1 IAM gives you the following features:

Shared access to your AWS account

You can grant other people permission to administer and use resources
in your AWS account without having to share your password or access
key.

Granular permissions

You can grant different permissions to different people for different
resources. For example, you might allow some users complete access to
Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage
Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other
AWS services. For other users, you can allow read-only access to just
some S3 buckets, or permission to administer just some EC2 instances, or
to access your billing information but nothing else.
Secure access to AWS resources for applications that run on Amazon EC2
You can use IAM features to securely provide credentials for
applications that run on EC2 instances. These credentials provide
permissions for your application to access other AWS resources.
Examples include S3 buckets and DynamoDB tables.
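
To make the idea of granular permissions concrete, here is a hedged sketch using boto3: it creates a user and attaches an inline policy granting read-only access to a single S3 bucket. The user name, policy name, and bucket name are placeholders.

import boto3

iam = boto3.client("iam")

# Create a user and grant read-only access to one S3 bucket.
iam.create_user(UserName="report-reader")

policy_document = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::example-bucket",
      "arn:aws:s3:::example-bucket/*"
    ]
  }]
}"""

iam.put_user_policy(
    UserName="report-reader",
    PolicyName="ReadOnlyExampleBucket",
    PolicyDocument=policy_document,
)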

How it works :

IAM provides the infrastructure necessary to control
authentication and authorization for your AWS account. The IAM infrastructure
is illustrated in the following figure.


Fig 4.1.2:-AWS IAM


4.3 Security in AWS Account Management :
Cloud security at AWS is the highest priority. As an AWS customer, you benefit
from a data center and network architecture that is built to meet the requirements
of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared
responsibility model describes this as security of the cloud and security in the
cloud:

• Security of the cloud – AWS is responsible for protecting the infrastructure that
runs AWS services in the AWS Cloud. AWS also provides you with services that
you can use securely. Third-party auditors regularly test and verify the
effectiveness of our security as part of the AWS Compliance Programs. To learn
about the compliance programs that apply to Account Management, see AWS
services in scope by compliance program.


• Security in the cloud – Your responsibility is determined by the AWS service that
you use. You are also responsible for other factors including the sensitivity of
your data, your company’s requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility
model when using AWS Account Management. It shows you how to configure
Account Management to meet your security and compliance objectives. You also
learn how to use other AWS services that help you to monitor and secure your
Account Management resources.

Networking and Content Delivery

CHAPTER 5
5.1 Networking Basics :
Starting your cloud networking journey can seem overwhelming, especially if you
are accustomed to the traditional on-premises way of provisioning hardware and
managing and configuring networks. Having a good understanding of core
networking concepts like IP addressing, TCP communication, IP routing, security,
and virtualization will help you as you begin gaining familiarity with cloud
networking on AWS. In the following sections, we answer common questions
about cloud networking and explore best practices for building infrastructure on
AWS.

5.2 Cloud networking :


Similar to traditional on-premises networking, cloud networking provides the
ability to build, manage, operate, and securely connect your networks across all
your cloud environments and distributed cloud and edge locations. Cloud
networking allows you to architect infrastructure that is resilient and highly
available, helping you to deploy your applications faster, at scale, and closer to
your end users when you need it.

5.3 Amazon VPC :


With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS
resources in a logically isolated virtual network that you've defined. This virtual
network closely resembles a traditional network that you'd operate in your own
data center, with the benefits of using the scalable infrastructure of AWS.


The following diagram shows an example VPC. The VPC has one subnet in each
of the Availability Zones in the Region, EC2 instances in each subnet, and an
internet gateway to allow communication between the resources in your VPC and
the internet.

Fig 5.3.1:-Amazon VPC

5.4 VPC Networking :


VPC (Virtual Private Cloud) is a fundamental networking service provided by
Amazon Web Services (AWS). It allows you to create a logically isolated section
of the AWS cloud where you can launch AWS resources, such as EC2 instances,
databases, and load balancers. With VPC, you have complete control over your
virtual network environment, including selecting your IP address range, creating
subnets, and configuring route tables and network gateways.

Here are some key aspects of VPC networking in AWS:

1. Virtual Private Cloud (VPC): When you create a VPC, it represents your
private virtual network in the AWS cloud. You can think of it as your own data
center in the cloud.

2. Subnets: Within a VPC, you can create one or more subnets, each associated
with a specific Availability Zone in a region. Subnets help you logically segment
your resources and provide high availability and fault tolerance.

3. IP Addressing: You can define the IP address range for your VPC using
CIDR (Classless Inter-Domain Routing) notation. For example, you can choose a
range like 10.0.0.0/16, which allows for up to 65,536 IP addresses.


4. Internet Gateway (IGW): An Internet Gateway is a horizontally scalable,


redundant, and highly available component that allows resources within your VPC
to communicate with the internet and vice versa.

5. Route Tables: Each subnet in a VPC is associated with a route table, which
defines the rules for routing traffic in and out of the subnet. By default, the main
route table allows communication within the VPC, but you can create custom
route tables to control specific traffic patterns.
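
The pieces listed above can be created with a handful of API calls. The following boto3 sketch (CIDR ranges are illustrative placeholders) builds a VPC with one public subnet, an internet gateway, and a default route:

import boto3

ec2 = boto3.client("ec2")

# Create a VPC and one subnet inside it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# Attach an internet gateway so the subnet can reach the internet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

# Route all non-local traffic through the internet gateway.
route_table = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(
    RouteTableId=route_table["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(RouteTableId=route_table["RouteTableId"], SubnetId=subnet["SubnetId"])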

5.5 VPC Security :


VPC security is a critical aspect of AWS's Virtual Private Cloud (VPC) service.
When setting up and managing a VPC, it's essential to implement various security
measures to protect your cloud resources from unauthorized access and potential
threats. Here are some key components of VPC security in AWS:

Security Groups: Security groups act as virtual firewalls for your EC2 instances
within a VPC. You can specify inbound and outbound traffic rules for each
security group, allowing you to control what traffic is allowed to reach your
instances. They operate at the instance level and can be associated with one or
more instances.

Network ACLs (Access Control Lists): Network ACLs are another layer of
security that operate at the subnet level. They control inbound and outbound
traffic at the subnet level and provide additional control over traffic flow between
subnets. Unlike security groups, network ACLs are stateless, meaning that you
must define rules for both inbound and outbound traffic.

Public and Private Subnets: By carefully designing your VPC with public and
private subnets, you can control which resources are exposed to the internet and
which remain private. Public subnets typically have a route to an Internet
Gateway, allowing instances within them to communicate with the internet, while
private subnets do not have direct internet access.

Internet Gateway: The Internet Gateway is a horizontally scalable, redundant,


and highly available component that enables resources within your VPC to access
the internet and allows the internet to reach your resources. Properly configuring
access to the Internet Gateway ensures secure internet connectivity for your public
resources.
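
As a small illustration of security groups, the sketch below (boto3, with a placeholder VPC ID) creates a group that permits inbound HTTPS only; all other inbound traffic is denied by default.

import boto3

ec2 = boto3.client("ec2")

# Create a security group in an existing VPC (placeholder ID).
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)

# Allow inbound TCP 443 from anywhere; everything else stays blocked.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)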

5.6 Route 53 :
Amazon Route 53 is a highly scalable and reliable Domain Name System (DNS)
web service provided by Amazon Web Services (AWS). It helps you manage the


domain names (e.g., example.com) and route incoming requests to the appropriate
AWS resources, such as EC2 instances, load balancers, or S3 buckets. Here's an
overview of Amazon Route 53:

Domain Registration: Route 53 allows you to register new domain names or


transfer existing ones. When you register a domain with Route 53, it becomes
available for use, and you can start configuring its DNS settings.

DNS Management: Route 53 provides a fully featured DNS management service.


You can create various types of DNS records like A records (IPv4 addresses),
AAAA records (IPv6 addresses), CNAME records (aliases), MX records (mail
exchange servers), and more. These records allow you to associate your domain
names with specific IP addresses or other resources.

Routing Policies: Route 53 offers several routing policies that allow you to
control how incoming traffic is distributed among multiple resources. Some of the
routing policies include:

Simple Routing: Directs traffic to a single resource.

Weighted Routing: Distributes traffic based on assigned weights to resources.

Latency-Based Routing: Routes traffic to the resource with the lowest latency
for the user.

Geolocation Routing: Directs traffic based on the user's geographic location.

Health Checks: Route 53 enables you to set up health checks for your resources,
such as EC2 instances or load balancers. Health checks monitor the health and
availability of resources, and Route 53 can automatically reroute traffic away from
unhealthy resources.
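
A DNS record can be created or updated with a single API call. A minimal boto3 sketch, where the hosted zone ID, record name, and IP address are placeholders:

import boto3

route53 = boto3.client("route53")

# Create or update an A record in an existing hosted zone.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)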

5.7 Cloudfront :
Amazon CloudFront is a web service that speeds up distribution of your static and
dynamic web content, such as .html, .css, .js, and image files, to your users.
CloudFront delivers your content through a worldwide network of data centers
called edge locations. When a user requests content that you're serving with
CloudFront, the request is routed to the edge location that provides the lowest
latency (time delay), so that content is delivered with the best possible
performance.

• If the content is already in the edge location with the lowest latency, CloudFront
delivers it immediately.
• If the content is not in that edge location, CloudFront retrieves it from an origin
that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or


an HTTP server (for example, a web server) that you have identified as the source
for the definitive version of your content.

As an example, suppose that you're serving an image from a traditional web


server, not from CloudFront. For example, you might serve an image,
sunsetphoto.png, using the URL https://example.com/sunsetphoto.png.

Your users can easily navigate to this URL and see the image. But they probably
don't know that their request is routed from one network to another—through the
complex collection of interconnected networks that comprise the internet—until
the image is found.

CloudFront speeds up the distribution of your content by routing each user request
through the AWS backbone network to the edge location that can best serve your
content. Typically, this is a CloudFront edge server that provides the fastest
delivery to the viewer. Using the AWS network dramatically reduces the number
of networks that your users' requests must pass through, which improves
performance. Users get lower latency—the time it takes to load the first byte of
the file—and higher data transfer rates.

You also get increased reliability and availability because copies of your files (also
known as objects) are now held (or cached) in multiple edge locations around the
world.
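
When a cached object such as sunsetphoto.png changes at the origin, you can force the edge locations to fetch a fresh copy. A hedged boto3 sketch in which the distribution ID is a placeholder:

import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate the cached copy of one object across all edge locations.
cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE12345",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/sunsetphoto.png"]},
        "CallerReference": "refresh-sunsetphoto-001",  # any unique string
    },
)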

CHAPTER 6

COMPUTE

6.1 Cloud compute services :


In Amazon Web Services (AWS), compute services provide the infrastructure and
resources to run your applications and workloads in the cloud. AWS offers a
variety of compute services to suit different use cases and application
requirements. Here are some of the key compute services provided by AWS:

Amazon EC2 (Elastic Compute Cloud): Amazon EC2 is a web service that
provides resizable compute capacity in the cloud. It allows you to launch virtual
machines, known as instances, with various operating systems and configurations.
EC2 offers flexibility in terms of instance types, storage options, and networking
capabilities.

Amazon ECS (Elastic Container Service): Amazon ECS is a fully managed


container orchestration service. It allows you to easily run and scale Docker


containers on instances or AWS Fargate (serverless compute for containers)


without managing the underlying infrastructure.

AWS Lambda: AWS Lambda is a serverless compute service that lets you run
code without provisioning or managing servers. You can upload your code and
specify the triggering events, and Lambda automatically scales and executes the
code in response to those events.

Amazon EKS (Elastic Kubernetes Service): Amazon EKS is a fully managed


Kubernetes service that allows you to deploy, manage, and scale containerized
applications using Kubernetes. EKS takes care of the underlying Kubernetes
infrastructure.

AWS Batch: AWS Batch enables you to run batch computing workloads at scale.
It dynamically provisions the optimal amount of compute resources based on the
job's requirements.

6.2 Amazon EC2 :


Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
secure, resizable compute capacity in the cloud. It is designed to make web-scale
computing easier for developers.

The simple web interface of Amazon EC2 allows you to obtain and configure
capacity with minimal friction. It provides you with complete control of your
computing resources and lets you run on Amazon’s proven computing
environment. Amazon EC2 reduces the time required to obtain and boot new
server instances (called Amazon EC2 instances) to minutes, allowing you to
quickly scale capacity, both up and down, as your computing requirements
change. Amazon EC2 changes the economics of computing by allowing you to
pay only for capacity that you actually use. Amazon EC2 provides developers and
system administrators the tools to build failure resilient applications and isolate
themselves from common failure scenarios.
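
Launching an instance takes one API call once an AMI and key pair exist. A minimal boto3 sketch in which the AMI ID and key pair name are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Launch one t3.micro instance from a placeholder AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
)
print(response["Instances"][0]["InstanceId"])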

Instance types

Amazon EC2 passes on to you the financial benefits of Amazon scale. You pay a
very low rate for the compute capacity you actually consume. Refer to
Amazon EC2 Instance Purchasing Options for a more detailed description.

• On-Demand Instances — With On-Demand Instances, you pay for compute


capacity by the hour or the second depending on which instances you run. No
longer-term commitments or upfront payments are needed. You can increase or
decrease your compute capacity depending on the demands of your application
and only pay the specified per hourly rates for the instance you use. On-Demand
Instances are recommended for:


o Users that prefer the low cost and flexibility of Amazon EC2 without any up-front
payment or long-term commitment
o Applications with short-term, spiky, or unpredictable workloads that cannot be
interrupted
o Applications being developed or tested on Amazon EC2 for the first time
• Spot Instances — Spot Instances are available at up to a 90% discount compared to
On-Demand prices and let you take advantage of unused Amazon EC2 capacity in
the AWS Cloud. You can significantly reduce the cost of running your
applications, grow your application’s compute capacity and throughput for the
same budget, and enable new types of cloud computing applications. Spot
Instances are recommended for:
o Applications that have flexible start and end times
o Applications that are only feasible at very low compute prices
o Users with urgent computing needs for large amounts of additional capacity

Cost optimization :

Cost optimization in Amazon EC2 is crucial to ensure that you are getting the
most value out of your cloud infrastructure while keeping your expenses under
control. Here are some strategies and best practices to optimize costs with
Amazon EC2:
Right-Sizing Instances: Choose the instance type that best matches your
workload requirements. If your workload is not resource-intensive, consider using
smaller or lower-cost instance types to avoid overprovisioning.

Reserved Instances (RIs): Utilize Reserved Instances for stable workloads with
predictable usage. RIs offer significant cost savings compared to On-Demand
Instances when you commit to a one- or three-year term.

Spot Instances: Take advantage of Spot Instances for fault-tolerant or flexible


workloads. Spot Instances can be much cheaper than On-Demand Instances but
are subject to availability and can be terminated with little notice if the spot price
exceeds your bid.

Scheduled Instances: Use Scheduled Instances for workloads that have


predictable schedules. Scheduled Instances allow you to reserve capacity for
specific time periods, ensuring you have the necessary resources when needed.

6.3 AWS Lambda :


AWS Lambda is an event-driven, serverless computing platform provided by
Amazon as a part of Amazon Web Services. You don’t need to worry
about which AWS resources to launch or how you will manage them. Instead,
you put the code on Lambda, and it runs.


In AWS Lambda, the code is executed in response to events in AWS
services, such as adding or deleting files in an S3 bucket, an HTTP request from Amazon API
Gateway, and so on. However, AWS Lambda can only be used to execute background
tasks.

AWS Lambda function helps you to focus on your core product and business logic
instead of managing operating system (OS) access control, OS patching, right-
sizing, provisioning, scaling, etc.
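
A Lambda function is just a handler that AWS invokes with the triggering event. The sketch below assumes the function is wired to S3 "ObjectCreated" notifications and simply logs each uploaded object:

# handler.py — a minimal Lambda handler for S3 object-created events.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200}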

How it works?

The following AWS Lambda example with a block diagram explains the working of
AWS Lambda in a few easy steps:

Fig 6.3.1:-AWS Lambda

6.4 AWS Elastic Beanstalk:


AWS Elastic Beanstalk is an AWS-managed service for web applications.
Elastic Beanstalk is a pre-configured EC2 server that can directly take up your
application code and environment configurations and use them to automatically
provision and deploy the required resources within AWS to run the web
application. Unlike EC2, which is Infrastructure as a Service, Elastic Beanstalk is
Platform as a Service (PaaS), as it allows users to directly use a pre-configured
server for their application. Of course, you can deploy applications without ever
having to use Elastic Beanstalk, but that would mean having to choose the
appropriate services from the vast array of services offered by AWS, manually
provisioning these AWS resources, and stitching them up together to form a
complete web application. Elastic Beanstalk abstracts the underlying
configuration work and allows you as a user to focus on more pressing matters.
This raises a concern: if Elastic Beanstalk configures most of the resources
itself and abstracts the underlying details, can developers change the
configuration if needed? The answer is yes. Elastic Beanstalk is provided to make
application deployment simpler, but at no level will it restrict developers from
changing any configurations.
How Elastic Beanstalk Works
Elastic Beanstalk is a fully managed service provided by AWS that makes it easy
to deploy and manage applications in the cloud without worrying about the
underlying infrastructure. First, create an application and select an environment,
configure the environment, and deploy the application.

CHAPTER 7

STORAGE

7.1 AWS EBS:-


AWS EBS (Amazon Elastic Block Store) is a scalable block storage service provided by
Amazon Web Services (AWS). It offers persistent block-level storage volumes that can
be attached to Amazon EC2 instances, providing durable storage for your applications
and data. Here are the key features and characteristics of AWS Elastic Block Store:

Persistent Storage: EBS volumes provide durable and persistent block storage
that persists independently from the lifecycle of the EC2 instance. This means that
data stored in EBS volumes remains intact even if the associated EC2 instance is
stopped, terminated, or fails.

Multiple Volume Types: AWS offers different EBS volume types to cater to
various use cases and performance requirements:

General Purpose SSD (gp2): Provides a balance of price and performance for
most workloads.

Provisioned IOPS SSD (io1): Offers high-performance for I/O-intensive


workloads with consistent and low-latency performance.

Cold HDD (sc1): Optimized for low-cost, infrequently accessed workloads with
throughput-oriented performance.


Throughput Optimized HDD (st1): Designed for large, frequently accessed


workloads that require sustained throughput.

Magnetic (standard): Legacy storage option with cost-effective magnetic disks


suitable for workloads with light I/O requirements.

EBS Snapshots: You can create point-in-time snapshots of EBS volumes, which
are stored in Amazon S3. These snapshots serve as backups and can be used to
restore volumes or create new volumes with the same data.

EBS Encryption: EBS volumes support encryption using AWS Key Management
Service (KMS) keys. Encryption provides an additional layer of data security,
especially for sensitive workloads.

EBS Volume Resizing: You can dynamically resize EBS volumes without
disrupting the associated EC2 instance. This allows you to adjust storage capacity
as per your evolving application needs.

EBS Multi-Attach: Some EBS volume types, like io1 and io2, support multi-
attach. This enables attaching a single EBS volume to multiple EC2 instances in
the same Availability Zone, allowing for shared storage for clustered or high-
availability applications.
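
A volume and a snapshot of it can be created with a couple of calls. A boto3 sketch (the Availability Zone is a placeholder, and the waiter simply blocks until the volume is usable):

import boto3

ec2 = boto3.client("ec2")

# Create a 20 GiB gp3 volume and wait until it is available.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Snapshot the volume; snapshots are stored in Amazon S3.
snapshot = ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="Backup of demo volume",
)
print(snapshot["SnapshotId"])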

7.2 AWS S3 :
Amazon S3 (Simple Storage Service) is an object storage service provided by
Amazon Web Services (AWS). Instead of block volumes attached to a single
instance, S3 stores data as objects inside buckets and is accessed over the
internet through HTTP APIs. Here are the key features and characteristics of
Amazon S3:

Object Storage: Data is stored as objects, each consisting of the data itself, a
key (name), and metadata. Objects are grouped into buckets, and each bucket
name must be globally unique.

Virtually Unlimited Capacity: You can store any amount of data, and individual
objects can be up to 5 TB in size. There is no need to provision storage capacity
in advance.

Durability and Availability: S3 is designed for 99.999999999% (11 nines) of
data durability by redundantly storing objects across multiple facilities within a
Region.

Storage Classes: S3 offers multiple storage classes, such as S3 Standard,
S3 Standard-IA (Infrequent Access), S3 One Zone-IA, and the S3 Glacier classes,
so you can balance cost against how frequently data is accessed.

Security and Access Control: Access to buckets and objects is controlled with
IAM policies, bucket policies, and access control lists, and data can be encrypted
at rest and in transit.

Versioning and Lifecycle Management: Buckets can keep multiple versions of an
object, and lifecycle rules can transition or expire objects automatically.

Common Use Cases: Backup and restore, data lakes and analytics, static website
hosting, and content distribution (often together with Amazon CloudFront).
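
Basic bucket and object operations look like this in boto3 (the bucket name is a placeholder and must be globally unique; buckets outside us-east-1 also need a location constraint):

import boto3

s3 = boto3.client("s3")
bucket = "my-example-report-bucket"  # placeholder, must be globally unique

# Create a bucket, upload an object, and read it back.
s3.create_bucket(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="reports/summary.txt", Body=b"hello from S3")

obj = s3.get_object(Bucket=bucket, Key="reports/summary.txt")
print(obj["Body"].read().decode())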

7.3 AWS EFS :


AWS EFS (Amazon Elastic File System) is a fully managed, scalable file storage
service provided by Amazon Web Services (AWS). It is designed to provide
shared file storage across multiple EC2 instances, making it ideal for applications
that require shared access to files and data. Here are the key features and
characteristics of Amazon EFS:

Shared File System: Amazon EFS allows you to create a scalable and shared file
system that can be mounted simultaneously by multiple EC2 instances. This
enables multiple instances to read and write data to the file system concurrently,
making it suitable for applications with shared workloads.

Elastic and Scalable: EFS automatically scales its file systems as data storage
needs grow or shrink. It can accommodate an almost unlimited number of files
and data, and there is no need to pre-provision storage capacity.

Data Durability and Availability: EFS is designed for high durability and
availability. It automatically replicates data across multiple Availability Zones
(AZs) within a region, ensuring that data is protected against hardware failures
and provides 99.99% availability.

Performance Modes: EFS offers two performance modes to cater to different


application requirements:


General Purpose Mode (default): Suitable for most workloads, providing a


balance of low latency and high throughput.

Max I/O Mode: Designed for applications with higher levels of aggregate
throughput and higher performance at the cost of slightly higher latency.

7.4 Amazon S3 Glacier :


Amazon S3 Glacier is a low-cost storage service provided by Amazon Web
Services (AWS) for data archiving and long-term backup. It is designed to store
data that is infrequently accessed and doesn't require real-time retrieval. Glacier
provides a secure, durable, and scalable solution for long-term storage of data,
making it ideal for compliance, regulatory, and archival requirements. Here are
the key features and characteristics of Amazon S3 Glacier:

Archival Storage: Glacier is primarily used for data archiving rather than
frequently accessed data storage. It is an excellent option for data that needs to be
retained for long periods without the need for real-time retrieval.

Durability and Availability: Similar to Amazon S3, Glacier ensures data


durability by replicating data across multiple facilities within a region. It provides
high availability to protect your data against hardware failures.

Vaults and Archives: In Glacier, data is organized into "vaults." A vault is a


container for storing archives, which are individual objects stored in Glacier. Each
archive can be up to 40 terabytes in size.

Data Retrieval Options: Glacier offers three retrieval options, each with different
costs and retrieval times:

Expedited Retrieval: Provides real-time access to your data but comes with
higher costs.

Standard Retrieval: The default option, which provides data retrieval within a
few hours.

Bulk Retrieval: Designed for large data retrieval, typically taking 5-12 hours.

Data Lifecycle Policies: You can create data lifecycle policies to automatically
transition data from S3 to Glacier based on specific criteria, such as data age or
access frequency. This helps optimize storage costs by moving infrequently
accessed data to Glacier.

Security and Encryption: Glacier provides data security through SSL (Secure
Sockets Layer) for data in transit and server-side encryption at rest. You can also


use AWS Key Management Service (KMS) to manage encryption keys for added
security.

Cost-Effective Storage: Glacier offers a lower cost per gigabyte compared to


Amazon S3, making it an economical solution for long-term data retention.
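
Lifecycle policies are the usual way data reaches Glacier from S3. A hedged boto3 sketch (bucket name and prefix are placeholders) that archives objects after 90 days and expires them after roughly seven years:

import boto3

s3 = boto3.client("s3")

# Move objects under "archive/" to Glacier after 90 days,
# and delete them after about seven years (2555 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-report-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)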

CHAPTER 8

DATABASES

8.1 Amazon RDS:


Amazon RDS (Relational Database Service) is a managed database service
provided by Amazon Web Services (AWS). It simplifies the process of setting up,
operating, and scaling relational databases in the cloud. RDS supports various
database engines and takes care of routine database tasks, allowing you to focus
on your applications and data. Here are the key features and characteristics of
Amazon RDS:

Managed Database Service: With RDS, AWS takes care of database


management tasks such as database setup, patching, backups, and automatic
failure detection. This allows you to offload administrative burdens and focus on
your application development.

Multiple Database Engines: RDS supports several popular database engines,


including:

Amazon Aurora (MySQL and PostgreSQL-compatible): A high-performance,


fully managed database engine designed for the cloud.

MySQL: A widely used open-source relational database.

PostgreSQL: An open-source, object-relational database.

Oracle: A commercial database provided by Oracle Corporation.

Microsoft SQL Server: A commercial database provided by Microsoft.

Easy Scalability: RDS allows you to scale your database instance vertically (by
increasing its compute and memory resources) or horizontally (by creating Read
Replicas for read-heavy workloads).


Automated Backups and Point-in-Time Recovery: RDS automatically creates


backups of your database, allowing you to restore to any specific point in time
within the retention period.

High Availability: Amazon RDS offers high availability through Multi-AZ


(Availability Zone) deployments. In a Multi-AZ configuration, a synchronous
standby replica is created in a different Availability Zone, providing automatic
failover in case of a primary instance failure.

Security Features: RDS provides security features such as encryption at rest and
in transit, IAM database authentication, and network isolation within a VPC
(Virtual Private Cloud).

Monitoring and Metrics: Amazon RDS integrates with Amazon CloudWatch,


allowing you to monitor database performance metrics and set up alarms to get
notified about critical events.

Read Replicas: For read-intensive workloads, you can create Read Replicas of
your primary database to offload read traffic and improve performance.
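
Creating a managed database instance is a single call. A boto3 sketch with placeholder identifier, credentials, and sizing (in practice the password should come from a secrets store):

import boto3

rds = boto3.client("rds")

# Create a small single-AZ MySQL instance with 20 GiB of storage.
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",  # placeholder credential
    MultiAZ=False,
)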

Amazon DynamoDB:
Amazon DynamoDB is a fully managed NoSQL database service provided by
Amazon Web Services (AWS). It is designed to provide fast and scalable
performance for both read and write operations while maintaining low-latency
responses. DynamoDB is suitable for a wide range of applications, from small-
scale web applications to large-scale enterprise solutions.

Here are the key features and characteristics of Amazon DynamoDB:

Fully Managed: With DynamoDB, AWS takes care of the database management
tasks, such as hardware provisioning, setup, configuration, scaling, backups, and
maintenance. This allows developers to focus on building applications without
worrying about database administration.

NoSQL Database: DynamoDB is a NoSQL database, which means it provides


flexible schema design and can handle unstructured or semi-structured data. It
does not require a fixed schema like traditional relational databases.

Scalable Performance: DynamoDB is designed for high scalability. It


automatically scales both read and write capacity to handle varying workloads.
This makes it suitable for applications with unpredictable or rapidly changing
traffic patterns.


Low-Latency Response Times: DynamoDB offers single-digit millisecond
latency for both read and write operations, making it well suited for applications
that require real-time access to data.

Data Replication and Availability: DynamoDB replicates data across multiple
Availability Zones (AZs) within a region to ensure high availability and fault
tolerance. Reads are eventually consistent by default, with strongly consistent
reads available on request.

Data Encryption: DynamoDB provides encryption at rest using AWS Key Management Service (KMS) to enhance data security.
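
As a small illustration of these characteristics, the following boto3 sketch creates an on-demand DynamoDB table and then writes and reads a single item. The table name, key, and attribute values are made-up examples.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")  # assumed region

# Create an on-demand (pay-per-request) table keyed by a single partition key.
table = dynamodb.create_table(
    TableName="Orders",                                          # placeholder name
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",    # read/write capacity scales automatically
)
table.wait_until_exists()

# Write and read an item; no fixed schema is required beyond the key attribute.
table.put_item(Item={"order_id": "1001", "status": "NEW", "total": 42})
response = table.get_item(Key={"order_id": "1001"})
print(response["Item"])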

8.2 Amazon Redshift :


Amazon Redshift is a fully managed, petabyte-scale data warehousing service
provided by Amazon Web Services (AWS). It is designed for analyzing large
volumes of data with high performance and cost-efficiency. Redshift is based on a
columnar storage architecture and is optimized for online analytical processing
(OLAP) workloads. Here are the key features and characteristics of Amazon
Redshift:

Columnar Storage: Redshift stores data in columns rather than rows, which
allows for high compression rates and improved query performance for analytical
workloads. This columnar storage reduces I/O and improves query execution
times.

Scalability: Amazon Redshift is highly scalable and can easily scale up or down
based on your data volume and performance requirements. You can add or remove
nodes to handle changing workloads.

Fully Managed: Redshift is a fully managed service, meaning AWS takes care of
the underlying infrastructure, backups, patching, and other administrative tasks.
This allows you to focus on analyzing your data without worrying about managing
the database.

Massively Parallel Processing (MPP): Redshift distributes data and query execution across multiple nodes, enabling parallel processing for faster query performance. This allows Redshift to handle large datasets and complex queries efficiently.

Column Compression and Encoding: Redshift uses various compression and encoding techniques to reduce storage costs and improve query performance.

Integration with Other AWS Services: Redshift seamlessly integrates with other
AWS services, such as Amazon S3 for data loading, AWS Data Pipeline for data
ETL (Extract, Transform, Load), and AWS Glue for data cataloging and
transformation.

Security and Encryption: Redshift supports various security features, including encryption at rest and in transit. It also integrates with AWS Identity and Access Management (IAM) for access control.
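
A brief sketch of running an analytical query against Redshift with the Redshift Data API through boto3 follows; the cluster name, database, user, and SQL statement are placeholder assumptions.

import time
import boto3

# The Redshift Data API runs SQL without managing JDBC/ODBC connections.
client = boto3.client("redshift-data", region_name="us-east-1")  # assumed region

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",   # placeholder cluster name
    Database="dev",
    DbUser="awsuser",
    Sql="SELECT product, SUM(amount) FROM sales GROUP BY product LIMIT 10;",
)

# The call is asynchronous; poll until the statement finishes, then fetch rows.
while True:
    status = client.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    print(client.get_statement_result(Id=resp["Id"])["Records"])
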
8.3 Amazon Aurora :

Amazon Aurora is a fully managed relational database service provided by Amazon Web Services (AWS). It is designed to be compatible with MySQL and PostgreSQL, offering the performance and availability of commercial-grade databases with the cost-effectiveness and ease of management of open-source databases. Amazon Aurora is a popular choice for applications that require high performance, scalability, and durability. Here are the key features and characteristics of Amazon Aurora:
Compatibility: Amazon Aurora is compatible with MySQL and PostgreSQL,
which means you can use existing MySQL or PostgreSQL applications, drivers,
and tools with Aurora without any code changes.

Performance: Aurora is designed for high performance and can deliver up to five
times the throughput of standard MySQL and up to three times the throughput of
standard PostgreSQL.
Scalability: Aurora can automatically scale both compute and storage resources to
handle increasing workloads. It can also create up to 15 read replicas, providing
high read scalability for read-heavy applications.
High Availability: Aurora offers high availability through Multi-AZ
deployments. In a Multi-AZ configuration, Aurora automatically replicates data to
a standby instance in a different Availability Zone, providing automatic failover in
case of a primary instance failure.

Chapter 9

Auto Scaling and Monitoring

9.1 Elastic Load Balancing :


The elastic load balancer is a service provided by Amazon in which the incoming
traffic is efficiently and automatically distributed across a group of backend
servers in a manner that increases speed and performance. It helps to improve the
scalability of your application and secures your applications. Load Balancer
allows you to configure health checks for the registered targets. In case any of the
registered targets (Auto Scaling group) fails the health check, the load balancer will


not route traffic to that unhealthy target, thereby ensuring your application is highly available and fault tolerant. To learn more, refer to Load Balancing in Cloud Computing.

9.2 Types of Load Balancer:


1. Classic Load Balancer: It is the traditional form of load balancer which was
used initially. It distributes the traffic among the instances and is not
intelligent enough to support host-based routing or path-based routing. It ends
up reducing efficiency and performance in certain situations. It is operated on
the connection level as well as the request level. Classic Load Balancer sits between the transport layer (TCP/SSL) and the application layer (HTTP/HTTPS).
2. Application Load Balancer: This type of Load Balancer is used when
decisions are to be made related to HTTP and HTTPS traffic routing. It
supports path-based routing and host-based routing. This load balancer works
at the Application layer of the OSI Model. The load balancer also supports
dynamic host port mapping.
3. Network Load Balancer: This type of load balancer works at the transport
layer (TCP/SSL) of the OSI model. It’s capable of handling millions of
requests per second. It is mainly used for load-balancing TCP traffic.
4. Gateway Load Balancer: Gateway Load Balancers provide the facility to deploy, scale, and manage virtual appliances such as firewalls. A Gateway Load Balancer combines a transparent network gateway with traffic distribution across the appliances.
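
The following boto3 sketch shows the idea behind the load balancers described above: it creates an Application Load Balancer, a target group with a health check, and an HTTP listener. The subnet and VPC IDs and the resource names are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # assumed region

# An internet-facing Application Load Balancer spanning two subnets (placeholder IDs).
lb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    Type="application",
    Scheme="internet-facing",
)

# Target group with a health check; unhealthy targets stop receiving traffic.
tg = elbv2.create_target_group(
    Name="demo-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC
    TargetType="instance",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
)

# Listener that forwards incoming HTTP traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)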

9.3 Amazon CloudWatch:

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and
the applications you run on AWS in real time. You can use CloudWatch to collect
and track metrics, which are variables you can measure for your resources and
applications.

The CloudWatch home page automatically displays metrics about every AWS
service you use. You can additionally create custom dashboards to display metrics
about your custom applications, and display custom collections of metrics that you
choose.

You can create alarms that watch metrics and send notifications or automatically
make changes to the resources you are monitoring when a threshold is breached.
For example, you can monitor the CPU usage and disk reads and writes of your
Amazon EC2 instances and then use that data to determine whether you should
launch additional instances to handle increased load. You can also use this data to
stop underused instances to save money.
With CloudWatch, you gain system-wide visibility into resource utilization,
application performance, and operational health.
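
The CPU example above can be expressed as a CloudWatch alarm. A minimal boto3 sketch follows; the instance ID and the SNS topic ARN used for notifications are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

# Alarm when average CPU of one EC2 instance stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder topic
)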


Accessing CloudWatch
You can access CloudWatch using any of the following methods:

Amazon CloudWatch console – https://console.aws.amazon.com/cloudwatch/

AWS CLI – For more information, see Getting Set Up with the AWS Command Line Interface in the AWS Command Line Interface User Guide.

CloudWatch API – For more information, see the Amazon CloudWatch API Reference.

AWS SDKs – For more information, see Tools for Amazon Web Services.

Amazon EC2 Auto Scaling:
Amazon EC2 Auto Scaling helps you ensure that you have the correct number of
Amazon EC2 instances available to handle the load for your application. You
create collections of EC2 instances, called Auto Scaling groups. You can specify
the minimum number of instances in each Auto Scaling group, and Amazon EC2
Auto Scaling ensures that your group never goes below this size. You can specify
the maximum number of instances in each Auto Scaling group, and Amazon EC2
Auto Scaling ensures that your group never goes above this size. If you specify the
desired capacity, either when you create the group or at any time thereafter,
Amazon EC2 Auto Scaling ensures that your group has this many instances. If
you specify scaling policies, then Amazon EC2 Auto Scaling can launch or
terminate instances as demand on your application increases or decreases.

For example, the following Auto Scaling group has a minimum size of one instance, a desired capacity of two instances, and a maximum size of four instances. The scaling policies that you define adjust the number of instances, within your minimum and maximum number of instances, based on the criteria that you specify.
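
Mirroring that example, the following boto3 sketch creates an Auto Scaling group with a minimum of one, a desired capacity of two, and a maximum of four instances; the launch template and subnet IDs are assumed placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Auto Scaling group matching the example: minimum 1, desired 2, maximum 4 instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="demo-asg",
    MinSize=1,
    DesiredCapacity=2,
    MaxSize=4,
    LaunchTemplate={"LaunchTemplateName": "demo-template", "Version": "$Latest"},  # placeholder
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # two AZs, placeholder subnets
)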


Fig 9.3.1:-AWS SDK

CLOUD ARCHITECTURE

CHAPTER 1
1.1 Introduction to cloud architecture:

Cloud architecture is a key element of building in the cloud. It refers to the layout that connects all the necessary components and technologies required for cloud computing.


Migrating to the cloud can offer many business benefits compared to on-premises
environments, from improved agility and scalability to cost efficiency. While
many organizations may start with a “lift-and-shift” approach, where on-premises
applications are moved over with minimal modifications, ultimately it will be
necessary to construct and deploy applications according to the needs and
requirements of cloud environments.

Cloud architecture dictates how components are integrated so that you can pool,
share, and scale resources over a network. Think of it as a building blueprint for
running and deploying applications in cloud environments.

Explore how Google Cloud helps you design cloud architecture to match your business needs. Use its Architecture Framework for guidance, recommendations, and best practices to build and migrate your workloads to the cloud. Use its Architecture Diagramming Tool for pre-built reference architectures and customize them to your use cases.

1.2 Roles of cloud computing :

In cloud computing, there are various roles and responsibilities that individuals or
teams can take on to manage and utilize cloud resources effectively. The specific
roles may vary depending on the cloud service provider and the organization's
structure. Here are some common roles in cloud computing:

Cloud Architect: Responsible for designing and implementing the overall cloud
infrastructure, including selecting appropriate services, security protocols, and
integration with existing systems.

Cloud Engineer: Works on the technical aspects of the cloud infrastructure, such
as setting up and configuring cloud services, managing virtual machines, and
implementing networking solutions.

DevOps Engineer: Focuses on the integration of development and operations to ensure smooth and automated deployment, monitoring, and management of applications in the cloud environment.


Cloud Security Specialist: Ensures the security and compliance of the cloud
infrastructure, including implementing security protocols, monitoring for
vulnerabilities, and responding to incidents.

Cloud Administrator: Manages day-to-day operations of the cloud environment, including user management, access control, and resource provisioning.

Cloud Developer: Develops applications specifically for cloud environments, leveraging cloud-native technologies and services.

Data Engineer: Works with big data and analytics solutions in the cloud,
designing and maintaining data pipelines, databases, and data storage solutions.

1.3 AWS Well-Architected Framework:


The AWS Well-Architected Framework helps cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications. The framework provides a consistent approach for customers and AWS Partners to evaluate architectures, and provides guidance to implement designs that scale with your application needs over time.

This section provides an overview of the Well-Architected Framework’s six pillars and explores their design principles and best practices. You can find more details, including definitions, FAQs, and resources, in each pillar’s whitepaper.

1. Operational Excellence

The Operational Excellence pillar includes the ability to support development and run workloads effectively, gain insight into their operation, and continuously improve supporting processes and procedures to deliver business value. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

Design Principles

There are five design principles for operational excellence in the cloud:

• Perform operations as code

• Make frequent, small, reversible changes

• Refine operations procedures frequently

• Anticipate failure


• Learn from all operational failures

2. Security
The Security pillar includes the ability to protect data, systems, and assets to take
advantage of cloud technologies to improve your security. You can find
prescriptive guidance on implementation in the Security Pillar whitepaper.

Design Principles
There are seven design principles for security in the cloud:

• Implement a strong identity foundation

• Enable traceability

• Apply security at all layers

• Automate security best practices

• Protect data in transit and at rest

• Keep people away from data

• Prepare for security events

3. Reliability
The Reliability pillar encompasses the ability of a workload to perform its
intended function correctly and consistently when it’s expected to. This includes
the ability to operate and test the workload through its total lifecycle. You can find
prescriptive guidance on implementation in the Reliability Pillar whitepaper.

Design Principles
There are five design principles for reliability in the cloud:

• Automatically recover from failure

• Test recovery procedures

• Scale horizontally to increase aggregate workload availability

• Stop guessing capacity

• Manage change in automation

4. Performance Efficiency
The Performance Efficiency pillar includes the ability to use computing resources
efficiently to meet system requirements, and to maintain that efficiency as demand


changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

Design Principles
There are five design principles for performance efficiency in the cloud:

• Democratize advanced technologies

• Go global in minutes

• Use serverless architectures

• Experiment more often

• Consider mechanical sympathy

5. Cost Optimization
The Cost Optimization pillar includes the ability to run systems to deliver business
value at the lowest price point. You can find prescriptive guidance on
implementation in the Cost Optimization Pillar whitepaper.

Design Principles

There are five design principles for cost optimization in the cloud:

• Implement cloud financial management

• Adopt a consumption model

• Measure overall efficiency

• Stop spending money on undifferentiated heavy lifting

• Analyze and attribute expenditure

6. Sustainability
The discipline of sustainability addresses the long-term environmental, economic,
and societal impact of your business activities. You can find prescriptive guidance
on implementation in the Sustainability Pillar whitepaper.

Design Principles
There are six design principles for sustainability in the cloud:

• Understand your impact

• Establish sustainability goals


• Maximize utilization


• Anticipate and adopt new, more efficient hardware and software offerings

• Use managed services

• Reduce the downstream impact of your cloud workloads

Chapter 2

Adding a storage layer

2.1 Cloud Storage :


Cloud storage is a web service where your data can be stored, accessed, and quickly backed up by users on the internet. It is more reliable, scalable, and secure than traditional on-premises storage systems.

Cloud storage is offered in two models:

1. Pay only for what you use

2. Pay on a monthly basis

Now, let’s have a look at the different types of storage services offered by AWS.

2.2 Types of AWS Storage

AWS offers the following services for storage purposes:


Fig 2.21:-Types of AWS storage

2.3 Before AWS S3: Organizations had a difficult time finding, storing, and managing all of their data. Not only that, running applications, delivering content to customers, hosting high-traffic websites, or backing up emails and other files required a lot of storage. Maintaining the organization’s repository was also expensive and time-consuming for several reasons. Challenges included the following:

1. Having to purchase hardware and software components

2. Requiring a team of experts for maintenance

3. A lack of scalability based on your requirements

4. Data security requirements

2.4 What is AWS S3?


Amazon S3 (Simple Storage Service) provides object storage, which is built for
storing and recovering any amount of information or data from anywhere over the
internet. It provides this storage through a web services interface. While designed
for developers for easier web-scale computing, it provides 99.999999999 percent
durability and 99.99 percent availability of objects. It can also store computer files
up to 5 terabytes in size.
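
A minimal boto3 sketch of working with S3 objects follows: it uploads an object, reads it back, and generates a time-limited download URL. The bucket and key names are placeholders (bucket names must be globally unique).

import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # assumed region
bucket = "example-report-bucket"                  # placeholder bucket name

# Store an object and read it back over the S3 web service interface.
s3.put_object(Bucket=bucket, Key="backups/notes.txt", Body=b"hello from S3")
obj = s3.get_object(Bucket=bucket, Key="backups/notes.txt")
print(obj["Body"].read())

# Generate a time-limited URL so a client can download the object without credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": "backups/notes.txt"},
    ExpiresIn=3600,
)
print(url)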

AWS S3 Benefits:

Some of the benefits of AWS S3 are:

Durability: S3 provides 99.999999999 percent durability.

Low cost: S3 lets you store data in a range of “storage classes.” These classes are
based on the frequency and immediacy you require in accessing files.
Scalability: S3 charges you only for what resources you actually use, and there are no
hidden fees or overage charges. You can scale your storage resources to easily meet
your organization’s ever-changing demands.

Availability: S3 offers 99.99 percent availability of objects

Security: S3 offers an impressive range of access management tools and encryption features that provide top-notch security.


Flexibility: S3 is ideal for a wide range of uses like data storage, data backup,
software delivery, data archiving, disaster recovery, website hosting, mobile
applications, IoT devices, and much more.

Transferring data to Amazon S3 across accounts:

Account A: The AWS account that you use for managing network resources. The
service endpoint that you activate the DataSync agent with also belongs to this
account.
Account B: The AWS account for the S3 bucket that you want to copy data to.

The following diagram illustrates this scenario.

Fig 2.41:-AWS S3 Benefits


Chapter 3

Adding A Compute Layer

3.1 Introduction:
Adding a compute layer in AWS cloud architecture involves provisioning and
configuring the necessary resources to run your applications and workloads. AWS
offers several compute services that cater to different needs and use cases. Here's how
you can add a compute layer to your architecture:

1. Selecting the Right Compute Service:


• Amazon EC2 (Elastic Compute Cloud): Virtual servers that you can
configure and manage, giving you full control over the operating system
and software stack.
• AWS Lambda: A serverless compute service that runs code in response to
events, automatically scaling based on demand.

• Amazon ECS (Elastic Container Service): A container orchestration service that allows you to manage and scale Docker containers.
• Amazon EKS (Elastic Kubernetes Service): A managed Kubernetes
service for orchestrating containerized applications.
• AWS Batch: A service for running batch computing workloads on the
AWS Cloud.
2. Provisioning Compute Resources:
• For Amazon EC2: Launch EC2 instances based on your application's requirements. Choose the appropriate instance type, operating system, and networking options (see the sketch at the end of this section).
• For AWS Lambda: Write and deploy serverless functions using AWS Lambda. Set up triggers (such as API Gateway, S3 events, etc.) to execute your code in response to events.
• For Amazon ECS and Amazon EKS: Define and configure containers for your application, create tasks or pods, and specify resource requirements.
• For AWS Batch: Define batch jobs, queues, and compute environments. AWS Batch manages the execution environment, including provisioning and scaling resources.
3. Networking and Security:
• Configure security groups and network settings for your compute instances,
containers, or serverless functions.
• Implement network segmentation and isolation using Amazon VPC
(Virtual Private Cloud) to control traffic flow and ensure security.
4. Load Balancing and Scaling:
• Use Elastic Load Balancing (ELB) to distribute incoming traffic across
multiple compute instances or containers to improve availability and fault
tolerance.
• Implement auto scaling to dynamically adjust the number of compute
resources based on traffic and workload demands.
5. Monitoring and Management:
• Set up monitoring and logging using Amazon CloudWatch to collect and
analyze metrics, logs, and events from your compute resources.
• Implement application and infrastructure monitoring to ensure performance
and identify potential issues.
6. Application Deployment:
• Deploy your applications using deployment tools like AWS CodeDeploy,
AWS Elastic Beanstalk, or Kubernetes deployments.
7. Cost Optimization:
• Optimize costs by selecting the appropriate instance types, using reserved
instances, and implementing efficient scaling strategies.

8. High Availability and Fault Tolerance:


• Design your compute layer to be highly available by distributing resources
across multiple Availability Zones (AZs) within a region.
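
As a sketch of the EC2 provisioning step described above, the following boto3 call launches a single tagged instance; the AMI, subnet, and security group IDs are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch one small instance with a Name tag; IDs below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=["sg-0123456789abcdef0"],
    SubnetId="subnet-aaaa1111",
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "compute-layer-demo"}],
    }],
)
print(response["Instances"][0]["InstanceId"])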


Fig 3.1:-Adding a compute layer


Conclusion:
Remember to consider your application's requirements, scalability needs, and expected traffic
patterns when selecting and configuring compute resources in your AWS cloud architecture.
Each compute service offers unique benefits and features.

Chapter 4

Adding A Database Layer

4.1 Introduction:
Adding a database layer in AWS cloud architecture involves designing and deploying the
appropriate database solutions to store, manage, and access your application's data. Here's
how you can add a database layer to your architecture:
1. Selecting the Right Database Service:
• Amazon RDS (Relational Database Service): Managed relational database service
that supports popular database engines like MySQL, PostgreSQL, Oracle, and SQL
Server.
• Amazon DynamoDB: Managed NoSQL database service that offers seamless
scalability and low-latency performance.
• Amazon Aurora: High-performance, MySQL and PostgreSQL-compatible relational
database engine.


• Amazon DocumentDB: Fully managed document database service compatible with MongoDB workloads.
• Amazon Redshift: Fully managed data warehousing service for analytics and
reporting.
• Amazon Neptune: Managed graph database service for building applications that
require graph representations of data.
• Amazon ElastiCache: Managed in-memory caching service for improving the
performance of your applications.
2. Database Provisioning and Configuration:
• For Amazon RDS: Choose the appropriate database engine, instance type, storage,
and configuration options. Set up backups, automated snapshots, and enable Multi-AZ
deployments for high availability.
• For Amazon DynamoDB: Define tables, specify read and write capacity units, and
set up partition keys and secondary indexes.
• For other database services: Configure the database settings and options based on
your application's requirements.
3. Data Modeling and Schema Design:
• Design your database schema and data model to optimize for read and write
operations. Follow best practices for relational or NoSQL data structures.
• Define indexes, primary keys, and secondary indexes to ensure efficient data retrieval.
4. Data Migration and Import:
• If you're migrating an existing database, plan and execute the data migration process
using AWS Database Migration Service or other appropriate tools.
5. Data Security and Access Control:
• Implement security measures like encryption at rest and in transit for sensitive data.
• Set up Identity and Access Management (IAM) roles and policies to control access to
your databases.
6. High Availability and Failover:
• For Amazon RDS: Utilize Multi-AZ deployments to ensure database availability in
case of a failure.
• Implement read replicas for read scalability and fault tolerance.
7. Monitoring and Performance Optimization:
• Set up monitoring using Amazon CloudWatch to track database performance metrics,
such as CPU utilization, disk I/O, and query performance.
• Optimize query performance by analyzing and tuning slow queries, using appropriate
indexing strategies, and optimizing database parameters.
8. Backup and Recovery:
• Configure automated backups and database snapshots to protect your data.
Test the backup and restore process to ensure data integrity.


9. Scaling and Elasticity:
• Depending on the database service, implement scaling strategies such as increasing instance sizes, adding read replicas, or partitioning data.
10.Disaster Recovery and Data Replication:
• Implement data replication across regions or Availability Zones for improved disaster
recovery and geographic redundancy.
11. Integration with Application:
• Integrate your application with the database by using appropriate database drivers, libraries, and connection strings (see the sketch at the end of this section).
12.Lifecycle Management:
• As your application evolves, consider data archiving, retention policies, and data
cleanup strategies to manage the lifecycle of your data.
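
Item 11 above (integrating the application with the database) might look like the following sketch, which connects to an assumed RDS MySQL endpoint with the PyMySQL driver; the host, credentials, and table are placeholders, and a real deployment would read the password from AWS Secrets Manager.

import pymysql

# Connect to an RDS MySQL endpoint (placeholder host and credentials).
connection = pymysql.connect(
    host="demo-mysql.abc123xyz.us-east-1.rds.amazonaws.com",
    user="admin",
    password="ChangeMe-123",
    database="appdb",
    connect_timeout=5,
)

try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT id, name FROM customers LIMIT 5;")
        for row in cursor.fetchall():
            print(row)
finally:
    connection.close()
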
Conclusion:
Select the database service that aligns best with your application's requirements and design
considerations. Properly planning and configuring your database layer is essential for
ensuring data availability, durability, and performance in your AWS cloud architecture.

Chapter 5

Creating a Networking Environment

5.1 Introduction:
Create two new AWS accounts for testing purposes in the same Region. When you create an
AWS account, it automatically creates a dedicated virtual private cloud (VPC) in each
account.
Configure a VPC peering connection between the directory owner and the directory
consumer account
The VPC peering connection you will create is between the directory consumer and directory
owner VPCs. Follow these steps to configure a VPC peering connection for connectivity with
the directory consumer account.
To create a VPC peering connection between the directory owner and directory consumer
account
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. Make sure to sign in as a user with administrator credentials in the directory owner account.

2. In the navigation pane, choose Peering Connections. Then choose Create Peering
Connection.
3. Configure the following information:


 Peering connection name tag: Provide a name that clearly identifies this
connection with the VPC in the directory consumer account.
 VPC (Requester): Select the VPC ID for the directory owner account.
 Under Select another VPC to peer with, ensure that My account and This region
are selected.
 VPC (Accepter): Select the VPC ID for the directory consumer account.
4. Choose Create Peering Connection. In the confirmation dialog box, choose OK.

Since both VPCs are in the same Region, the administrator of the directory owner account
who sent the VPC peering request can also accept the peering request on behalf of the
directory consumer account.

To accept the peering request on behalf of the directory consumer account

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.


2. In the navigation pane, choose Peering Connections.
3. Select the pending VPC peering connection. (Its status is Pending Acceptance.) Choose
Actions, Accept Request.
4. In the confirmation dialog, choose Yes, Accept. In the next confirmation dialog box,
choose Modify my route tables now to go directly to the route tables page.
To add an entry to the VPC route table in the directory owner account
1. While in the Route Tables section of the Amazon VPC console, select the route
table for the directory owner VPC.
2. Choose the Routes tab, choose Edit routes, and then choose Add route.
3. In the Destination column, enter the CIDR block for the directory consumer VPC.
4. In the Target column, enter the VPC peering connection ID (such as pcx123456789abcde000) for the peering connection that you created earlier in the directory owner account.
5. Choose Save changes.
To add an entry to the VPC route table in the directory consumer account
1. While in the Route Tables section of the Amazon VPC console, select the route
table for the directory consumer VPC.
2. Choose the Routes tab, choose Edit routes, and then choose Add route.
3. In the Destination column, enter the CIDR block for the directory owner VPC.

4. In the Target column, type in the VPC peering connection ID (such as pcx123456789abcde001) for the peering connection that you created earlier in the directory consumer account.
5. Choose Save changes.
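
The console steps above can also be scripted. The following boto3 sketch requests, accepts, and routes a peering connection, written as if one set of credentials can see both VPCs; all IDs and CIDR blocks are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # both VPCs assumed in one Region

# Request a peering connection from the owner VPC to the consumer VPC (placeholder IDs).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0owner0123456789",
    PeerVpcId="vpc-0consumer0123456",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (possible here because we control both VPCs).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add a route in each VPC's route table pointing at the other VPC's CIDR block.
ec2.create_route(RouteTableId="rtb-0owner01234", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0consumer01", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)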


Fig 5.1:- Creating a networking environment


Bastion host :
It is a server that provides access to a private network from an external network, such
as the internet. You can use a bastion host to minimize the chances of penetration and
potential attack on resources in your private network.

CHAPTER 6

Connecting Networks

6.1 Introduction:
1. VPC Peering: VPC peering allows you to connect two VPCs together. To set up VPC peering, both VPCs must have non-overlapping CIDR blocks, and you need to create a peering connection between them.

2. VPN (Virtual Private Network): AWS provides the option to set up a hardware VPN connection or a software VPN using AWS VPN CloudHub. AWS VPN CloudHub enables you to connect multiple on-premises VPNs to your VPC over secure, encrypted connections.

3. Direct Connect: AWS Direct Connect allows you to establish a dedicated network connection between your on-premises data center and AWS. Direct Connect can be used to access public AWS services, private VPC resources, or a combination of both.


4. Transit Gateway: AWS Transit Gateway acts as a hub that simplifies network connectivity and management for multiple VPCs and on-premises networks.

5. AWS Global Accelerator: AWS Global Accelerator improves availability and performance by routing user traffic over the AWS global network. When setting up network connections in AWS, it's crucial to consider security, performance, and scalability requirements. Additionally, AWS offers various networking tools and features to enhance monitoring, security, and control over your network traffic, such as Network Access Control Lists (NACLs), Security Groups, and AWS Private Certificate Authority (CA).

Diagram:

Fig 6.11:-Connecting Networks


Description:
 The diagram depicts an on-premises data center connected to AWS using a VPN Gateway.

 VPC-A and VPC-B are in different AWS regions (us-east-1 and us-west-2, respectively).

 Each VPC has a NAT Gateway to allow instances in the private subnet to access the
internet.

 EC2 instances, RDS databases, and other resources are placed in the respective VPCs.

Chapter 7

Security User and Application Access


7.1 Introduction:
• Securing user and application access in AWS (Amazon Web Services) cloud is
crucial to protect your resources and data from unauthorized access and potential
security breaches. AWS provides various tools and best practices to help you
achieve this. Below are some key strategies for securing user and application
access in AWS:

7.2 Identity and Access Management (IAM):

1. IAM allows you to manage users, groups, and roles to control access to AWS
resources.
2. Create individual IAM users for each person needing access and assign appropriate permissions through IAM policies (a brief sketch follows this list).
3. Use IAM roles for AWS services and applications to access resources securely
without using long-term access keys.
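
A minimal boto3 sketch of items 1 and 2 above follows; the user name and the managed policy are illustrative assumptions.

import boto3

iam = boto3.client("iam")

# Create an individual IAM user and grant read-only permissions via an AWS managed policy.
iam.create_user(UserName="report.analyst")            # placeholder user name
iam.attach_user_policy(
    UserName="report.analyst",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Roles are preferred for applications: they provide temporary credentials
# instead of long-term access keys (item 3 above).
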
 Multi-Factor Authentication (MFA):

1. Enable MFA for IAM users to add an extra layer of security to their login process.
2. AWS supports various MFA options, such as virtual MFA devices, hardware MFA
devices, or SMS-based MFA.
 Security Groups and Network Access Control Lists (NACLs):
1. Utilize security groups to control inbound and outbound traffic for EC2 instances and
other AWS resources.
2. Network Access Control Lists (NACLs) help control traffic at the subnet level.

 Encryption:

1. Enable encryption for data at rest and data in transit.


2. Use AWS Key Management Service (KMS) to manage encryption keys securely.
 AWS Web Application Firewall (WAF):

1. Protect web applications from common web exploits using WAF.


2. Define rules to allow, block, or monitor web requests based on defined conditions.
 VPC and Subnet Segmentation:

1. Use Virtual Private Cloud (VPC) to isolate your resources logically.


2. Subnet segmentation allows you to divide resources into separate subnets based on their
security requirements.
 Audit Logging and Monitoring:


1. Enable AWS CloudTrail to log all API calls and monitor activities within your AWS
account.
2. Use Amazon CloudWatch to monitor and receive alerts for unusual activities.

 Least Privilege Principle:

1. Follow the least privilege principle while assigning permissions to users and
applications.
2. Grant the minimum required permissions to perform specific tasks and regularly
review and update access as needed.

 Security Assessments and Compliance:

1. Conduct regular security assessments and audits to identify potential vulnerabilities.


2. Ensure your AWS environment complies with relevant security standards and
regulations.
Use AWS Security Services:

1. AWS provides a range of security services, such as AWS Identity and Access Management (IAM), AWS Shield, AWS WAF, and AWS Firewall Manager, to help protect your workloads.

Conclusion:

Remember that security is an ongoing process, and it's essential to stay updated with the latest
security best practices and implement them as necessary to keep your AWS environment
secure.


Chapter 8

Implementing Elasticity, High Availability and Monitoring

8.1 Introduction
•Implementing elasticity, high availability, and monitoring in AWS cloud architecture can help
ensure your applications are scalable, resilient, and efficiently managed. Here are some key
components and best practices to achieve these goals:
1. Elasticity:
• Auto Scaling Groups (ASG): Use ASGs to automatically adjust the number of
instances in response to changes in demand. ASGs can be based on CPU
utilization, network traffic, or custom metrics.
• Elastic Load Balancer (ELB): Distribute incoming traffic across multiple instances to ensure even workload distribution and to achieve fault tolerance.
• Amazon RDS Read Replicas: If using Amazon RDS for databases, implement read replicas to scale read-heavy workloads.
• AWS Lambda: Use serverless computing with AWS Lambda to automatically scale compute resources based on event-driven triggers.
2. High Availability:
• Multi-Availability Zone (AZ) Deployment: Distribute your application across
multiple AZs to ensure redundancy and fault tolerance. If one AZ goes down, the
others can continue to handle requests.
• Load Balancing: Employ Elastic Load Balancing to distribute traffic across multiple instances in different AZs.
• Amazon RDS Multi-AZ: For critical databases, enable Multi-AZ deployment to have a standby replica in a different AZ for failover.
3. Monitoring:
• Amazon CloudWatch: Use CloudWatch to monitor AWS resources and applications. Set up alarms to notify you of important events or performance thresholds.
• AWS CloudTrail: Enable CloudTrail to record all API activity in your AWS account, providing an audit trail for security and compliance purposes.
• AWS Config: Use AWS Config to track changes to your AWS resources and maintain a history of resource configurations.


• AWS X-Ray: Implement X-Ray to gain insights into the behavior of your applications and identify performance bottlenecks.
• AWS Trusted Advisor: Utilize Trusted Advisor to receive automated checks
and recommendations for cost optimization, security, fault tolerance, and
performance improvement.
4. Data Backup and Disaster Recovery:
• Amazon S3 for Data Backup: Regularly back up your data to Amazon S3 or use S3 versioning for maintaining historical versions of objects.
• Disaster Recovery (DR): Implement a disaster recovery strategy using services like AWS Backup, AWS CloudEndure, or third-party solutions to replicate data and applications to another region.
5. Fault-Tolerant Architectures:
• Distributed Systems: Design applications to be distributed and decoupled to reduce the impact of individual component failures.
• Retry Mechanisms: Implement retry mechanisms for transient errors to ensure requests are eventually processed.
6. Security:
• Implement the security best practices mentioned earlier, such as IAM,
encryption, VPC, etc., to ensure a secure environment.
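
Tying the elasticity and monitoring points above together, the sketch below attaches a target tracking scaling policy to an assumed Auto Scaling group, so that CloudWatch metrics drive scaling automatically; the group name is a placeholder.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Target tracking keeps the group's average CPU near 50% by adding or removing
# instances; CloudWatch supplies the metric and Auto Scaling performs the adjustment.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",              # placeholder group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)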

Conclusion:
Remember that the architecture's specific implementation may vary depending on your
application requirements and use case. Regularly review and test your architecture to ensure
it meets the desired performance, scalability, and availability goals.

Chapter 9

Automating Your Architecture


9.1 Introduction:
❖Automating your architecture in AWS cloud can significantly improve operational
efficiency, reduce human errors, and facilitate seamless scaling. There are several AWS
services and tools that you can use to automate various aspects of your architecture.
1. Infrastructure as Code (IaC):
• Use IaC tools like AWS CloudFormation or AWS CDK (Cloud Development
Kit) to define and provision your infrastructure resources in a declarative
manner.
• Infrastructure as Code allows you to version control your infrastructure and
replicate environments easily.
2. Configuration Management:
• Utilize configuration management tools like AWS Systems Manager (SSM) or
third-party tools (e.g., Ansible, Chef, Puppet) to automate the configuration and
management of instances and applications.
• SSM Parameter Store can help centralize and manage configuration data
securely.
3. Continuous Integration and Continuous Deployment (CI/CD):
• Implement CI/CD pipelines using AWS CodePipeline, AWS CodeCommit,
AWS CodeBuild, and AWS CodeDeploy to automate the build, testing, and
deployment of your applications.
• CI/CD pipelines enable you to quickly and reliably release new features and
updates to your applications.
4. Auto Scaling and Elasticity:
• Set up Auto Scaling Groups (ASGs) to automatically scale resources based on
defined criteria (e.g., CPU utilization, network traffic).
• Use AWS Lambda to trigger automatic scaling based on event-driven triggers.
5. Serverless Architectures:
• Embrace serverless computing using AWS Lambda to automate event-driven functions without the need to manage servers.
• Use AWS Step Functions to coordinate complex workflows involving multiple
Lambda functions.
6. Monitoring and Alerts:
• Leverage AWS CloudWatch for monitoring and set up alarms to trigger
automated actions based on predefined thresholds or patterns.
• Integrate AWS CloudWatch with AWS Lambda to automate responses to
specific events.
7. Backup and Recovery:
•Automate data backups using AWS Backup or custom scripts to schedule regular
backups and retention policies.
• Implement disaster recovery automation using services like AWS CloudEndure
or AWS Backup.
8. Automated Testing:


• Use AWS Device Farm for automated mobile application testing across different
devices and platforms.
• Implement automated testing for web applications using services like AWS
CodePipeline and CodeBuild.
9. Event-Driven Automation:
• Utilize AWS EventBridge (formerly known as Amazon CloudWatch Events) to
create rules and automate responses to events within your AWS environment.
10. Third-Party Integrations:
• Consider using third-party automation tools that integrate with AWS services to
enhance automation capabilities.
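
As a sketch of the Infrastructure as Code idea in item 1, the following boto3 snippet deploys a tiny inline CloudFormation template; the stack name and template are illustrative only, and real templates would normally live in version control.

import boto3

# A tiny inline CloudFormation template declaring a single S3 bucket.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ReportBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")  # assumed region

cloudformation.create_stack(StackName="demo-iac-stack", TemplateBody=template)

# Wait until the stack (and the bucket it declares) has been created.
cloudformation.get_waiter("stack_create_complete").wait(StackName="demo-iac-stack")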

Conclusion:
Remember to thoroughly test and validate your automation scripts and workflows before
deploying them into production environments. Additionally, continuous monitoring and
periodic updates to your automation processes will ensure your architecture remains efficient
and secure over time.

Chapter 10

Caching Content

10.1 Introduction:
•Caching content is an important aspect of optimizing the performance and scalability of
applications, especially when hosted on cloud platforms like AWS (Amazon Web


Services). AWS provides several services and tools that you can use to implement
caching for your content. Here's a general approach:
1. Amazon CloudFront: Amazon CloudFront is a content delivery network (CDN)
service that distributes your content globally to reduce latency and deliver content
faster to users. CloudFront can be used to cache static and dynamic content, such as
images, videos, web pages, and API responses.
2. Amazon ElastiCache: Amazon ElastiCache is a managed in-memory caching service that supports popular caching engines like Redis and Memcached. This is particularly useful for applications that require low-latency access to data.

3. Caching within Applications: Depending on the architecture of your application, you can also implement caching directly within your application code. This approach provides more control over what data is cached and how it's managed (see the sketch at the end of this section).
4. Database Caching: If you're using AWS database services like Amazon RDS, you
can implement database-level caching mechanisms like query caching to speed up
database access and reduce load.
5. Elastic Load Balancing: AWS Elastic Load Balancing (ELB) can distribute
incoming traffic across multiple instances of your application. While not a caching
solution per se, using ELB can help distribute the load and improve application
performance.

6. API Gateway Caching: If you're serving APIs, AWS API Gateway provides built-in caching mechanisms that can cache the responses from your APIs, reducing the need to repeatedly execute backend processes.
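
Item 3 above (caching within the application) could look like the following sketch against an assumed ElastiCache Redis endpoint, using the standard redis-py client; the endpoint, key layout, and database helper are placeholders.

import json
import redis

# ElastiCache Redis endpoint (placeholder); redis-py is the standard Python client.
cache = redis.Redis(host="demo-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def load_product_from_database(product_id: str) -> dict:
    # Placeholder for the real database query (for example, an RDS lookup).
    return {"id": product_id, "name": "example"}

def get_product(product_id: str) -> dict:
    """Return a product record, serving it from the cache when possible."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                     # cache hit: skip the database
    record = load_product_from_database(product_id)
    cache.setex(key, 300, json.dumps(record))         # cache for 5 minutes to limit staleness
    return record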


Conclusion:
• Remember that the choice of caching strategy will depend on your specific use case,
requirements, and the architecture of your application.
• Always monitor and fine-tune your caching setup to ensure that it's effectively improving
performance without causing data staleness or other issues.

Chapter 11:

Building Decoupled Architectures

11.1: Introduction
➢ Building decoupled architectures is a fundamental principle in designing modern, scalable, and maintainable software systems. Here are some key concepts and practices for building them:

Fig 11.1:-Decoupled Architectures


• Microservices Architecture: Microservices is an architectural style where an application is composed of loosely coupled, independently deployable services. This enables teams to work independently on different services, making it easier to scale, maintain, and update the system.
• Service-Oriented Architecture (SOA): SOA is an approach that involves organizing
software components into distinct services that interact through standardized
protocols.
• Message-Oriented Middleware: Using message brokers or message queues (e.g.,
Apache Kafka, Amazon SQS) allows communication between different parts of the
system through asynchronous messaging. This helps decouple components by
enabling them to send and receive messages without direct dependencies.
• APIs and Contracts: Define clear and stable APIs (Application Programming
Interfaces) for communication between different components. Versioning APIs and
adhering to well-defined contracts help ensure that changes can be made to one
component without affecting others.
• Event-Driven Architecture: Implementing event-driven patterns allows components
to react to events without needing to know the specifics of the event source. Events
can be published and subscribed to, enabling decoupled communication between
components.
• Loose Coupling: Minimize direct dependencies between components by using
techniques like dependency injection, interfaces, and abstractions. This allows
components to be replaced or upgraded without affecting the entire system.
• Separation of Concerns: Divide your application into modular components, each
responsible for a specific concern. This separation makes it easier to manage and
modify individual parts of the system without impacting others.
• Isolation and Containerization: Use container technologies like Docker to package
applications and their dependencies. Containers provide a consistent environment,
allowing components to run in isolation while communicating through well-defined
interfaces.
• Decoupled Data Storage: Separate data storage concerns by using different databases
or storage systems for different parts of the application. This prevents changes in one
area from affecting others' data access patterns.
• Continuous Integration and Deployment (CI/CD): Adopt CI/CD practices to
automate testing, integration, and deployment. This ensures that changes are
thoroughly tested and can be deployed independently.
• Monitoring and Observability: Implement comprehensive monitoring and logging to
understand how different components interact and perform. This helps identify
bottlenecks, failures, or areas requiring optimization.
• Documentation: Maintain clear documentation for APIs, interfaces, and
communication protocols. This helps developers understand how components interact
and reduces misunderstandings.
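
A minimal sketch of message-oriented decoupling with Amazon SQS follows; the queue URL and message fields are placeholder assumptions.

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # assumed region
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder queue

# Producer: publish an event without knowing anything about the consumer.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"order_id": "1001", "status": "NEW"}))

# Consumer: poll for messages, process them, then delete them from the queue.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in messages.get("Messages", []):
    print(json.loads(message["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])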

Conclusion:
Remember that building a decoupled architecture requires careful design,
planning, and ongoing maintenance. While it offers benefits like scalability, flexibility, and
resilience, it also introduces complexities that need to be managed effectively.


Chapter 12:

Planning For Disaster and Bridging to Certification

12.1 Introduction:
 Planning for disaster recovery in AWS cloud architecture is crucial to ensure the
availability, resilience, and continuity of your applications and data, even in the
face of unexpected events. Here's a step-by-step guide to help you plan for
disaster recovery in the AWS cloud:

• Identify Critical Assets and Services: Determine which applications, data, and services are critical for your business operations. This includes identifying dependencies between components and understanding their interconnections.
• Define Recovery Objectives: Establish Recovery Time Objective (RTO) and
Recovery Point Objective (RPO) metrics. RTO defines the maximum acceptable
downtime, while RPO defines the maximum data loss that your business can tolerate.
• Choose a Region and Availability Zones: AWS offers multiple regions and
Availability Zones (AZs) worldwide. Design your architecture to span multiple AZs
within a region for high availability.
• Backup and Restore: Implement regular backups of your data and configurations
using services like Amazon S3, Amazon EBS snapshots, or database backups.
• Use Multi-Region Replication: For critical workloads, replicate data and services
across multiple regions using services like Amazon S3 cross-region replication,
Amazon RDS Multi-AZ, or third-party solutions.
• Disaster Recovery as Code: Use Infrastructure as Code (IaC) tools like AWS
CloudFormation or AWS CDK to define your infrastructure. This enables you to
recreate your environment quickly in case of a disaster.
• Automate Deployment and Scaling: Leverage AWS services like Amazon EC2
Auto Scaling and Amazon RDS Read Replicas to automatically scale and distribute
traffic during normal and peak loads.
• Implement High Availability Patterns: Use AWS services like Elastic Load
Balancing (ELB), Amazon Route 53 DNS failover, and Auto Scaling to distribute
traffic and ensure continuous availability.


• Database Resilience: Implement database resilience by using Amazon RDS Multi-AZ deployments, read replicas, and automated backups.
• Global Acceleration: Use AWS Global Accelerator to improve the availability and
performance of applications by leveraging the AWS global network.
• Test Failover and Recovery: Regularly conduct disaster recovery tests to validate the
effectiveness of your plan. Use AWS services like AWS Disaster Recovery Testing
and AWS Service Catalog to automate and streamline testing.
• Monitor and Alerting: Implement monitoring and alerting using AWS CloudWatch,
Amazon CloudWatch Alarms, and AWS CloudTrail to detect and respond to
anomalies and potential failures.
• Documentation and Runbooks: Create detailed documentation and runbooks that
outline the steps to take during different disaster scenarios. This helps ensure a
consistent response and reduces recovery time.
• People and Communication Plan: Define roles and responsibilities for disaster
recovery tasks. Establish a communication plan to notify stakeholders, employees,
and customers during a disaster.
• Third-Party Solutions: Consider third-party disaster recovery solutions that provide
additional capabilities and automation beyond AWS native services.
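
As a sketch of the backup bullet above, the following boto3 snippet snapshots an EBS volume and copies the snapshot to a second Region for geographic redundancy; the volume ID and Regions are placeholders.

import boto3

ec2_east = boto3.client("ec2", region_name="us-east-1")   # primary Region (assumed)
ec2_west = boto3.client("ec2", region_name="us-west-2")   # recovery Region (assumed)

# Snapshot a volume in the primary Region (placeholder volume ID).
snapshot = ec2_east.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup for DR",
)
ec2_east.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Copy the snapshot to a second Region so it survives a regional outage.
ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="DR copy of nightly backup",
)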

❖Becoming a certified AWS Cloud Architect is a great way to validate your skills and
expertise in designing and implementing scalable, reliable, and secure applications on
the Amazon Web Services platform. Here's a step-by-step guide on how to bridge the
gap and prepare for an AWS Cloud Architect certification:


• Assess Your Knowledge: Start by evaluating your current knowledge of AWS services, architecture best practices, and related concepts. Identify areas where you might need more in-depth understanding.
• Choose the Right Certification: AWS offers several certification paths, including the
AWS Certified Solutions Architect - Associate and AWS Certified Solutions
Architect - Professional. Choose the certification that aligns with your experience and
career goals.
• Review the Exam Guide: Download the official exam guide for the certification you're targeting from the AWS Certification website. The guide outlines the topics covered in the exam and provides recommended resources.
❖ Training and Learning Resources:
• AWS Documentation: Familiarize yourself with the AWS services and concepts by
reading the official documentation.
• Online Courses: Enroll in online courses from platforms like AWS Training and
Certification, Udemy, Coursera, and A Cloud Guru. These courses are designed to
help you understand the exam topics in-depth.
• Hands-on Labs: Practice using AWS services through hands-on labs available in the
AWS Management Console.
• Whitepapers and FAQs: Read AWS whitepapers and FAQs on architecture best
practices, security, and various services.
• Practice with Real-world Scenarios: Practice designing architectures for different use cases and requirements, considering factors like scalability, availability, and cost optimization.
• Sample Questions and Practice Exams: Use sample questions and practice exams to test your knowledge and get a sense of the exam format.
• Hands-on Experience: Gain practical experience by working on real projects or setting up personal projects using AWS services.
• Join Study Groups and Forums: Engage with AWS communities, forums, and study groups to ask questions, share knowledge, and learn from others who are preparing for the same certification.
• Review and Revision: Periodically review your notes, practice materials, and any challenging topics.
• Time Management and Exam Strategy: Develop a strategy for managing your time during the exam. Read questions carefully, eliminate obviously wrong answers, and allocate time wisely.
• Exam Registration: When you feel confident and well-prepared, schedule your exam through the AWS Certification website.


• Take the Exam: On exam day, stay calm, read the questions thoroughly, and answer to the best of your knowledge. Remember that you have the option to mark questions for review and return to them later.

Conclusion:

After successfully passing the exam, your AWS Cloud Architect certification will validate your skills and enhance your credibility in the field. Keep in mind that AWS services and best practices evolve, so continue to stay updated with the latest developments on the AWS platform.

REFERENCES:

1. AWS Official Documentation: https://docs.aws.amazon.com/

2. AWS Free Tier: https://aws.amazon.com/free/

3. AWS Management Console: https://aws.amazon.com/console/


4. AWS Solutions Architect – Associate Certification: https://aws.amazon.com/certification/certified-solutions-architect-associate/

5. AWS YouTube Channel: https://www.youtube.com/user/AmazonWebServices

