
INTERNSHIP REPORT

On

Cloud Virtual Internship


In partial fulfilment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
In
Computer Science and Engineering
Submitted By
Yash Vats (2101921520199)
(Duration: From September 2023 to November 2023)

Under the supervision of

G.L. Bajaj Institute of Technology & Management


Greater Noida
Affiliated to

Dr. A.P.J. Abdul Kalam Technical University


Lucknow
(2023-24)
G.L. BAJAJ INSTITUTE OF TECHNOLOGY & MANAGEMENT,

GREATER NOIDA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the “Internship Report” entitled “Cloud Virtual Internship” is being
submitted by Yash Vats (2101921520199) in partial fulfillment of the requirements for the award of
the degree of BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE AND
ENGINEERING for the academic session 2023-2024. He has completed his Summer
Internship through “AWS Academy in collaboration with AICTE-EduSkills”.

Internship Coordinator

ii
Declaration

I hereby declare that the project work presented in this report entitled “Cloud Virtual
Internship”, in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Computer Science & Engineering, submitted to Dr. A.P.J. Abdul Kalam
Technical University, Lucknow, is based on my own work carried out at the Department of
Computer Science & Engineering, G.L. Bajaj Institute of Technology & Management, Greater
Noida. The work contained in the report is true and original to the best of my knowledge, and the
internship work reported here has not been submitted by me for the award of any other
degree or diploma.

Signature:

Name: Yash Vats

Roll No:2101921520199

Date:

Place: Greater Noida

iii
Certificate of Internship

iv
Acknowledgement

First, I would like to thank Sagar Tomar, Tech Lead at Cybernauts, Urbtech NPX,
Sector 153, Noida, for giving me the opportunity to do an internship within the organization.

I would also like to acknowledge all the people who worked along with me at Cybernauts, Urbtech
NPX, Sector 153, Noida; with their patience and openness, they created an enjoyable working
environment. It is indeed with a great sense of pleasure and immense gratitude that I
acknowledge the help of these individuals.

I pay special thanks to my Head of the Department Prof.(Dr.) Naresh Kumar for his constructive
criticism throughout my internship.

I would like to thank Prof. Rajiv Kumar, Internship Coordinator, for his support and advice in
securing and completing the internship at the above organization.

I am extremely grateful to my department staff members and friends who helped me in the
successful completion of this internship.

Name: Yash Vats

Roll No.: 2101921520199

v

Table of Contents

Certificate …………………………………………………………………………….…….(ii)

Declaration…………………………………………………………………………………...(iii)

Certificate of Internship…………………………………………………………………….(iv)

Acknowledgement………………………………………………………………..…...…….(v)

Chapter 1. Introduction……………………..………………………………………………..8

1.1 Introduction to Cloud Computing…………….....….…….………..…...………..8

1.2 Introduction to Amazon Web Services…………………………………….……..9

Chapter 2. Motivation………………………………..………………………….………….10

Chapter 3. Plan of Work…………………………………………………………………….11

3.1 Tools and Technology Used……………………………………………………...11

Chapter 4. Cloud Foundation…………………..…………………….…….…….…………12

4.1 Cloud Concepts Overview……………………………………………………….12

4.2 Cloud Economics and Billing……………………………………………...…..14

4.3 AWS Global Infrastructure Overview…………………………………………16

4.4 AWS Cloud Security…………………………………………………………...18

4.5 Networking and Content Delivery……………………………………………..20

4.6 Compute………………………………………………………………………..22

4.7 Storage…………………………………………………………………………24

4.8 Databases………………………………………………………………………26

4.9 Cloud Architecture……………………………………………………………..27

4.10 Automatic Scaling and Monitoring…………………………………………….28


vi
4.11 Certificate for Cloud Foundations…………………………………………….29

Chapter 5. Cloud Architecting …………………………….…….…….…………………..30

5.1 Introducing Cloud Architecting …………………….…….…….……………..30

5.2 Adding a Storage Layer……………………...…………………………………31

5.3 Adding a Compute Layer……………………………………………………….32


5.4 Creating a Networking Environment………………………………….………..34

5.5 Connecting Networks…………………………………………………………..35

5.6 Securing User and Application Access………………………………………...36


5.7 Implementing Elasticity, High Availability, and Monitoring………………….37
5.8 Caching Content and Decoupled Architectures…………………………………38
5.9 Planning for Disaster……………………………………………………..……39
5.10 Certificate of Cloud Architecting……………………………………………...40

Chapter 6. Conclusion and Future Scope…………………………………………………..41

6.1 Conclusion………………………………………………………………………41

6.2 Future Scope……………………………………………………………………..42

List of Figures

Figure 1.1.1 How Cloud Computing Works.....................................................................8


Figure 4.1.1 Advantages of Cloud Computing............................................................... 12

Figure 4.3.1 Introduction of AWS Global Infrastructure and Services ........................ 17


Figure 4.4.1 AWS Shared Responsibility Model ………………………..................... 19

Figure 5.1.1 Cloud Architecture.................................................................................... 31

Figure 5.9.1 Cloud Disaster Recovery Plan................................................................. 39

vii
Chapter-1
Introduction

1.1 Introduction to Cloud Computing

Cloud computing means storing and accessing data and programs on remote servers that
are hosted on the internet instead of on a computer’s hard drive or local server. Cloud computing
is also referred to as Internet-based computing: it is a technology in which resources are
provided as a service to the user through the Internet. The stored data can be files,
images, documents, or any other storable content.

The fundamental concept of cloud computing revolves around providing on-demand access to
a wide array of computing resources, without the need for users to invest in, own, or maintain
physical infrastructure. This shift to the cloud introduces a paradigm where users pay for the
resources they consume, promoting cost efficiency and scalability.

Figure 1.1.1: How Cloud Computing Works

viii
1.2 Introduction to Amazon Web Services

Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon providing on-demand cloud
computing platforms and APIs to individuals, companies, and governments, on a metered
pay-as-you-go basis. These cloud computing web services provide a variety of basic abstract technical
infrastructure and distributed computing building blocks and tools. One of these services is
Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual
cluster of computers, available all the time, through the Internet. AWS's virtual computers
emulate most of the attributes of a real computer, including hardware central processing units
(CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-
disk/SSD storage; a choice of operating systems; networking; and pre-loaded application
software such as web servers, databases, and customer relationship management (CRM).

ix
Chapter-2
Motivation

Cloud computing is the process of using shared resources to achieve cost effectiveness
and better performance. Joining the AWS virtual internship with AICTE and EduSkills
helped me level up my skills in emerging cloud computing technologies. AWS, being a
leading cloud services provider, offered a chance to enhance my knowledge of cloud
foundations and cloud architecture.

This internship added more value to my academic journey; it was a chance to actually work
with the tools provided by AWS and understand cloud computing in an effective way. The
collaboration between AWS, AICTE, and EduSkills added credibility to the whole internship
program. It provided an environment that is not just about theory but also about practical,
real-world applications, something I believe is crucial for a computer science student getting
ready to dive into the professional world. It was not just about knowing the tech; it was about
figuring out how we can use it to solve real-life problems (for example, managing and
delivering digital resources or medical imaging files).

I was enthusiastic about this AWS internship because it gave me an opportunity to explore,
create, and maybe even contribute something new to the ever-evolving field of cloud computing.

x
Chapter 3
Plan of Work

3.1 Tools and Technology Used:

• Amazon EC2

• Amazon S3

• Amazon DynamoDB

• Amazon RDS

• AWS Lambda

• Amazon SageMaker

• Amazon VPC

• Amazon Aurora

• AWS Elastic Beanstalk

• Amazon ECS

• AWS DevOps tools

• Amazon EKS

• Amazon Redshift

• AWS CloudTrail

xi
Chapter-4
Cloud Foundation

4.1 Cloud Concepts Overview


• Cloud Computing: Cloud computing is the on-demand delivery of IT resources over the
Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data
centers and servers, you can access technology services, such as computing power, storage,
and databases, on an as-needed basis from a cloud provider like Amazon Web Services
(AWS).

• Advantages of Cloud Computing:

Figure 4.1.1: Advantages of Cloud Computing


1. Efficiency: Cloud computing allows for efficient resource allocation and utilization.
Resources such as computing power, storage, and network bandwidth can be scaled up
or down based on demand, leading to optimal use of resources.
2. Accessibility: Cloud services enable users to access data and applications from anywhere
with an internet connection. This accessibility promotes remote work and collaboration
among geographically dispersed teams.

xii
3. Device Independence: Cloud services are typically device-agnostic, allowing users to
access resources from various devices, including laptops, tablets, and smartphones.

4. Savings: Cloud computing eliminates the need for organizations to invest heavily in
on-premises hardware and infrastructure. This shift to a pay-as-you-go model reduces
upfront costs and allows organizations to pay only for the resources they consume.
5. Opportunities: Cloud services provide a platform for rapid prototyping and development,
fostering innovation. Organizations can experiment with new ideas, deploy applications
quickly, and adapt to changing market conditions.

• AWS Services:
Amazon Web Services (AWS) offers a comprehensive suite of cloud services that empower
businesses, organizations, and individuals to build, deploy, and scale applications and
infrastructure with unprecedented flexibility and efficiency. AWS, a subsidiary of Amazon,
has established itself as a leading cloud computing platform, providing a vast array of
services across computing power, storage, databases, machine learning, analytics, and more.

Key categories of AWS services include compute, storage, databases, networking, machine
learning, analytics, security, and more. Services like Amazon EC2 enable the provisioning
of virtual servers, while Amazon S3 provides scalable object storage. AWS Lambda allows
for serverless computing, and Amazon RDS offers managed relational databases.

• The AWS Cloud Adoption Framework (AWS CAF):


Amazon Web Services Cloud Adoption Framework (AWS CAF) provides a standardized
blueprint for migrating, configuring, and maintaining IT workloads, databases, and assets on
the AWS Cloud. It streamlines the cloud migration and modernization journey.

It leverages AWS experience and best practices to help us digitally transform and
accelerate our business outcomes through innovative use of AWS. AWS CAF identifies
specific organizational capabilities that underpin successful cloud transformations. These
capabilities provide best practice guidance that helps us improve our cloud readiness. AWS
CAF groups its capabilities in six perspectives: Business, People, Governance, Platform,
Security, and Operations. Each perspective comprises a set of capabilities that functionally
related stakeholders own or manage in the cloud transformation journey. Using the AWS
CAF, one can identify and prioritize transformation opportunities, evaluate and improve
cloud readiness, and iteratively evolve a transformation roadmap.

4.2 Cloud Economics and Billing

• Fundamentals of pricing:
Understanding the fundamentals of pricing in cloud economics and billing is crucial for
effectively managing costs and optimizing resources in cloud environments such as AWS
(Amazon Web Services). Here are the key concepts:

1. Pay-as-You-Go Model: Cloud services typically follow a pay-as-you-go pricing model,
where users pay only for the resources and services they consume. This allows for
flexibility and cost efficiency, as expenses are directly tied to usage.
2. Resource Metering: Cloud providers measure resource usage, such as computing power,
storage, and data transfer. Pricing is based on these usage metrics, providing
transparency into how resources are utilized.

• Total Cost of Ownership:

When considering the total cost of ownership (TCO) in the context of AWS (Amazon Web
Services) or any cloud platform, it involves a range of factors that contribute to the overall
cost of running applications and services in the cloud.

1. Compute Costs: Charges for virtual machines (EC2 instances) based on their type, size,
and usage (on-demand, reserved, or spot instances).
2. Storage Costs: Charges for data storage, which can include costs for Amazon S3 (Simple
Storage Service), Amazon EBS (Elastic Block Store), and other storage services.
3. Data Transfer Costs: Fees for data transfer in and out of AWS, including data transfer
between regions and to the internet.

xiv
4. Networking Costs: Costs associated with networking resources, such as Amazon VPC
(Virtual Private Cloud), Elastic Load Balancers, and data transfer within a VPC.

AWS Organizations:

AWS Organizations is a service provided by Amazon Web Services (AWS) that allows you
to consolidate multiple AWS accounts into an organization that you create and centrally
manage. This service simplifies the management of billing and permissions across multiple
AWS accounts, providing a hierarchical structure for organizational units (OUs) and accounts.

• AWS Billing and Cost Management:

AWS Billing and Cost Management is a set of tools and services provided by Amazon Web
Services (AWS) to help users monitor, control, and optimize their costs on the AWS cloud
platform. It includes features and services that enable users to understand their spending
patterns, set up budgets, and implement cost controls. Key Components of billing and cost
management:

1. AWS Billing Console: The AWS Billing Console is the primary interface for managing
your AWS billing and costs. It provides an overview of your current usage and costs,
allowing you to view detailed billing reports, download invoices, and analyze cost
trends.
2. Cost Explorer: Cost Explorer is a tool within the AWS Billing Console that allows you
to visualize, understand, and analyze your AWS costs and usage. It provides interactive
charts and graphs to help you explore historical data and forecast future costs.
3. Budgets: AWS Budgets enable you to set custom cost and usage budgets that alert you
when you exceed your thresholds. You can create budgets based on various criteria, such
as service, linked account, or specific tags.
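
A budget like the one described above can also be created programmatically. The following is a
minimal sketch using the boto3 Python SDK, assuming AWS credentials are already configured;
the account ID, budget name, and limit are hypothetical:

import boto3

budgets = boto3.client("budgets")

# Create a monthly cost budget capped at 50 USD (hypothetical values)
budgets.create_budget(
    AccountId="123456789012",  # hypothetical account ID
    Budget={
        "BudgetName": "monthly-cap",
        "BudgetLimit": {"Amount": "50", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
)

Alert thresholds and e-mail subscribers can then be attached through the
NotificationsWithSubscribers parameter of the same call.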

• AWS Technical Support:

AWS (Amazon Web Services) provides a range of support plans to help customers with
their technical needs, ranging from basic technical support to premium plans with
additional features. The four AWS technical support plans are Basic Support, Developer
Support, Business Support, and Enterprise Support.

xv
Anyone can access AWS support through the AWS Management Console or by visiting the
AWS Support Center. Different support plans offer different levels of access, response
times, and additional features.

4.3 AWS Global Infrastructure Overview

• AWS Global Infrastructure:

The AWS Global Infrastructure refers to the vast and geographically distributed network of
data centers and facilities that Amazon Web Services (AWS) operates worldwide. AWS has
strategically positioned data centers, known as Availability Zones (AZs), in various regions
around the globe to provide high availability, fault tolerance, and low-latency access to cloud
services for its customers. Here are key components of the AWS Global Infrastructure:

1. Regions: AWS Regions are geographical areas where AWS has multiple Availability
Zones. Each Region is completely independent and isolated from the others, and it consists
of multiple data centers. AWS currently has numerous Regions around the world.
2. Availability Zones (AZs): Availability Zones are isolated locations within an AWS Region,
and they are designed to be independent of each other in terms of power, cooling, and
network connectivity. Each Availability Zone is essentially a separate data center with its
own infrastructure.

xvi
3. Edge Locations: AWS Edge Locations are part of the CloudFront content delivery network
(CDN). These locations are spread globally and are used to cache content closer to end-
users, reducing latency for delivering web content and other services.

Figure 4.3.1: Introduction of AWS Global Infrastructure and Services

• AWS Service Categories: Amazon Web Services (AWS) offers a wide range of services
to meet the diverse needs of users, from computing power to storage, databases, machine
learning, and more.

1. Compute:

Amazon EC2: Virtual servers in the cloud.

Amazon ECS: Container orchestration service.

AWS Lambda: Serverless computing service.

Amazon Lightsail: Easy compute instances for small-scale applications.

2. Storage:

Amazon S3: Object storage service.

Amazon EBS: Block storage service for EC2 instances.

Amazon Glacier: Low-cost storage for archiving and backup.

Amazon EFS: Fully managed file storage for EC2 instances.

xvii
3. Databases:

Amazon RDS: Managed relational database service.

Amazon DynamoDB: Managed NoSQL database service.

Amazon Redshift: Fully managed data warehouse service.

Amazon Aurora: High-performance relational database.

4. Networking:

Amazon VPC: Virtual Private Cloud for creating isolated network environments.

Amazon Route 53: Scalable domain name system (DNS) web service.

AWS Direct Connect: Dedicated network connection to AWS.

5. Security, Identity, and Compliance:

AWS IAM: Identity and Access Management for secure control of AWS resources.

Amazon Inspector: Automated security assessment service.

AWS Key Management Service (KMS): Managed encryption service.

4.4 AWS Cloud Security

• AWS shared responsibility Model:

The AWS Shared Responsibility Model is a key concept in understanding the distribution of
responsibilities between AWS (the cloud service provider) and the customer. It defines the
security responsibilities for both parties in the context of using AWS services. The model
helps customers understand which aspects of security they are responsible for and which
aspects AWS manages.

xviii
Figure 4.4.1: AWS Shared Responsibility Model

• AWS Identity and Access Management (IAM): AWS Identity and Access Management
(IAM) is a web service that helps you securely control access to AWS resources. IAM
enables you to create and manage AWS users and groups, and it allows you to grant
permissions to access AWS resources.

• Securing a new AWS account:

Securing a new AWS account is a critical step to ensure the integrity, confidentiality, and
availability of your cloud resources. Here are essential steps to secure a new AWS account:

1. Enable MFA (Multi-Factor Authentication): Enable MFA for the root user as well as for
IAM (Identity and Access Management) users. This adds an extra layer of security by
requiring a second form of authentication.

2. Create IAM Users with Least Privileges: Avoid using the root account for daily tasks.
Create IAM users with the minimum permissions necessary for their roles. Follow the
principle of least privilege to reduce the risk of accidental or intentional misuse (a short
sketch of this step follows this list).

xix
3. Use IAM Roles: Instead of using long-term access keys for IAM users, leverage IAM
roles for temporary credentials. Assign roles to EC2 instances, Lambda functions, or other
services to enhance security.
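
Step 2 above can be scripted in a few lines. Below is a minimal sketch with the boto3 Python
SDK, assuming administrator credentials are configured; the user name is hypothetical, and the
attached policy is the AWS managed read-only policy for Amazon S3:

import boto3

iam = boto3.client("iam")

# A new user has no permissions until a policy is attached
iam.create_user(UserName="report-reader")  # hypothetical user name

# Grant only read access to S3, following least privilege
iam.attach_user_policy(
    UserName="report-reader",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)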

• Securing data on AWS:

Securing data on AWS is crucial to maintaining the confidentiality, integrity, and availability
of sensitive information. AWS provides a variety of tools and services to help you implement
a robust data security strategy. Here are key considerations for securing data on AWS:

1. Encryption: For encrypting data in transit, use SSL/TLS. This applies to communication
between clients and AWS services (e.g., API calls, web traffic) as well as data transfers
between AWS services.

2. Amazon S3 Security:

Bucket Policies and ACLs: Configure S3 bucket policies and access control lists (ACLs) to
control who can access your S3 buckets and what they can do.

Versioning: Enable versioning for S3 buckets to maintain a history of object versions and
provide a mechanism for data recovery in case of accidental deletion or modification.

3. Database Security:

Amazon RDS Encryption: Enable encryption for Amazon RDS instances to secure data at
rest. Use AWS KMS to manage database encryption keys.

4. Key Management: Use AWS KMS to create and manage encryption keys. Implement
key rotation and regularly audit key usage.
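
Two of the controls above, versioning and default KMS encryption for an S3 bucket, take one
call each. The sketch below uses the boto3 Python SDK; the bucket name is hypothetical and
the bucket is assumed to exist:

import boto3

s3 = boto3.client("s3")
bucket = "example-report-bucket"  # hypothetical bucket name

# Keep a history of object versions for recovery from accidental deletion
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Encrypt new objects at rest with an AWS KMS managed key by default
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)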

4.5 Networking and Content Delivery

• Networking Basics:

In Amazon Web Services (AWS), networking is a fundamental aspect that involves creating
and managing virtual networks to connect your resources securely.

xx
• Amazon VPC:

Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you provision a logically
isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS
resources in a virtual network. With Amazon VPC, you have control over your virtual
networking environment, including the selection of your IP address range, creation of
subnets, and configuration of route tables and network gateways.

• VPC networking:
1. IP Addressing and Subnetting: Carefully plan and allocate IP address ranges for your
VPC and its subnets. Subnetting helps organize resources and allows for better network
segmentation.
2. Subnet Design: Create multiple subnets within your VPC to represent different tiers of
your application. Associate each subnet with a specific availability zone to enhance fault
tolerance.
3. Internet Gateway (IGW): Attach an Internet Gateway to your VPC to enable
communication between instances within the VPC and the internet. This is necessary
for resources that need internet access.
4. Route Tables: Configure route tables to control traffic between subnets within the VPC
and to control the flow of traffic in and out of the VPC, including routing traffic to the
internet via the Internet Gateway (steps 1 to 4 are sketched in code after this list).
5. Elastic Load Balancer (ELB): Utilize Elastic Load Balancers to distribute incoming
application traffic across multiple instances in different Availability Zones. This
enhances application availability and fault tolerance.
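
As referenced in step 4, here is a minimal sketch of steps 1 to 4 using the boto3 Python SDK,
assuming credentials and a default region are configured; the CIDR ranges are illustrative:

import boto3

ec2 = boto3.client("ec2")

# Steps 1-2: create the VPC and one subnet inside it
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Step 3: attach an Internet Gateway for internet access
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Step 4: route all non-local traffic through the gateway
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)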

• VPC Security:
1. IAM Roles: Assign IAM roles to EC2 instances to manage access permissions securely.
IAM roles help control access to AWS services and resources.

xxi
2. Data Encryption: Implement encryption for data in transit using SSL/TLS and for data
at rest using services like AWS Key Management Service (KMS) for key management.
3. Security Best Practices: Adhere to security best practices for your operating systems,
applications, and AWS resources. Regularly update and patch your systems.
• Amazon Route 53:

Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web
service provided by Amazon Web Services (AWS). It is designed to route end-user requests
to globally distributed AWS resources, such as Amazon EC2 instances, Elastic Load
Balancers, or Amazon S3 buckets. Route 53 offers domain registration, DNS routing, and
health checking services, making it a comprehensive solution for managing domain names
and their associated resources.

• Amazon CloudFront:

Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web
Services (AWS). It is designed to deliver content, including web pages, videos, images, and
other static and dynamic assets, to users with low latency and high transfer speeds.
CloudFront accelerates the distribution of content by caching it at edge locations,
strategically placed around the world, reducing the load on origin servers and improving the
overall user experience.

4.6 Compute

• Compute Services Overview:

AWS provides a comprehensive suite of compute services, including Amazon EC2 for
scalable virtual servers, Amazon ECS and EKS for container orchestration, AWS Lambda
for serverless computing, and services like AWS Batch and Lightsail for efficient batch
processing and simplified deployment, respectively. AWS Outposts extends cloud
capabilities to on-premises environments, while AWS Wavelength enables ultra-low latency
edge computing on 5G networks. This diverse range of compute services allows users to
select the most suitable option based on their application requirements, offering flexibility,
scalability, and ease of management.

• Amazon EC2:

Amazon Elastic Compute Cloud (Amazon EC2) is a core compute service offered by
Amazon Web Services (AWS) that provides resizable and scalable virtual machines
(instances) in the cloud. EC2 enables users to run applications, host websites, and process
data with the flexibility to choose from a variety of instance types optimized for different
use cases. Users can select instances with varying CPU, memory, storage, and networking
capacities based on their specific requirements. The service supports on-demand pricing for
flexibility, reserved instances for cost savings, and spot instances for acquiring spare
capacity at lower costs.

Amazon EC2 cost optimization:

Optimizing costs for Amazon EC2 involves various strategies to ensure efficient resource
usage while minimizing expenses. Here are some ways for Amazon EC2 cost optimization:

1. Rightsize Instances: Regularly analyze your workloads to choose the most cost-effective
instance types based on CPU, memory, and storage requirements.
2. Reserved Instances (RIs): Utilize Reserved Instances for predictable workloads with
steady-state usage. RIs offer substantial discounts compared to on-demand pricing and
can be a cost-effective option for long-term commitments.
• Container Services: AWS provides a suite of container services designed to simplify
the deployment, management, and scaling of containerized applications. Amazon Elastic
Container Service (ECS) offers a fully managed environment for running Docker
containers, allowing users to easily launch, stop, and manage containers without the
need to manage the underlying infrastructure. For more advanced orchestration needs,
Amazon Elastic Kubernetes Service (EKS) provides a managed Kubernetes service that
makes it straightforward to deploy, scale, and manage containerized applications using
Kubernetes. AWS Fargate takes containerization a step further by offering a serverless
compute engine for containers, eliminating the need for users to manage the underlying
infrastructure. These services provide flexibility, scalability, and seamless integration
with other AWS services, making it efficient for developers to build and deploy
containerized applications in the cloud.

• Introduction to AWS Lambda:

AWS Lambda is a serverless compute service offered by Amazon Web Services (AWS)
that enables developers to run code without provisioning or managing servers. As a
serverless computing platform, Lambda allows users to focus on writing code and
executing functions in response to events, without the need to worry about server
provisioning, maintenance, or scaling. With its pay-as-you-go pricing model, users only
pay for the compute time consumed by their functions, making Lambda a cost-effective
and efficient solution for serverless computing in the AWS cloud.
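
A Lambda function in Python is just a handler that AWS invokes once per event. The sketch
below shows the standard handler shape; the "name" field of the event is a hypothetical input:

# Minimal Python Lambda handler; AWS calls this once per event.
def lambda_handler(event, context):
    name = event.get("name", "world")  # "name" is a hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}

Because billing is tied to invocations and execution time, code like this costs nothing while
idle, which is the pay-as-you-go property described above.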

• Introduction to AWS Elastic Beanstalk:

AWS Elastic Beanstalk is a fully managed platform-as-a-service (PaaS) offering by Amazon
Web Services (AWS) that simplifies the deployment, management, and scaling of web
applications. It enables developers to focus on writing code while AWS handles the
underlying infrastructure provisioning, monitoring, and maintenance. It integrates with other
AWS services, providing features like database connectivity, storage, and monitoring. Developers can
choose from various predefined application environments or customize their own, and the
service supports both single-container and multi-container Docker environments. Overall,
AWS Elastic Beanstalk streamlines the application deployment process, making it accessible
to developers of all levels without sacrificing scalability and reliability.

4.7 Storage

• Amazon Elastic Block Store (Amazon EBS):

Amazon Elastic Block Store (Amazon EBS) is a scalable block storage service designed for
use with Amazon EC2 instances, offering persistent and high-performance storage volumes.
With different volume types catering to diverse workloads, such as General Purpose,
Provisioned IOPS, and Throughput Optimized, users can tailor storage to specific
performance requirements. EBS volumes provide a block-level storage approach, making
them suitable for operating systems, databases, and applications that require direct access to
raw storage.

• Amazon Simple Storage Service (Amazon S3):

Amazon Simple Storage Service (Amazon S3) is a highly scalable, durable, and secure
object storage service provided by Amazon Web Services (AWS). It is designed to store and
retrieve any amount of data from anywhere on the web, making it a fundamental component
for building scalable and reliable applications.. Data redundancy is achieved through
automatic replication of objects across multiple devices and facilities within a region,
ensuring high durability and availability.
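
Storing and retrieving an object each take a single call. A minimal sketch with the boto3
Python SDK follows; the bucket and key names are hypothetical:

import boto3

s3 = boto3.client("s3")
bucket = "example-report-bucket"  # hypothetical bucket name

s3.upload_file("report.pdf", bucket, "reports/report.pdf")   # local file -> S3 object
s3.download_file(bucket, "reports/report.pdf", "copy.pdf")   # S3 object -> local file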

• Amazon Elastic File System (Amazon EFS):

Amazon Elastic File System (Amazon EFS) is a fully managed, scalable file storage service
provided by Amazon Web Services (AWS). It is designed to provide scalable and highly
available file storage for use with Amazon EC2 instances and other AWS services. Amazon
EFS supports the Network File System version 4 (NFSv4) protocol, making it easy for
multiple EC2 instances to share and access a common set of files.

• Amazon Simple Storage Service Glacier:

Amazon Glacier is a cost-effective, secure, and durable archival storage service within the
Amazon Simple Storage Service (Amazon S3) family. Designed for infrequently accessed
data with long-term retention requirements, Glacier allows users to store large volumes of
data in vaults, with each archive scalable up to 40 terabytes. With its low storage costs,
Glacier is an ideal solution for organizations seeking an economical and scalable archival
storage option for large datasets.

4.8 Databases

xxv
• Amazon Relational Database Service (Amazon RDS):

Amazon Relational Database Service (Amazon RDS) is a fully managed database
service by Amazon Web Services (AWS) that simplifies the deployment, operation, and
scaling of relational databases. It supports various database engines, including MySQL,
PostgreSQL, Oracle, and Microsoft SQL Server, allowing users to choose the engine
that best fits their application needs.

• Amazon DynamoDB:

Amazon DynamoDB is a fully managed, highly scalable NoSQL database service
provided by Amazon Web Services (AWS). Designed for seamless and low-latency
access to data, DynamoDB supports both document and key-value data models. With
features like encryption at rest, multi-region support, and integration with AWS services,
DynamoDB simplifies database management, allowing users to build responsive and
globally distributed applications.
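
Reading and writing items follows the key-value model described above. A minimal sketch with
the boto3 Python SDK, assuming a hypothetical table named "Interns" with partition key
"intern_id" already exists:

import boto3

table = boto3.resource("dynamodb").Table("Interns")  # hypothetical table name

# Write one item, then read it back by its key
table.put_item(Item={"intern_id": "2101921520199", "name": "Yash Vats"})
item = table.get_item(Key={"intern_id": "2101921520199"})["Item"]
print(item["name"])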

• Amazon Redshift:

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the
cloud, designed for high-performance analysis using SQL queries. Leveraging a
massively parallel processing (MPP) architecture, Redshift efficiently processes large
datasets and provides fast query performance. It integrates with popular business
intelligence tools and supports a variety of data loading options.

• Amazon Aurora:

Amazon Aurora is a fully managed relational database service offered by Amazon Web
Services (AWS). It is compatible with MySQL and PostgreSQL, providing the
performance and availability of high-end commercial databases at a fraction of the cost.
Aurora is designed for ease of use, offering automatic backups, continuous monitoring,
and automatic scaling capabilities.

xxvi
4.9 Cloud Architecture

• AWS Well-Architected Framework:

The AWS Well-Architected Framework is a set of best practices and guidelines provided by
Amazon Web Services (AWS) to help users design and build reliable, secure, efficient, and
cost-effective systems in the cloud. The framework consists of a collection of whitepapers,
design principles, and a self-service tool that enables users to assess their workloads against
these best practices. The main pillars of the Well-Architected Framework are:

1. Operational Excellence: Focuses on operational practices, such as monitoring, incident
response, and evolving procedures over time. It aims to ensure efficient and effective
management of systems.
2. Security: Emphasizes the implementation of robust security measures to protect data,
systems, and assets. This includes identity and access management, data encryption, and
other security controls.
3. Reliability: Aims to ensure a workload operates consistently and predictably, even in the
face of failures. It covers areas like fault tolerance, high availability, and disaster
recovery.
4. Performance Efficiency: Focuses on optimizing performance and using resources
efficiently. This involves selecting the right types and sizes of resources, as well as
monitoring and improving performance over time.
5. Cost Optimization: Addresses strategies to manage costs effectively without sacrificing
performance or reliability. It includes considerations for resource optimization, cost
monitoring, and the use of pricing models.

xxvii
4.10 Automatic Scaling and Monitoring

• Elastic Load Balancing:


Elastic Load Balancing (ELB) is an Amazon Web Services (AWS) service that automatically
distributes incoming application traffic across multiple targets, such as Amazon EC2
instances, containers, and IP addresses, within one or more Availability Zones. ELB
enhances the availability and fault tolerance of applications by ensuring that traffic is evenly
distributed, preventing any single point of failure.

• Amazon CloudWatch:
Amazon CloudWatch is a monitoring and observability service provided by Amazon Web
Services (AWS). It allows users to collect and track metrics, collect and monitor log files,
and set alarms to be notified of changes in the environment. CloudWatch provides insights
into resource utilization, application performance, and operational health, enabling users to
make informed decisions about their AWS resources.

• Amazon EC2 Auto Scaling:


Amazon EC2 Auto Scaling is a service that automatically adjusts the number of Amazon
Elastic Compute Cloud (EC2) instances in a fleet to maintain performance and meet defined
criteria. It allows users to set up scaling policies based on metrics from Amazon
CloudWatch, automatically adjusting capacity to handle changes in demand. EC2 Auto
Scaling helps optimize costs by dynamically adjusting resources, ensuring that applications
are running at the desired scale, with improved availability and fault tolerance. Combined,
Amazon CloudWatch and EC2 Auto Scaling provide a powerful solution for monitoring,
managing, and scaling AWS resources based on real-time performance metrics and
operational requirements.
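
A CloudWatch alarm of the kind that drives scaling policies can be created as follows. This is
a sketch using the boto3 Python SDK; the alarm name and instance ID are hypothetical:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 70% for two 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
)

An EC2 Auto Scaling policy can then reference such an alarm to add or remove instances.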

xxviii
4.11 Certificate for Cloud Foundations

xxix
Chapter-5
Cloud Architecting

5.1 Introducing Cloud Architecting

Cloud architecting involves designing and implementing robust and scalable cloud-based
solutions that align with an organization's business objectives. Cloud architects leverage cloud
services and resources to create architectures that prioritize factors such as performance,
security, availability, and cost efficiency. This process often includes selecting appropriate cloud
services, defining data storage and management strategies, optimizing network configurations,
and ensuring seamless integration with other systems. A successful cloud architecture not only
meets current operational needs but also anticipates future growth and changes, providing a
foundation for agility, innovation, and optimal resource utilization within the cloud
environment.

xxx
Figure 5.1.1: Cloud Architecture

5.2 Adding a Storage Layer

• The Simplest Architecture:


A storage layer architecture typically involves leveraging a cloud storage service such as
Amazon Simple Storage Service (S3) or Azure Blob Storage. This layer serves as a scalable
and durable repository for storing various types of data, including files, documents, or
object-based information. Applications interact with this storage layer through APIs,
enabling seamless access to data while benefiting from the cloud provider's inherent
redundancy and reliability.

• Storing data in Amazon S3:


To store data in Amazon S3, you can use the AWS Management Console, AWS Command
Line Interface (CLI), or SDKs to upload files or objects to S3 buckets. These buckets act as
containers for organizing and storing data, offering high durability, scalability, and
low-latency access. Different storage classes in S3, such as Standard, Intelligent-Tiering, and
Glacier, provide flexibility in optimizing costs based on data access patterns.

When moving data to and from Amazon S3, AWS offers various tools and services. AWS
DataSync, AWS Snowball, and the S3 Transfer Acceleration feature help facilitate secure
and efficient data transfer, whether dealing with large datasets or ensuring high-speed
transfers over the internet.
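
The storage class is chosen at upload time. A minimal boto3 Python sketch; the file, bucket,
and key names are hypothetical:

import boto3

s3 = boto3.client("s3")

# Upload straight into a cost-optimizing storage class
s3.upload_file(
    "backup.tar.gz",
    "example-report-bucket",  # hypothetical bucket name
    "archives/backup.tar.gz",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)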

• Choosing regions for the architecture:


Choosing the right AWS Regions for your architecture is crucial for optimizing latency,
compliance, and data residency. AWS has a global network of data centers (Regions) and
Availability Zones. Selecting a Region close to your users minimizes latency, ensuring faster
access to S3 resources. Additionally, considering data residency requirements and regulatory
compliance may influence the choice of the AWS Region where you store your S3 data.

xxxi
5.3 Adding a Compute Layer

• Architectural Need:
Architectural needs in AWS encompass designing scalable, highly available, and secure
solutions that optimize performance, manage costs effectively, and ensure operational
excellence. This involves leveraging AWS services for scalability and fault tolerance,
implementing robust security measures, optimizing resource usage for cost efficiency,
and incorporating best practices for operational management.

• Adding compute with Amazon EC2:


1. Selecting an Amazon EC2 Instance Type: EC2 offers a variety of instance families
optimized for different use cases, such as compute-optimized, memory-optimized,
storage-optimized, and GPU instances.
2. Using User Data to Configure an Amazon EC2 Instance: User data allows you to
customize the configuration of your EC2 instances during launch. This can include
executing scripts, installing software, or applying specific configurations (a sketch of this
step appears after this list).
3. Adding Storage to an Amazon EC2 Instance: EC2 instances can be associated with
various types of storage to meet specific performance and capacity needs. Understanding
your application's storage requirements and selecting the appropriate storage type
ensures efficient data management and performance optimization for your EC2
instances.
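
As referenced in step 2, here is a sketch of launching an instance whose user data installs a
web server on first boot, using the boto3 Python SDK; the AMI ID is hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Shell script executed once when the instance first boots
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Amazon Linux AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 encodes this for the EC2 API
)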

• Amazon EC2 pricing options:


Amazon EC2 offers various pricing options to cater to different business needs.
On-Demand Instances provide flexibility with no upfront costs, allowing users to pay for
compute capacity by the hour or second. EC2 Instance Savings Plans provide additional
flexibility and cost savings in exchange for a commitment to a consistent amount of compute
usage for a 1- or 3-year period.
• Amazon EC2 considerations:
When considering Amazon EC2 for your workloads, several factors should be taken into
account. Instance types should align with your application’s specific requirements,
balancing factors like CPU, memory, and storage. Availability Zones should be
strategically chosen to ensure high availability and fault tolerance.

xxxiii
5.4 Creating a Networking Environment

• Creating an AWS Networking Environment:

Creating an AWS networking environment involves designing and configuring the
infrastructure to establish secure and scalable communication between resources. Start
by setting up a Virtual Private Cloud (VPC), a logically isolated section of the AWS
Cloud where you can launch AWS resources. Define the VPC's IP address range,
subnets, and route tables. Utilize security groups to control inbound and outbound
traffic and Network Access Control Lists (NACLs) for additional network-level
security. Consider connecting the VPC to your on-premises data center using AWS
Direct Connect or a VPN for hybrid cloud scenarios. Finally, implement Elastic
Load Balancers to distribute incoming traffic across multiple instances for enhanced
availability and fault tolerance within the network.

• Connecting your AWS networking environment to the internet:

Connecting your AWS networking environment to the internet involves configuring
the necessary components to enable communication between your resources and the
internet. Start by setting up an Internet Gateway (IGW) and attaching it to your Virtual
Private Cloud (VPC). To enable outbound internet access for resources within your
VPC, associate a public subnet with the route table containing the route to the IGW.

• Securing your AWS networking environment:

Securing your AWS networking environment involves implementing a comprehensive
set of measures to protect data and resources. Leverage Amazon VPC tools like
Security Groups, NACLs, and encryption protocols for effective access control and
data protection. Utilize IAM for user permissions, AWS Key Management Service for
encryption key management, and implement continuous monitoring with AWS
CloudWatch and AWS Config. Strengthen your defenses against DDoS attacks with
AWS WAF and Shield, while also employing regular updates, patches, and AWS
Security Hub for centralized security findings.

34
5.5 Connecting Networks

• Connecting to your remote network with AWS Site-to-Site VPN and AWS Direct
Connect:
1. AWS Site-to-Site VPN: It extends your on-premises network to the AWS Cloud over an
encrypted virtual private network (VPN) connection. This allows secure
communication between your local data center and AWS, enabling access to resources
on both sides. Configuration involves setting up a Virtual Private Gateway in AWS,
defining customer gateway information for your on-premises router, and configuring
VPN connections with appropriate encryption settings.
2. AWS Direct Connect: AWS Direct Connect establishes a dedicated network
connection between your on-premises data center and AWS. This dedicated connection
bypasses the public internet, providing more reliable, lower-latency access to AWS
resources. To set up AWS Direct Connect, you need to choose a Direct Connect
location, work with a Direct Connect partner if necessary, and establish physical
connectivity. Once set up, create a virtual interface to connect to your Virtual Private
Cloud (VPC).

• Connecting virtual private clouds (VPCs) in AWS with VPC peering:
VPC peering allows direct connectivity between two VPCs, enabling instances in one
VPC to communicate with instances in another VPC using private IP addresses. This
connection is established without the need for internet access, VPNs, or dedicated
connections. To set up VPC peering, both VPCs must have non-overlapping IP address
ranges, and the peering connection must be initiated and accepted by both VPC owners.
Once established, instances in the peered VPCs can communicate as if they were within
the same network.
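
The request-and-accept handshake described above looks like this with the boto3 Python SDK;
both VPC IDs are hypothetical, and the two VPCs must have non-overlapping CIDR ranges:

import boto3

ec2 = boto3.client("ec2")

# The requester side asks to peer with the other VPC
pcx = ec2.create_vpc_peering_connection(
    VpcId="vpc-1111aaaa",      # hypothetical requester VPC
    PeerVpcId="vpc-2222bbbb",  # hypothetical accepter VPC
)
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side approves the request
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

Routes pointing at the peering connection must still be added to each VPC's route tables
before traffic flows.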

• Connecting your VPC to supported AWS services:


Connecting your Virtual Private Cloud (VPC) to supported AWS services involves
configuring seamless and secure communication pathways. Utilize VPC endpoints for
services like Amazon S3, enabling direct access without traversing the public internet.

35
Deploy AWS services such as Amazon RDS, Redshift, ElastiCache, Lambda,
DynamoDB, and messaging services within the VPC for optimized performance and
low-latency interactions with your application instances.

5.6 Securing User and Application Access


• Account users and AWS Identity and Access Management (IAM):
In AWS, user management is streamlined through AWS Identity and Access
Management (IAM), where individual account users are created and configured with
specific access permissions. IAM allows organizations to organize users into logical
groups based on roles or responsibilities, simplifying permission management through
group policies. Each user is associated with unique credentials and access keys,
facilitating secure authentication and authorization.
• Organizing users:
In AWS, organizing users is efficiently achieved through AWS Identity and Access
Management (IAM). IAM provides a centralized platform for creating, managing, and
organizing user accounts within an AWS environment. Users can be grouped based on
roles or responsibilities, streamlining the assignment of permissions through IAM
policies attached to user groups. This organizational structure enhances administrative
efficiency, allowing for the consistent management of access across various users.
• Federating users:
Federating users in AWS involves integrating external identity providers (IdPs) with
AWS Identity and Access Management (IAM) to enable users to access AWS
resources using their existing credentials. This process streamlines authentication and
authorization, allowing organizations to leverage their existing identity systems.
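
Federated and cross-account access both end in a role assumption through AWS STS. A
minimal sketch with the boto3 Python SDK; the role ARN is hypothetical:

import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary role credentials
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnly",  # hypothetical role
    RoleSessionName="federated-session",
)["Credentials"]

# Use the temporary credentials for subsequent calls
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)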
• Multiple Accounts:
Managing multiple AWS accounts is facilitated by AWS Organizations, allowing
organizations to create a structured hierarchy of accounts, apply centralized policies,
and simplify billing through consolidated billing. Cross-account access in AWS IAM
enables secure resource sharing and centralized administration across accounts, while
AWS Resource Access Manager (RAM) facilitates resource sharing among accounts.

36
5.7 Implementing Elasticity, High Availability, and Monitoring
• Scaling your compute resources:
Scaling compute resources in the AWS cloud involves dynamically adjusting the
capacity of your computing infrastructure to meet changing demands. Amazon EC2
Auto Scaling adds or removes instances automatically as demand changes. Additionally,
AWS Lambda enables serverless computing, automatically scaling functions in response
to triggered events.
• Scaling your databases:
Scaling databases in AWS involves adapting your database infrastructure to handle
varying workloads efficiently. Amazon RDS (Relational Database Service) offers
automated scaling capabilities, allowing you to vertically scale (resize) or horizontally
scale (read replicas) your database based on demand. Vertical scaling involves
adjusting the instance type to provide more or less computing power and memory,
while horizontal scaling uses read replicas to distribute read traffic and improve
overall performance.
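
Horizontal scaling with a read replica is a single API call. A sketch with the boto3 Python
SDK; both database identifiers are hypothetical:

import boto3

rds = boto3.client("rds")

# Create a replica that serves read traffic for the primary instance
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",  # hypothetical replica name
    SourceDBInstanceIdentifier="app-db",      # hypothetical primary name
)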
• Designing an environment that’s highly available:
Designing a highly available environment in AWS involves architecting with
redundancy and fault-tolerance to ensure continuous and reliable operation. This
includes distributing resources across multiple Availability Zones (AZs) to mitigate
the impact of failures in a specific zone, utilizing load balancing for even traffic
distribution, and implementing automated scaling to adapt to varying workloads.
Regular testing, monitoring, and using AWS CloudWatch alarms for proactive
responses contribute to maintaining a robust high-availability architecture.
• Monitoring:
Monitoring in AWS is achieved through a suite of robust tools. Amazon CloudWatch
collects and tracks metrics, logs, and events, enabling visualization, analysis, and the
setting of alarms. AWS CloudTrail records API calls for auditing and tracking
changes, while AWS X-Ray provides insights into application performance. Regularly
leveraging these tools allows for efficient management and optimization of AWS
environments.

37
5.8 Caching Content and Decoupled Architectures
• Overview of caching:
Caching is a technique that involves storing copies of frequently accessed data in a
temporary location to expedite subsequent retrieval, reducing latency and improving
overall system performance. In the context of web applications, caching is commonly
applied to various layers, including content, databases, and sessions, to optimize response
times and enhance user experience.
• Edge caching:
Edge caching involves strategically placing caching servers or Content Delivery Networks
(CDNs) at the edge of a network, closer to end-users. This decentralized approach
accelerates content delivery by caching static assets like images, scripts, and videos closer
to the user's geographical location. This not only minimizes latency but also reduces the
load on the origin server, contributing to a more scalable and responsive application.

• Caching databases:
Caching databases involves storing frequently accessed query results or data in-memory to
accelerate subsequent requests. This approach, often referred to as database caching or
query caching, is beneficial in scenarios where read-heavy workloads can be optimized by
serving data from cache rather than executing resource-intensive database queries. Caching
databases enhance performance, reduce database load, and contribute to more efficient use
of resources.

• Decoupled architectures:
Decoupled architectures in AWS involve designing systems where components operate
independently, reducing interdependencies and enhancing flexibility. AWS services like
Simple Queue Service (SQS) and Simple Notification Service (SNS) facilitate
asynchronous communication, allowing components to interact without direct
dependencies. Additionally, AWS Lambda supports serverless computing, enabling the
execution of code in response to events without the need for managing servers. By
leveraging these services, decoupled architectures in AWS enhance scalability,
maintainability, and overall system resilience.
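
Decoupling through a queue can be seen in a few lines. The sketch below uses Amazon SQS via
the boto3 Python SDK; the queue name and message body are hypothetical:

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # hypothetical queue

# The producer publishes without knowing who will consume
sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 created")

# The consumer polls on its own schedule and deletes what it has handled
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])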

38
5.9 Planning for Disaster

• Disaster Planning and Recovery:


Disaster planning and recovery in AWS encompass a range of strategies to ensure
business continuity. Employing a multi-region deployment strategy enhances resilience
by distributing resources across geographically distinct AWS Regions. Data
redundancy is achieved through regular backups and replication, leveraging services
like Amazon S3 and RDS, while features such as versioning and cross-region
replication add layers of protection against data loss. Adopting a comprehensive disaster
recovery pattern involves considering Recovery Time Objectives (RTO) and Recovery
Point Objectives (RPO) to determine the optimal balance between resource availability
and data consistency.

Figure 5.9.1: Cloud Disaster Recovery Plan

39
5.10 Certificate of Cloud Architecting

40
Chapter-6
Conclusion and Future Scope

6.1 Conclusion

Participating in a cloud virtual internship provides a unique and enriching experience that
goes beyond traditional learning environments. Throughout this internship, I had the
opportunity to delve into the dynamic realm of cloud computing, gaining hands-on
experience with cutting-edge technologies and industry-leading platforms. The exposure to
real-world projects, collaboration with seasoned professionals, and the practical application
of cloud services like AWS have significantly enhanced my understanding of cloud
architectures, deployment strategies, and best practices.

6.2 Future Scope

1. Hybrid and Multi-Cloud Adoption: Organizations are increasingly adopting hybrid
and multi-cloud strategies to leverage the benefits of both on-premises and cloud
solutions. AWS's robust services for hybrid cloud architectures, such as AWS
Outposts, facilitate seamless integration, and this trend is likely to grow as businesses
seek flexibility and scalability.
2. Serverless Computing: Serverless architectures, supported by AWS Lambda, are
gaining momentum. The serverless model allows developers to focus on writing code
without managing underlying infrastructure, resulting in improved efficiency and
reduced costs.
3. Edge Computing: With the rise of Internet of Things (IoT) devices, there is a growing
need for processing data closer to the source, known as edge computing. AWS offers
services like AWS Wavelength, enabling low-latency processing at the edge of the
network. As IoT adoption increases, the demand for edge computing is set to expand.

41
4. Quantum Computing: While still in its early stages, quantum computing is on the
horizon. AWS is actively exploring quantum computing services, and as this
technology matures, cloud providers are likely to play a crucial role in making
quantum computing accessible to a broader range of organizations.

42
