Internship Report
G.L. BAJAJ INSTITUTE OF TECHNOLOGY & MANAGEMENT, GREATER NOIDA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
This is to certify that the “Internship Report” entitled “Cloud Virtual Internship” has been
done by Yash Vats (2101921520199) in partial fulfillment of the requirements for the award of
the degree of BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE AND
ENGINEERING for the academic session 2023-2024. He has completed his Summer
Internship through “AWS Academy in collaboration with AICTE-Eduskills”.
Internship Coordinator
Declaration
We hereby declare that the project work presented in this report, entitled “Cloud Virtual
Internship”, in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Computer Science & Engineering, submitted to A.P.J. Abdul Kalam
Technical University, Lucknow, is based on our own work carried out at the Department of
Computer Science & Engineering, G.L. Bajaj Institute of Technology & Management, Greater
Noida. The work contained in this report is true and original to the best of our knowledge, and
the internship work reported here has not been submitted by us for the award of any other
degree or diploma.
Signature:
Roll No: 2101921520199
Date:
Certificate of Internship
Acknowledgement
First, I would like to thank Sagar Tomar, Tech Lead at Cybernauts, Urbtech NPX,
Sector 153, Noida, for giving me the opportunity to do an internship within the organization.
I would also like to acknowledge all the people who worked along with me at Cybernauts, Urbtech
NPX, Sector 153, Noida; with their patience and openness they created an enjoyable working
environment. It is indeed with a great sense of pleasure and immense gratitude that I
acknowledge the help of these individuals.
I pay special thanks to my Head of the Department, Prof. (Dr.) Naresh Kumar, for his constructive
criticism throughout my internship.
I would like to thank Prof. Rajiv Kumar, Internship Coordinator, for his support and advice in
securing and completing the internship at the above organization.
I am extremely grateful to my department staff members and friends who helped me in the
successful completion of this internship.
Table of Contents
Certificate …………………………………………………………………………….…….(ii)
Declaration…………………………………………………………………………………...(iii)
Certificate of Internship…………………………………………………………………….(iv)
Acknowledgement………………………………………………………………..…...…….(v)
Chapter 1. Introduction……………………..………………………………………………..8
Chapter 2. Motivation………………………………..………………………….………….10
4.6 Compute………………………………………………………………………..22
4.7 Storage…………………………………………………………………………24
4.8 Databases………………………………………………………………………26
Chapter 6. Conclusion……………………………………………………………………..41
6.1 Conclusion………………………………………………………………………41
List of Figures
Chapter-1
Introduction
Cloud computing means storing and accessing data and programs on remote servers
hosted on the internet instead of on a computer’s hard drive or a local server. Cloud computing
is also referred to as Internet-based computing: it is a technology where resources are
provided as a service through the Internet to the user. The stored data can be files,
images, documents, or any other kind of data.
The fundamental concept of cloud computing revolves around providing on-demand access to
a wide array of computing resources, without the need for users to invest in, own, or maintain
physical infrastructure. This shift to the cloud introduces a paradigm where users pay for the
resources they consume, promoting cost efficiency and scalability.
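The pay-for-what-you-consume idea can be sketched with a small calculation. The rates and hours below are illustrative figures, not real AWS prices:

```python
def usage_cost(hours_used, rate_per_hour):
    """Pay-as-you-go: cost is proportional to consumption, with no upfront outlay."""
    return hours_used * rate_per_hour

# On-premises capacity is paid for around the clock; in the cloud, only used hours bill.
on_prem_monthly = 720 * 0.25            # server provisioned 24/7 at an illustrative $0.25/hour
cloud_monthly = usage_cost(200, 0.25)   # the same workload actually ran for 200 hours
```

For this hypothetical workload the cloud bill is less than a third of the always-on cost, which is the cost-efficiency argument made above.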
1.2 Introduction to Amazon Web Services
Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon providing on-demand cloud
computing platforms and APIs to individuals, companies, and governments, on a metered
pay-as-you-go basis. These cloud computing web services provide a variety of basic abstract technical
infrastructure and distributed computing building blocks and tools. One of these services is
Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual
cluster of computers, available all the time, through the Internet. AWS's virtual computers
emulate most of the attributes of a real computer, including hardware central processing units
(CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-
disk/SSD storage; a choice of operating systems; networking; and pre-loaded application
software such as web servers, databases, and customer relationship management (CRM).
Chapter-2
Motivation
Cloud computing is the process of using shared resources in order to achieve cost
effectiveness and better performance. Joining the AWS virtual internship with AICTE and
Eduskills helped me to level up my skills in emerging cloud computing technologies. AWS, being a
leading cloud services provider, offered a chance to enhance my knowledge of cloud foundations
and cloud architecture.
This internship added value to my academic journey; it was a chance to actually work with the
tools provided by AWS and to understand cloud computing in an effective way. The collaboration
between AWS, AICTE, and Eduskills added credibility to the whole internship program. It
provided an environment that is not just about theory but also about practical, real-world
applications – something I believe is crucial for a computer science student getting ready to dive
into the professional world. It was not just about knowing the tech; it was about figuring out how
we can use it to solve real-life problems (for example, in managing and delivering digital
resources or medical imaging files).
I was enthusiastic about this AWS internship because it gave me an opportunity to explore, create,
and maybe even contribute something new to the ever-evolving field of cloud computing.
Chapter 3
Plan of Work
• Amazon EC2
• Amazon S3
• Amazon DynamoDB
• Amazon RDS
• AWS Lambda
• Amazon SageMaker
• Amazon VPC
• Amazon Aurora
• Amazon ECS
• AWS DevOps tools
• Amazon EKS
• Amazon Redshift
• AWS CloudTrail
Chapter-4
Cloud Foundation
3. Device Independence: Cloud services are typically device-agnostic, allowing users to
access resources from various devices, including laptops, tablets, and smartphones.
4. Savings: Cloud computing eliminates the need for organizations to invest heavily in
on-premises hardware and infrastructure. This shift to a pay-as-you-go model reduces
upfront costs and allows organizations to pay only for the resources they consume.
5. Opportunities: Cloud services provide a platform for rapid prototyping and development,
fostering innovation. Organizations can experiment with new ideas, deploy applications
quickly, and adapt to changing market conditions.
• AWS Services:
Amazon Web Services (AWS) offers a comprehensive suite of cloud services that empower
businesses, organizations, and individuals to build, deploy, and scale applications and
infrastructure with unprecedented flexibility and efficiency. AWS, a subsidiary of Amazon,
has established itself as a leading cloud computing platform, providing a vast array of
services across computing power, storage, databases, machine learning, analytics, and more.
Key categories of AWS services include compute, storage, databases, networking, machine
learning, analytics, security, and more. Services like Amazon EC2 enable the provisioning
of virtual servers, while Amazon S3 provides scalable object storage. AWS Lambda allows
for serverless computing, and Amazon RDS offers managed relational databases.
The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best
practices to help us digitally transform and accelerate our business outcomes through
innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin
successful cloud transformations. These capabilities provide best-practice guidance that helps
us improve our cloud readiness. AWS CAF groups its capabilities into six perspectives:
Business, People, Governance, Platform, Security, and Operations. Each perspective comprises
a set of capabilities that functionally related stakeholders own or manage in the cloud
transformation journey. Using the AWS CAF, one can identify and prioritize transformation
opportunities, evaluate and improve cloud readiness, and iteratively evolve a transformation
roadmap.
• Fundamentals of pricing:
Understanding the fundamentals of pricing in cloud economics and billing is crucial for
effectively managing costs and optimizing resources in cloud environments such as AWS
(Amazon Web Services). Here are the key concepts:
When considering the total cost of ownership (TCO) in the context of AWS (Amazon Web
Services) or any cloud platform, it involves a range of factors that contribute to the overall
cost of running applications and services in the cloud.
1. Compute Costs: Charges for virtual machines (EC2 instances) based on their type, size,
and usage (on-demand, reserved, or spot instances).
2. Storage Costs: Charges for data storage, which can include costs for Amazon S3 (Simple
Storage Service), Amazon EBS (Elastic Block Store), and other storage services.
3. Data Transfer Costs: Fees for data transfer in and out of AWS, including data transfer
between regions and to the internet.
4. Networking Costs: Costs associated with networking resources, such as Amazon VPC
(Virtual Private Cloud), Elastic Load Balancers, and data transfer within a VPC.
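The four cost components above add up to the total cost of ownership. A minimal sketch, with purely hypothetical monthly figures:

```python
# Hypothetical monthly figures for a small workload -- illustrative numbers only.
monthly_costs = {
    "compute": 420.00,       # EC2 instances (type, size, usage model)
    "storage": 95.50,        # S3 objects and EBS volumes
    "data_transfer": 30.25,  # outbound and cross-region transfer
    "networking": 18.00,     # load balancers and VPC-related charges
}

def total_cost_of_ownership(costs):
    """TCO is the sum of every cost category contributing to running the workload."""
    return sum(costs.values())

tco = total_cost_of_ownership(monthly_costs)
```

Breaking the bill into these categories makes it easier to see which component dominates and where optimization effort should go first.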
• AWS Organizations:
AWS Organizations is a service provided by Amazon Web Services (AWS) that allows you
to consolidate multiple AWS accounts into an organization that you create and centrally
manage. This service simplifies the management of billing and permissions across multiple
AWS accounts, providing a hierarchical structure of organizational units (OUs) and accounts.
AWS Billing and Cost Management is a set of tools and services provided by Amazon Web
Services (AWS) to help users monitor, control, and optimize their costs on the AWS cloud
platform. It includes features and services that enable users to understand their spending
patterns, set up budgets, and implement cost controls. Key Components of billing and cost
management:
1. AWS Billing Console: The AWS Billing Console is the primary interface for managing
your AWS billing and costs. It provides an overview of your current usage and costs,
allowing you to view detailed billing reports, download invoices, and analyze cost
trends.
2. Cost Explorer: Cost Explorer is a tool within the AWS Billing Console that allows you
to visualize, understand, and analyze your AWS costs and usage. It provides interactive
charts and graphs to help you explore historical data and forecast future costs.
3. Budgets: AWS Budgets enable you to set custom cost and usage budgets that alert you
when you exceed your thresholds. You can create budgets based on various criteria, such
as service, linked account, or specific tags.
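The threshold-alert behaviour of a budget can be sketched as follows; the thresholds and dollar figures are illustrative, and the logic is a simplification of what AWS Budgets does on your behalf:

```python
def budget_alerts(actual_spend, budget_limit, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds (fractions of the budget) that spend has crossed,
    mimicking how a cost budget fires notifications as usage grows."""
    usage_ratio = actual_spend / budget_limit
    return [t for t in thresholds if usage_ratio >= t]

# A $1000 monthly budget with $850 already spent crosses the 50% and 80% marks.
crossed = budget_alerts(850, 1000)
```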
AWS (Amazon Web Services) provides a range of support plans to help customers with
their technical needs, ranging from basic technical support to premium plans with
additional features. There are four AWS technical support plans: Basic Support, Developer
Support, Business Support, and Enterprise Support.
Anyone can access AWS support through the AWS Management Console or by visiting the
AWS Support Center. Different support plans offer different levels of access, response
times, and additional features.
The AWS Global Infrastructure refers to the vast and geographically distributed network of
data centers and facilities that Amazon Web Services (AWS) operates worldwide. AWS has
strategically positioned data centers, known as Availability Zones (AZs), in various regions
around the globe to provide high availability, fault tolerance, and low-latency access to cloud
services for its customers. Here are key components of the AWS Global Infrastructure:
1. Regions: AWS Regions are geographical areas where AWS has multiple Availability
Zones. Each region is completely independent and isolated from the others, and it consists
of multiple data centers. AWS currently has numerous regions around the world.
2. Availability Zones (AZs): Availability Zones are isolated locations within an AWS Region,
and they are designed to be independent of each other in terms of power, cooling, and
network connectivity. Each Availability Zone is essentially a separate data center with its
own infrastructure.
3. Edge Locations: AWS Edge Locations are part of the CloudFront content delivery network
(CDN). These locations are spread globally and are used to cache content closer to end-
users, reducing latency for delivering web content and other services.
• AWS Service Categories: Amazon Web Services (AWS) offers a wide range of services
to meet the diverse needs of users, from computing power to storage, databases, machine
learning, and more.
1. Compute:
2. Storage:
3. Databases:
4. Networking:
Amazon VPC: Virtual Private Cloud for creating isolated network environments.
Amazon Route 53: Scalable domain name system (DNS) web service.
AWS IAM: Identity and Access Management for secure control of AWS resources.
The AWS Shared Responsibility Model is a key concept in understanding the distribution of
responsibilities between AWS (the cloud service provider) and the customer. It defines the
security responsibilities for both parties in the context of using AWS services. The model
helps customers understand which aspects of security they are responsible for and which
aspects AWS manages.
Figure 4.4.1: AWS Shared Responsibility Model
• AWS Identity and Access Management (IAM): AWS Identity and Access Management
(IAM) is a web service that helps you securely control access to AWS resources. IAM
enables you to create and manage AWS users and groups, and it allows you to grant
permissions to access AWS resources.
Securing a new AWS account is a critical step to ensure the integrity, confidentiality, and
availability of your cloud resources. Here are essential steps to secure a new AWS account:
1. Enable MFA (Multi-Factor Authentication): Enable MFA for the root user as well as for
IAM (Identity and Access Management) users. This adds an extra layer of security by
requiring a second form of authentication.
2. Create IAM Users with Least Privileges: Avoid using the root account for daily tasks.
Create IAM users with the minimum permissions necessary for their roles. Follow the
principle of least privilege to reduce the risk of accidental or intentional misuse.
3. Use IAM Roles: Instead of using long-term access keys for IAM users, leverage IAM
roles for temporary credentials. Assign roles to EC2 instances, Lambda functions, or other
services to enhance security.
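The least-privilege principle from step 2 can be illustrated with a concrete policy document. The snippet below builds a standard IAM policy JSON structure granting read-only access to a single S3 bucket; the bucket name is hypothetical:

```python
import json

# A least-privilege identity policy: read-only access to one (hypothetical) bucket,
# instead of broad s3:* permissions across the account.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",     # the bucket itself (for ListBucket)
                "arn:aws:s3:::example-reports-bucket/*",   # the objects within it (for GetObject)
            ],
        }
    ],
}

policy_json = json.dumps(read_only_policy, indent=2)
```

Attaching a policy like this to an IAM user or role lets them read report objects but never delete, overwrite, or touch any other bucket.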
Securing data on AWS is crucial to maintaining the confidentiality, integrity, and availability
of sensitive information. AWS provides a variety of tools and services to help you implement
a robust data security strategy. Here are key considerations for securing data on AWS:
1. Encryption: For encrypting data in transit use SSL/TLS. This applies to communication
between clients and AWS services (e.g., API calls, web traffic) as well as data transfers
between AWS services.
2. Amazon S3 Security:
Bucket Policies and ACLs: Configure S3 bucket policies and access control lists (ACLs) to
control who can access your S3 buckets and what they can do.
Versioning: Enable versioning for S3 buckets to maintain a history of object versions and
provide a mechanism for data recovery in case of accidental deletion or modification.
3. Database Security:
Amazon RDS Encryption: Enable encryption for Amazon RDS instances to secure data at
rest. Use AWS KMS to manage database encryption keys.
4. Key Management: Use AWS KMS to create and manage encryption keys. Implement
key rotation and regularly audit key usage.
• Networking Basics:
In Amazon Web Services (AWS), networking is a fundamental aspect that involves creating
and managing virtual networks to connect your resources securely.
• Amazon VPC:
Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you provision a logically
isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS
resources in a virtual network. With Amazon VPC, you have control over your virtual
networking environment, including the selection of your IP address range, creation of
subnets, and configuration of route tables and network gateways.
• VPC networking:
1. IP Addressing and Subnetting: Carefully plan and allocate IP address ranges for your
VPC and its subnets. Subnetting helps organize resources and allows for better network
segmentation.
2. Subnet Design: Create multiple subnets within your VPC to represent different tiers of
your application. Associate each subnet with a specific availability zone to enhance fault
tolerance.
3. Internet Gateway (IGW): Attach an Internet Gateway to your VPC to enable
communication between instances within the VPC and the internet. This is necessary
for resources that need internet access.
4. Route Tables: Configure route tables to control traffic between subnets within the VPC
and to control the flow of traffic in and out of the VPC, including routing traffic to the
internet via the Internet Gateway.
5. Elastic Load Balancer (ELB):Utilize Elastic Load Balancers to distribute incoming
application traffic across multiple instances in different availability zones. This
enhances application availability and fault tolerance
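Steps 1 and 2 above (IP addressing and subnet design) can be sketched with Python's standard `ipaddress` module; the tier and AZ names are illustrative:

```python
import ipaddress

# Plan a VPC CIDR and carve it into per-tier subnets: a /16 VPC split into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

# Assign the first few subnets to application tiers across two Availability Zones.
plan = {
    "public-az1":  str(subnets[0]),
    "public-az2":  str(subnets[1]),
    "private-az1": str(subnets[2]),
    "private-az2": str(subnets[3]),
}
```

A /16 yields 256 non-overlapping /24 subnets, leaving ample room to add tiers or Availability Zones later without renumbering.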
• VPC Security:
1. IAM Roles: Assign IAM roles to EC2 instances to manage access permissions securely.
IAM roles help control access to AWS services and resources.
2. Data Encryption: Implement encryption for data in transit using SSL/TLS and for data
at rest using services like AWS Key Management Service (KMS) for key management.
3. Security Best Practices: Adhere to security best practices for your operating systems,
applications, and AWS resources. Regularly update and patch your systems.
• Amazon Route 53:
Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web
service provided by Amazon Web Services (AWS). It is designed to route end-user requests
to globally distributed AWS resources, such as Amazon EC2 instances, Elastic Load
Balancers, or Amazon S3 buckets. Route 53 offers domain registration, DNS routing, and
health checking services, making it a comprehensive solution for managing domain names
and their associated resources.
• Amazon CloudFront:
Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web
Services (AWS). It is designed to deliver content, including web pages, videos, images, and
other static and dynamic assets, to users with low latency and high transfer speeds.
CloudFront accelerates the distribution of content by caching it at edge locations,
strategically placed around the world, reducing the load on origin servers and improving the
overall user experience.
4.6 Compute
AWS provides a comprehensive suite of compute services, including Amazon EC2 for
scalable virtual servers, Amazon ECS and EKS for container orchestration, AWS Lambda
for serverless computing, and services like AWS Batch and Lightsail for efficient batch
processing and simplified deployment, respectively. AWS Outposts extends cloud
capabilities to on-premises environments, while AWS Wavelength enables ultra-low-latency
edge computing on 5G networks. This diverse range of compute services allows users to
select the most suitable option based on their application requirements, offering flexibility,
scalability, and ease of management.
• Amazon EC2:
Amazon Elastic Compute Cloud (Amazon EC2) is a core compute service offered by
Amazon Web Services (AWS) that provides resizable and scalable virtual machines
(instances) in the cloud. EC2 enables users to run applications, host websites, and process
data with the flexibility to choose from a variety of instance types optimized for different
use cases. Users can select instances with varying CPU, memory, storage, and networking
capacities based on their specific requirements. The service supports on-demand pricing for
flexibility, reserved instances for cost savings, and spot instances for acquiring spare
capacity at lower costs.
Optimizing costs for Amazon EC2 involves various strategies to ensure efficient resource
usage while minimizing expenses. Here are some ways for Amazon EC2 cost optimization:
1. Rightsize Instances: Regularly analyze your workloads to choose the most cost-effective
instance types based on CPU, memory, and storage requirements.
2. Reserved Instances (RIs): Utilize Reserved Instances for predictable workloads with
steady-state usage. RIs offer substantial discounts compared to on-demand pricing and
can be a cost-effective option for long-term commitments.
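The on-demand versus reserved trade-off can be shown with a short calculation. The hourly rates here are illustrative, not actual AWS prices:

```python
def monthly_cost(hourly_rate, hours=730):
    """Approximate monthly cost for an instance running continuously (~730 h/month)."""
    return hourly_rate * hours

# A reserved commitment trades a lower hourly rate for a long-term commitment,
# which pays off on steady-state workloads (rates are illustrative).
on_demand = monthly_cost(0.10)   # pay-as-you-go hourly rate
reserved = monthly_cost(0.06)    # discounted rate under a 1-year reservation
savings_pct = round((on_demand - reserved) / on_demand * 100)
```

For an instance that runs all month, the illustrative reservation saves about 40%; for a workload that only runs a few hours a day, on-demand or spot pricing would likely win instead.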
• Container Services: AWS provides a suite of container services designed to simplify
the deployment, management, and scaling of containerized applications. Amazon Elastic
Container Service (ECS) offers a fully managed environment for running Docker
containers, allowing users to easily launch, stop, and manage containers without the
need to manage the underlying infrastructure. For more advanced orchestration needs,
Amazon Elastic Kubernetes Service (EKS) provides a managed Kubernetes service that
makes it straightforward to deploy, scale, and manage containerized applications using
Kubernetes. AWS Fargate takes containerization a step further by offering a serverless
compute engine for containers, eliminating the need for users to manage the underlying
infrastructure. These services provide flexibility, scalability, and seamless integration
with other AWS services, making it efficient for developers to build and deploy
containerized applications in the cloud.
AWS Lambda is a serverless compute service offered by Amazon Web Services (AWS)
that enables developers to run code without provisioning or managing servers. As a
serverless computing platform, Lambda allows users to focus on writing code and
executing functions in response to events, without the need to worry about server
provisioning, maintenance, or scaling. With its pay-as-you-go pricing model, users only
pay for the compute time consumed by their functions, making Lambda a cost-effective
and efficient solution for serverless computing in the AWS cloud.
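Lambda's pay-per-use model charges per request plus per unit of compute time measured in GB-seconds. The sketch below uses illustrative default prices; the real figures (and the free tier) are in the current AWS price list:

```python
def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb,
                        price_per_million_requests=0.20,
                        price_per_gb_second=0.0000166667):
    """Sketch of the Lambda billing model: a per-request charge plus a charge for
    compute time in GB-seconds. Default prices are illustrative assumptions."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * price_per_gb_second

# 2 million invocations, 120 ms average duration, 512 MB of configured memory.
cost = lambda_monthly_cost(2_000_000, 120, 512)
```

Because cost scales with both duration and memory, shaving execution time or right-sizing function memory directly reduces the bill, mirroring the EC2 rightsizing advice above.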
4.7 Storage
Amazon Elastic Block Store (Amazon EBS) is a scalable block storage service designed for
use with Amazon EC2 instances, offering persistent and high-performance storage volumes.
With different volume types catering to diverse workloads, such as General Purpose,
Provisioned IOPS, and Throughput Optimized, users can tailor storage to specific
performance requirements. EBS volumes provide a block-level storage approach, making
them suitable for operating systems, databases, and applications that require direct access to
raw storage
Amazon Simple Storage Service (Amazon S3) is a highly scalable, durable, and secure
object storage service provided by Amazon Web Services (AWS). It is designed to store and
retrieve any amount of data from anywhere on the web, making it a fundamental component
for building scalable and reliable applications. Data redundancy is achieved through
automatic replication of objects across multiple devices and facilities within a region,
ensuring high durability and availability.
Amazon Elastic File System (Amazon EFS) is a fully managed, scalable file storage service
provided by Amazon Web Services (AWS). It is designed to provide scalable and highly
available file storage for use with Amazon EC2 instances and other AWS services. Amazon
EFS supports the Network File System version 4 (NFSv4) protocol, making it easy for
multiple EC2 instances to share and access a common set of files.
• Amazon Simple Storage Service Glacier:
Amazon Glacier is a cost-effective, secure, and durable archival storage service within the
Amazon Simple Storage Service (Amazon S3) family. Designed for infrequently accessed
data with long-term retention requirements, Glacier allows users to store large volumes of
data in vaults, with each archive scalable up to 40 terabytes. With its low storage costs,
Glacier is an ideal solution for organizations seeking an economical and scalable archival
storage option for large datasets.
4.8 Databases
• Amazon Relational Database Service (Amazon RDS): a managed service for setting up,
operating, and scaling relational databases such as MySQL and PostgreSQL in the cloud.
• Amazon DynamoDB: a fully managed NoSQL key-value and document database that
delivers consistent, low-latency performance at any scale.
• Amazon Redshift: a fully managed, petabyte-scale data warehouse service for running
analytics queries over large datasets.
• Amazon Aurora:
Amazon Aurora is a fully managed relational database service offered by Amazon Web
Services (AWS). It is compatible with MySQL and PostgreSQL, providing the
performance and availability of high-end commercial databases at a fraction of the cost.
Aurora is designed for ease of use, offering automatic backups, continuous monitoring,
and automatic scaling capabilities.
4.9 Cloud Architecture
The AWS Well-Architected Framework is a set of best practices and guidelines provided by
Amazon Web Services (AWS) to help users design and build reliable, secure, efficient, and
cost-effective systems in the cloud. The framework consists of a collection of whitepapers,
design principles, and a self-service tool that enables users to assess their workloads against
these best practices. The main pillars of the Well-Architected Framework are operational
excellence, security, reliability, performance efficiency, cost optimization, and sustainability.
4.10 Automatic Scaling and Monitoring
• Amazon CloudWatch:
Amazon CloudWatch is a monitoring and observability service provided by Amazon Web
Services (AWS). It allows users to collect and track metrics, collect and monitor log files,
and set alarms to be notified of changes in the environment. CloudWatch provides insights
into resource utilization, application performance, and operational health, enabling users to
make informed decisions about their AWS resources.
4.11 Certificate for Cloud Foundations
Chapter-5
Cloud Architecting
Cloud architecting involves designing and implementing robust and scalable cloud-based
solutions that align with an organization's business objectives. Cloud architects leverage cloud
services and resources to create architectures that prioritize factors such as performance,
security, availability, and cost efficiency. This process often includes selecting appropriate cloud
services, defining data storage and management strategies, optimizing network configurations,
and ensuring seamless integration with other systems. A successful cloud architecture not only
meets current operational needs but also anticipates future growth and changes, providing a
foundation for agility, innovation, and optimal resource utilization within the cloud
environment.
Figure 5.1.1: Cloud Architecture
When moving data to and from Amazon S3, AWS offers various tools and services. AWS
DataSync, AWS Snowball, and the S3 Transfer Acceleration feature help facilitate secure
and efficient data transfer, whether dealing with large datasets or ensuring high-speed
transfers over the internet.
5.3 Adding a Compute Layer
• Architectural Need:
Architectural needs in AWS encompass designing scalable, highly available, and secure
solutions that optimize performance, manage costs effectively, and ensure operational
excellence. This involves leveraging AWS services for scalability and fault tolerance,
implementing robust security measures, optimizing resource usage for cost efficiency,
and incorporating best practices for operational management.
xxxii
balancing factors like CPU, memory, and storage. Availability Zones should be
strategically chosen to ensure high availability and fault tolerance.
5.4 Creating a Networking Environment
5.5 Connecting Networks
• Connecting to your remote network with AWS Site-to-Site VPN and AWS Direct
Connect:
1. AWS Site-to-Site VPN: It extends your on-premises network to the AWS Cloud over an
encrypted virtual private network (VPN) connection. This allows secure
communication between your local data center and AWS, enabling access to resources
on both sides. Configuration involves setting up a Virtual Private Gateway in AWS,
defining customer gateway information for your on-premises router, and configuring
VPN connections with appropriate encryption settings.
2. AWS Direct Connect: AWS Direct Connect establishes a dedicated network
connection between your on-premises data center and AWS. This dedicated connection
bypasses the public internet, providing more reliable, lower-latency access to AWS
resources. To set up AWS Direct Connect, you need to choose a Direct Connect
location, work with a Direct Connect partner if necessary, and establish physical
connectivity. Once set up, create a virtual interface to connect to your Virtual Private
Cloud (VPC).
• Connecting virtual private clouds (VPCs) in AWS with VPC peering:
VPC peering allows direct connectivity between two VPCs, enabling instances in one
VPC to communicate with instances in another VPC using private IP addresses. This
connection is established without the need for internet access, VPNs, or dedicated
connections. To set up VPC peering, both VPCs must have non-overlapping IP address
ranges, and the peering connection must be initiated and accepted by both VPC owners.
Once established, instances in the peered VPCs can communicate as if they were within
the same network.
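The non-overlapping-CIDR requirement can be checked programmatically with Python's standard `ipaddress` module; the CIDR ranges below are illustrative:

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """VPC peering requires that the two VPCs' CIDR ranges do not overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

ok = can_peer("10.0.0.0/16", "10.1.0.0/16")       # disjoint ranges: peering possible
clash = can_peer("10.0.0.0/16", "10.0.128.0/17")  # overlapping ranges: peering rejected
```

Running a check like this before allocating VPC CIDRs helps avoid the common pitfall of two teams independently picking the same default range and later being unable to peer.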
Deploy AWS services such as Amazon RDS, Redshift, ElastiCache, Lambda,
DynamoDB, and messaging services within the VPC for optimized performance and
low-latency interactions with your application instances.
5.7 Implementing Elasticity, High Availability, and Monitoring
• Scaling your compute resources:
Scaling compute resources in the AWS cloud involves dynamically adjusting the
capacity of your computing infrastructure to meet changing demands. Amazon EC2
Auto Scaling adds or removes instances automatically based on demand.
Additionally, AWS Lambda enables serverless computing, automatically scaling
functions in response to triggered events.
• Scaling your databases:
Scaling databases in AWS involves adapting your database infrastructure to handle
varying workloads efficiently. Amazon RDS (Relational Database Service) offers
automated scaling capabilities, allowing you to vertically scale (resize) or horizontally
scale (read replicas) your database based on demand. Vertical scaling involves
adjusting the instance type to provide more or less computing power and memory,
while horizontal scaling uses read replicas to distribute read traffic and improve
overall performance.
• Designing an environment that’s highly available:
Designing a highly available environment in AWS involves architecting with
redundancy and fault-tolerance to ensure continuous and reliable operation. This
includes distributing resources across multiple Availability Zones (AZs) to mitigate
the impact of failures in a specific zone, utilizing load balancing for even traffic
distribution, and implementing automated scaling to adapt to varying workloads.
Regular testing, monitoring, and using AWS CloudWatch alarms for proactive
responses contribute to maintaining a robust high-availability architecture.
• Monitoring:
Monitoring in AWS is achieved through a suite of robust tools. Amazon CloudWatch
collects and tracks metrics, logs, and events, enabling visualization, analysis, and the
setting of alarms. AWS CloudTrail records API calls for auditing and tracking
changes, while AWS X-Ray provides insights into application performance. Regularly
leveraging these tools allows for efficient management and optimization of AWS
environments.
5.8 Caching Content and Decoupled Architectures
• Overview of caching:
Caching is a technique that involves storing copies of frequently accessed data in a
temporary location to expedite subsequent retrieval, reducing latency and improving
overall system performance. In the context of web applications, caching is commonly
applied to various layers, including content, databases, and sessions, to optimize response
times and enhance user experience.
• Edge caching:
Edge caching involves strategically placing caching servers or Content Delivery Networks
(CDNs) at the edge of a network, closer to end-users. This decentralized approach
accelerates content delivery by caching static assets like images, scripts, and videos closer
to the user's geographical location. This not only minimizes latency but also reduces the
load on the origin server, contributing to a more scalable and responsive application.
• Caching databases:
Caching databases involves storing frequently accessed query results or data in-memory to
accelerate subsequent requests. This approach, often referred to as database caching or
query caching, is beneficial in scenarios where read-heavy workloads can be optimized by
serving data from cache rather than executing resource-intensive database queries. Caching
databases enhance performance, reduce database load, and contribute to more efficient use
of resources.
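A minimal in-memory sketch of the database-caching idea, using a time-to-live (TTL) so stale entries expire; the query strings and values are illustrative:

```python
import time

class TTLCache:
    """A minimal query cache: results are served from memory until their
    time-to-live expires, sparing the backing database repeated work."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None                    # never cached: a miss
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]           # stale entry: evict and report a miss
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=60)
cache.put("SELECT count(*) FROM orders", 1042)  # cache an expensive query result
hit = cache.get("SELECT count(*) FROM orders")  # served from memory, no DB round trip
miss = cache.get("SELECT * FROM users")         # not cached, would fall through to the DB
```

Managed services such as Amazon ElastiCache apply the same get/put-with-expiry pattern at scale, keeping the cache in a shared in-memory store rather than inside one process.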
• Decoupled architectures:
Decoupled architectures in AWS involve designing systems where components operate
independently, reducing interdependencies and enhancing flexibility. AWS services like
Simple Queue Service (SQS) and Simple Notification Service (SNS) facilitate
asynchronous communication, allowing components to interact without direct
dependencies. Additionally, AWS Lambda supports serverless computing, enabling the
execution of code in response to events without the need for managing servers. By
leveraging these services, decoupled architectures in AWS enhance scalability,
maintainability, and overall system resilience.
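The decoupling that a message queue provides can be sketched in-process with Python's standard `queue` module; this is a single-process stand-in for illustration, where a real system would use a managed service such as SQS, and the message shape is hypothetical:

```python
import queue

# A producer places messages on a buffer and a consumer drains them later --
# the two sides never call each other directly, which is the decoupling an
# SQS queue provides between services.
message_queue = queue.Queue()

def producer(order_ids):
    """Publish one message per order without knowing who will process them."""
    for order_id in order_ids:
        message_queue.put({"type": "order_created", "order_id": order_id})

def consumer():
    """Drain and process whatever messages have accumulated, at its own pace."""
    processed = []
    while not message_queue.empty():
        processed.append(message_queue.get()["order_id"])
    return processed

producer([101, 102, 103])
handled = consumer()
```

Because the producer finishes as soon as its messages are enqueued, a slow or temporarily offline consumer never blocks it: that independence is what makes decoupled architectures resilient.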
5.9 Planning for Disaster
5.10 Certificate of Cloud Architecting
Chapter-6
Conclusion and Future Scope
6.1 Conclusion
Participating in a cloud virtual internship provides a unique and enriching experience that
goes beyond traditional learning environments. Throughout this internship, I had the
opportunity to delve into the dynamic realm of cloud computing, gaining hands-on
experience with cutting-edge technologies and industry-leading platforms. The exposure to
real-world projects, collaboration with seasoned professionals, and the practical application
of cloud services like AWS has significantly enhanced my understanding of cloud
architectures, deployment strategies, and best practices.
4. Quantum Computing: While still in its early stages, quantum computing is on the
horizon. AWS is actively exploring quantum computing services, and as this
technology matures, cloud providers are likely to play a crucial role in making
quantum computing accessible to a broader range of organizations.