
Department of Computer Science and Engineering

(Artificial Intelligence)
Jaipur Engineering College and Research Centre, Jaipur

TABLE OF CONTENTS
Certificate------------------------------------------------------------------------------------------------------i
Program Outcomes (POs)---------------------------------------------------------------------------------ii
Program Educational Objectives (PEOs)------------------------------------------------------------iii
Course Outcomes (COs)----------------------------------------------------------------------------------iv
Mapping: COs and POs------------------------------------------------------------------------------------v
Acknowledgement------------------------------------------------------------------------------------------vi
Abstract------------------------------------------------------------------------------------------------------vii
List of Figures----------------------------------------------------------------------------------------------viii
List of Tables------------------------------------------------------------------------------------------------ix
1. INTRODUCTION----------------------------------------------------------------------------------------3
1.1 Definition-------------------------------------------------------------------------------------------------3
1.2 Benefits---------------------------------------------------------------------------------------------------5
1.3 Scope------------------------------------------------------------------------------------------------------6
1.4 Features---------------------------------------------------------------------------------------------------7
2. HISTORY OF DEVOPS & AWS--------------------------------------------------------------------8
2.1 Origin of DevOps---------------------------------------------------------------------------------------8
2.2 Origin of Cloud Computing---------------------------------------------------------------------------8
2.3 Evolution of DevOps & AWS------------------------------------------------------------------------9
3. KEY TECHNOLOGIES-------------------------------------------------------------------------------11
4. DEVOPS--------------------------------------------------------------------------------------------------12
4.1 DevOps Architecture----------------------------------------------------------------------------------12
4.2 DevOps Lifecycle--------------------------------------------------------------------------------------13
5. AMAZON WEB SERVICES-------------------------------------------------------------------------17
5.1 Introduction---------------------------------------------------------------------------------------------17
5.2 AWS Cloud Computing Models---------------------------------------------------------------------21
5.3 AWS EC2 Services------------------------------------------------------------------------------------22
5.4 Amazon VPC-------------------------------------------------------------------------------------------23
5.5 Amazon AWS Elastic Load Balancer---------------------------------------------------------------25
5.6 Amazon Elastic File System--------------------------------------------------------------------------26


5.7 Identity and Access Management--------------------------------------------------------------------28


6. TERRAFORM-------------------------------------------------------------------------------------------29
6.1 Features of Terraform---------------------------------------------------------------------------------29
6.2 Benefits of Terraform---------------------------------------------------------------------------------30
6.3 Use Cases for Terraform------------------------------------------------------------------------------30
7. APACHE--------------------------------------------------------------------------------------------------32
7.1 Introduction---------------------------------------------------------------------------------------------32
7.2 Pros-------------------------------------------------------------------------------------------------------33
7.3 Cons------------------------------------------------------------------------------------------------------33
8. PROJECT-------------------------------------------------------------------------------------------------34
9. REFERENCES------------------------------------------------------------------------------------------36


CHAPTER 1
INTRODUCTION

In today's fast-paced and highly competitive technological landscape, the efficient and reliable
delivery of software has become paramount for organizations striving to meet the ever-growing
demands of their users and customers. To meet these challenges, two distinct yet interconnected
methodologies have emerged as indispensable solutions: DevOps and Site Reliability
Engineering (SRE).

1.1 DEFINITION
DevOps, an abbreviation for "Development" and "Operations," represents a holistic and
collaborative approach to software development and IT operations. It seeks to break down the
traditional silos between these two domains, fostering a culture of cooperation and shared
responsibility. DevOps emphasizes automation, continuous integration, and continuous
deployment, with the ultimate goal of accelerating the software development lifecycle while
ensuring stability and reliability.

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops)
into a single team. DevOps teams work together to automate and streamline the software
development and delivery process, from ideation to deployment and support. DevOps is based on
the following key principles:
● Collaboration: DevOps teams break down the silos between development and
operations teams, and work together to achieve common goals.
● Automation: DevOps teams use automation tools and practices to streamline
the software development and delivery process.
● Continuous integration and continuous delivery (CI/CD): CI/CD is a set of
practices that automate the building, testing, and deployment of software.
● Monitoring and observability: DevOps teams use monitoring and observability tools
to track the performance and health of their software systems.

Cloud computing is on-demand access, via the internet, to computing resources—applications,
servers (physical servers and virtual servers), data storage, development tools, networking
capabilities, and more—hosted at a remote data center managed by a cloud services provider (or
CSP). The CSP makes these resources available for a monthly subscription fee or bills them
according to usage.
Cloud computing offers a number of benefits, including:
● Cost savings: Businesses can save money on IT costs by avoiding the need to
purchase and maintain their own hardware and software.
● Scalability: Cloud computing is highly scalable, so businesses can easily add or
remove resources as needed.
● Agility: Cloud computing allows businesses to quickly deploy new applications
and services.
● Reliability: Cloud providers offer a high level of reliability and uptime.

Cloud Computing with AWS


Cloud computing with AWS is the delivery of computing services—including servers, storage,
databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to
offer faster innovation, flexible resources, and economies of scale. AWS is the world's leading
cloud computing platform, offering over 200 fully featured services from data centers globally.
Millions of customers—including the fastest-growing startups, largest enterprises, and leading
government agencies—are using AWS to lower costs, become more agile, and innovate faster.

1.2 Benefits of DevOps and Cloud Computing with AWS


● Accelerated software delivery: DevOps and cloud computing can help organizations to
deliver software to users more quickly. This is because DevOps automates the software
development and delivery process, and cloud computing provides scalable infrastructure
resources that can be provisioned and deprovisioned on demand.
● Improved software quality: DevOps and cloud computing can help organizations to
improve the quality of their software. This is because DevOps emphasizes continuous
testing and integration, and cloud computing provides a platform for running automated
tests.

● Increased reliability: DevOps and cloud computing can help organizations to build and
operate reliable software systems. This is because DevOps focuses on monitoring and
observability, and cloud computing provides high-availability and disaster recovery
features.
● Reduced costs: DevOps and cloud computing can help organizations to reduce IT costs.
This is because DevOps automates tasks and optimizes resource utilization, and cloud
computing offers pay-as-you-go pricing.
● Increased agility and innovation: DevOps and cloud computing can help organizations
to be more agile and innovative. This is because DevOps enables organizations to quickly
deploy and scale applications, and cloud computing provides access to a wide range of
services and technologies.
Here are some examples of how organizations have used DevOps to achieve
significant benefits:
● Netflix: Netflix uses DevOps to deliver high-quality streaming video to millions of users
around the world. Netflix is able to release new features and bug fixes quickly and
reliably thanks to its use of DevOps practices.
● Amazon: Amazon uses DevOps to power its e-commerce platform and its Amazon Web
Services (AWS) cloud computing business. Amazon is able to scale its infrastructure up
and down quickly and reliably thanks to its use of DevOps practices.
● Google: Google uses DevOps to power its search engine, Gmail, and other popular
online services. Google is able to release new features and bug fixes quickly and reliably
thanks to its use of DevOps practices.

DevOps brings numerous advantages to organizations, including faster development cycles, improved software quality, better collaboration, cost savings, and the ability to adapt to changing business needs. These benefits make DevOps a valuable approach for achieving organizational goals and staying competitive in today's fast-paced technology landscape.

1.3 SCOPE
The scope of this project on “Infrastructure design using DevOps and AWS by Terraform” can vary depending on the specific needs of the organization. However, some common areas of focus include:

● Automating the infrastructure provisioning process: DevOps and Terraform can be
used to automate the provisioning of infrastructure, which can save time and reduce
errors.
● Improving the scalability and reliability of infrastructure: Cloud computing provides
scalable and reliable infrastructure, while DevOps and Terraform can be used to automate
the scaling of infrastructure up or down as needed.
● Reducing infrastructure costs: Cloud computing can help organizations to reduce their
infrastructure costs by providing pay-as-you-go pricing. DevOps and Terraform can
further reduce costs by automating tasks and optimizing resource utilization.
● Improving the security of infrastructure: Terraform can be used to implement security
best practices in infrastructure design, while DevOps can help organizations to respond to
security incidents quickly and effectively.

1.4 FEATURES
DevOps and cloud computing are two of the most transformative and critical elements in today's
technology landscape. They have reshaped the way organizations build, deploy, and manage
software and infrastructure, playing a pivotal role in modern business operations.
Features of DevOps
● Automation: DevOps automates many of the manual tasks involved in software
development and operations, such as code testing, deployment, and infrastructure
provisioning. This helps to reduce errors, improve efficiency, and speed up the delivery of
new features to customers.
● Collaboration: DevOps emphasizes collaboration between development and operations
teams. This helps to break down silos and ensure that everyone is working towards the
same goals.
● Integration: DevOps tools and processes are integrated with each other, which helps to
streamline workflows and reduce friction.
● Configuration management: DevOps uses configuration management tools to ensure
that all environments are consistent and up-to-date. This helps to reduce errors and
downtime.
● Monitoring and logging: DevOps teams use monitoring and logging tools to track the
performance and health of their systems. This helps them to identify and fix problems
quickly.

CHAPTER 2
HISTORY OF DEVOPS & AWS

The origins of DevOps and cloud computing can be traced back to the early 2000s. DevOps
emerged as a response to the need for better collaboration between development and operations
teams. Cloud computing emerged as a way for businesses to access computing resources on
demand, without having to provision and manage their own infrastructure.

2.1 Origin of DevOps


The term "DevOps" was coined in 2009 by Patrick Debois, a Belgian IT consultant and project
manager. Debois was one of the organizers of the first DevOpsDays conference, which was
held in Ghent, Belgium in 2009. The conference brought together developers and operations
professionals to discuss ways to bridge the gap between development and operations teams and
promote better collaboration and communication.
The DevOps movement was influenced by a number of factors, including:
• The rise of agile software development methodologies
• The increasing popularity of open source software
• The growth of the cloud computing market
DevOps teams typically use a variety of tools and practices to automate the software
development and delivery process. These tools and practices can help to improve the speed,
quality, and reliability of software delivery.

2.2 Origin of Cloud Computing


Amazon Web Services (AWS) was officially launched as a subsidiary of Amazon.com in March
2006. However, the origins of AWS can be traced back to Amazon's own need for scalable and
cost-effective computing infrastructure to support its rapidly growing e-commerce business.
The key events and factors that led to the creation of AWS include:
● Amazon's E-commerce Growth: In the late 1990s and early 2000s, Amazon's e-
commerce platform experienced significant growth, and the company needed a robust
and scalable IT infrastructure to handle increasing customer demand.
● Internal Infrastructure Innovation: To meet its own infrastructure needs, Amazon
began developing innovative technologies and solutions for data storage, computing, and networking. These technologies laid the foundation for what would become AWS services.
● Realization of External Potential: Amazon's leadership recognized that the
infrastructure they had built to support their own operations could also be offered as a
service to other businesses, providing a new revenue stream.
● Launch of AWS: In 2006, AWS was officially launched with a suite of cloud computing
services, including Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage
Service (S3). These services allowed businesses to access scalable computing and storage
resources on a pay-as-you-go basis.
● Early Adoption: AWS quickly gained traction among startups and enterprises due to its
cost-effectiveness, scalability, and flexibility. It became a pioneer in the cloud
computing industry.
● Expansion of Services: Over the years, AWS expanded its service offerings to include a
wide range of cloud computing services, such as databases, AI and machine learning,
IoT, and more.
● Global Expansion: AWS built a global network of data centers (AWS Regions) and
Availability Zones, enabling customers to deploy applications and services closer to their
end-users for reduced latency and improved performance.
Today, AWS is one of the world's largest and most widely used cloud computing platforms, serving millions of customers, from startups to enterprises, across various industries. Its origins in Amazon's own infrastructure needs and innovative technology development paved the way for the cloud computing revolution.

2.3 Evolution of DevOps and AWS


DevOps has evolved significantly since its early days, and AWS has played a major role in this
evolution. In the early days of DevOps, teams were using a variety of tools and technologies to
automate and streamline their development and delivery processes. However, these tools were
often difficult to integrate and manage. AWS has helped to address this challenge by providing a
wide range of managed services that can be used to implement DevOps practices.
For example, AWS CloudFormation provides a way to provision and manage infrastructure as
code. This helps teams to reduce errors and ensure consistency, and it also makes it easier to
automate infrastructure changes.

● AWS CodePipeline provides a way to automate the continuous integration and
continuous delivery (CI/CD) process. This helps teams to deliver new features and bug
fixes to production more quickly and reliably.
● AWS CloudWatch provides a way to monitor and log application performance
and infrastructure health. This helps teams to identify and resolve issues quickly.
● AWS also provides a variety of other services that can be used to implement
DevOps practices, such as CodeDeploy, CodeCommit, and CodeArtifact.

In addition to providing managed services, AWS has also helped to evolve DevOps by providing
a platform for innovation. For example, AWS Lambda has made it possible to run serverless
applications, which can reduce costs and simplify operational overhead. AWS Fargate has made
it possible to run containerized applications without having to manage servers or clusters.
AWS has also helped to evolve DevOps by providing a community of users and developers who
share best practices and collaborate on new tools and technologies. For example, the AWS
DevOps Blog is a great resource for learning about the latest DevOps trends and practices.
Overall, AWS has played a major role in the evolution of DevOps. By providing managed
services, a platform for innovation, and a community of users, AWS has helped teams to adopt
DevOps practices more easily and effectively.

Here are some specific examples of how AWS has helped to evolve DevOps:
● AWS CloudFormation has helped to make infrastructure as code more accessible
to teams of all sizes.
● AWS CodePipeline has helped to democratize the CI/CD process.
● AWS CloudWatch has helped to make monitoring and logging more efficient
and effective.
● AWS Lambda has enabled serverless computing, which has simplified
operational overhead and reduced costs.
● AWS Fargate has made it easier to run containerized applications.
● The AWS DevOps Blog and community have helped to share best practices
and collaborate on new tools and technologies.
As a result of these and other contributions, AWS is now one of the leading platforms for DevOps.
Millions of organizations around the world use AWS to build, deploy, and manage their
applications.

CHAPTER 3
KEY TECHNOLOGIES
DevOps

DevOps is a set of cultural philosophies, practices, and tools that aim to improve collaboration
and communication between software development (Dev) and IT operations (Ops) teams. It
seeks to break down the traditional silos between these two groups, fostering a culture of
collaboration and shared responsibility throughout the entire software development lifecycle
(SDLC).

Amazon Web Services


AWS or Amazon Web Services is a cloud computing platform that offers on-demand computing
services such as virtual servers and storage that can be used to build and run applications and
websites. AWS is known for its security, reliability, and flexibility, which makes it a popular
choice for organizations that need to store and process sensitive data.

Terraform
Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It is
designed to help automate the provisioning and management of infrastructure resources in a
declarative and version-controlled manner. Terraform enables users to define infrastructure
configurations as code, making it easier to create, modify, and maintain cloud resources and
other infrastructure components.

Apache
Apache is free and open-source web server software used by approximately 40% of websites worldwide. Its official name is Apache HTTP Server, and it is developed and maintained by the Apache Software Foundation. Apache allows website owners to serve content over the web, which is why it is known as a "web server." The first version of the Apache web server was released in 1995, making it one of the oldest and most reliable web servers in use.

CHAPTER 4
DEVOPS
4.1 DevOps Architecture

Development and operations both play essential roles in delivering applications. The development side comprises analyzing the requirements and designing, developing, and testing the software components or frameworks.
The operations side consists of the administrative processes, services, and support for the software.
When development and operations are combined and collaborate, the DevOps architecture is the solution that closes the gap between the two, so that delivery can be faster.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. Agile development is used in the DevOps architecture so that integration and delivery can be continuous. When the development and operations teams work separately from each other, designing, testing, and deploying become time-consuming, and if the teams are not in sync with each other, delivery may be delayed. DevOps therefore enables the teams to address their shortcomings and increases productivity.
Below are the various components that are used in the DevOps architecture:

1) Build - Without DevOps, the cost of resource consumption was evaluated on the basis of pre-defined individual usage with fixed hardware allocation. With DevOps, the use of the cloud and the sharing of resources come into the picture, and the build is driven by the user's need, which acts as a mechanism for controlling the usage of resources or capacity.

2) Code - Good practices such as using Git ensure that the code is written for the business need, make it possible to track changes, to be notified about the reason behind a difference between the actual and the expected output, and, if necessary, to revert to the code originally developed. The code can be appropriately arranged in files, folders, etc., and it can be reused.

3) Test - The application will be ready for production after it has been tested. Manual testing consumes more time both in testing and in moving the code to the output. Testing can be automated, which decreases the testing time so that the time to deploy the code to production is reduced, because automating the running of the scripts removes many manual steps.

4) Plan - DevOps uses the Agile methodology to plan the development. Keeping the operations and development teams in sync helps in organizing the work and planning accordingly, which increases productivity.

5) Monitor - Continuous monitoring is used to identify any risk of failure. It also helps in tracking the system accurately so that the health of the application can be checked. Monitoring becomes easier with services whose log data can be monitored through third-party tools such as Splunk.

6) Deploy - Many systems support schedulers for automated deployment. A cloud management platform enables users to capture accurate insights, view optimization scenarios, and analyze trends through deployment dashboards.

7) Operate - DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way, where both teams actively participate throughout the service lifecycle. The operations team interacts with the developers, and together they come up with a monitoring plan that serves the IT and business requirements.

8) Release - Deployment to an environment can be done through automation, but when the deployment is made to the production environment, it is done by manual triggering. Most release-management processes deploy to the production environment manually in order to lessen the impact on the customers.

4.2 DevOps Lifecycle


DevOps defines an agile relationship between operations and development. It is a process practiced by the development team and the operations engineers together, from the beginning to the final stage of the product.
The DevOps lifecycle includes seven phases as given below:
● Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during the planning phase, and the developers then begin developing the code for the application. No DevOps tools are required for planning, but there are several tools for maintaining the code.
● Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which developers are required to commit changes to the source code frequently, which may be on a daily or weekly basis. Every commit is then built, which allows early detection of problems if they are present. Building the code involves not only compilation but also unit testing, integration testing, code review, and packaging.
Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of that code as an executable file, in the form of a WAR or JAR. This build is then forwarded to the test server or the production server.

● Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automated testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QA engineers to test multiple code bases thoroughly in parallel and to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used to simulate the test environment.

Selenium does the automated testing, and TestNG generates the reports. This entire testing phase can be automated with the help of a continuous integration tool such as Jenkins.
Automated testing saves a lot of the time and effort of executing the tests manually. Apart from that, report generation is a big plus: the task of evaluating which test cases failed in a test suite becomes simpler, and the execution of the test cases can be scheduled at predefined times. After testing, the code is continuously integrated with the existing code.
● Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where important information about the use of the software is recorded and carefully processed to find out trends and identify problem areas. Usually, the monitoring is integrated within the operational capabilities of the software application.
It may occur in the form of documentation files, or it may produce large-scale data about the application parameters while the application is in continuous use. System errors such as an unreachable server or low memory are resolved in this phase. Monitoring maintains the security and availability of the service.
● Continuous Feedback
The application development is consistently improved by analyzing the results from the operations of the software. This is carried out by placing the critical phase of constant feedback between the operations and the development of the next version of the current software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise required to take a software application from development, use it to find its issues, and then produce a better version. Those extra steps kill the efficiency the application could achieve and reduce the number of interested customers.
● Continuous Deployment
In this phase, the code is deployed to the production servers. Also, it is essential to ensure that
the code is correctly used on all the servers.

The new code is deployed continuously, and configuration management tools play an essential
role in executing tasks frequently and quickly. Here are some popular tools which are used in this
phase, such as Chef, Puppet, Ansible, and SaltStack.
Containerization tools also play an essential role in the deployment phase. Vagrant and Docker are popular tools used for this purpose. These tools help to produce consistency across the development, staging, testing, and production environments. They also help in scaling instances up and down smoothly.
Containerization tools help to maintain consistency across the environments where the application is developed, tested, and deployed. Because they package and replicate the same dependencies and packages used in the testing, development, and staging environments, there is far less chance of errors or failures in the production environment. This makes the application easy to run on different computers.
● Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, allowing the organization to continually accelerate its overall time to market.
It is clear from the discussion that continuity is the critical factor in DevOps: it removes the steps that often distract the development, make it take longer to detect issues, and produce a better version of the product only after several months. With DevOps, we can make any software product more efficient and increase the overall count of customers interested in the product.

CHAPTER 5
AMAZON WEB SERVICES
5.1 Introduction

AWS or Amazon Web Services is a cloud computing platform that offers on-demand computing
services such as virtual servers and storage that can be used to build and run applications and
websites. AWS is known for its security, reliability, and flexibility, which makes it a popular
choice for organizations that need to store and process sensitive data.

Amazon Web Services (AWS), a subsidiary of Amazon.com, has invested billions of dollars in IT resources distributed across the globe. These resources are shared among all AWS account holders worldwide, while the accounts themselves remain entirely isolated from each other. AWS provides on-demand IT resources to its account holders on a pay-as-you-go pricing model with no upfront cost. Amazon Web Services offers flexibility because you pay only for the services you use or need. Enterprises use AWS to reduce the capital expenditure of building their own private IT infrastructure (which can be expensive depending upon the enterprise's size and nature). AWS has its own physical fiber network that connects its Availability Zones, Regions, and edge locations, and all maintenance costs are borne by AWS, which saves a fortune for the enterprises.
Security of the cloud is the responsibility of AWS, but security in the cloud is the customer's responsibility. Performance efficiency in the cloud has four main areas:
● Selection
● Review
● Monitoring
● Tradeoff

Advantages of Amazon Web Services


● AWS allows you to easily scale your resources up or down as your needs change,
helping you to save money and ensure that your application always has the resources it
needs.
● AWS provides a highly reliable and secure infrastructure, with multiple data centers and
a commitment to 99.99% availability for many of its services.
● AWS offers a wide range of services and tools that can be easily combined to build
and deploy a variety of applications, making it highly flexible.
● AWS offers a pay-as-you-go pricing model, allowing you to only pay for the resources
you actually use and avoid upfront costs and long-term commitments.
Disadvantages of Amazon Web Services
● AWS can be complex, with a wide range of services and features that may be difficult
to understand and use, especially for new users.
● AWS can be expensive, especially if you have a high-traffic application or need to run
multiple services. Additionally, the cost of services can increase over time, so you need to
regularly monitor your spending.
● While AWS provides many security features and tools, securing your resources on AWS
can still be challenging, and you may need to implement additional security measures to
meet your specific requirements.
● AWS manages many aspects of the infrastructure, which can limit your control
over certain parts of your application and environment.

Features of AWS

● Flexibility: AWS offers flexibility by allowing users to choose the programming models, languages, and operating systems that best suit their projects. This flexibility simplifies migration of legacy applications to the cloud and supports hybrid cloud deployments.
● Cost-Effectiveness: AWS provides cost-effective solutions by offering on-demand IT
resources, eliminating the need for upfront investments, and allowing users to scale
resources up or down as needed. This cost efficiency extends to hardware, bandwidth, and
staffing.
● Scalability and Elasticity: AWS enables easy scaling of computing resources based on demand, ensuring that resources can expand or contract as needed. Elastic Load Balancing helps distribute application traffic efficiently.
● Security: AWS prioritizes security, incorporating it into its services and providing
extensive documentation on security features. AWS ensures data confidentiality, integrity,
and availability, with physically secured data centers and encryption for data privacy.
● Experience: AWS leverages its extensive experience gained from managing Amazon.com
to offer a reliable and scalable cloud platform. AWS has been serving customers since
2006 and continually enhances its infrastructure capabilities.

Overall, AWS stands out for its flexibility, cost-efficiency, scalability, security, and extensive
experience in cloud computing, making it a preferred choice for organizations seeking cloud-
based solutions.

AWS Global Infrastructure


The AWS global infrastructure is massive and is divided into geographical regions. The geographical regions are then divided into separate Availability Zones. While selecting the geographical regions for AWS, three factors come into play:
● Optimizing latency
● Reducing cost
● Government regulations (some services are not available in some regions)
Each region is divided into at least two Availability Zones that are physically isolated from each other, which provides business continuity for the infrastructure, as in a distributed system. If one zone fails to function, the infrastructure in the other Availability Zones remains operational. The largest region, North Virginia (US-East), has six Availability Zones. These Availability Zones are connected by high-speed fiber-optic networking.
There are over 100 edge locations distributed across the globe that are used for CloudFront, the content delivery network. CloudFront can cache frequently used content, such as images and videos (including live-streamed video), at edge locations and distribute it across the globe for high-speed delivery and low latency for end users. It also protects against DDoS attacks.

5.2 AWS Cloud Computing Models


There are three cloud computing models available on AWS.
● Infrastructure as a Service (IaaS): This is the basic building block of cloud IT. It generally provides access to data storage space, networking features, and computer hardware (virtual or dedicated). It is highly flexible and gives the developer management control over the IT resources. Examples: VPC, EC2, EBS.
● Platform as a Service (PaaS): This is a type of service where AWS manages the underlying infrastructure (usually the operating system and hardware). This helps developers to be more efficient, as they do not have to worry about the undifferentiated heavy lifting required for running applications, such as capacity planning, software maintenance, resource procurement, and patching, and can focus more on deployment and management of the applications. Examples: RDS, EMR, Elasticsearch.
● Software as a Service (SaaS): This is a complete product that usually runs in a browser. It primarily refers to end-user applications, and it is run and managed by the service provider. The end user only has to think about how they will use the software for their needs. Examples: Salesforce.com, web-based email, Office 365.

5.3 AWS EC2 Services


EC2 stands for Elastic Compute Cloud. EC2 is an on-demand computing service on the AWS cloud platform. Under computing, it includes all the services a computing device can offer, along with the flexibility of a virtual environment. It also allows users to configure their instances as per their requirements, i.e. allocate the CPU, memory, and storage according to the needs of the current task.

Amazon EC2 (Elastic Compute Cloud) is a cloud computing service offered by the cloud service provider AWS. You can deploy your applications on EC2 servers without worrying about the underlying infrastructure. You can configure an EC2 instance in a very secure manner by using VPCs, subnets, and security groups, and you can scale the configuration of the instance based on the demand of the application by attaching an Auto Scaling group to it, scaling the number of instances up and down with the application's incoming traffic.
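
In line with this report's project theme, an instance like this can also be described as code. The following Terraform sketch is purely illustrative: the region, AMI ID, and instance type are placeholder assumptions, not values taken from the project.

# Minimal sketch: one EC2 instance managed by Terraform (assumed values throughout).
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID; look up a real one for your region
  instance_type = "t2.micro"              # small, free-tier-eligible size

  tags = {
    Name = "devops-report-demo"
  }
}

Running terraform init, plan, and apply against a file like this would create the instance; the same workflow applies to the sketches in the following sections.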

Use Cases of EC2 Instances


● EC2 instances can be used to host websites, applications, and APIs in the cloud.
● It can be used to process large amounts of data using tools like Apache Hadoop
and Apache Spark.
● It can be used to perform demanding computing tasks, such as scientific simulations
and financial modeling.
● EC2 instances can be used to develop, test, and deploy software, allowing teams
to quickly spin up resources as needed.

AWS EC2 Instance Types


The AWS EC2 Instance Types are as follows:
● General Purpose Instances
● Compute Optimized Instances
● Memory-Optimized Instances
● Storage Optimized Instances

5.4 Amazon VPC – Amazon Virtual Private Cloud


Amazon VPC, or Amazon Virtual Private Cloud, is a service that allows its users to launch their virtual machines in a protected and isolated virtual environment defined by them. You have complete control over your VPC, from creation to customization and even deletion. It is applicable to organizations where data is scattered and needs to be managed well. In other words, VPC enables us to select the virtual address range of our private cloud, and we can also define all the sub-constituents of the VPC, such as subnets, subnet masks, and Availability Zones, on our own.
Architecture of Amazon VPC
The basic architecture of a properly functioning VPC consists of many distinct services such as
Gateway, Load Balancer, Subnets, etc. Altogether, these resources are clubbed under a VPC to
create an isolated virtual environment. Along with these services, there are also security checks
on multiple levels.
It is initially divided into subnets, connected with each other via route tables along with a
load balancer.
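
To make this architecture concrete, the sketch below expresses a small VPC with one public subnet, an internet gateway, and a route table in Terraform. The CIDR ranges, names, and Availability Zone are assumptions made only for illustration, and the AWS provider is assumed to be configured as in the earlier EC2 sketch.

# Illustrative VPC layout (assumed CIDRs and names).
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # user-defined /16 address space
  tags       = { Name = "report-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a" # assumed Availability Zone
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0" # send internet-bound traffic to the gateway
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}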

VPC Components
● VPC: You can launch AWS resources into a defined virtual network using Amazon Virtual Private Cloud (Amazon VPC). With the advantages of utilizing the scalable infrastructure of AWS, this virtual network closely mimics a conventional network that you would operate in your own data center. A VPC supports a user-defined address space of up to /16 (65,536 addresses).
● Subnets: A subnet divides the big network into smaller, connected networks to reduce traffic. A VPC supports up to 200 user-defined subnets.
● Route Tables: Route tables are mainly used to define the rules for routing traffic between the subnets.
● Network Access Control Lists: Network Access Control Lists (NACL) for VPC serve as
a firewall by managing both inbound and outbound rules. There will be a default NACL
for each VPC that cannot be deleted.
● Internet Gateway(IGW): The Internet Gateway (IGW) will make it possible to link
the resources in the VPC to the Internet.
● Network Address Translation (NAT): Network Address Translation (NAT) will
enable the connection between the private subnet and the internet.

Use cases of VPC


● Using VPC, you can host a public-facing website, a single-tier basic web application, or just a plain old website.
● The connectivity between our web servers, application servers, and database can be limited by the VPC, with the help of VPC peering where multiple VPCs are involved.
● By managing the inbound and outbound connections, we can restrict the incoming and outgoing traffic of our application.

5.5 Amazon AWS Elastic Load Balancer
The Elastic Load Balancer is a service provided by Amazon in which incoming traffic is efficiently and automatically distributed across a group of backend servers in a manner that increases speed and performance. It helps to improve the scalability of your application and secures your applications. The load balancer allows you to configure health checks for the registered targets; if any registered target (for example, an instance in an Auto Scaling group) fails its health check, the load balancer will not route traffic to that unhealthy target, thereby ensuring your application stays highly available and fault tolerant.
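
The health-check behaviour described above can be sketched in Terraform as well. The example below defines an Application Load Balancer, a health-checked target group, and an HTTP listener; the VPC and subnet IDs are passed in as variables because they are assumed to exist already.

# Illustrative Application Load Balancer with a health-checked target group.
variable "vpc_id" {
  type        = string
  description = "ID of an existing VPC (assumed)"
}

variable "public_subnet_ids" {
  type        = list(string)
  description = "At least two public subnet IDs in different Availability Zones (assumed)"
}

resource "aws_lb" "app" {
  name               = "report-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "web" {
  name     = "report-web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path                = "/" # targets failing this check stop receiving traffic
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}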

Types of Load Balancers


● Classic Load Balancer: It is the traditional form of load balancer which was used
initially. It distributes the traffic among the instances and is not intelligent enough to
support host- based routing or path-based routing. It ends up reducing efficiency and
performance in certain situations. It is operated on the connection level as well as the
request level. Classic Load Balancer is in between the transport layer (TCP/SSL) and the
application layer (HTTP/HTTPS).
● Application Load Balancer: This type of Load Balancer is used when decisions are to be
made related to HTTP and HTTPS traffic routing. It supports path-based routing and host-
based routing. This load balancer works at the Application layer of the OSI Model. The
load balancer also supports dynamic host port mapping.

● Network Load Balancer: This type of load balancer works at the transport
layer(TCP/SSL) of the OSI model. It’s capable of handling millions of requests per
second. It is mainly used for load-balancing TCP traffic.
● Gateway Load Balancer: Gateway Load Balancers let you deploy, scale, and manage virtual appliances such as firewalls. A Gateway Load Balancer combines a transparent network gateway with a load balancer that distributes traffic to those appliances.

Use Cases of Elastic Load Balancer


● Modernize applications with serverless and containers: scale modern applications to meet demand without complex configurations or API gateways.
● Improve hybrid cloud network scalability: load balance across AWS and on-premises resources using a single load balancer.
● Retain your existing network appliances: deploy network appliances from your preferred vendor while taking advantage of the scale and flexibility of the cloud.

5.6 Amazon Elastic File System


AWS (Amazon Web Services) offers a wide range of storage services that can be provisioned
depending on your project requirements and use case. AWS storage services have different
provisions for highly confidential data, frequently accessed data, and the not so frequently
accessed data. You can choose from various storage types namely, object storage, file storage,
block storage services, backups, and data migration options. All of which fall under the AWS
Storage Services list.
EFS (Elastic File System) is a file-level, fully managed storage service provided by AWS that can be accessed by multiple EC2 instances concurrently. Just like AWS EBS, EFS is specially designed for high-throughput and low-latency applications.
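
As a hedged illustration, the Terraform sketch below creates an EFS file system and one mount target so that EC2 instances in the referenced subnet can mount it over NFS; the subnet reference assumes the VPC sketch from section 5.4.

# Illustrative shared file system for multiple EC2 instances.
resource "aws_efs_file_system" "shared" {
  encrypted = true
  tags      = { Name = "report-efs" }
}

resource "aws_efs_mount_target" "shared" {
  file_system_id = aws_efs_file_system.shared.id
  subnet_id      = aws_subnet.public.id # subnet assumed from the section 5.4 sketch
}

Each instance would then mount the file system over NFS (for example at /mnt/efs) and see the same files as every other instance.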

Use Cases of EFS
● Secure file sharing: You can share your files in a secure, fast, and easy way while ensuring consistency across the system.
● Web hosting: Well suited for web servers, where multiple servers can access the same file system to store data; EFS also scales as incoming data increases.
● Modernize application development: You can share data from AWS resources like ECS, EKS, and serverless web applications in an efficient manner, without much management overhead.
● Machine learning and AI workloads: EFS is well suited for large-data AI applications where multiple instances and containers access the same data, improving collaboration and reducing data duplication.

Amazon EFS is suitable for the following scenarios:


● Shared file storage: If multiple EC2 instances have to access the same data, EFS manages the shared data and ensures consistency across instances.
● Scalability: EFS can increase and decrease its storage capacity depending on the incoming data. If you have no idea how much data is going to be stored, you can use Amazon EFS.
● Simplified data sharing: If different applications want to use the same data in a collaborative manner, you can choose Amazon EFS; it can share large datasets across a group of instances.
● Use with serverless applications: Amazon EFS is well suited for serverless computing services such as AWS Lambda.
● Pay-as-you-go model: If your application has unpredictable storage growth, there is no need to pay upfront or make any prior commitments; you pay only for the storage that you actually use.

5.7 Identity and Access Management (IAM)
Identity and Access Management (IAM) manages Amazon Web Services (AWS) users and their access to AWS accounts and services. It controls the level of access a user can have over an AWS account: it is used to set up users, grant permissions, and allow users to use different features of an AWS account. Identity and Access Management is mainly used to manage users, groups, roles, and access policies.

IAM verifies that a user or service has the necessary authorization to access a particular service in
the AWS cloud. We can also use IAM to grant the right level of access to specific users, groups,
or services. For example, we can use IAM to enable an EC2 instance to access S3 buckets by
requesting fine-grained permissions.
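
The EC2-to-S3 example above can be sketched in Terraform as an IAM role, an inline policy, and an instance profile. The bucket name and resource names are placeholder assumptions, not values from the project.

# Illustrative role that lets an EC2 instance read objects from one S3 bucket.
resource "aws_iam_role" "ec2_s3_read" {
  name = "ec2-s3-read-role"

  # Trust policy: only the EC2 service may assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "s3_read" {
  name = "s3-read-only"
  role = aws_iam_role.ec2_s3_read.id

  # Fine-grained permissions: list the bucket and read its objects, nothing more.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::example-report-bucket",  # placeholder bucket name
        "arn:aws:s3:::example-report-bucket/*"
      ]
    }]
  })
}

resource "aws_iam_instance_profile" "ec2_s3_read" {
  name = "ec2-s3-read-profile"
  role = aws_iam_role.ec2_s3_read.name
}

Attaching the instance profile to an EC2 instance (via its iam_instance_profile argument) gives that instance read-only access to the bucket without storing any long-lived credentials on it.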

CHAPTER 6
TERRAFORM
Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It is
designed to help automate the provisioning and management of infrastructure resources in a
declarative and version-controlled manner. Terraform enables users to define infrastructure
configurations as code, making it easier to create, modify, and maintain cloud resources and
other infrastructure components.
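
To make the idea of declarative, version-controlled configuration concrete, here is a minimal sketch of a complete Terraform configuration together with the usual workflow commands as comments. The provider version, region, and bucket name are assumptions chosen only for illustration.

# A single declarative file describes the desired state; Terraform works out how to reach it.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "region" {
  type    = string
  default = "us-east-1" # assumed default region
}

provider "aws" {
  region = var.region
}

resource "aws_s3_bucket" "demo" {
  bucket = "report-terraform-demo-bucket" # placeholder; S3 bucket names must be globally unique
}

output "bucket_arn" {
  value = aws_s3_bucket.demo.arn
}

# Typical workflow:
#   terraform init     - download the required provider plugins
#   terraform plan     - preview the execution plan before any change is made
#   terraform apply    - create or update the resources after approval
#   terraform destroy  - tear the resources down again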

6.1 Features of Terraform:


● Infrastructure as Code: IT professionals use Terraform's configuration language, HCL (HashiCorp Configuration Language), to describe infrastructure in human-readable, declarative configuration files. Terraform lets you create a blueprint, a template that you can version, share, and re-use.
● Execution Plans: Once the user describes the infrastructure, Terraform creates an
execution plan. This plan describes what Terraform will do and asks for your approval
before initiating any infrastructure changes. This step lets you review changes before
Terraform does anything to the infrastructure, including creating, updating, or deleting
it.
● Resource Graph: Terraform generates a resource graph, creating or altering non-
dependent resources in parallel. This graph enables Terraform to build resources as efficiently as possible while giving the users greater insight into their infrastructure.

● Change Automation: Terraform can implement complex changesets to the
infrastructure with virtually no human interaction. When users update the
configuration files, Terraform figures out what has changed and creates an
incremental execution plan that respects the dependencies.

6.2 Benefits of Terraform:


● Repeatability: Terraform allows you to define your AWS infrastructure in code,
which makes it easy to repeat and reuse your infrastructure definitions. This can
save you time and effort, and help you to avoid errors.
● Consistency: Terraform helps you to ensure that your AWS infrastructure is
consistent across different environments. This is important for maintaining a reliable
and predictable infrastructure.
● Efficiency: Terraform can automate the provisioning and management of your
AWS infrastructure, which can save you time and effort. This is especially
beneficial for complex infrastructures.
● Portability: Terraform can be used to provision infrastructure on a variety of cloud providers, not just AWS, which gives you flexibility. This can be useful if you need to migrate your infrastructure to a different cloud provider in the future.
In addition to these general benefits, Terraform also offers a number of specific benefits for
AWS users, such as:
● Support for a wide range of AWS services: Terraform supports a wide range of
AWS services, including EC2, S3, RDS, DynamoDB, and more. This allows you to
use Terraform to provision and manage all of your AWS infrastructure from a single
tool.
● Integration with AWS tools and services: Terraform integrates with a number of
AWS tools and services, such as the AWS CLI and the AWS Console. This makes
it easy to use Terraform to manage your AWS infrastructure.
● Community support: Terraform has a large and active community, which means
that there are many resources available to help you get started with Terraform and to
troubleshoot any problems that you may encounter.
Overall, Terraform is a powerful and flexible IaC tool that can help you to provision and
manage your AWS infrastructure more efficiently and effectively.

6.3 Use cases for Terraform


● The most common use case for Terraform is IaC. The infrastructure deployments
created with Terraform can be easily integrated with existing CI/CD workflows. The
tool is also
useful in other ways. For example, teams can use Terraform to automatically update load
balancing member pools and other key networking tasks.
● Terraform is also useful for multi-cloud provisioning. With Terraform, development
teams can deploy serverless functions in AWS, manage Active Directory (AD)
resources in Microsoft Azure, and provision load balancers in Google Cloud. They can
also use Terraform (with HCP Packer) to create and manage multi-cloud golden image
pipelines and deploy and manage multiple virtual machine (VM) images.
● Manage Kubernetes clusters on any public cloud (AWS, Azure, Google).
● Enforce policy-as-code before infrastructure components are created and provisioned.
● Automate the use of secrets and credentials in Terraform configurations.
● Codify existing infrastructure by importing it into an empty Terraform workspace.
● Migrate state to Terraform to secure it and easily share it with authorized collaborators.

CHAPTER 7
APACHE
7.1 Introduction

Apache is free and open-source web server software used by approximately 40% of websites worldwide. Its official name is Apache HTTP Server, and it is developed and maintained by the Apache Software Foundation. Apache allows website owners to serve content over the web, which is why it is known as a "web server." The first version of the Apache web server was released in 1995, making it one of the oldest and most reliable web servers in use.

If someone wishes to visit a website, they type the domain name into their browser's address bar. The web server then delivers the requested files, acting as a virtual delivery person.

Apache is not a physical server; it is software that runs on a server, yet we refer to it as a web server. Its objective is to build a connection between website visitors' browsers (Safari, Google Chrome, Firefox, etc.) and the server. Apache is cross-platform software, so it can work on both Windows and UNIX servers.

When a visitor wants to load a page on our website, for instance the homepage or the "About Us" page, the visitor's browser sends a request to our server. Apache returns a response along with each requested file (images, scripts, etc.). The client and server communicate via the HTTP protocol, and Apache is responsible for secure and smooth communication between the two machines.

Apache is highly customizable software with a module-based structure.


Modules allow server administrators to turn additional functionality on and off. Apache includes modules for caching, security, password authentication, URL rewriting, and other purposes. We can also set up our own server configuration with the help of a file known as .htaccess, a configuration file supported by Apache.
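
Tying Apache back to the report's Terraform and AWS theme, the hedged sketch below provisions an EC2 instance that installs and starts the Apache HTTP Server at boot through user data. The AMI ID and other details are placeholders rather than the project's actual values, and an Amazon Linux 2 image is assumed.

# Illustrative web server: Apache installed and started automatically at first boot.
resource "aws_instance" "apache_web" {
  ami           = "ami-0abcdef1234567890" # placeholder Amazon Linux 2 AMI ID
  instance_type = "t2.micro"

  user_data = <<-EOF
              #!/bin/bash
              yum install -y httpd
              systemctl enable --now httpd
              echo "<h1>Served by Apache on AWS</h1>" > /var/www/html/index.html
              EOF

  tags = { Name = "apache-web-server" }
}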

Apache can be an excellent option for running our website on a versatile and stable platform. However, it comes with a few disadvantages that we need to understand.

7.2 Pros:
● Stable and reliable software.
● Free and open-source, even for commercial use.
● Regular security patches and frequent updates.
● Beginner-friendly and easy to configure.
● Flexible because of its module-based structure.
● Works out of the box with WordPress sites.
● Cross-platform (runs on Windows and Unix servers).
● Easily available support and a huge community in case of any issue.

7.3 Cons:
● Performance issues on extremely heavy-traffic websites.
● The large number of configuration options can introduce security vulnerabilities.


CHAPTER 8
PROJECT


CHAPTER 9
REFERENCES

● "The DevOps Handbook: How to Create World-Class Agility, Reliability, & Security in Technology Organizations" by Gene Kim, Jez Humble, Patrick Debois, and John Willis
● "Docker Deep Dive" by Nigel Poulton
● Upfalirs Pvt. Ltd.
● GeeksforGeeks – https://www.geeksforgeeks.org/devops-tutorial/
● JavaTpoint – https://www.javatpoint.com/devops-interview-questions
