Report Format (Part 2)
(Artificial intelligence)
Jaipur Engineering College and Research Centre, Jaipur
TABLE OF CONTENTS
Certificate ---------------------------------------------------------------------------------------------------- i
Program Outcomes (POs) ------------------------------------------------------------------------------- ii
Program Education Objectives (PEOs) -------------------------------------------------------------- iii
Course Outcomes (COs)---------------------------------------------------------------------------------- iv
Mapping: COs and POs ---------------------------------------------------------------------------------- v
Acknowledgement ---------------------------------------------------------------------------------------- vi
Abstract --------------------------------------------------------------------------------------------------- vii
List of Figures --------------------------------------------------------------------------------------------- viii
List of Tables ----------------------------------------------------------------------------------------------- ix
1. INTRODUCTION -------------------------------------------------------------------------------------- 3
1.1 Definition ------------------------------------------------------------------------------------------------ 3
1.2 Benefits --------------------------------------------------------------------------------------------------- 5
1.3 Scope ------------------------------------------------------------------------------------------------------ 6
1.4 Features -------------------------------------------------------------------------------------------------- 7
2. HISTORY OF DEVOPS & AWS ------------------------------------------------------------------ 8
2.1 Origin of DevOps --------------------------------------------------------------------------------------- 8
2.2 Origin of Cloud Computing --------------------------------------------------------------------------- 8
2.3 Evolution of DevOps & AWS ------------------------------------------------------------------------- 9
3. KEY TECHNOLOGIES ----------------------------------------------------------------------------- 11
4. DEVOPS ----------------------------------------------------------------------------------------------- 12
4.1 DevOps Architecture ---------------------------------------------------------------------------------- 12
4.2 DevOps Lifecycle -------------------------------------------------------------------------------------- 13
5. AMAZON WEB SERVICES ----------------------------------------------------------------------- 17
5.1 Introduction --------------------------------------------------------------------------------------------- 17
5.2 AWS Cloud Computing Models --------------------------------------------------------------------- 21
5.3 AWS EC2 Services ------------------------------------------------------------------------------------- 22
5.4 Amazon VPC -------------------------------------------------------------------------------------------- 23
5.5 Amazon AWS Elastic Load Balancer --------------------------------------------------------------- 25
5.6 Amazon Elastic File System -------------------------------------------------------------------------- 26
5.7 Identity and Access Management (IAM) ----------------------------------------------------------- 28
6. TERRAFORM --------------------------------------------------------------------------------------------- 29
7. APACHE --------------------------------------------------------------------------------------------------- 32
8. PROJECT --------------------------------------------------------------------------------------------------- 34
9. REFERENCES --------------------------------------------------------------------------------------------- 36
CHAPTER 1
INTRODUCTION
In today's fast-paced and highly competitive technological landscape, the efficient and reliable
delivery of software has become paramount for organizations striving to meet the ever-growing
demands of their users and customers. To meet these challenges, two distinct yet interconnected
methodologies have emerged as indispensable solutions: DevOps and Site Reliability Engineering
(SRE).
1.1 DEFINITION
DevOps, an abbreviation for "Development" and "Operations," represents a holistic and
collaborative approach to software development and IT operations. It seeks to break down the
traditional silos between these two domains, fostering a culture of cooperation and shared
responsibility. DevOps emphasizes automation, continuous integration, and continuous
deployment, with the ultimate goal of accelerating the software development lifecycle while
ensuring stability and reliability.
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops)
into a single team. DevOps teams work together to automate and streamline the software
development and delivery process, from ideation to deployment and support. DevOps is based on
the following key principles:
● Collaboration: DevOps teams break down the silos between development and operations
teams, and work together to achieve common goals.
● Automation: DevOps teams use automation tools and practices to streamline the
software development and delivery process.
● Continuous integration and continuous delivery (CI/CD): CI/CD is a set of practices
that automate the building, testing, and deployment of software.
● Monitoring and observability: DevOps teams use monitoring and observability tools to
track the performance and health of their software systems.
Cloud computing is on-demand access, via the internet, to computing resources—applications,
servers (physical servers and virtual servers), data storage, development tools, networking
capabilities, and more—hosted at a remote data center managed by a cloud services provider (or
CSP). The CSP makes these resources available for a monthly subscription fee or bills them
according to usage.
Cloud computing offers a number of benefits, including:
● Cost savings: Businesses can save money on IT costs by avoiding the need to purchase
and maintain their own hardware and software.
● Scalability: Cloud computing is highly scalable, so businesses can easily add or remove
resources as needed.
● Agility: Cloud computing allows businesses to quickly deploy new applications and
services.
● Reliability: Cloud providers offer a high level of reliability and uptime.
1.2 BENEFITS
Together, DevOps and cloud computing offer organizations several benefits:
● Increased reliability: DevOps and cloud computing can help organizations to build and
operate reliable software systems. This is because DevOps focuses on monitoring and
observability, and cloud computing provides high-availability and disaster recovery
features.
● Reduced costs: DevOps and cloud computing can help organizations to reduce IT costs.
This is because DevOps automates tasks and optimizes resource utilization, and cloud
computing offers pay-as-you-go pricing.
● Increased agility and innovation: DevOps and cloud computing can help organizations
to be more agile and innovative. This is because DevOps enables organizations to quickly
deploy and scale applications, and cloud computing provides access to a wide range of
services and technologies.
Here are some examples of how organizations have used DevOps to achieve significant
benefits:
● Netflix: Netflix uses DevOps to deliver high-quality streaming video to millions of users
around the world. Netflix is able to release new features and bug fixes quickly and reliably
thanks to its use of DevOps practices.
● Amazon: Amazon uses DevOps to power its e-commerce platform and its Amazon Web
Services (AWS) cloud computing business. Amazon is able to scale its infrastructure up
and down quickly and reliably thanks to its use of DevOps practices.
● Google: Google uses DevOps to power its search engine, Gmail, and other popular online
services. Google is able to release new features and bug fixes quickly and reliably thanks
to its use of DevOps practices.
1.3 SCOPE
The scope of this project, “Infrastructure design using DevOps and AWS by Terraform,” can
vary depending on the specific needs of the organization. However, some common areas of focus
include:
● Automating the infrastructure provisioning process: DevOps and Terraform can be
used to automate the provisioning of infrastructure, which can save time and reduce errors.
● Improving the scalability and reliability of infrastructure: Cloud computing provides
scalable and reliable infrastructure, while DevOps and Terraform can be used to automate
the scaling of infrastructure up or down as needed.
● Reducing infrastructure costs: Cloud computing can help organizations to reduce their
infrastructure costs by providing pay-as-you-go pricing. DevOps and Terraform can further
reduce costs by automating tasks and optimizing resource utilization.
● Improving the security of infrastructure: Terraform can be used to implement security
best practices in infrastructure design, while DevOps can help organizations to respond to
security incidents quickly and effectively.
1.4 FEATURES
DevOps and cloud computing are two of the most transformative and critical elements in today's
technology landscape. They have reshaped the way organizations build, deploy, and manage
software and infrastructure, playing a pivotal role in modern business operations.
Features of DevOps
● Automation: DevOps automates many of the manual tasks involved in software
development and operations, such as code testing, deployment, and infrastructure
provisioning. This helps to reduce errors, improve efficiency, and speed up the delivery of
new features to customers.
● Collaboration: DevOps emphasizes collaboration between development and operations
teams. This helps to break down silos and ensure that everyone is working towards the
same goals.
● Integration: DevOps tools and processes are integrated with each other, which helps to
streamline workflows and reduce friction.
● Configuration management: DevOps uses configuration management tools to ensure that
all environments are consistent and up-to-date. This helps to reduce errors and downtime.
● Monitoring and logging: DevOps teams use monitoring and logging tools to track the
performance and health of their systems. This helps them to identify and fix problems
quickly.
CHAPTER 2
HISTORY OF DEVOPS & AWS
The origins of DevOps and cloud computing can be traced back to the early 2000s. DevOps
emerged as a response to the need for better collaboration between development and operations
teams. Cloud computing emerged as a way for businesses to access computing resources on
demand, without having to provision and manage their own infrastructure.
● Internal Infrastructure: In the early 2000s, Amazon built large-scale internal systems for
computing, storage, and networking. These technologies laid the foundation for what would
become AWS services.
● Realization of External Potential: Amazon's leadership recognized that the infrastructure
they had built to support their own operations could also be offered as a service to other
businesses, providing a new revenue stream.
● Launch of AWS: In 2006, AWS was officially launched with a suite of cloud computing
services, including Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage
Service (S3). These services allowed businesses to access scalable computing and storage
resources on a pay-as-you-go basis.
● Early Adoption: AWS quickly gained traction among startups and enterprises due to its
cost-effectiveness, scalability, and flexibility. It became a pioneer in the cloud computing
industry.
● Expansion of Services: Over the years, AWS expanded its service offerings to include a
wide range of cloud computing services, such as databases, AI and machine learning, IoT,
and more.
● Global Expansion: AWS built a global network of data centers (AWS Regions) and
Availability Zones, enabling customers to deploy applications and services closer to their
end-users for reduced latency and improved performance.
Today, AWS is one of the world's largest and most widely used cloud computing platforms,
serving millions of customers, from startups to enterprises, across various industries. Its origins
in Amazon's own infrastructure needs and innovative technology development have paved the
way for the cloud computing revolution.
2.3 Evolution of DevOps & AWS
AWS has shaped the evolution of DevOps by offering managed services that automate common
DevOps tasks:
● AWS CodePipeline provides a way to automate the continuous integration and continuous
delivery (CI/CD) process. This helps teams to deliver new features and bug fixes to
production more quickly and reliably.
● AWS CloudWatch provides a way to monitor and log application performance and
infrastructure health. This helps teams to identify and resolve issues quickly.
● AWS also provides a variety of other services that can be used to implement DevOps
practices, such as CodeDeploy, CodeCommit, and CodeArtifact.
In addition to providing managed services, AWS has also helped to evolve DevOps by providing
a platform for innovation. For example, AWS Lambda has made it possible to run serverless
applications, which can reduce costs and simplify operational overhead. AWS Fargate has made
it possible to run containerized applications without having to manage servers or clusters.
AWS has also helped to evolve DevOps by providing a community of users and developers who
share best practices and collaborate on new tools and technologies. For example, the AWS
DevOps Blog is a great resource for learning about the latest DevOps trends and practices.
Overall, AWS has played a major role in the evolution of DevOps. By providing managed services,
a platform for innovation, and a community of users, AWS has helped teams to adopt DevOps
practices more easily and effectively.
Here are some specific examples of how AWS has helped to evolve DevOps:
● AWS CloudFormation has helped to make infrastructure as code more accessible to
teams of all sizes.
● AWS CodePipeline has helped to democratize the CI/CD process.
● AWS CloudWatch has helped to make monitoring and logging more efficient and
effective.
● AWS Lambda has enabled serverless computing, which has simplified operational
overhead and reduced costs.
● AWS Fargate has made it easier to run containerized applications.
● The AWS DevOps Blog and community have helped to share best practices and
collaborate on new tools and technologies.
As a result of these and other contributions, AWS is now the leading platform for DevOps.
Millions of organizations around the world use AWS to build, deploy, and manage their
applications.
CHAPTER 3
KEY TECHNOLOGIES
DevOps
DevOps is a set of cultural philosophies, practices, and tools that aim to improve collaboration and
communication between software development (Dev) and IT operations (Ops) teams. It seeks to
break down the traditional silos between these two groups, fostering a culture of collaboration and
shared responsibility throughout the entire software development lifecycle (SDLC).
Terraform
Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It is
designed to help automate the provisioning and management of infrastructure resources in a
declarative and version-controlled manner. Terraform enables users to define infrastructure
configurations as code, making it easier to create, modify, and maintain cloud resources and
other infrastructure components.
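As a minimal sketch of what such a declarative configuration looks like (the provider version,
region, and bucket name below are illustrative assumptions, not details from this report):

# Minimal Terraform configuration: one AWS provider and one S3 bucket.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # assumed region; change to suit
}

resource "aws_s3_bucket" "example" {
  bucket = "report-demo-bucket-12345" # hypothetical name; S3 bucket names must be globally unique
}

Running terraform init, terraform plan, and terraform apply against this file downloads the AWS
provider, previews the change, and creates the bucket.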
Apache
Apache is free and open-source web server software used by roughly 40% of websites all over
the world. Its official name is Apache HTTP Server, and it is developed and maintained by the
Apache Software Foundation. Apache permits the owners of websites to serve content over the
web, which is the reason it is known as a "web server." First published in 1995, it is one of the
oldest and most reliable web servers available.
CHAPTER 4
DEVOPS
4.1 DevOps Architecture
Development and operations both play essential roles in delivering applications. Development
comprises analyzing the requirements and designing, developing, and testing the software
components or frameworks. Operations consist of the administrative processes, services, and
support for the software. When development and operations are combined in a collaborative way,
the DevOps architecture closes the gap between the two, so that delivery can be faster.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed
applications. Agile development is used in the DevOps architecture so that integration and
delivery can be continuous. When the development and operations teams work separately from
each other, designing, testing, and deploying become time-consuming, and if the teams are not in
sync with each other, delivery may be delayed. DevOps enables the teams to overcome these
shortcomings and increases productivity.
Below are the various components that are used in the DevOps architecture:
1) Build - Without DevOps, the cost of resource consumption was evaluated against pre-defined
individual usage with fixed hardware allocations. With DevOps, cloud usage and resource sharing
come into the picture, and the build is driven by the user's actual need, which provides a
mechanism to control the usage of resources or capacity.
2) Code - Version-control tools such as Git enable good coding practices: writing code aligned
with the business need, tracking changes, getting notified about the reason behind a difference
between the actual and the expected output, and, if necessary, reverting to previously working
code. The code can be arranged appropriately in files, folders, and so on, and it can be reused.
3) Test - The application is ready for production only after testing. Manual testing consumes more
time in testing and in moving the code onward. Testing can be automated, which decreases the
time for testing so that the time to deploy the code to production is reduced, since automating the
running of the scripts removes many manual steps.
4) Plan - DevOps uses the Agile methodology to plan development. With the operations and
development teams in sync, it becomes easier to organize the work and plan accordingly, which
increases productivity.
5) Monitor - Continuous monitoring is used to identify any risk of failure. It also helps in
tracking the system accurately so that the health of the application can be checked. Monitoring
becomes easier with services whose log data can be watched through third-party tools such as
Splunk.
6) Deploy - Many systems support schedulers for automated deployment. A cloud management
platform enables users to capture accurate insights and to view optimization scenarios and trend
analytics through deployment dashboards.
7) Operate - DevOps changes the traditional approach of developing and testing separately. The
teams operate in a collaborative way in which both teams actively participate throughout the
service lifecycle. The operations team interacts with developers, and together they come up with a
monitoring plan that serves the IT and business requirements.
4.2 DevOps Lifecycle
The DevOps lifecycle includes seven phases as given below:
● Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided
during the planning phase, and the developers begin developing the code for the application. No
special DevOps tools are required for planning, but there are several tools for maintaining the
code.
● Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in
which developers are required to commit changes to the source code more frequently, on a daily
or weekly basis. Every commit is then built, which allows early detection of problems if they are
present. Building the code involves not only compilation but also unit testing, integration testing,
code review, and packaging.
Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository,
Jenkins fetches the updated code and prepares a build of that code as an executable file, in the
form of a WAR or JAR. This build is then forwarded to the test server or the production server.
● Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing,
automation testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QAs to
test multiple code-bases thoroughly in parallel to ensure that there are no flaws in the functionality.
In this phase, Docker Containers can be used for simulating the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire testing phase
can be automated with the help of a Continuous Integration tool called Jenkins.
Automation testing saves a lot of time and effort compared with executing the tests manually.
Apart from that, report generation is a big plus. The task of evaluating the test cases that
failed in a test suite gets simpler. Also, we can schedule the execution of the test cases at predefined
times. After testing, the code is continuously integrated with the existing code.
● Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where
important information about the use of the software is recorded and carefully processed to find
out trends and identify problem areas. Usually, the monitoring is integrated within the operational
capabilities of the software application.
It may occur in the form of documentation files, or it may produce large-scale data about the
application parameters while the application is in continuous use. System errors such as "server
not reachable" or low memory are resolved in this phase. Monitoring maintains the security and
availability of the service.
● Continuous Feedback
The application development is consistently improved by analyzing the results from the operations
of the software. This is carried out by placing the critical phase of constant feedback between the
operations and the development of the next version of the current software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise
required to take a software application from development through use, issue discovery, and the
production of a better version. Those extra steps drain the efficiency the application could achieve
and reduce the number of interested customers.
● Continuous Deployment
In this phase, the code is deployed to the production servers. Also, it is essential to ensure that the
code is correctly used on all the servers.
The new code is deployed continuously, and configuration management tools play an essential role
in executing tasks frequently and quickly. Popular tools used in this phase include Chef, Puppet,
Ansible, and SaltStack.
Containerization tools are also playing an essential role in the deployment phase. Vagrant and
Docker are popular tools that are used for this purpose. These tools help to produce consistency
across the development, staging, testing, and production environments. They also help in scaling
instances up and down smoothly.
Containerization tools help to maintain consistency across the environments where the application
is tested, developed, and deployed. They greatly reduce the chance of errors or failures in the
production environment because they package and replicate the same dependencies and packages
used in the testing, development, and staging environments. This makes the application easy to
run on different computers.
● Continuous Operations
All DevOps operations are based on the continuity with complete automation of the release process
and allow the organization to accelerate the overall time to market continuingly.
It is clear from the discussion that continuity is the critical factor in DevOps: it removes steps
that distract development, lengthen the time needed to detect issues, and delay a better version of
the product by months. With DevOps, we can make any software product more efficient and
increase the overall count of customers interested in the product.
CHAPTER 5
AMAZON WEB SERVICES
5.1 Introduction
AWS or Amazon Web Services is a cloud computing platform that offers on-demand computing
services such as virtual servers and storage that can be used to build and run applications and
websites. AWS is known for its security, reliability, and flexibility, which makes it a popular choice
for organizations that need to store and process sensitive data.
Amazon Web Services (AWS), a subsidiary of Amazon.com, has invested billions of dollars in IT
resources distributed across the globe. These resources are shared among all the AWS account
holders worldwide, while the accounts themselves remain entirely isolated from each other. AWS
provides on-demand IT resources to its account holders on a pay-as-you-go pricing model with no
upfront cost. Amazon Web Services offers flexibility because you pay only for the services you
use or need. Enterprises use AWS to reduce the capital expenditure of building their own private
IT infrastructure (which can be expensive depending on the enterprise's size and nature). AWS has
its own physical fiber network that connects its Availability Zones, Regions, and Edge locations.
All the maintenance cost is also borne by AWS, which saves a fortune for the enterprises.
Security of the cloud is the responsibility of AWS, but security in the cloud is the customer's
responsibility. Performance efficiency in the cloud has four main areas:
● Selection
● Review
● Monitoring
● Tradeoff
Features of AWS
Overall, AWS stands out for its flexibility, cost-efficiency, scalability, security, and extensive
experience in cloud computing, making it a preferred choice for organizations seeking cloud-
based solutions.
5.2 AWS Cloud Computing Models
AWS offers its services under the three standard cloud computing models:
● Infrastructure as a Service (IaaS): Provides the basic building blocks of cloud IT, such as
virtual servers, storage, and networking, over which the user retains the most control. For
example, Amazon EC2.
● Platform as a Service (PaaS): The provider manages the underlying infrastructure
(hardware and operating systems), so the user can focus on deploying and managing
applications. For example, AWS Elastic Beanstalk.
● Software as a Service (SaaS): A complete product that usually runs in a browser,
referring primarily to end-user applications. It is run and managed by the service provider,
and the end user only has to worry about how to use the software for their needs. For
example, Salesforce.com, web-based email, and Office 365.
5.3 AWS EC2 Services
Amazon EC2 (short for Elastic Compute Cloud) is a cloud computing service offered by AWS.
You can deploy your applications on EC2 servers without worrying about the underlying
infrastructure. You can configure an EC2 instance in a very secure manner using VPCs, subnets,
and security groups, and you can scale the configured capacity to match application demand by
attaching an Auto Scaling group to the instance, scaling up and down with the application's
incoming traffic.
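As a hedged sketch of the idea, a single EC2 instance can be declared in Terraform as follows; the
AMI ID is a placeholder that must be replaced with a real image ID for the chosen region, and the
tag value is an assumption:

# Sketch: one EC2 instance managed by Terraform.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder; look up a real AMI for your region
  instance_type = "t2.micro"              # small, free-tier-eligible instance type

  tags = {
    Name = "devops-demo-instance"
  }
}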
5.4 Amazon VPC
VPC Components (a Terraform sketch follows this list):
● VPC: You can launch AWS resources into a defined virtual network using Amazon Virtual
Private Cloud (Amazon VPC). With the advantage of using the scalable infrastructure of
AWS, this virtual network closely mimics a conventional network that you would operate
in your own data center. The user-defined address space can be as large as a /16 (65,536
addresses).
● Subnets: Subnets divide the big network into smaller, connected networks, which reduces
traffic. A VPC supports up to 200 user-defined subnets.
● Route Tables: Route tables define the rules for routing traffic between the subnets.
● Network Access Control Lists: Network Access Control Lists (NACL) for VPC serve as
a firewall by managing both inbound and outbound rules. There will be a default NACL for
each VPC that cannot be deleted.
● Internet Gateway(IGW): The Internet Gateway (IGW) will make it possible to link the
resources in the VPC to the Internet.
● Network Address Translation (NAT): Network Address Translation (NAT) will enable
the connection between the private subnet and the internet.
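The components above map naturally onto Terraform resources. A minimal sketch, assuming
illustrative CIDR ranges and names, that creates a VPC, one public subnet, an internet gateway,
and a route table directing non-local traffic to that gateway:

# Sketch: a VPC with one public subnet reachable from the internet.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # /16 gives 65,536 addresses, the VPC maximum
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0" # send all non-local traffic to the internet gateway
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}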
5.5 Amazon AWS Elastic Load Balancer
The Elastic Load Balancer is a service provided by Amazon in which incoming traffic is
efficiently and automatically distributed across a group of backend servers in a manner that
increases speed and performance. It improves the scalability of your application and helps secure
it. The load balancer allows you to configure health checks for the registered targets: if any
registered target (for example, an instance in an Auto Scaling group) fails the health check, the
load balancer will not route traffic to that unhealthy target, thereby keeping your application
highly available and fault tolerant. A Terraform sketch of this follows the list of load balancer
types below.
AWS offers several types of Elastic Load Balancers, including:
● Network Load Balancer: This type of load balancer works at the transport layer (TCP/SSL)
of the OSI model. It is capable of handling millions of requests per second and is mainly
used for load-balancing TCP traffic.
● Gateway Load Balancer: Gateway Load Balancers provide the facility to deploy, scale,
and manage virtual appliances such as firewalls. A Gateway Load Balancer combines a
transparent network gateway with traffic distribution.
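A sketch of an Application Load Balancer with a target group and health check in Terraform; the
names and ports are assumptions, and the two subnet references assume subnets like those in the
earlier VPC sketch (an ALB needs subnets in at least two Availability Zones):

# Sketch: an ALB forwarding HTTP traffic to a health-checked target group.
resource "aws_lb" "app" {
  name               = "demo-alb"
  load_balancer_type = "application"
  subnets            = [aws_subnet.public.id, aws_subnet.public_b.id] # assumes two subnets exist
}

resource "aws_lb_target_group" "web" {
  name     = "demo-targets"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path     = "/" # targets failing this check stop receiving traffic
    interval = 30
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}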
5.6 Amazon Elastic File System
Amazon Elastic File System (EFS) is a scalable, fully managed file system that multiple AWS
compute instances can mount and share at the same time. A Terraform sketch follows the use-case
list below.
Use Cases Of EFS
● Secured file sharing: You can share your files in a secure manner, faster and more easily,
while ensuring consistency across the system.
● Web Hosting: EFS is well suited for web servers, where multiple web servers can access
the file system and store data; EFS also scales whenever the incoming data increases.
● Modernize application development: You can share data across AWS resources such as
ECS, EKS, and serverless web applications efficiently and without much management
overhead.
● Machine Learning and AI Workloads: EFS is well suited for data-intensive AI
applications in which multiple instances and containers access the same data, improving
collaboration and reducing data duplication.
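A short Terraform sketch of such a shared file system; the tag and subnet reference are assumptions
carried over from the earlier sketches, and one mount target is needed per Availability Zone:

# Sketch: an encrypted EFS file system with one mount target.
resource "aws_efs_file_system" "shared" {
  encrypted = true # encrypt file data at rest

  tags = {
    Name = "shared-web-content"
  }
}

resource "aws_efs_mount_target" "a" {
  file_system_id = aws_efs_file_system.shared.id
  subnet_id      = aws_subnet.public.id # assumes the subnet from the earlier VPC sketch
}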
5.7 Identity and Access Management (IAM)
Identity and Access Management (IAM) manages Amazon Web Services (AWS) users and their
access to AWS accounts and services. It controls the level of access a user can have over an AWS
account: it sets up users, grants permissions, and allows a user to use different features of an AWS
account. Identity and Access Management is mainly used to manage users, groups, roles, and
access policies.
IAM verifies that a user or service has the necessary authorization to access a particular service in
the AWS cloud. We can also use IAM to grant the right level of access to specific users, groups,
or services. For example, we can use IAM to enable an EC2 instance to access S3 buckets with
fine-grained permissions.
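That EC2-to-S3 example can be sketched in Terraform as a role EC2 is allowed to assume, a policy
scoped to one bucket, and an instance profile; the role, policy, and bucket names are hypothetical:

# Sketch: allow EC2 instances to read one S3 bucket.
resource "aws_iam_role" "ec2_s3_read" {
  name = "ec2-s3-read"

  # Trust policy: only the EC2 service may assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "s3_read" {
  name = "s3-read-one-bucket"
  role = aws_iam_role.ec2_s3_read.id

  # Permissions policy: read-only access, scoped to a single hypothetical bucket.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:ListBucket"]
      Resource = ["arn:aws:s3:::my-example-bucket", "arn:aws:s3:::my-example-bucket/*"]
    }]
  })
}

# The instance profile is what actually attaches the role to an EC2 instance.
resource "aws_iam_instance_profile" "web" {
  name = "web-profile"
  role = aws_iam_role.ec2_s3_read.name
}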
CHAPTER 6
TERRAFORM
Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It is
designed to help automate the provisioning and management of infrastructure resources in a
declarative and version-controlled manner. Terraform enables users to define infrastructure
configurations as code, making it easier to create, modify, and maintain cloud resources and
other infrastructure components.
● Change Automation: Terraform can implement complex changesets to the
infrastructure with virtually no human interaction. When users update the
configuration files, Terraform figures out what has changed and creates an incremental
execution plan that respects the dependencies.
● Terraform's infrastructure automation is also useful in other ways. For example, teams can
use Terraform to automatically update load balancer member pools and perform other key
networking tasks.
● Terraform is also useful for multi-cloud provisioning (a sketch follows this list). With
Terraform, development teams can deploy serverless functions in AWS, manage Active
Directory (AD) resources in Microsoft Azure, and provision load balancers in Google
Cloud. They can also use Terraform (with HCP Packer) to create and manage multi-cloud
golden image pipelines and deploy and manage multiple virtual machine (VM) images.
● Manage Kubernetes clusters on any public cloud (AWS, Azure, Google).
● Enforce policy-as-code before infrastructure components are created and provisioned.
● Automate the use of secrets and credentials in Terraform configurations.
● Codify existing infrastructure by importing it into an empty Terraform workspace.
● Migrate state to Terraform to secure it and easily share it with authorized collaborators.
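Multi-cloud provisioning, mentioned above, rests on declaring more than one provider in a single
configuration. A minimal sketch, assuming illustrative regions and valid credentials for each cloud:

# Sketch: AWS and Azure providers living in one Terraform configuration.
terraform {
  required_providers {
    aws     = { source = "hashicorp/aws" }
    azurerm = { source = "hashicorp/azurerm" }
  }
}

provider "aws" {
  region = "us-east-1" # assumed region
}

provider "azurerm" {
  features {} # required empty block for the Azure provider
}

# Resources for either cloud can now be declared side by side in this configuration.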
CHAPTER 7
APACHE
7.1 Introduction
Apache is free and open-source web server software used by roughly 40% of websites all over
the world. Its official name is Apache HTTP Server, and it is developed and maintained by the
Apache Software Foundation. Apache permits the owners of websites to serve content over the
web, which is the reason it is known as a "web server." First published in 1995, it is one of the
oldest and most reliable web servers available.
When someone wishes to visit a website, they type the domain name into their browser's address
bar. The web server brings back the requested files, performing as a virtual delivery person.
Apache is not a physical server; it is software that executes on a server, yet we describe it as a
web server. Its objective is to build a connection between the browsers of website visitors (Safari,
Google Chrome, Firefox, etc.) and the server. Apache is cross-platform software, so it can work
on both Windows and UNIX servers.
When a visitor loads a page on our website (the homepage, for instance, or our "About Us" page),
the visitor's browser sends a request to our server, and Apache returns a response along with each
requested file (images, files, etc.). The client and server communicate via the HTTP protocol, and
Apache is responsible for secure and smooth communication between the two machines.
Apache can be an excellent option for running a website on a versatile and stable platform,
although it comes with a few disadvantages we need to understand.
7.2 Pros:
● Stable and reliable software.
● Free and open-source, even for economic use.
● Regular security patches, frequently updated.
● Beginner-friendly, easy to configure.
● Flexible because of the module-based structure.
● Works out of the box with WordPress sites.
● Cross-platform (runs on Windows and Unix servers).
● Support is easily available, with a huge community in case of any issue.
7.3 Cons:
● Performance issues can appear on extremely heavy-traffic websites.
● The many configuration options can introduce security vulnerabilities.
CHAPTER 8
PROJECT
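A representative sketch of the kind of configuration this project combines: Terraform provisioning
an AWS EC2 instance that installs and serves a page with Apache on boot. The AMI ID, instance
type, and names are illustrative assumptions, not the project's actual values:

# Sketch: an EC2 instance bootstrapped with the Apache HTTP Server.
resource "aws_instance" "apache_web" {
  ami           = "ami-0123456789abcdef0" # placeholder; use an Amazon Linux 2 AMI for your region
  instance_type = "t2.micro"

  # Install and start Apache when the instance first boots.
  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
    echo "Deployed with Terraform" > /var/www/html/index.html
  EOF

  tags = {
    Name = "terraform-apache-demo"
  }
}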
CHAPTER 9
REFERENCES
"The DevOps Handbook: How to Create World-Class Agility, Reliability, & Security in
Technology Organizations" by Gene Kim, Jez Humble, Patrick Debois, and John Willis
"Docker Deep Dive" by Nigel Poulton
Upfalirs Pvt. Ltd.
GeeksforGeeks - https://www.geeksforgeeks.org/devops-tutorial/
Javatpoint - https://www.javatpoint.com/devops-interview-questions