
CONTINUOUS INTEGRATION

Developers working in a team write a lot of code while creating software.

It's a best practice to store all this code in a centralized place.

This centralized repository lives in a version control system, like Git hosted on GitHub.

Every day, developers pull and push code to such repositories several times.

So code changes, or code commits, happen continuously.

This code will then be moved to a build server.

On the build server, the code is built, tested, and evaluated, which generates the software, or what we call an artifact at this stage.

This artifact or software will be stored in a software repository.

An artifact or software is really an archive of files generated from the build process, based on the programming language.

This artifact will be packaged in a specific format.

The artifact packaging format could be WAR or JAR in Java, DLL/EXE/MSI in Windows, or even a ZIP or tarball.

From the repository, it will be shipped to servers for further testing.

After deploying this artifact on the servers, software testers can conduct
further testing, and once

they approve, it can be shipped to production servers.

So that's how it works. Or does it?

Let's dig in and find out.

These developers are creating a software module and have worked for three weeks straight.

That's a lot of code, really.

Oh sure, you can take a break.

Your job's done for now.


And as per the process, all this code will be fetched by the build server, where it is built and tested.

And oh boy: lots of errors, bugs, conflicts, build failures.

Now developers have to fix all these defects and rewrite the code in several places.

A lot of rework, really.

This could have been much easier if the problems had been detected very early in the process, but instead code was collected, defects and all, for three weeks.

And now, yeah, you have to fix all that.

So the code is getting merged into the repository, but not really getting
integrated.

The solution to this is a very simple and continuous process: after every single commit from the developers, the code should be built and tested, so there is no waiting around collecting code with bugs.

But developers commit several times a day, so it's not humanly possible to do a build and release several times a day.

I mean the manual process, of course. So the answer is simple: just automate it.

So when a developer commits any code, an automated process will fetch the code, build it, test it, and send a notification if there is any failure.

As soon as a developer receives a failure notification, he or she will fix the code and commit it again.

So again, the new changes are built and tested, and if everything is good, the result can be versioned and stored in a software repository.

And it's all automated. Like this.

Any defect can be caught as soon as it's merged into the codebase.

Let's see it in a cyclic view.


This automated process is called continuous integration, or CI in short.

The goal of CI is to detect defects at a very early stage, so they do not multiply.

Developers use IDEs for coding.

These IDEs are integrated with a version control system to store and version the code.

Build tools are chosen based on the programming language.

Software repositories store the artifacts.

And continuous integration tools tie everything together, as the sketch below illustrates.
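To make this concrete, here is a minimal shell sketch of what such an automated process might do on every commit. This is only an illustration, not any specific tool's pipeline: the repository URL, the Maven build, the artifact path, and the mail address are all placeholder assumptions.

    #!/bin/bash
    # Hypothetical sketch of a CI job triggered on every commit.
    # Repo URL, Maven build, artifact path and mail address are placeholders.
    set -e

    git clone https://example.com/team/app.git workspace   # fetch the latest commit
    cd workspace
    if mvn package > build.log 2>&1; then                  # build and run unit tests
        # version and store the artifact in a (placeholder) repository path
        cp target/app.war /opt/artifact-repo/app-$(git rev-parse --short HEAD).war
    else
        mail -s "Build FAILED" dev-team@example.com < build.log   # notify the developers
        exit 1
    fi

A CI tool like Jenkins essentially wraps this loop with commit triggers, build history, and notifications.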

CONTINUOUS DELIVERY
Continuous delivery is an extension of continuous integration.

We have seen that continuous integration is the automation of our code build and test: when developers commit any code, it will be automatically built and tested.

If everything is good, the artifact generated from this process will be stored in software repositories.

The goal of CI is to detect defects at a very early stage so they do not multiply.

The Ops team will get regular requests to deploy the artifacts generated by the CI process on servers for further testing.

The Ops team, with all the info they've been given, deploys the artifacts to the servers. At times the deployment fails, which leads to higher lead time.

The Dev and Ops teams need to work together to fix such deployment failures.

And this happens on and off.

We have to understand that in agile development, there will be regular code changes which need to be deployed on servers for further testing.

Deployment is not just about shipping the software to the servers.


It's more than that.

A deployment could also mean server provisioning, installing dependencies on servers, configuration changes, network or firewall rule changes, and then deploying the artifact to the server. And there could be many more things.

The Ops team will be flooded with such requests, as the CI process generates artifacts quickly and regularly.

After the manual deployment, information is sent to the QA team for testing. After conducting the tests, the QA team sends information back.

There is too much human intervention and manual approval in this process.

So, as this Terminator says: automate it, and save yourself time and failures.

Any and every step in deployment should be automated.

There are a lot of automation tools available in the market: Ansible, Puppet, and Chef for system automation; Terraform and CloudFormation for cloud infrastructure automation; Jenkins and Octopus Deploy for CI/CD automation.

And there are many others to choose from based on your need.

Software testing also has to be automated: any test process, like functional, load, performance, database, network, and security testing, and any other test cases.

So the Ops team will write automation code for deployment, testers will write automation code for software testing, and both are kept in sync with the developers' source code.

So now we have a CI process integrated with deployment automation, which triggers software testing: all three teams and their processes integrated together.

Continuous delivery process.

Have a look.
Automate every step and then stitch everything together.

That gives you continuous delivery automation.
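As a rough illustration, one automated deployment step in such a pipeline might look like the shell sketch below. The server name, artifact path, service name, and health-check URL are all placeholder assumptions; in practice a tool like Ansible or Jenkins would orchestrate this:

    #!/bin/bash
    # Hypothetical sketch of one automated deployment step in a delivery pipeline.
    # Server name, paths, service name and health-check URL are placeholders.
    set -e

    scp target/app.war deploy@qa-server:/opt/tomcat/webapps/   # ship the artifact to the test server
    ssh deploy@qa-server 'sudo systemctl restart tomcat'       # restart the service to pick up the new build
    curl --fail http://qa-server:8080/app/health               # automated smoke test before the QA suites run

If the smoke test fails, the pipeline stops and notifies the teams, just like a failed CI build.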

VIRTUALIZATION
In this lecture, we will talk about what virtualization is.

And this will help you understand what cloud computing is, and also what containers and Docker are.

So let's see.

One computer does the job of multiple computers.

No, I'm not talking about multitasking.

I am talking about multiple OSes.

One computer can run multiple operating systems at the same time, in parallel.

That was not the case before virtualization came along.

Back then, we also had software, services, and apps running, and to run an app or a service like Tomcat, Apache httpd, or a MySQL database, we need servers.

At that time, the only option was physical computers.

Like the one you would be using right now to watch this video.

We have much bigger computers in data centers.

And the idea was always one service, one server.

Or should I say, one main service = one server.

And this is for isolation.

So if your database server is running on a machine, you don't want to run a web server or web service on the same computer.

If you run multiple main services on one server, it's like "putting all our eggs in one basket," which can lead to catastrophe.

So this is called isolation.

And servers are always over-provisioned.

That means if we need 8 GB of RAM, we will go for 12 GB.

IT team will always over-provision servers.

But server resources are mostly underutilized.

You might ask: if it's underutilized, then why is it over-provisioned?

Well, the team will always go for extra just in case.

So they don't run out of resources.

But all this results in huge capital expenditure and operational expenditure.

I'm talking about physical servers over here.

We have to procure it, stack it, rack it, install the operating system, and maintain it.

So if you have ten services in a project, you need ten servers minimum, and for high availability you need 20 at minimum.

And definitely more than that.

So it was kind of a big deal to run an IT project.

Then came VMware, with the concept of virtualization.

VMware created tools which allow one computer to run multiple operating systems.

And that's how we can isolate, right?

So instead of running multiple main services in one operating system, we can run multiple operating systems on one computer and run each service on top of its own OS.

That way they will be isolated.

And if you're thinking: isn't this again the problem of multiple eggs in one basket?

So let me tell you here: these physical computers can be clustered together, so you can distribute your virtual machines.

More on that later.

For now, I just want to tell you that this concern is already taken care of.

So virtualization partitions your physical resources into virtual resources.

So to set up and run an operating system, you need a physical computer.

But with virtualization, you can create a virtual computer inside the physical machine, and in fact multiple virtual computers in one physical machine.

Think of them as baby computers living in the physical machine.

And these virtual machines are isolated from each other because they
have their own operating systems.

And I'm talking about server virtualization, virtual machines.

But there are other kinds of virtualization as well.

You have network virtualization, storage virtualization.

So that is how it may look.

You have the hardware, the physical computer.

On top of that, you will have a tool called a hypervisor, the software, and on that you can create virtual machines, each with its own operating system.

And you can run your main service, your application, in this OS.

So they are isolated from each other.

Okay.

Let's discuss some terminology.

Host OS: the operating system of the physical machine, the physical computer.

So if you're using a laptop or a desktop to watch this video right now, the operating system of that machine is the host operating system.


Guest OS: the operating system of the virtual machine.

Virtual machines are also sometimes referred to as guest machines.

So host, guest. Got that? We will be using these terms a lot.

VM is short for virtual machine.

Snapshot: a way of taking a backup of the virtual machine.

Well, we say machine, virtual machine, but it's really just a set of files, which can be backed up very easily.

And there will be a snapshot concept in every virtualization technology to back up your virtual machines.
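For example, with VirtualBox, which we will use later in this course, a snapshot can be taken and restored from the command line. The VM name and snapshot name here are just examples:

    VBoxManage snapshot "my-vm" take "clean-install"      # back up the VM's current state
    VBoxManage snapshot "my-vm" restore "clean-install"   # roll the VM back to that state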

Hypervisor: the tool or software that lets us create virtual machines.

Hypervisor enables virtualization.

There are two types of hypervisor.

You have Type 1, which is also called a bare-metal hypervisor.

It runs directly on the physical computer, like an operating system; just as you would install Windows 10 or macOS, you install the hypervisor on the physical computer itself.

Now, this is only for production, and it won't let you use the computer for other purposes.

For example, VMware ESXi or the Xen hypervisor.

The other type of hypervisor is Type 2, which we will be using in this course.

It runs as a software which you can install on any computer.

This is just for learning and testing purposes, because obviously you're not going to run production machines on your laptop.

Some examples: Oracle VM VirtualBox, which we will be using in this course, VMware Server, and many others.

So the first diagram is the Type 1 hypervisor.

You have the computer; on top of that, your hypervisor; and then you create your virtual machines, install your operating systems, and run your apps.

This is for production, and it can also be clustered.

Type 1 hypervisors can be clustered together so you can distribute your virtual machines across the cluster of hypervisors.

So if one of the hypervisors goes down, another can take over and run your virtual machines.

A Type 2 hypervisor just runs like a software application on your computer.

This is for learning and testing purposes. You will have your computer.

On top of that, you will have an operating system like Windows 10, macOS, or even Linux.

And just like you install any other software, you're going to install a Type 2 hypervisor on it.

Then you can create virtual machines and install your operating systems.

Now, there is a hypervisor called Hyper-V from Microsoft, and it can easily be mistaken for a Type 2 hypervisor.

But it is a Type 1 hypervisor.

Just a tip.

Anyways.

We will see in the next lecture how to do virtualization, or how to create virtual machines on a computer.

And we are also going to automate that setup.

The reason for doing these exercises and creating virtual machines is so you can practice Linux and a few other upcoming tools in this course.

VAGRANT
Vagrant is a tool designed to streamline and automate virtual machine
(VM) management by simplifying the process of creating, configuring, and
destroying VMs. It operates on top of hypervisors like VMware or
VirtualBox, using these systems to handle the actual virtualization while
providing a convenient layer for automation.

Key Points on Vagrant:

1. Purpose: Automates VM lifecycle management (creation, configuration, provisioning, and cleanup).

2. Architecture: Vagrant works alongside a hypervisor (e.g., VirtualBox), and VM settings are stored in a configuration file named Vagrantfile.

3. Vagrantfile: This text file contains all VM settings (e.g., CPU, RAM, IP configuration) and provisioning instructions (e.g., setting up servers or databases); see the sample sketch after this list.

4. Vagrant Boxes: Pre-configured VM images called “boxes” are available on Vagrant Cloud. These images save time as they come with an OS pre-installed.

5. Provisioning: Vagrant allows you to automate post-OS installation steps by specifying scripts or commands in Vagrantfile.
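As an illustration, a minimal Vagrantfile might look like the sketch below. The box name, IP address, memory size, and provisioning commands are example values, not prescribed by Vagrant itself:

    # Sample Vagrantfile (Ruby syntax); all values here are example assumptions.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/focal64"                          # pre-built box from Vagrant Cloud
      config.vm.network "private_network", ip: "192.168.56.10"  # static private IP for the VM
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 1024                                        # RAM for the VM, in MB
        vb.cpus   = 2
      end
      # Provisioning: commands that run automatically after the OS boots
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y apache2
      SHELL
    end

Running vagrant up in the folder containing this file would create and provision the VM in one step.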

Common Commands:

- Initialize: vagrant init <box_name> to set up a new Vagrantfile with the specified box.

- Start VM: vagrant up to create or start a VM based on Vagrantfile.

- Login to VM: vagrant ssh to access the VM’s command line.

- Stop VM: vagrant halt to power off the VM.

- Destroy VM: vagrant destroy to delete the VM completely.

Steps to Use Vagrant:

1. Create a Folder: Use mkdir to set up a directory where Vagrant will store VM-related files.

2. Set Up Vagrantfile: Run vagrant init <box_name>, specifying the desired box name.

3. Start VM: Run vagrant up to download the box (if not already present) and create the VM.

4. Access VM: Use vagrant ssh to log into the VM and sudo -i to switch to root, if needed.

5. Stop or Destroy VM: Use vagrant halt to power off or vagrant destroy to delete the VM when done (a full session is sketched below).
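Put together, a typical session looks something like this; the folder name and box are just examples:

    mkdir my-first-vm && cd my-first-vm   # folder where Vagrant keeps this VM's files
    vagrant init ubuntu/focal64           # generates a Vagrantfile for the chosen box
    vagrant up                            # downloads the box (first time only) and boots the VM
    vagrant ssh                           # log in to the guest machine
    # inside the VM, run sudo -i to become root, and exit to return to the host
    vagrant halt                          # power the VM off
    vagrant destroy                       # delete the VM completely when done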

Tips:

- Use Git Bash on Windows or Terminal on Mac for executing Vagrant commands.

- Managing Multiple VMs: You can create separate folders for different VMs or use vagrant global-status to view the status of all Vagrant-managed VMs.

Vagrant greatly simplifies managing environments, especially when deploying similar setups on multiple machines or replicating environments across teams. It’s especially helpful for developers who need to quickly set up, modify, or tear down development environments.
