
Continuous Integration Overview

“Eliminate blind spots so you can build and deliver software more rapidly.”

Integrate at least daily

Continuous Integration (CI) is a development practice that requires developers to integrate code
into a shared repository several times a day. Each check-in is then verified by an automated
build, allowing teams to detect problems early.

By integrating regularly, you can detect errors quickly, and locate them more easily.

Solve problems quickly

Because you’re integrating so frequently, there is significantly less back-tracking to discover where things went wrong, so you can spend more time building features.

Continuous Integration is cheap. Not continuously integrating is costly. If you don’t follow a
continuous approach, you’ll have longer periods between integrations. This makes it
exponentially more difficult to find and fix problems. Such integration problems can easily knock
a project off-schedule, or cause it to fail altogether.

Continuous Integration brings multiple benefits to your organization:

● Say goodbye to long and tense integrations
● Increase visibility, enabling greater communication
● Catch issues fast and nip them in the bud
● Spend less time debugging and more time adding features
● Proceed in the confidence you’re building on a solid foundation
● Stop waiting to find out if your code’s going to work
● Reduce integration problems, allowing you to deliver software more rapidly

More than a process

Continuous Integration is backed by several important principles and practices.

The Practices:

● Maintain a single source repository
● Automate the build
● Make your build self-testing
● Every commit should build on an integration machine
● Keep the build fast
● Test in a clone of the production environment
● Make it easy for anyone to get the latest executable
● Everyone can see what’s happening
● Automate deployment

How to do it

● Developers check out code into their private workspaces.
● When done, they commit their changes to the repository.
● The CI server monitors the repository and checks out changes when they occur.
● The CI server builds the system and runs unit and integration tests.
● The CI server releases deployable artefacts for testing.
● The CI server assigns a build label to the version of the code it just built.
● The CI server informs the team of the successful build.
● If the build or tests fail, the CI server alerts the team.
● The team fixes the issue at the earliest opportunity.
● Continue to integrate and test continually throughout the project.
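The flow above can be condensed into a short sketch. This is purely illustrative Python, not a real CI server: `poll`, `build`, `tests`, and `notify` are hypothetical stand-ins for the repository monitor, build system, test suite, and notification channel that a tool like Jenkins provides.

```python
# Minimal sketch of one CI server cycle: poll, build, test, notify.
# All four callables are hypothetical stand-ins for illustration only.

def run_ci_cycle(poll, build, tests, notify):
    """Run one poll-build-test-notify cycle; return True on success."""
    changes = poll()                        # check for new commits
    if not changes:
        return None                         # nothing to do this cycle
    build_ok = build(changes)               # build the system
    tests_ok = build_ok and tests(changes)  # run unit/integration tests
    if build_ok and tests_ok:
        notify("Build %s succeeded" % changes)
    else:
        notify("Build %s FAILED - fix at the earliest opportunity" % changes)
    return build_ok and tests_ok

# Example wiring with toy stand-ins:
messages = []
result = run_ci_cycle(
    poll=lambda: "r42",            # pretend commit r42 just arrived
    build=lambda rev: True,        # pretend the build passes
    tests=lambda rev: True,        # pretend all tests pass
    notify=messages.append,        # collect notifications in a list
)
```

The key property this mirrors is that every change triggers the same build-and-test gate, and the team is told the outcome either way.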

Team Responsibilities

● Check in frequently
● Don’t check in broken code
● Don’t check in untested code
● Don’t check in when the build is broken
● Don’t go home after checking in until the system builds

Many teams develop rituals around these policies, meaning the teams effectively manage themselves, removing the need to enforce policies from on high.

The process and practices in detail:

Maintain a code repository

This practice advocates the use of a revision control system for the project's source code. All
artifacts required to build the project should be placed in the repository. In this practice and in
the revision control community, the convention is that the system should be buildable from a
fresh checkout and not require additional dependencies. Extreme Programming advocate Martin
Fowler also mentions that where branching is supported by tools, its use should be minimized.
Instead, it is preferred for changes to be integrated rather than for multiple versions of the
software to be maintained simultaneously. The mainline (or trunk) should be the place for the
working version of the software.

Automate the build


A single command should have the capability of building the system. Many build-tools, such as
make, have existed for many years. Other more recent tools are frequently used in continuous
integration environments. Automation of the build should include automating the integration,
which often includes deployment into a production-like environment. In many cases, the build
script not only compiles binaries, but also generates documentation, website pages, statistics
and distribution media (such as Debian DEB, Red Hat RPM or Windows MSI files).

Make the build self-testing

Once the code is built, all tests should run to confirm that it behaves as the developers expect it
to behave.
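As a minimal illustration of this gate (not tied to any particular build tool), a build step can run the whole suite and report success only if every test passes; `SampleTests` here stands in for a real project's tests.

```python
# Sketch of a self-testing build: the build is green only if the full
# test suite passes. SampleTests is an illustrative stand-in.
import unittest

class SampleTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

def self_testing_build():
    """Return True only if every test passes, mirroring a CI gate."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SampleTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

build_green = self_testing_build()
```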

Everyone commits to the baseline every day

By committing regularly, every committer can reduce the number of conflicting changes.
Checking in a week's worth of work runs the risk of conflicting with other features and can be
very difficult to resolve. Early, small conflicts in an area of the system cause team members to
communicate about the change they are making. Committing all changes at least once a day (once per feature built) is generally considered part of the definition of Continuous Integration. In addition, performing a nightly build is generally recommended. These are lower bounds; the typical frequency is expected to be much higher.

Every commit (to baseline) should be built

The system should build commits to the current working version in order to verify that they
integrate correctly. A common practice is to use Automated Continuous Integration, although
this may be done manually. For many, continuous integration is synonymous with using
Automated Continuous Integration where a continuous integration server or daemon monitors
the revision control system for changes, then automatically runs the build process.

Keep the build fast

The build needs to complete rapidly, so that if there is a problem with integration, it is quickly
identified.

Test in a clone of the production environment

A test environment that differs from the production environment in significant ways can allow tested systems to fail once deployed to production. However, building a complete replica of the production environment is often cost-prohibitive. Instead, the pre-production environment should be a scaled version of the actual production environment, alleviating costs while preserving the technology stack's composition and nuances.

Make it easy to get the latest deliverables

Making builds readily available to stakeholders and testers can reduce the amount of rework
necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing
reduces the chances that defects survive until deployment. Finding errors earlier also, in some
cases, reduces the amount of work necessary to resolve them.

Everyone can see the results of the latest build

It should be easy to find out whether the build breaks and, if so, who made the relevant change.

Automate deployment

Most CI systems allow the running of scripts after a build finishes. In most situations, it is
possible to write a script to deploy the application to a live test server that everyone can look at.
A further advance in this way of thinking is Continuous deployment, which calls for the software
to be deployed directly into production, often with additional automation to prevent defects or
regressions.

Continuous Deployment:

Continuous Deployment is closely related to Continuous Integration and refers to the release
into production of software that passes the automated tests.

Essentially, “it is the practice of releasing every good build to users” explains Jez Humble,
author of Continuous Delivery.

By adopting both Continuous Integration and Continuous Deployment, you not only reduce risks
and catch bugs quickly, but also move rapidly to working software.

With low-risk releases, you can quickly adapt to business requirements and user needs. This
allows for greater collaboration between ops and delivery, fuelling real change in your
organisation, and turning your release process into a business advantage.

Jenkins

Jenkins is one open source tool to perform continuous integration. The basic functionality of
Jenkins is to monitor a version control system and to start and monitor a build system (for
example, Apache Ant or Maven) if changes occur. Jenkins monitors the whole build process
and provides reports and notifications to alert maintainers on success or errors.

Jenkins can be extended by additional plug-ins, e.g., for building and testing Android
applications.
Requirements for using Jenkins

To use Jenkins you need:

An accessible source code repository, e.g., a Git repository, with your code checked in.

A working build script, e.g., a Maven script, checked into the repository.

Jenkins can be started via the command line or can run in a web application server. Under Linux you can also install Jenkins as a system service.

Basic Work Flow

● Commit and push new code to your repo.
● Jenkins detects the commit and runs a full series of tests (customized by the developer).
● If the tests and build are successful, the new code gets deployed. If they fail, the old code continues to run with no downtime related to the push.
● Users can review the persistent build history maintained by Jenkins.

How does a Jenkins build work?

During a build the following steps take place:

● User issues a git push.
● Jenkins is notified that a new push is ready.
● Jenkins runs the build.
● Content from the originating app is downloaded to the builder app through Git and rsync (Git for source code and rsync for existing libraries).
● ci_build.sh is called from the Jenkins shell, which sets up the builder app for the Jenkins environment.
● Any built-in bundling steps (PHP Pear processing, Python virtualenv, etc.) are performed.
● The build is executed on the Jenkins builder.
● Any additional desired steps are executed from the Jenkins shell (Maven build, Gem install, test cases, etc.).
● Jenkins stops the currently running application.
● Jenkins rsyncs all new content over to the originating application.
● Deploy is executed on the originating application by calling the deploy.sh script.
● Jenkins starts the originating application.
● post_deploy is executed on the originating application.
● Jenkins archives build artifacts for later reference.

The build artifacts, however, will still exist in Jenkins and can be viewed there.
Users can look at the build job by clicking on it in the Jenkins interface and going to "configure".
It is the Jenkins build job's responsibility to stop, sync, and start the application once a build is complete.


Server Configuration

OS of choice: Ubuntu 14.04

Why Ubuntu?

Ubuntu is the most popular server OS used for Jenkins build servers. One advantage is that all the required packages are readily available in Ubuntu, whether it is the Sun JDK or OpenJDK for Java, or the latest version of Apache or Nginx. Another advantage is that, because more people use Ubuntu, there is more support for it in online forums and elsewhere on the web. We can use Ubuntu 14.04 LTS, which has long-term support for security updates.

Server Requirements:

There is no single ideal hardware configuration for a Jenkins server, because it depends purely on your needs and the number of deployments happening in a day. A typical workflow of around 1,000 builds per day will be fine on a server with 4 GB of RAM.

Jenkins stores its data in files such as XML, so there is no need for a database like MySQL or PostgreSQL. The only thing we need to consider is having enough disk space for the Jenkins home directory, /var/lib/jenkins, where Jenkins saves its data and configuration. So when we create the server, we need to allocate enough hard disk space for the root volume.

100 GB for the root device is good enough for a normal workflow (we can select the size of the root device while creating the EC2 instance from the AWS console). Another option is to create /var using LVM (Logical Volume Manager), so that we can add more EBS volumes whenever there is a need for more disk space and expand it on the fly.

Cloud Solution:

We are already using Amazon AWS and are familiar with it, so we will go ahead with AWS as the cloud of choice. Another advantage is the ability to vertically scale the EC2 instance in the future (by upgrading to a bigger instance type). Components such as an Elastic IP and a VPC need to be considered in this setup for easy upgrades of the instance type.

In a nutshell:

OS of choice: Ubuntu Server 14.04 LTS
RAM: 4 GB
Cloud solution: AWS with EC2, VPC and EBS
Instance type: m3.medium
Region: US North Virginia
Price: $51.34 per month (on demand) and $20.35 per month for 3 years heavily reserved
Hard disk: 100 GB (or /var with LVM)
Database: Jenkins doesn’t need a database, as it stores data as files in /var/lib/jenkins

Installation

So we have decided that we are going with Ubuntu Server 14.04 LTS for the Jenkins installation. Now the fun part. There are lots of ways to install Jenkins, such as setting up a Tomcat server and using jenkins.war, or OS-specific installation using apt-get, yum, etc.
As we are using Ubuntu, we will go ahead with the apt-get based installation of Jenkins.

Advantages:

● Security updates come with the package manager
● No need to search for specific configuration files, as all are stored in /etc/default/jenkins
● Easy migration of configuration files to another server
● Ease of administration using init scripts located at /etc/init.d/jenkins

Step-by-step installation:

1. Install OpenJDK 7

sudo apt-get update

sudo apt-get install openjdk-7-jre

vim .bashrc

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64;
export JRE_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64;
export PATH=/usr/sbin:/usr/bin:/sbin:/bin:$JRE_HOME/bin;

source .bashrc

2. Install Jenkins

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

Jenkins will be launched as a daemon on startup. See /etc/init.d/jenkins for more details.

The ‘jenkins‘ user is created to run this service.

The log file is placed in /var/log/jenkins/jenkins.log. Check this file when troubleshooting Jenkins.

/etc/default/jenkins captures the configuration parameters for the launch.


By default, Jenkins listens on port 8080. Access this port with your browser to start configuration.

That’s all in the installation part. Now you can access it from the browser using

http://ip-address-of-instance:8080/

Ansible for Capacity Provisioning and Management

Ansible is a radically simple IT orchestration engine that automates configuration management, application deployment, and many other IT needs. Ansible models your IT infrastructure by looking at the comprehensive architecture of how all of your systems inter-relate, rather than just managing one system at a time. It uses no agents and no additional custom security infrastructure, so it’s easy to deploy, and most importantly, it uses a very simple language (YAML, in the form of Ansible playbooks) that allows you to describe your automation jobs in a way that approaches plain English.

How Ansible Works

Ansible works by connecting to your nodes and pushing out small programs, called “Ansible
Modules” to them. These programs are written to be resource models of the desired state of the
system. Ansible then executes these modules (over SSH by default), and removes them when
finished. Your library of modules can reside on any machine, and there are no servers,
daemons, or databases required. Typically you’ll work with your favorite terminal program, a text
editor, and probably a version control system to keep track of changes to your content.
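To make the "resource model of the desired state" idea concrete, here is a toy Python sketch of how such a module behaves. It is not a real Ansible module (real ones use the AnsibleModule helper and an argument-passing protocol); `ensure_state` and its arguments are invented for illustration.

```python
# Toy sketch of an Ansible-style module: compare current state with
# desired state, do work only if needed, and report the result as JSON.
# ensure_state is a made-up illustrative function, not Ansible's API.
import json

def ensure_state(current, desired):
    """Report module-style JSON: 'changed' is true only if work was needed."""
    changed = current != desired
    return json.dumps({"changed": changed, "state": desired})

# Running twice with the same desired state shows the idempotence that
# Ansible modules aim for: the second run reports changed=False.
first = json.loads(ensure_state(current="stopped", desired="started"))
second = json.loads(ensure_state(current="started", desired="started"))
```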

Managing Ansible Inventory using simple text files

By default, Ansible represents what it manages using a very simple INI file that puts all of your managed machines in groups of your own choosing. There’s never any hassle deciding why a particular machine didn’t get linked up due to obscure NTP or DNS issues.

[webservers]
www1.example.com
www2.example.com

[dbservers]
db0.example.com
db1.example.com

Assign variables in simple text files (in a subdirectory called 'group_vars/' or 'host_vars/', or directly in the inventory file). Alternatively, use a plugin to pull your inventory from data sources like Cobbler, OpenStack, or Amazon EC2.
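For example, variables for the [webservers] group above could live in a group_vars file named after the group; the variable names here are invented for illustration, not required by Ansible:

```yaml
# group_vars/webservers - applies to every host in the [webservers] group
# (http_port and max_clients are example variables, not Ansible built-ins)
http_port: 80
max_clients: 200
```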
Basics of using Ansible

Quickly contact one or more hosts. Again, there’s no need for anything to be running or
preinstalled. If you have a new cloud instance with an SSH key or PEM file, you can talk to it
right away.

ansible all -m ping
ansible webservers -a "/usr/bin/command arg1 arg2" --limit phoenix_datacenter --forks 25
ansible foo.example.com -m yum -a "name=httpd state=installed"
ansible foo.example.com -a "/usr/sbin/reboot"

Note that we have access to state-based resource modules as well as running raw commands.
These modules are extremely easy to write and Ansible ships with a fleet of them so most of
your work is already done. These modules are shared with Ansible playbooks, our configuration
management and orchestration language.

Ansible Playbooks:

Ansible Playbooks are a simple and powerful automation language, used by Ansible to do the provisioning. Playbooks can finely orchestrate multiple slices of your infrastructure topology, with very detailed control over how many machines to tackle at a time. They are perhaps the most interesting part of Ansible. Ansible’s approach to orchestration is one of bare-minimum simplicity, as your automation code should make perfect sense to you years down the road, with very little to remember about special syntax or features.
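As a small sketch, a playbook targeting the [webservers] group from the inventory shown earlier might look like this; the package and service names are examples only, not taken from the project's repository:

```yaml
# site.yml - illustrative playbook only
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure Apache is installed
      yum: name=httpd state=installed
    - name: Ensure Apache is running
      service: name=httpd state=started
```

Run with `ansible-playbook site.yml`; each task is a module invocation of the desired-state kind described above, so re-running the playbook changes nothing once the hosts already match.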

Ansible + Jenkins in our Environment

We have our own private Bitbucket repository for code hosting, and we update our Ansible playbooks there.

The Bitbucket repository is located at:

https://bitbucket.org/fissioncodewarriors/fissionlabsce_playbooks

Our repository contains playbooks for creating an m3.medium Ubuntu instance in the Amazon VPC Singapore region. It will also install Jenkins, the required Jenkins plugins, Jenkins users, etc.

Jenkins and Plugins


Our Jenkins server address is http://54.251.118.253/

username : jadmin
password : jadmin

We can add needed functionality to Jenkins using the Jenkins plugin system. There are a lot of useful plugins available for Jenkins.

We are using the below plugins in our environment:

https://wiki.jenkins-ci.org/display/JENKINS/BitBucket+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/SSH+Credentials+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/JaCoCo+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Checkstyle+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/PMD+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/FindBugs+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Javadoc+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Subversion+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Gradle+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Maven+Project+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/JIRA+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/JDepend+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/S3+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Template+Project+Plugin

Some of the other useful plugins for Jenkins are given below:

1. Backup Plugin
2. Even Scheduler Plugin
3. Folders Plugin
4. Folders Plus Plugin
5. Label Throttle Plugin
6. Fast Archiver Plugin
7. Role Based Access Control Plugin
8. Skip Next Build Plugin
9. Template Plugin
10. Validated merge Plugin
11. Restart Aborted Builds Plugin
12. Long-Running Build Plugin
13. Consolidated Build View Plugin
14. Support Plugin
15. Monitoring Plugin
