
Sinhgad Technical Education Society’s

Sinhgad Institute of Management & Computer Application

DevOps
IT-41
DevOps

Chapter-1

Introduction to DevOps

Dr. Poonam Sawant


Topic Content

1.1. Define DevOps
1.2. What is DevOps
1.3. SDLC models, Lean, ITIL, Agile
1.4. Why DevOps?
1.5. History of DevOps
1.6. DevOps Stakeholders
1.7. DevOps Goals
1.8. Important terminology
1.9. DevOps perspective
1.10. DevOps and Agile
1.11. DevOps Tools
1.12. Configuration management
1.13. Continuous Integration and Deployment
1.14. Linux OS Introduction
1.15. Importance of Linux in DevOps
1.16. Linux Basic Command Utilities
1.17. Linux Administration
1.18. Environment Variables
1.19. Networking
1.20. Linux Server Installation
1.21. RPM and YUM Installation
Introduction to DevOps

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than traditional software development and infrastructure management processes.
Introduction to DevOps

•The term DevOps is a combination of two words, Development and Operations. DevOps is a practice that allows a single team to manage the entire application development life cycle, that is, development, testing, deployment, and monitoring.

•The ultimate goal of DevOps is to decrease the duration of the system’s development life cycle
while delivering features, fixes, and updates frequently in close synchronization with business
objectives.

•DevOps is a software development approach with the help of which you can develop superior
quality software quickly and with more reliability. It consists of various stages such as continuous
development, continuous integration, continuous testing, continuous deployment, and
continuous monitoring.

DevOps Stakeholders:

Architects, business representatives, customers, product owners, project managers, quality assurance (QA), testers and analysts, and suppliers.
History of DevOps

History of DevOps

Before DevOps, we had the following approaches to software development.

1. SDLC Models: Waterfall Model


2. Agile
3. Lean
4. ITIL

1. SDLC: The Software Development Life Cycle (SDLC) is a conceptual model used in project management that defines the stages included in an information system development project, from an initial feasibility study to the maintenance of the completed application.
History of DevOps

Waterfall Model
•The waterfall model is a software development model that is straightforward and linear. This model follows a top-down approach.

•This model has various phases, starting with Requirements Gathering and Analysis. This is the phase where you get the requirements from the client for developing an application. After this, you analyze these requirements.
History of DevOps

•The next phase is the Design phase, where you prepare a blueprint of the software. Here, you think about what the software is actually going to look like.

•Once the design is ready, you move further with the Implementation phase where you begin
with the coding for the application. The team of developers works together on various
components of the application.

•Once you complete the application development, you test it in the Verification phase. There are
various tests conducted on the application such as unit testing, integration testing, performance
testing, etc.

•After all the tests on the application are completed, it is deployed onto the production servers.

•At last, comes the Maintenance phase. In this phase, the application is monitored for
performance. Any issues related to the performance of the application are resolved in this phase.
History of DevOps
Advantages of the Waterfall Model:

•Simple to understand and use


•Allows for easy testing and analysis
•Saves a significant amount of time and money
•Good for small projects if all requirements are clearly defined
•Allows for departmentalization & managerial control

Disadvantages of Waterfall Model:

•Risky and uncertain


•Lack of visibility of the current progress
•Not suitable when the requirements keep changing
•Difficult to make changes to the product when it is in the testing phase
•The end product is available only at the end of the cycle
•Not suitable for large and complex projects
History of DevOps

Agile Methodology

Agile methodology is an iteration-based software development approach where the software project is broken down into various iterations or sprints.

Each iteration has phases like the waterfall model, such as Requirements Gathering, Design, Development, Testing, and Maintenance. The duration of each iteration is generally 2-8 weeks.
History of DevOps

Agile Process
•In Agile, a company releases the application with some high priority features in the first iteration.
•After its release, the end-users or the customers give you feedback about the performance of the
application.
•Then you make the necessary changes to the application along with some new features, and the application is released again; this is the second iteration.
•You repeat this entire procedure until you achieve the desired software quality.

Advantages of Agile Model


•It adapts well to changing requirements
•Fixing errors early in the development process makes the process more cost-effective
•Improves the quality of the product and makes it highly error-free
•Allows for direct communication between the people involved in the software project
•Highly suitable for large & long-term projects
•Minimum resource requirements & very easy to manage

Disadvantages of Agile Model


•Highly dependent on clear customer requirements
•Quite Difficult to predict time and effort for larger projects
•Not suitable for complex projects
•Lacks documentation efficiency
•Increased maintainability risks
History of DevOps

Lean Software Development (LSD) is an agile framework used to streamline and optimize the software development process.
It may also be referred to as the Minimum Viable Product (MVP) strategy, as these ways of thinking are very much alike: both intend to speed up development by focusing on new deliverables.
Toyota is credited with inspiring the lean development approach, which is meant to optimize production and minimize waste.
Seeing Toyota’s lean approach, many other manufacturing teams started to follow the same strategy. It was first adopted in software development in 2003.
History of DevOps

Advantages of LSD:
1. LSD removes unnecessary process stages when designing software, so it acts as a time saver and simplifies the development process.
2. With a focus on MVP, Lean Software Development prioritizes essential functions, so it removes the risk of spending time on valueless builds.
3. It increases the involvement of your team as more members participate, due to which the overall workflow becomes optimized and losses are reduced.

Disadvantages of LSD:

1. It is not as scalable as other frameworks, since it strongly depends on the team involved.
2. It is hard to keep pace, so it is not easy for developers to work with team members, as conflicts may occur between them.
3. It leads to a difficult decision-making process, as it is mandatory for customers to clearly set their requirements so that development is not interrupted.
History of DevOps

ITIL
1. ITIL is a set of well-defined guidelines that helps software professionals deliver the best IT services. ITIL guidelines are best practices that are observed, gathered, and put together over time for delivering quality IT services. The full form of ITIL is Information Technology Infrastructure Library.
2. Popular IT services covered by ITIL are cloud services, backup, network security, data processing and storage, managed print services, IT consulting, help desk support, IoT, etc.
3. The systematic and structured approach of the ITIL framework helps an organization in managing risk, establishing cost-effective practices, and strengthening customer relations. All of these eventually result in building a stable IT environment for your business.
History of DevOps
History of DevOps

Advantages of ITIL

Here, are pros/benefits of using ITIL services


•Increase customer satisfaction
•Improve service availability
•Financial management
•Allows you to improve the decision-making process
•Helps you to control infrastructure services
•Helps to create a clear structure of an organization

Disadvantages

• Most IT professionals consider ITIL a holistic approach to IT management, but even the ITIL publication itself does not claim to be one; treating it as such can misguide people.
History of DevOps

DevOps

The goal of DevOps is to enable cross-functional relationships between the development and
operations groups enabling the two groups to work together to ensure IT services are transitioned
to the live environment without problems.

The specific skills and knowledge needed for a DevOps implementation will vary based on the
infrastructure and business focus.

The growing consensus within the DevOps community is that DevOps = Agile + Lean + ITIL helps to
establish a common set of base skills and knowledge that transcend business environments and
tool chains.

As reflected by the salary premiums, our data analysis provides significant evidence that there is
value gained by IT professionals if they possess Agile (salary premium 26%), Lean (salary premium
9%), and ITIL skills and knowledge (salary premium 16%).

Organizations and educational institutions that focus on cultivating these skills and knowledge will
enhance the IT professional's ability to build cross-functional processes and also use appropriate
technology to enhance an overall collaborative automated DevOps environment.
How DevOps Works

How DevOps Works

• DevOps model = development + operations (a single team), where engineers work across the entire application lifecycle, from development and test to deployment and operations, and develop a range of skills not limited to a single function.

• In some DevOps models, quality assurance and security teams may also become more tightly
integrated with development and operations and throughout the application lifecycle.

• When security is the focus of everyone on a DevOps team, this is sometimes referred to as
DevSecOps.

• They use a technology stack and tooling which help them operate and evolve applications
quickly and reliably.

• These tools also help engineers independently accomplish tasks (for example, deploying code
or provisioning infrastructure) that normally would have required help from other teams, and
this further increases a team’s velocity.
Benefits of DevOps

Speed: Move at high velocity so you can innovate for customers faster, adapt to changing markets
better, and grow more efficient at driving business results.

Reliability: Ensure the quality of application updates and infrastructure changes so you can
reliably deliver at a more rapid pace while maintaining a positive experience for end users.

Rapid Delivery : Increase the frequency and pace of releases so you can innovate and improve
your product faster. The quicker you can release new features and fix bugs, the faster you can
respond to your customers’ needs and build competitive advantage.

Scale: Operate and manage your infrastructure and development processes at scale. Automation
and consistency help you manage complex or changing systems efficiently and with reduced risk

Improved Collaboration: Build more effective teams under a DevOps cultural model, which
emphasizes values such as ownership and accountability.

Security: Move quickly while retaining control and preserving compliance. You can adopt a
DevOps model without sacrificing security by using automated compliance policies, fine-grained
controls, and configuration management techniques
DevOps Terminology

The most important DevOps terms to know today include:

Agile. Used in the DevOps world to describe infrastructure, processes or tools that are adaptable
and scalable. Being agile is a key focus of DevOps.

Continuous delivery. A software delivery process wherein updates are planned, implemented and released to end-users on a steady, constant basis. It's the opposite of waterfall-style delivery, in which updates are released infrequently and at irregular intervals.

Continuous integration. A process that allows software changes to be tested and integrated into
a code base on a continuous basis each time a change is made to code. Most DevOps teams view
continuous integration as an improvement over the traditional process of waiting until a large
number of code changes are written before testing and integrating them.

Immutable infrastructure. An application service or hosting environment that, once set up,
cannot be changed.

Infrastructure-as-Code. An approach to infrastructure configuration that allows DevOps teams to use scripts to provision servers or hosting environments automatically. This saves them from having to set up infrastructure by hand.
Microservices. A type of application architecture in which applications are broken into multiple
small pieces.

Serverless computing. A type of service that provides access to computing resources on demand,
without requiring users to configure or manage an entire server environment.
DevOps Perspective
“A cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale.” – Jez Humble

“DevOps is an IT mindset that encourages communication, collaboration, integration and automation among software developers and IT operations in order to improve the speed and quality of delivering software.” – VersionOne

“DevOps is a set of practices and cultural changes — supported by the right tools — that creates
an automated software delivery pipeline, enabling organizations to win, serve, and retain
customers.” -Forrester

Expected outcomes of adopting DevOps include:
•Improved deployment frequency
•Faster time to market
•Lower failure rate of new releases
•Faster mean time to recovery
•Better employee engagement and motivation
DevOps and Agile

Definition
  DevOps: A practice of bringing development and operations teams together.
  Agile: A continuous, iterative approach that focuses on collaboration, customer feedback, and small, rapid releases.

Purpose
  DevOps: To manage end-to-end engineering processes.
  Agile: To manage complex projects.

Task
  DevOps: Focuses on constant testing and delivery.
  Agile: Focuses on constant changes.

Team size
  DevOps: Large team size, as it involves all the stakeholders.
  Agile: Small team size; the smaller the team, the fewer people work on it, so they can move faster.

Team skillset
  DevOps: Divides and spreads the skill set between the development and operations teams.
  Agile: Emphasizes training all team members to have a wide variety of similar and equal skills.

Implementation
  DevOps: Focused on collaboration, so it does not have any commonly accepted framework.
  Agile: Can be implemented within a range of tactical frameworks such as SAFe, Scrum, and sprint-based delivery.

Duration
  DevOps: The ideal goal is to deliver code to production daily or every few hours.
  Agile: Development is managed in units of sprints, each much less than a month long.

Target areas
  DevOps: End-to-end business solution and fast delivery.
  Agile: Software development.

Feedback
  DevOps: Feedback comes from the internal team.
  Agile: Feedback comes from the customer.

Shift-left principle
  DevOps: Supports both shift left and shift right.
  Agile: Supports only shift left.

Focus
  DevOps: Focuses on operational and business readiness.
  Agile: Focuses on functional and non-functional readiness.

Importance
  DevOps: Developing, testing, and implementation are all equally important.
  Agile: Developing software is inherent to Agile.

Quality
  DevOps: Contributes to creating better quality with automation and early bug removal; developers need to follow coding and architectural best practices to maintain quality standards.
  Agile: Produces better application suites with the desired requirements and can quickly adapt to changes made during the project life.

Tools
  DevOps: Puppet, Chef, Ansible, AWS, TeamCity, and OpenStack are popular DevOps tools.
  Agile: Bugzilla, Kanboard, and JIRA are popular Agile tools.

Automation
  DevOps: Automation is the primary goal; it works on the principle of maximizing efficiency when deploying software.
  Agile: Does not emphasize automation.

Communication
  DevOps: Communication involves specs and design documents; it is essential for the operations team to fully understand the software release and its network implications to run the deployment process smoothly.
  Agile: Scrum is the most common method of implementing Agile software development; a Scrum meeting is carried out daily.

Documentation
  DevOps: Process documentation is foremost because the software is handed to an operations team for deployment; automation minimizes the impact of insufficient documentation, but for sophisticated software it is difficult to transfer all the required knowledge.
  Agile: Gives priority to the working system over complete documentation; ideal when you are flexible and responsive, but it can hurt when you are trying to hand things over to another team for deployment.
DevOps Stages and Tools

Sr. No. – Stage – Tools
1 – Continuous Development – Git, SVN, Mercurial, CVS
2 – Continuous Integration – Jenkins, TeamCity, Travis
3 – Continuous Testing – Jenkins, Selenium, TestNG, JUnit
4 – Continuous Deployment – Configuration Management: Chef, Puppet, Ansible; Containerization: Docker, Vagrant
5 – Continuous Monitoring – Splunk, ELK Stack, Nagios, New Relic
DevOps Stages and Tools

Stage – 1: Continuous Development
•This is the phase that involves ‘planning’ and ‘coding’ of the software. You decide the project vision during the planning phase and the developers begin developing the code for the application.
•There are no DevOps tools required for planning, but there are a number of tools for maintaining the code.
•The code can be in any language, but you maintain it by using version control tools. This process of maintaining the code is known as Source Code Management.
•After the code is developed, you move to the Continuous Integration phase.

Tools Used: Git, SVN, Mercurial, CVS
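A minimal Source Code Management sketch using Git (the project directory myapp and the remote URL are hypothetical examples, not from the slides):

$ cd myapp
$ git init                                             # create a local repository
$ git add .                                            # stage all project files
$ git commit -m "Initial commit"                       # record the first version
$ git remote add origin https://example.com/myapp.git # hypothetical shared remote
$ git push -u origin master                            # publish the code (branch may be 'main' on newer Git)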


DevOps Stages and Tools

Stage – 2: Continuous Integration
• This stage is the core of the entire DevOps life cycle. It is a practice in which the developers are required to commit changes to the source code more frequently; this may be on a daily or weekly basis.
• You then build every commit, which allows early detection of problems if they are present. Building code not only involves compilation but also includes code review, unit testing, integration testing, and packaging.
• The code supporting new functionality is continuously integrated with the existing code. Since there is continuous development of software, you need to integrate the updated code continuously and smoothly with the systems to reflect changes to the end-users.
• In this stage, you use the tools for building/packaging the code into an executable file so that you can forward it to the next phases.

Tools: Jenkins, TeamCity, Travis
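A hedged sketch of what a CI job might run on every commit, assuming a Maven-based Java project (the project layout is an assumption, not something the slides specify):

$ git pull origin master   # fetch the latest committed changes
$ mvn compile              # compile the code
$ mvn test                 # run unit tests against the new commit
$ mvn package              # package the build into an executable artifact (e.g., a .jar)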
DevOps Stages and Tools

Stage – 3: Continuous Testing
• This is the stage where you test the developed software continuously for bugs using automated testing tools. These tools allow QAs to test multiple code-bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, you can use Docker containers for simulating the test environment.
• Selenium is used for automation testing, and the reports are generated by TestNG. You can automate this entire testing phase with the help of a Continuous Integration tool called Jenkins.
• Suppose you have written Selenium code in Java to test your application. You can build this code using Ant or Maven. Once you build the code, you test it for User Acceptance Testing (UAT). This entire process can be automated using Jenkins.

Tools: Jenkins, Selenium, TestNG, JUnit
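A hedged sketch of simulating a test environment with Docker, as the slide suggests (selenium/standalone-chrome is a public image; the -Dselenium.url property is a hypothetical parameter your tests would have to read):

$ docker run -d -p 4444:4444 --name selenium selenium/standalone-chrome   # start a Selenium server in a container
$ mvn test -Dselenium.url=http://localhost:4444/wd/hub                    # run the Selenium/TestNG suite against it (hypothetical property)
$ docker rm -f selenium                                                    # tear the test environment down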
DevOps Stages and Tools

Stage – 4: Continuous Deployment
• This is the stage where you deploy the code on the production servers. It is also important to ensure that you correctly deploy the code on all the servers.
• Configuration Management is the act of establishing and maintaining consistency in an application’s functional requirements and performance.
• Containerization tools also play an equally crucial role in the deployment stage. These tools help produce consistency across Development, Test, Staging, and Production environments. Besides this, they also help in scaling instances up and down swiftly.

Tools:
Configuration Management – Chef, Puppet, Ansible
Containerization – Docker, Vagrant
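A minimal containerized-deployment sketch with Docker (the image and container name myapp are hypothetical; a Dockerfile is assumed to exist in the project directory):

$ docker build -t myapp:1.0 .                          # build an image from the project's Dockerfile
$ docker run -d -p 8080:8080 --name myapp myapp:1.0    # run the container, exposing port 8080
$ docker ps                                            # verify the container is running
$ docker stop myapp && docker rm myapp                 # remove the instance when scaling down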
DevOps Stages and Tools

Stage – 5: Continuous Monitoring
• This is a very critical stage of the DevOps life cycle where you continuously monitor the performance of your application. Here you record vital information about the use of the software. You then process this information to check the proper functioning of the application. You resolve system errors such as low memory, server not reachable, etc. in this phase.
• This practice involves the participation of the Operations team, who monitor user activity for bugs or any improper behavior of the system. The Continuous Monitoring tools help you monitor the application’s performance and the servers closely, and also enable you to check the health of the system proactively.

Tools Used: Splunk, ELK Stack, Nagios, New Relic
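On a single Linux server, the same checks can be done ad hoc with basic commands (a sketch only; dedicated tools such as Nagios automate and alert on these; the host name is a placeholder):

$ free -m                           # check for low memory
$ df -h                             # check free disk space
$ ping -c 4 appserver.example.com   # check whether the (placeholder) server is reachable
$ top                               # inspect CPU and memory usage of running processes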


Session 29: Introduction of Unix/Linux
Linux offers the DevOps team the flexibility and scalability needed to create a dynamic
development process.
History: 1969 (AT&T Bell Laboratories) – later offered by commercial vendors
Grandfather: CTSS (Compatible Time-Sharing System)
Father: the pioneering Multics project (which collapsed)
Team members: Ken Thompson & Dennis Ritchie

From the B language to C:
1969-71: Written in assembler; included the file system and fork()
1972: Pipes
1973: Unix was rewritten in C
1975: First version of Unix widely available outside Bell Labs
1976: John Mashey: “Using a Command Language as a High-Level Programming Language”
1977: awk
1978-1982: Bourne shell; exit, access, chroot, close, read; C shell; Korn shell
1983-1994: Perl development
38 years Old
Session 29: Introduction of Unix/Linux
Linux Architecture:

1. Kernel: The kernel is the core of the Linux-based operating system. It virtualizes the common hardware resources of the computer to provide each process with its own virtual resources.
2. System Library: Special types of functions that are used to implement the functionality of the operating system.
3. Shell: An interface to the kernel which hides the complexity of the kernel’s functions from the users. It takes commands from the user and executes the kernel’s functions.
4. Hardware Layer: This layer consists of all peripheral devices such as RAM, HDD, CPU, etc.
5. System Utility: Provides the functionalities of an operating system to the user.
Session 29: Introduction of Unix/Linux
Kernel: the heart of Linux
Acts as an intermediary between the hardware and the various programs.
[Diagram: user request → shell → kernel → hardware]
Session 29: Introduction of Unix/Linux

Types of Kernel:
• Monolithic
• Microkernels
• Hybrid
• Nano
• Exo Kernel

Monolithic:
It is one of the types of kernel where all operating system services operate in kernel space. It has dependencies between system components and a huge, complex code base.

Example:
Unix, Linux, OpenVMS, XTS-400, etc.

Advantage:
It has good performance.

Disadvantage:
It has dependencies between system components and millions of lines of code.
Session 29: Introduction of Unix/Linux

2. Micro Kernel –
It is a type of kernel that takes a minimalist approach. It includes virtual memory and thread scheduling. It is more stable, with fewer services in kernel space; the rest are put in user space.

Example:
Mach, L4, AmigaOS, Minix, K42, etc.
•Advantage:
It is more stable.
•Disadvantage:
There are lots of system calls and context switches.

3. Hybrid Kernel –
It is a combination of a monolithic kernel and a microkernel. It has the speed and design of a monolithic kernel and the modularity and stability of a microkernel.

Example:
Windows NT, NetWare, BeOS, etc.
•Advantage:
It combines the benefits of monolithic kernels and microkernels.
•Disadvantage:
It is still similar to a monolithic kernel.
Session 29: Introduction of Unix/Linux

4. Exo Kernel –
It is the type of kernel that follows the end-to-end principle. It has as few hardware abstractions as possible and allocates physical resources directly to applications.
Example:
Nemesis, ExOS, etc.
•Advantage:
It has the fewest hardware abstractions.
•Disadvantage:
There is more work for application developers.

5. Nano Kernel –
It is the type of kernel that offers hardware abstraction but without system services. The microkernel also does not provide system services, so the two have become analogous.
Example:
EROS, etc.
•Advantage:
It offers hardware abstraction without system services.
•Disadvantage:
It is much the same as a microkernel and is therefore rarely used.
Session 29: Introduction of Unix/Linux
Shell:

 Command Line interpreter


 Uses kernel to execute programs
 Types are:
• BASH
• CSH
• KSH
• TCSH
 Interacts with kernel through system calls
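A quick way to see which shell you are using and which shells are installed (a small sketch; output varies by system):

$ echo $SHELL        # path of the current user's login shell, e.g. /bin/bash
$ cat /etc/shells    # list of valid login shells installed on the system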
Session 30: Introduction of Unix/Linux
System Kernel [diagram]
Unix File Subsystem [diagram]
Session 31-33: Introduction of Unix/Linux

Linux Commands (a short usage sketch follows the lists):
 User Management Commands
• sudo su
• adduser
• userdel

 File and Device Management

• pwd
• mv
• cp
• chmod
• ls
• cat
• cd
• mkdir
• rmdir
• rm
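A minimal sketch of these commands in use (the user name devuser and directory demo are hypothetical):

$ sudo su                      # switch to the root user
# adduser devuser              # create a new user account
# exit
$ pwd                          # print the current working directory
$ mkdir demo && cd demo        # create a directory and move into it
$ cp /etc/hostname .           # copy a file into the current directory
$ mv hostname hostname.bak     # rename (move) the file
$ chmod 644 hostname.bak       # read/write for owner, read-only for others
$ cat hostname.bak             # display the file's contents
$ ls -l                        # list directory contents in long format
$ rm hostname.bak              # remove the file
$ cd .. && rmdir demo          # go up and remove the (now empty) directory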
Session 34: Process Management
Process: An instance of a program is called a process. In simple terms, any command that you give to your Linux machine starts a new process.

When you type ls, it starts a new process.

Types of Processes:
 Foreground Processes: They run on the screen and need input from the user, for example office programs.

To start a foreground process, you can either run it from the dashboard or run it from the terminal.
When using the terminal, you have to wait until the foreground process completes.

 Background Processes: They run in the background and usually do not need user input, for example an antivirus.

If you start a foreground program/process from the terminal, you cannot work on that terminal until the program completes (or is stopped).
Session 34 : Process Management
• Any running program or a command given to a Linux system is called a process
• A process could run in foreground or background
• The priority index of a process is called niceness (nice value) in Linux. Its default value is 0, and it can vary between -20 and 19. The lower the niceness value, the higher the priority given to that task.
Command Description
bg To send a process to the background
fg To run a stopped process in the foreground
top Details on all Active Processes
ps Give the status of processes running for a user
ps PID Gives the status of a particular process
pidof Gives the Process ID (PID) of a process
kill PID Kills a process
nice Starts a process with a given priority
renice Changes priority of an already running process
df Gives free hard disk space on your system
free Gives free RAM on your system
Session 35: Process Management
Top: This utility tells the user about all the running processes on the Linux machine.

Field – Description – Example 1 – Example 2
PID – The process ID of each task – 1525 – 961
User – The username of the task owner – Home – Root
PR – The priority of the task (lower values mean higher priority) – 20 – 20
NI – The nice value of a task – 0 – 0
VIRT – Virtual memory used (KB) – 1775 – 75972
RES – Physical memory used (KB) – 100 – 51
SHR – Shared memory used (KB) – 28 – 7952
S (Status) – Process state, one of five types: 'D' = uninterruptible sleep, 'R' = running, 'S' = sleeping, 'T' = traced or stopped, 'Z' = zombie – S – R
%CPU – Percentage of CPU time – 1.7 – 1.0
%MEM – Percentage of physical memory used – 10 – 5.1
TIME+ – Total CPU time – 5:05.34 – 2:23.42
Command – Command name – Photoshop.exe – Xorg

Press 'q' on the keyboard to exit the process display.
Session 34: Process Management
PS
This command stands for 'Process Status'. It is similar to the "Task Manager" that pops up in a Windows machine when we use Ctrl+Alt+Del. This command is similar to the 'top' command, but the information displayed is different.
To check all the processes running under a user, use the command -
ps ux

ps PID

Kill
This command terminates running processes on a Linux machine.
To use these utilities you need to know the PID (process id) of the process you want to kill
Syntax -
kill PID
To find the PID of a process simply type
pidof Process name
Session 34: Process Management
NICE
Linux can run a lot of processes at a time, which can slow down the speed of some high
priority processes and result in poor performance.
To avoid this, you can tell your machine to prioritize processes as per your requirements.
This priority is called Niceness in Linux, and it has a value between -20 to 19. The lower the
Niceness index, the higher would be a priority given to that task.
The default value of all the processes is 0.
To start a process with a niceness value other than the default value use the following syntax
nice -n 'Nice value' process name
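For example (the archive path is a hypothetical illustration, and 1525 is the example PID from the top table above):

$ nice -n 10 tar -cvf /tmp/home-backup.tar /home   # start the backup with lower priority (niceness 10)
$ renice -n 5 -p 1525                              # change the niceness of an already running process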

DF
This utility reports the free disk space(Hard Disk) on all the file systems.

If you want the information in a readable format, then use the command
'df -h'

Free
This command shows the free and used memory (RAM) on the Linux system.

You can use the arguments


free -m to display output in MB
free -g to display output in GB
Session 35: Backup and Recovery
UNIX and Linux backup and restore can be done using the backup commands tar, cpio, ufsdump, dump, and restore.

1. Backup and restore using the tar command

tar features:
1. tar (tape archive) is used to back up and restore single or multiple files on/from a tape or file.
2. tar cannot back up special character and block device files; they show as 0-byte files, with the first letter of the permissions being b or c for block or character devices.
3. tar works only on a mounted file system; it cannot access files on an unmounted file system.

Example 1:
$ tar -cvf archive.tar filename1 filename2
or
$ tar -cf archive.tar filename1 filename2

In the commands above, the options are: c -> create; v -> verbose; f -> file or archive device; * -> all files and directories. For example, "tar -cvf /dev/rmt/0 *" creates a tar archive on the tape device /dev/rmt/0 from all files and directories in the current directory.
Session 35: Backup and Recovery
Viewing a tar backup on a tape or file
The t option is used to see the table of contents of a tar file.
$ tar -tvf archive.tar
$ tar -xf archive.tar    # extract files from the archive

-c : creates an archive
-x : extracts the archive
-f : creates the archive with the given filename
-t : displays or lists the files in the archived file
-u : archives and adds to an existing archive file
-v : displays verbose information
-A : concatenates archive files
-z : compresses the tar file using gzip
-j : compresses the tar file using bzip2
-W : verifies an archive file
-r : updates or adds a file or directory to an already existing .tar file
(A combined example follows the option list.)
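A small sketch combining these options (the archive and directory names are hypothetical):

$ tar -czvf etc-backup.tar.gz /etc                # create a gzip-compressed archive of /etc
$ tar -tzvf etc-backup.tar.gz                     # list the contents of the compressed archive
$ tar -xzvf etc-backup.tar.gz -C /tmp/restore     # extract it into /tmp/restore (directory must already exist)
$ tar -rvf archive.tar newfile.txt                # append a file to an existing (uncompressed) archive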
Session 35: Installing and Deleting Software Packages

To install an RPM software package, use the following command with the -i option.
Syntax:
rpm -i package_file_name
Options:

-i -> install a package
-v -> verbose, for a nicer display
-h -> print hash marks as the package archive is unpacked
where package_file_name specifies the name of the file that contains the package you want to install.
Cmd:
rpm -i gnorpm-0.9-10.i386.rpm gnome-linuxconf-0.23-1.i386.rpm
Administrators like to include the -v and -h flags, which cause the command to print status information as it executes.

Linux system administration is a process of setting up, configuring, and managing a computer
system in a Linux environment. System administration involves creating a user account, taking
reports, performing backup, updating configuration files, documentation, and performing
recovery actions.
Session 35: Installing and Deleting Software Packages

Example:

# rpm -ivh gnorpm-0.9-10.i386.rpm gnome-linuxconf-0.23-1.i386.rpm
gnorpm              ###############
gnome-linuxconf     ###############

(The -v and -h options print each package name followed by hash marks as it is installed.)

RPM Arguments:

1) --force
2) --nodeps
3) --replacefiles

Example:

# rpm -i --nodeps gnorpm-0.9-10.i386.rpm gnome-linuxconf-0.23-1.i386.rpm
Session 35: Installing and Deleting Software Packages

Deleting Packages:
To remove an installed package, issue the command
Cmd:
rpm -e package_name

Performing Updates & Fixes:

RPM makes it easy to install new versions of a package. Issue the command
Cmd:
rpm -Uvh package_file_name
where package_file_name is the name of the file containing the new version of the package.
Example (using -F to freshen, i.e. upgrade only if an older version is already installed):
rpm -Fvh gnorpm-0.9-10.i386.rpm
Session 35: Installing and Deleting Software Packages
Identifying Installed Packages:
To display the version and build number of an installed package, issue the command
Cmd:
rpm -q package_name

Example output:
gnorpm-0.9-10

To list the version and build number of every installed package:
Cmd:
rpm -qa

Determining File Ownership:

To learn which package owns a particular file, issue the command
Syntax:
rpm -qf filename
Cmd:
rpm -qf /etc/inittab
initscripts-4.16-1
Environment Variable

An environment variable is a variable whose value is set outside the program, typically through
functionality built into the operating system or microservice.

Scope of an environment variable: Scope of any variable is the region from which it can be
accessed or over which it is defined. An environment variable in Linux can
have global or local scope.

Global
A globally scoped ENV that is defined in a terminal can be accessed from anywhere in that
particular environment which exists in the terminal. That means it can be used in all kind of
scripts, programs or processes running in the environment bound by that terminal.

Local
A locally scoped ENV that is defined in a terminal cannot be accessed by programs or processes started from that terminal; it can only be accessed by the terminal (shell) in which it was defined.

$ echo $NAME
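A minimal sketch of the difference (NAME is just an illustrative variable):

$ NAME="DevOps"            # locally scoped: known only to the current shell
$ echo $NAME
DevOps
$ bash -c 'echo $NAME'     # a child process does not see it (prints an empty line)

$ export NAME              # now globally scoped for this terminal's environment
$ bash -c 'echo $NAME'     # child processes inherit it
DevOps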
Networking Commands
ifconfig – Displays and configures network interfaces.
ip – A replacement for the ifconfig command.
traceroute – Network troubleshooting utility that shows the route packets take to a host.
tracepath – Similar to traceroute but doesn't require root privileges.
ping – Checks connectivity between two nodes.
netstat – Displays connection information.
ss – A replacement for netstat.
dig – Queries DNS-related information.
nslookup – Queries DNS name servers.
route – Shows and manipulates the IP routing table.
host – Performs DNS lookups.
arp – Views or adds entries in the kernel's ARP table.
iwconfig – Configures wireless network interfaces.
hostname – Shows or sets the system's network name.
curl or wget – Downloads a file from the internet.
mtr – Combines ping and traceroute into a single command.
whois – Shows WHOIS registration information for a domain.
ifplugstatus – Tells whether a network cable is plugged in or not.
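A few of these commands in action (example.com is a placeholder host):

$ ip addr show              # list interfaces and their IP addresses
$ ping -c 4 example.com     # send four ICMP echo requests
$ dig example.com +short    # resolve the host name via DNS
$ ss -tuln                  # show listening TCP/UDP sockets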


Session 36: Introduction to Graphical Environment (GNOME)
• Gnome is a user-friendly graphical desktop environment for UNIX and UNIX-like systems that
enables users to easily use and configure their computers.
• Gnome includes
• a panel (for starting applications and displaying status),
• a desktop (where data and applications can be placed),
• a set of standard desktop tools and applications, and
• a set of conventions that make it easy for applications to cooperate and be consistent with each
other.
• Gnome runs on a number of UNIX-like operating systems, including Linux, FreeBSD, and Solaris.
• Gnome is completely open source (free software) distributed under the terms of GNU General Public
License (and its cousins, Lesser General Public License and Free Documentation License for libraries
and documentation respectively).
• Gnome is highly configurable, enabling you to set your desktop the way you want it to look and feel.
• Gnome supports many human languages, and more are added every month.
• Gnome even supports several Drag and Drop protocols for maximum interoperability with non-
Gnome applications.
• Gnome comes from the acronym for the GNU Network Object Model Environment (GNOME).
Session 36: Ubuntu Utilities
Virtual Box:
• VirtualBox is designed to run virtual machines on your physical machine without reinstalling the OS that is already running on it.
• One more VirtualBox advantage is that this product can be installed for free.
• A virtual machine (VM) works much like a physical one.

Evolution of Ubuntu:
• Ubuntu 5.04 codenamed "Hoary Hedgehog" was released on 8 April 2005.
• From this second release onwards, massive changes started to trickle in.
• Ubuntu 5.04 added many new features including an update manager, upgrade
notifier, readahead and grepmap, suspend, hibernate and standby support, dynamic
frequency scaling for processors among many other major improvements.
• Ubuntu 5.04 was so ahead of its time that it even introduced support for installation
from USB devices.
Session 36: Ubuntu Utilities
Gimp:
• GIMP is an acronym for GNU Image Manipulation Program.
• It is a freely distributed program for such tasks as photo retouching, image
composition and image authoring.

Bleach Bit :
• When your computer is getting full, BleachBit quickly frees disk space.
• When your information is only your business, BleachBit guards your privacy.
• With BleachBit you can free cache, delete cookies, clear Internet history, shred
temporary files, delete logs, and discard junk you didn't know was there.
• Designed for Linux and Windows systems, it wipes clean thousands of applications
including Firefox, Adobe Flash, Google Chrome, Opera etc

Unity Tweak Tool:

• Unity is one of the older desktop environments and served as Ubuntu's default desktop environment for several years.
• The Unity Tweak Tool is used to configure the desktop and adjust other settings.
Session 36: Ubuntu Utilities

SAMBA:
• Samba is the standard Windows interoperability suite of programs for Linux and
Unix. Samba is Free Software licensed under the GNU General Public License
• The Samba project is a member of the Software Freedom Conservancy.
• Since 1992, Samba has provided secure, stable and fast file and print services for all
clients using the SMB/CIFS protocol, such as all versions of DOS and Windows, OS/2,
Linux and many others.
• Samba is an important component to seamlessly integrate Linux/Unix Servers and
Desktops into Active Directory environments.
• It can function both as a domain controller or as a regular domain member.
References

• https://aws.amazon.com/devops/what-is-devops/
• https://cacm.acm.org/magazines/2020/10/247595-what-do-agile-lean-and-itil-mean-to-devops/fulltext

Linux Server Installation:
• https://docs.bmc.com/docs/NetworkAutomation/89/installing/preparing-for-installation/setting-up-the-installation-environment/setting-up-for-installation-on-a-linux-server

RPM and YUM Installation:
• https://www.redhat.com/sysadmin/how-manage-packages
Thank You
