A Little Book On Linux
Acknowledgements
We gratefully acknowledge edX and the Linux Foundation for offering this open source course on Linux for beginners.
Table of Contents

Chapter 1: Linux Philosophy and Concepts
    Learning Objectives
    Introduction
    1.1 Linux History
    1.2 Linux Philosophy
    1.3 Linux Community
    1.4 Linux Terminology
    1.5 Linux Distributions
    Summary
Chapter 2: Linux Structure and Installation
    Learning Objectives
    2.1
    2.2
    2.3
    Summary
Chapter 3: Graphical Interface
    Learning Objectives
    3.1
    3.2 Basic Operations
    3.3 Graphical Desktop
    Summary
Chapter 4: System Configuration from Graphical Interface
    Learning Objectives
    4.1
    4.2 Network Manager
    4.3
    Summary
Chapter 5: Command Line Operations
    Learning Objectives
    5.1
    5.2 Basic Operations
    5.3
    5.4
    5.5
    Summary
Chapter 6: Finding Linux Documentation
    Learning Objectives
    6.1
    6.2
    6.3
    6.4 Help Command
    6.5
    Summary
Chapter 7: File Operations
    Learning Objectives
    7.1 Accounts
    7.2
    7.3
    7.4
    7.5
    Summary
Chapter 8: Text Editors
    8.1
    8.2
    Summary
Chapter 9: Logical Security Principles
    Learning Objectives
    9.1
    9.2
    9.3 Using Sudo, the Importance of Process Isolation, Limiting Hardware Access and Hardware Resources
    9.4
    9.5
    Summary
Chapter 10
    10.2 Browsers
    10.3 Transferring Files
    Summary
Chapter 11: Manipulating Text
    Learning Objectives
    11.1
    11.2
    11.3
    11.4 Grep
    11.5
    11.6
    Summary
Chapter 12: Printing
    Learning Objectives
    12.1 Configuration
    12.2
    12.3
    Summary
Chapter 13: Bash Shell Scripting
    Learning Objectives
    13.1
    13.2 Syntax
    13.3 Constructs
    Summary
Chapter 14: Advanced Bash Scripting
    Learning Objectives
    14.1 String Manipulation
    14.2
    14.3 Case Statement
    14.4 Looping Constructs
    14.5
    14.6
    Summary
Chapter 15: Processes
    Learning Objectives
    15.1
    15.2
    15.3
    15.4
    Summary
Chapter 16: Common Applications
    Learning Objectives
    16.1
    16.2
    16.3
    16.4
    Summary
Chapter 1: Linux Philosophy and Concepts

Introduction
Linux is a free open source computer operating system initially developed for
Intel x86-based personal computers. It has been subsequently ported to many other
hardware platforms. In this section, you will become familiar with how Linux evolved
from a student project into a massive effort with an enormous impact on today's world.
1.1 Linux History
Linus Torvalds was a student in Helsinki, Finland, in 1991 when he started a
project: writing his own operating system kernel. He also collected together and/or
developed the other essential ingredients required to construct an entire operating
system with his kernel at the center. This soon became known as the Linux kernel.
In 1992, Linux was re-licensed using the General Public License (GPL) by GNU (a project of the Free Software Foundation (FSF), which promotes freely available software), which made it possible to build a worldwide community of developers. By combining the kernel with other system components from the GNU project, numerous other developers created complete systems called Linux distributions in the mid-90s. These distributions provided the basis for fully free computing and became a driving force in the open source software movement. In 1998, major companies like IBM and Oracle announced support for the Linux platform and began major development efforts as well.
Today, Linux powers more than half of the servers on the Internet, the majority of smartphones (via the Android system, which is built on top of Linux), and nearly all of the world's most powerful supercomputers.
1.2 Linux Philosophy
Every organization or project has a philosophy that works as a guide while
framing its objectives and delineating its growth path. This section contains a description
of the Linux philosophy and how this philosophy has impacted its development.
Linux is constantly enhanced and maintained by a network of developers from all
over the world collaborating over the Internet, with Linus Torvalds at the head. Technical
skill and a desire to contribute are the only qualifications for participating.
Linux borrows heavily from the UNIX operating system because it was written to
be a free and open source version of UNIX. Files are stored in a hierarchical filesystem,
with the top node of the system being root or simply "/". Whenever possible, Linux makes
its components available via files or objects that look like files. Processes, devices, and
network sockets are all represented by file-like objects, and can often be worked with
using the same utilities used for regular files.
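This file-like treatment of devices and processes can be seen directly from the shell; a minimal sketch (assumes a Linux system, where the /proc interface exists):

```shell
# Devices and kernel/process information behave like ordinary files,
# so the usual file utilities work on them.
echo "discarded" > /dev/null     # /dev/null is a device file that discards writes
ls -l /dev/null                  # list it like any regular file
head -n 1 /proc/meminfo          # read kernel memory statistics as a text file
```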
Linux is a fully multitasking (a method where multiple tasks are performed during the
same period of time), multiuser operating system, with built-in networking and service
processes known as daemons in the UNIX world.
1.3 Linux Community
Suppose as part of your job you need to configure a Linux file server, and you run
into some difficulties. If you're not able to figure out the answer yourself or get help from
a co-worker, the Linux community might just save the day! There are many ways to
engage with the Linux community: you can post queries on relevant discussion forums,
subscribe to discussion threads, and even join local Linux groups that meet in your area.
The Linux community is a far-reaching ecosystem consisting of developers, system administrators, users, and vendors, who use many different forums to connect with one another.
1.4 Linux Terminology
When you start exploring Linux, you'll soon come across some unfamiliar terms
like distribution, boot loader, desktop environment, etc. So let's stop and take a look
at some basic terminology used in Linux to help you get up to speed before we proceed
further.
1.5 Linux Distributions
Suppose you have been assigned to a project building a product for a Linux
platform. Project requirements include making sure the project works properly on the
most widely used Linux distributions. To accomplish this you need to learn about the
different components, services and configurations associated with each distribution.
We're about to look at how you'd go about doing exactly that.
So, what is a Linux distribution and how does it relate to the Linux kernel?
As illustrated above, the Linux kernel is the core of a computer operating system.
A full Linux distribution consists of the kernel plus a number of other software tools for file-related operations, user management, and software package management. Each of these tools provides a small part of the complete system. Each tool is often its own separate project, with its own developers working to perfect that piece of the system.
As mentioned earlier, the current Linux kernel, along with past Linux kernels (as
well as earlier release versions) can be found at the www.kernel.org web site. The various
Linux distributions may be based on different kernel versions. For example, the very
popular RHEL 6 distribution is based on the 2.6.32 version of the Linux kernel, which is
rather old but extremely stable. Other distributions may move more quickly in adopting
the latest kernel releases. It is important to note that the kernel is not an all-or-nothing proposition; for example, RHEL 6 has incorporated many of the more recent kernel improvements into its version of 2.6.32.
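You can check which kernel version your own distribution is running and compare it against the releases on kernel.org; a quick sketch:

```shell
# Print the release of the currently running kernel; distributions ship
# different versions, which you can compare with listings on www.kernel.org.
uname -r
```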
Examples of other essential tools and ingredients provided by distributions include the C/C++ compiler, the gdb debugger, the core system libraries applications need to link with in order to run, the low-level interface for drawing graphics on the screen as well as the higher-level desktop environment, and the system for installing and updating the various components, including the kernel itself.
CentOS is a popular free alternative to Red Hat Enterprise Linux, and Scientific Linux is favored by the scientific research community for its compatibility with scientific and mathematical software packages. Both of these distributions are binary-compatible with RHEL; i.e., binary software packages in most cases will install properly across the distributions.

Many commercial distributors, including Red Hat, provide long-term fee-based support for their distributions, as well as hardware and software certification. All major distributors provide update services for keeping your system primed with the latest security and bug fixes and performance enhancements, as well as provide online support resources.
Summary
You have completed this chapter. Let's summarize the key concepts covered.
Linux borrows heavily from the UNIX operating system, with which its creators
were well versed.
Linux accesses many features and services through files and file-like objects.
Linux is developed by a loose confederation of developers from all over the world,
collaborating over the Internet, with Linus Torvalds at the head. Technical skill and a
desire to contribute are the only qualifications for participating.
The Linux community is a far-reaching ecosystem of developers, vendors, and users that supports and advances the Linux operating system.
Some of the common terms used in Linux are: Kernel, Distribution, Boot loader,
Service, File system, X Window system, desktop environment, and command line.
A full Linux distribution consists of the kernel plus a number of other software tools
for file-related operations, user management, and software package management.
Chapter 2: Linux Structure and Installation

2.1

The directory tree on each partition can be thought of as describing family relationships, while the partitions are like different families (each of which has its own tree). A comparison between file systems in Windows and Linux is given in the following table:
All Linux file system names are case-sensitive, so /boot, /Boot, and /BOOT represent three different directories (or folders). Many distributions distinguish between core utilities needed for proper system operation and other programs, and place the latter in directories under /usr (think "user"). To get a sense for how the other programs are organized, find the /usr directory in the diagram above and compare the subdirectories with those that exist directly under the system root directory (/).
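One way to explore this split between core utilities and other programs yourself is to list the root directory and /usr side by side; a minimal sketch:

```shell
# Compare the directories directly under the system root with those under
# /usr; many distributions keep core utilities in /bin and /sbin and place
# other programs under /usr/bin and /usr/sbin.
ls /
ls /usr
```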
2.2

The boot process begins when the BIOS (Basic Input/Output System) initializes the hardware and tests the main memory, a step known as POST (Power On Self Test). The BIOS software is stored on a ROM chip on the motherboard. After this, the remainder of the boot process is controlled by the operating system.
Once the POST is completed, the system control passes from the BIOS to the boot loader. The boot loader is usually stored on one of the hard disks in the system, either in the boot sector (for traditional BIOS/MBR systems) or the EFI partition (for more recent (Unified) Extensible Firmware Interface, or EFI/UEFI, systems). Up to this stage, the machine does not access any mass storage media. Thereafter, information on the date, time, and the most important peripherals is loaded from the CMOS values (after a technology used for the battery-powered memory store, which allows the system to keep track of the date and time even when it is powered off).
A number of boot loaders exist for Linux; the most common ones are GRUB (for GRand
Unified Boot loader) and ISOLINUX (for booting from removable media). Most Linux
boot loaders can present a user interface for choosing alternative options for booting
Linux, and even other operating systems that might be installed. When booting Linux, the
boot loader is responsible for loading the kernel image and the initial RAM disk (which
contains some critical files and device drivers needed to start the system) into memory.
Boot Loader in Action

The boot loader has two distinct stages:

First Stage: For systems using the BIOS/MBR method, the boot loader resides at the first sector of the hard disk, also known as the Master Boot Record (MBR). The size of the MBR is just 512 bytes. In this stage, the boot loader examines the partition table and finds a bootable partition. Once it finds a bootable partition, it then searches for the second stage boot loader, e.g., GRUB, and loads it into RAM (Random Access Memory).
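The 512-byte size of the MBR sector can be illustrated without touching a real disk; the sketch below uses a scratch image file, since reading an actual device such as /dev/sda would require root privileges:

```shell
# Create a 1 MiB scratch "disk" image, then copy off its first sector the
# same way firmware reads the MBR: one 512-byte block from the start.
dd if=/dev/zero of=disk.img bs=1M count=1 2>/dev/null
dd if=disk.img of=sector0.bin bs=512 count=1 2>/dev/null
wc -c sector0.bin    # an MBR-sized sector is exactly 512 bytes
```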
For systems using the EFI/UEFI method, the firmware reads its Boot Manager data to determine which UEFI application is to be launched and from where (i.e., from which disk and partition the EFI partition can be found). The firmware then launches the UEFI application, for example, GRUB, as defined in the boot entry in the firmware's boot manager. This procedure is more complicated, but more versatile, than the older MBR methods.
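Boot entries like the ones GRUB presents are defined in its configuration; a hypothetical menu entry might look like the sketch below (the kernel version, partition, and device names are illustrative assumptions, and the file is usually generated by tools rather than hand-edited):

```
# A hypothetical entry in /boot/grub/grub.cfg
menuentry 'Linux' {
    set root=(hd0,1)                         # partition holding /boot
    linux /vmlinuz-5.15.0 root=/dev/sda1 ro  # kernel image and its root filesystem
    initrd /initrd.img-5.15.0                # initial RAM disk image
}
```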
Second Stage:
The second stage boot loader resides under /boot. A splash screen is displayed which allows us to choose which operating system (OS) to boot. After choosing the OS, the boot loader loads the kernel of the selected operating system into RAM and passes control to it. Kernels are almost always compressed, so the kernel's first job is to uncompress itself. After this, it will check and analyze the system hardware and initialize any hardware device drivers built into the kernel.
The Linux Kernel

The boot loader loads both the kernel and an initial RAM-based file system (initramfs) into memory so it can be used directly by the kernel. When the kernel is loaded in RAM, it immediately initializes and configures the computer's memory and also configures all the hardware attached to the system. This includes all processors, I/O subsystems, storage devices, etc. The kernel also loads some necessary user space applications.
The Initial RAM Disk

The initramfs file system image contains programs and binary files that perform all actions needed to mount the proper root file system, like providing kernel functionality for the needed file system and device drivers for mass storage controllers, with a facility called udev (for User Device) which is responsible for figuring out which devices are present, locating the drivers they need to operate properly, and loading them. After the root file system has been found, it is checked for errors and mounted.
The mount program instructs the operating system that a file system is ready for use, and
associates it with a particular point in the overall hierarchy of the file system (the mount
point). If this is successful, the initramfs is cleared from RAM and the init program on
the root file system (/sbin/init) is executed.
init handles the mounting and pivoting over to the final real root file system. If special
hardware drivers are needed before the mass storage can be accessed, they must be in
the initramfs image.
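Once boot completes, you can see which filesystem ended up mounted at the root mount point; a quick sketch:

```shell
# Show the filesystem mounted at "/" (the root mount point) together with
# its size and usage; this is the root filesystem located during boot.
df /
```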
/sbin/init and Services

Once the kernel has set up all its hardware and mounted the root filesystem, the kernel runs the /sbin/init program. This then becomes the initial process, which then starts other processes to get the system running. Most other processes on the system trace their origin ultimately to init; the exceptions are kernel processes, started by the kernel directly for managing internal operating system details.
Traditionally, this process startup was done
using conventions that date back to System V UNIX, with the system passing through a
sequence of runlevels containing collections of scripts that start and stop services. Each
runlevel supports a different mode of running the system. Within each runlevel,
individual services can be set to run, or to be shut down if running. Newer distributions
are moving away from the System V standard, but usually support the System V
conventions for compatibility purposes.
Besides starting the system, init is responsible for keeping the system running and for shutting it down cleanly. It acts as the "manager of last resort" for all non-kernel processes, cleaning up after them when necessary, and restarting user login services as needed when users log in and out.
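On a running Linux system you can confirm which program became the initial process; a minimal sketch (reads the Linux-specific /proc interface):

```shell
# PID 1 is the first userspace process. On traditional System V systems this
# is init; many newer distributions run systemd instead.
cat /proc/1/comm
```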
Text-Mode Login
Near the end of the boot process, init starts a
number of text-mode login prompts (done by a
program called getty). These enable you to type
your username, followed by your password, and to
eventually get a command shell.
Usually, the default command shell is bash (the
GNU Bourne Again Shell), but there are a
number of other advanced command shells
available. The shell prints a text prompt,
indicating it is ready to accept commands; after
the user types the command and presses Enter,
the command is executed, and another prompt is
displayed after the command is done.
As you'll learn in the chapter "Command Line Operations", the terminals which run the command shells can be accessed using the ALT key plus a function key. Most distributions start six text terminals and one graphics terminal starting with F1 or F2. If the graphical environment is also started, switching to a text console requires pressing CTRL-ALT + the appropriate function key (with F7 or F1 being the GUI). As you'll see shortly, you may need to run the startx command in order to start or restart your graphical desktop.
2.3.
Your family is planning to buy its first car. What are the factors you need to consider
while purchasing a car? Your planning depends a lot on your requirements. For instance,
your budget, available finances, size of the car, type of engine, after-sales services, etc.
Similarly, determining which distribution to deploy also requires some planning. The figure shows some, but not all, choices; there are other choices for distributions, and standard embedded Linux systems are mostly neither Android nor Tizen, but are slimmed-down standard distributions.
Questions to Ask When Choosing a Linux Distribution
What types of packages are important to the organization? For example, web server,
word processing, etc.
How much hard disk space is available? For example, when installing Linux on an
embedded device, there will be space limitations.
How long is the support cycle for each release? For example, LTS releases have long
term support.
What hardware are you running the Linux distribution on? For example, x86, ARM, PPC, etc.
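The hardware-related questions above can be answered from a shell on the machine in question; a quick sketch:

```shell
# Architecture the distribution must support (e.g. x86_64, aarch64, ppc64le)
uname -m
# Disk space available on the root filesystem, relevant for space-limited installs
df -h /
```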
Summary
You have completed this chapter. Let's summarize the key concepts covered:
By dividing the hard disk into partitions, data can be grouped and separated as
needed. When a failure or mistake occurs, only the data in the affected partition will
be damaged, while the data on the other partitions will likely survive.
The boot process has multiple steps, starting with BIOS, which triggers the boot
loader to start up the Linux kernel. From there the initramfs file system is invoked,
which triggers the init program to complete the startup process.
Determining the appropriate distribution to deploy requires that you match your
specific system needs to the capabilities of the different distributions.
Chapter 3: Graphical Interface

You can use either a Command Line Interface (CLI) or a Graphical User Interface (GUI) when working with Linux. Most of the Linux distribution families that we explicitly cover in this course use some version of GNOME as the default desktop manager. However, since in many cases we use just a single distro for illustration, we've used GNOME for the openSUSE visuals throughout this course. If you are using KDE, your experience will vary somewhat from what is shown.
3.1.
GNOME is a popular desktop environment with an easy-to-use graphical user interface. It is bundled as the default desktop environment for many distributions, including Red Hat Enterprise Linux, SUSE Linux Enterprise, and Debian. GNOME has menu-based navigation and is sometimes an easy transition for at least some Windows users. However, as you'll see, the look and feel can be quite different across distributions, even if they are all using GNOME.
Another common desktop environment, very important in the history of Linux and also widely used, is KDE, which is used by default in openSUSE. Other alternatives for a desktop environment include Unity (used on Ubuntu, based on GNOME), Xfce, and LXDE. Most desktop environments follow a similar structure to GNOME.
GUI Startup

When you install a desktop environment, the X display manager starts at the end of the boot process. This X display manager is responsible for starting the graphics system, logging in the user, and starting the user's desktop environment. You can often select from a choice of desktop environments when logging in to the system.

The default display manager for GNOME is called gdm. Other popular display managers include lightdm (used on Ubuntu) and kdm (associated with KDE).
Locking the Screen

It is often a good idea to lock your screen to prevent other people from accessing your session while you are away from your computer. Note that this does not suspend the computer; all your applications and processes continue to run while the screen is locked. There are two ways to lock your screen:

1. Using the graphical interface.
2. Using the keyboard shortcut CTRL-ALT-L.

Note: The keyboard shortcut for locking the screen in the three distros can be changed as indicated below:
Suspending

Most modern computers support suspend mode or sleep mode when you stop using your computer for a short while. Suspend mode saves the current system state and allows you to resume your session more quickly while remaining on but using very little power. It works by keeping your system's applications, desktop, and so on in system RAM, but turning off all of the other hardware. The suspend mode bypasses the time for a full system start-up and continues to use minimal power.
Suspending in CentOS

To suspend the system in CentOS, perform the following steps:

1. On the CentOS desktop screen, click the System menu on the top bar.
2. Click Shut Down to shut down the system.
3. Click Hibernate to switch the system to the sleep mode.
Note: Not all computers support suspend mode properly, so it's best to try this feature out on a new, blank session before using it with applications and files open. If your computer goes into suspend mode but fails to wake up when you attempt to use the mouse or the keyboard, you may need to force the power off by holding down the power button for five seconds or by using the mechanical hardware switch if supported.
Suspending in openSUSE
To suspend the system in openSUSE,
perform the following steps:
3.2 Basic Operations
Even experienced users can forget the precise command that launches an application, or exactly what options and arguments it requires. Fortunately, Linux allows you to locate and launch applications from the graphical desktop instead:
In CentOS, applications can be opened from the Applications menu in the upper-left
corner of the screen.
In Ubuntu, applications can be opened from the Dash button in the upper-left corner
of the screen.
For KDE, and other environments, applications can be opened from the button in the
lower-left corner.
Submenus
Submenus for different types of
applications include:
Accessories
Games
Graphics
Internet
Office
System Tools
On the following screens you will learn how to perform basic operations in Linux using
the Graphical Interface.
Locating Applications
Default Applications

Multiple applications are available to accomplish various tasks and to open a file of a given type. For example, you can click on a web address while reading an email and launch a browser such as Firefox or Chrome. The file managing program can be used to set the default application to be used for any particular file type.
Default Directories
Default Directories in Ubuntu
You can also click the magnifying glass icon on the top-right of the File
Manager window to search for files or directories that exist inside
your home directory.
Viewing Files
Nautilus (the name of the File Manager or file browser) allows you to view files and directories in several different formats.

To view files in the Icons, List, or Compact formats, click the View drop-down and select your view, or press CTRL-1, CTRL-2, or CTRL-3, respectively.

In addition, you can also arrange the files and directories by Name, Size, Type, or Modification Date for further sorting. To do so, click View and select Arrange Items.

The file browser provides multiple ways to customize your window view to facilitate easy drag-and-drop file operations. You can also alter the size of the icons by selecting Zoom In and Zoom Out under the View menu.

To open Nautilus in graphical mode, press ALT-F2 and search for Nautilus, then click the icon that appears; this opens the graphical interface for the program.

The shortcut key to get to the search text box is CTRL-F. You can exit the search text box view by clicking the Search button again. Another quick way to access a specific directory is to press CTRL-L, which will give you a Location text box to type in a path to a directory.
More about Searching for Files
Nautilus allows you to refine your
search beyond the initial keyword by
providing drop-down menus to further
filter the search.
1. Based on Location or File Type,
select additional criteria from the
drop-down.
2. To regenerate the search, click the Reload button.
3. To add multiple search criteria, click the + button and select additional search criteria.

For example, if you want to find a PDF file containing the word Linux in your home directory, navigate to your home directory and search for the word Linux. You should see that the default search criterion limits the search to your home directory already. To finish the job, click the + button to add another search criterion, select File Type for the type of criterion, and select PDF under the File Type drop-down.
Editing a File
3.3 Graphical Desktop
Each Linux distribution comes with
its own set of desktop backgrounds.
You can change the default by
choosing a new wallpaper or selecting
a custom picture to be set as the
desktop background. If you do not
want to use an image as the
background, you can select a color to
be displayed on the desktop instead.
In addition, you can also change the desktop theme, which changes the look and feel of
the Linux system. The theme also defines the appearance of application windows.
In this section, you will learn how to change the desktop background and theme.
Desktop Background
If you do not like any of the
installed wallpapers, you can use
different shades of color as the
background using the Colors and
Gradients drop-down in
the Appearance window.
There are three types of color:
solid, horizontal gradient, and
vertical gradient. Click the box at
the bottom and pick the effect between solid and the two gradients. In addition, you can
also install packages that contain wallpapers by searching for packages using wallpaper
as a keyword.
Changing the Theme
The visual appearances of applications
(the buttons, scroll bars, widgets, and
other graphical components) are
controlled by a
theme. GNOME comes with a set of
different themes which can change the
way your applications look.
The exact method for changing your
theme will depend on your
distribution. For example, for Ubuntu you can right-click anywhere on the desktop and
select a different theme from the Theme drop-down. For CentOS, click
System > Preferences > Appearance.
More About Changing the Theme
Summary
You have completed this chapter. Let's summarize the key concepts covered:
GNOME is a popular desktop environment and graphical user interface that runs on top
of the Linux operating system.
The gdm display manager presents the user with the login screen which prompts for the
login username and password.
Logging out through the desktop environment kills all processes in your current X session
and returns to the display manager login screen.
The Places menu contains entries that allow you to access different parts of the
computer and the network.
Each Linux distribution comes with its own set of desktop backgrounds.
GNOME comes with a set of different themes which can change the way your
applications look.
Apply system, display, and date and time settings using the System Settings panel.
Note that we will revisit all these tasks later when we discuss how to accomplish
them from the command line interface.
4.1.
System Settings
The System Settings panel allows
you to control most of the basic
configuration options and desktop
settings such as specifying the screen
resolution, managing network
connections, or changing the date and
time of the system.
As we mentioned in Chapter 3, we use
the GNOME Desktop Manager for
the visuals in this course, as it is the default for CentOS and Ubuntu and readily available
on openSUSE (for which the default is the KDE Desktop Manager).
The procedure to access System Settings varies according to distribution:
- CentOS: click System Preferences.
(root) access. If possible, you should configure the settings in
specific time servers run by the distribution. This means that no setup, beyond "on or off",
is required for network time synchronization. If desired, more detailed configuration is
possible by editing the standard NTP configuration file (/etc/ntp.conf) for Linux NTP
utilities.
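As a sketch of such hand configuration, a minimal /etc/ntp.conf often contains little more than a few server lines (the pool hostnames below are the public ntp.org defaults; your distribution may preconfigure its own servers and drift-file path):

```
# /etc/ntp.conf (sketch)
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
driftfile /var/lib/ntp/ntp.drift
```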
4.2.
Network Manager
Network Configuration
All Linux distributions have network configuration files, but file formats and locations
can differ from one distribution to another.
Hand editing of these files can handle quite
complicated setups, but is not very dynamic or
easy to learn and use. The Network
Manager utility was developed to make things
easier and more uniform across distributions. It
can list all available networks (both wired and
wireless), allow the choice of a wired, wireless
or mobile broadband network, handle passwords, and set up Virtual Private Networks
(VPNs). Except for unusual situations, it's generally best to let the Network
Manager establish your connections and keep track of your settings.
In this section, you will learn how to manage network connections, including wired and
wireless connections, and mobile broadband and VPN connections.
Wired and Wireless Connections
Wired connections usually do not require complicated or manual configuration. The
hardware interface and signal presence are automatically detected, and then Network
Manager sets the actual network settings via DHCP (Dynamic Host Configuration
Protocol).
For static configurations that don't use DHCP, manual setup can also be done easily
through Network Manager. You can also change the Ethernet Media Access Control
(MAC) address if your hardware supports it. (The MAC address is a unique hexadecimal
number of your network card.)
Wireless networks are not connected to the machine by default. You can view the list of
available wireless networks and see which one you are connected to by using Network
Manager. You can then add, edit, or remove known wireless networks, and also specify
which ones you want connected by default when present.
Configuring Wireless Connections in CentOS
To configure a Wireless Network
in CentOS:
1. Right-click the Network
Manager icon.
2. Select the Enable
Wireless check box.
3. Click the Network
Manager icon.
4. Select the wireless network you wish to connect to or More Networks if the network
you want is not in the first group shown.
5. Enter the password to access a secure wireless network the first time you connect to
that network. The password will be saved for subsequent connections.
6. To view your current network interface connections and to disconnect if desired,
click the Network Manager icon.
To configure a Wireless
Network in Ubuntu:
1. In the top panel, click Network
Manager.
2. Click Enable Wi-Fi to display a
list of available Wireless Networks.
3. Click the desired Wireless
Network.
4. For a secured network, enter the password.
5. To modify saved wireless network settings, click Edit Connections.
Configuring Wireless Connections in openSUSE
openSUSE looks different from CentOS or Ubuntu, but Wired, Wireless, Mobile
Broadband, VPN, and DSL are all available from the Network Connections dialog box
as you will see in the upcoming demonstration.
Mobile Broadband and VPN Connections
You can set up a mobile broadband
connection with Network Manager, which
will launch a wizard to set up the connection
details for each connection.
Once the configuration is done, the network
is configured automatically each time the
broadband network is attached.
It supports many VPN technologies, such as native IPSec, Cisco OpenConnect (via
either the Cisco client or a native open-source client),Microsoft PPTP, and OpenVPN.
You might get support for VPN as a separate package from your distributor. You need to
install this package if your preferred VPN is not supported.
4.3.
Summary
You have completed this chapter. Let's summarize the key concepts covered:
You can control basic configuration options and desktop settings through the System
Settings panel
Linux always uses Coordinated Universal Time (UTC) for its own internal timekeeping. You can
set Date and Time Settings from the System Settings window.
The Network Time Protocol is the most popular and reliable protocol for setting the
local time via Internet servers.
The Displays panel allows you to change the resolution of your display and
configure multiple screens.
Network Manager can present available wireless networks, allow the choice of a
wireless or mobile broadband network, handle passwords, and set up VPNs.
dpkg and RPM are the most popular package management systems used on Linux
distributions.
Debian distributions use dpkg and apt-based utilities for package management.
5.1.
- No GUI overhead.
- You can initiate graphical apps directly from the command line.
the machine at a pure text terminal with no running graphical interface. Most terminal emulator
programs support multiple terminal sessions by opening additional tabs or windows.
- xterm
- rxvt
- konsole
- terminator
Virtual Terminals
Virtual Terminals
(VT) are console sessions
that use the entire display
and keyboard outside of a
graphical environment.
Such terminals are
considered "virtual"
because although there
can be multiple active
terminals, only one
terminal remains visible at a time. A VT is not quite the same as a command line terminal window;
you can have many of those visible at once on a graphical desktop.
One virtual terminal (usually number one or seven) is reserved for the graphical environment, and text
logins are enabled on the unused VTs. Ubuntu uses VT 7, but CentOS/RHEL and openSUSE use VT
1 for the graphical display.
An example of a situation where using the VTs is helpful is when you run into problems with the
graphical desktop. In this situation, you can switch to one of the text VTs and troubleshoot.
To switch between the VTs, press CTRL-ALT plus the function key corresponding to the VT. For
example, press CTRL-ALT-F6 for VT 6. (Actually, you only have to press the ALT-F6 key combination
if you are in a VT not running X and want to switch to another VT.)
command [options] [arguments]
The command is the name of the program you are executing. It may be followed by one or
more options (or switches) that modify what the command may do. Options usually start with one or
two dashes, for example, -p or --print, in order to differentiate them from arguments, which represent
what the command operates on.
However, plenty of commands have no options, no arguments, or neither. You can also type other
things at the command line besides issuing
commands, such as setting environment
variables.
Use the sudo service gdm stop or sudo service lightdm stop commands to stop the graphical user
interface in Debian-based systems. On RPM-based systems, typing sudo telinit 3 may have the same
effect of killing the GUI.
sudo
All the demonstrations created have a user
configured with sudo capabilities to provide
the user with administrative (admin)
privileges when required. sudo allows users
to run programs using the security
privileges of another user, generally root (superuser). The functionality of sudo is similar to that
of run as in Windows.
On your own systems, you may need to set up and enable sudo to work correctly. To do this, you need
to follow some steps that we won't explain in much detail now, but you will learn about later in this
course. On Ubuntu, sudo is already set up for you during installation. If you are
running a distribution in the Fedora or openSUSE families, you will likely need to set
up sudo to work properly for you after the initial installation.
Next, you will learn the steps to set up and run sudo on your system.
1. You will need to make modifications as the administrative, or superuser, root. While sudo will
become the preferred method of doing this, we don't have it set up yet, so we will use su (which
we will discuss later in detail) instead. At the command line prompt, type su and
press Enter. You will then be prompted for the root password, so enter it and press Enter. You
will notice that nothing is printed; this is so others cannot see the password on the screen. You
should end up with a different looking prompt, often ending with #. For example:
$ su
Password:
#
2. Now you need to create a configuration file to enable your user account to use sudo. Typically,
this file is created in the /etc/sudoers.d/ directory with the name of the file the same as your
username. For example, for this demo, let's say your username is student. After doing step 1,
you would then create the configuration file for student by doing this:
# echo "student ALL=(ALL) ALL" > /etc/sudoers.d/student
3. Finally, some Linux distributions will complain if you don't also change permissions on the file
by doing:
# chmod 440 /etc/sudoers.d/student
That should be it. For the rest of this course, if you use sudo you should be properly set up. When
using sudo, by default you will be prompted to give a password (your own user password) at least the
first time you do it within a specified time interval. It is possible (though very insecure) to
configure sudo to not require a password, or to change the time window in which the password does
not have to be repeated with every sudo command.
5.2.
Basic Operations
Once your session is started (either by logging in to a text terminal or via a graphical terminal
program) you can also connect and log in to remote systems via the Secure Shell (SSH) utility. For
example, by typing ssh <user>@<remote-system>, SSH would connect securely to the remote
machine and give you a command line terminal window, using passwords (as with regular logins) or
cryptographic keys (a topic we won't discuss) to prove your identity.
then control shutting down or rebooting the system. It is important to always shut down properly;
failure to do so can result in damage to the system and/or loss of data.
The halt and poweroff commands issue shutdown -h to halt the system; reboot issues shutdown -r
and causes the machine to reboot instead of just shutting down. Both rebooting and shutting down
from the command line require superuser (root) access. When administering a multiuser system, you
have the option of notifying all users prior to shutdown as in:
$ sudo shutdown -h 10:00 "Shutting down for scheduled maintenance."
Locating Applications
Depending on the specifics of your
particular distribution's policy, programs
and software packages can be installed in
various directories. In general, executable
programs should live in
the /bin, /usr/bin, /sbin, /usr/sbin directories
or under /opt.
If which does not find the program, whereis is a good alternative because it looks for packages in a
broader range of system directories:
$ whereis diff
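When scripting such a lookup, the shell built-in command -v does the same $PATH search as which and is always available (a sketch):

```shell
#!/bin/sh
# Report where 'diff' lives on $PATH, or say it is missing.
path_to_diff=$(command -v diff || true)
if [ -n "$path_to_diff" ]; then
    echo "diff found at: $path_to_diff"
else
    echo "diff is not on PATH"
fi
```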
Accessing Directories
When you first log into a system or open a terminal, the default directory should be your home
directory; you can print the exact path of this by typing echo $HOME. (Note that some Linux
distributions actually open new graphical terminals in $HOME/Desktop.) The following commands
are useful for directory navigation:
Command      Result
pwd          Displays the present working directory
cd ~ or cd   Change to your home directory
cd ..        Change to the parent directory
cd -         Change to the previous working directory
1. Absolute pathname: An absolute pathname begins with the root directory and follows the tree,
branch by branch, until it reaches the desired directory or file. Absolute paths always start with /.
2. Relative pathname: A relative pathname starts from the present working directory. Relative
paths never start with /.
Multiple slashes (/) between directories and files are allowed, but all but one slash between elements in
the pathname are ignored by the system. ////usr//bin is valid, but is seen as /usr/bin by the system.
Most of the time it is most convenient to use relative paths, which require less typing. Usually you take
advantage of the shortcuts provided by: . (present directory), .. (parent directory) and ~ (your home
directory).
For example, suppose you are currently working in your home directory and wish to move to
the /usr/bin directory. The following two ways will bring you to the same directory from
your home directory:
Command   Usage
cd /      Changes your current directory to the root (/) directory (or path you supply)
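These path rules can be sketched in a shell session (assuming /usr/bin exists, as it does on essentially every Linux system):

```shell
cd /usr/bin
cd ..            # relative path: up to the parent directory
pwd              # /usr
cd ./bin         # . refers to the present directory
pwd              # /usr/bin
cd ////usr//bin  # extra slashes are collapsed by the system
pwd              # /usr/bin
```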
Command   Usage
ls        List the contents of the present working directory
ls -a     List all files including hidden files and directories (those whose name start with .)
tree      Displays a tree view of the filesystem
Note that two files now appear to exist. However, a closer inspection of the file listing shows that this
is not quite true.
$ ls -li file1 file2
The -i option to ls prints out in the first column the inode number, which is a unique quantity for each
file object. This field is the same for both of these files; what is really going on here is that it is
only one file but it has more than one name associated with it, as is indicated by the 3 that appears in
the ls output. Thus, there already was another object linked to file1 before the command was
executed.
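The hard-link behavior described above can be reproduced in a scratch directory (a sketch; mktemp -d creates a throwaway directory so nothing on your system is touched):

```shell
cd "$(mktemp -d)"
echo "some data" > file1
ln file1 file2        # create a second name (hard link) for the same inode
ls -li file1 file2    # identical inode numbers; link count is now 2
[ file1 -ef file2 ] && echo "file1 and file2 are the same file object"
```

Because both names refer to one inode, editing through either name changes the single underlying file.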
Symbolic Links
Symbolic (or Soft) links are created with
the -s option as in:
$ ln -s file1 file4
$ ls -li file1 file4
Unlike hard links, soft links can point to objects even on different filesystems (or partitions) which
may or may not be currently available or even exist. In the case where the link does not point to a
currently available or existing object, you obtain a dangling link.
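A dangling link is easy to demonstrate in a scratch directory (a sketch):

```shell
cd "$(mktemp -d)"
echo "some data" > file1
ln -s file1 file4     # soft link: a small separate file holding the target's name
ls -li file1 file4    # file4 has its own inode and shows: file4 -> file1
rm file1              # remove the target
[ -L file4 ] && [ ! -e file4 ] && echo "file4 is now a dangling link"
```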
Hard links are very useful and they save space, but you have to be careful with their use, sometimes in
subtle ways. For one thing, if you remove either file1 or file2 in the example on the previous screen,
the inode object (and the remaining file name) will remain, which might be undesirable as it may lead
to subtle errors later if you recreate a file of that name.
If you edit one of the files, exactly what happens depends on your editor; most editors
including vi and gedit will retain the link by default but it is possible that modifying one of the names
may break the link and result in the creation of two objects.
directories, walking in reverse order (the most recent directory will be the first one retrieved
with popd). The list of directories is displayed with the dirs command.
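A short sketch of the directory stack under bash (pushd, popd, and dirs are shell builtins):

```shell
#!/bin/bash
cd /tmp
pushd /usr > /dev/null    # remember /tmp on the stack, move to /usr
pushd /etc > /dev/null    # remember /usr, move to /etc
dirs                      # /etc /usr /tmp  -- most recent first
popd > /dev/null          # pop back to /usr
pwd                       # /usr
```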
5.3.
In Linux, all open files are represented internally by what are called file descriptors. Simply put, these
are represented by numbers starting at zero. stdin is file descriptor 0, stdout is file descriptor 1,
and stderr is file descriptor 2. Typically, if other files are opened in addition to these three, which are
opened by default, they will start at file descriptor 3 and increase from there.
I/O Redirection
Through the command shell we can redirect the three standard filestreams so that we can get input
from either a file or another command instead of from our keyboard, and we can write output and
errors to files or send them as input for subsequent commands.
For example, if we have a program called do_something that reads from stdin and writes
to stdout and stderr, we can change its input source by using the less-than sign ( < ) followed by the
name of the file to be consumed for input data:
$ do_something < input-file
If you want to send the output to a file, use the greater-than sign (>) as in:
$ do_something > output-file
Because stderr is not the same as stdout, error messages will still be seen on the terminal windows in
the above example.
If you want to redirect stderr to a separate file, you use stderr's file descriptor number (2), the
greater-than sign (>), followed by the name of the file you want to hold everything the running
A special shorthand notation can be used to put anything written to file descriptor 2 (stderr) in the
same place as file descriptor 1 (stdout): 2>&1
$ do_something > all-output-file 2>&1
bash permits an easier syntax for the above:
$ do_something >& all-output-file
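The redirections above can be sketched with ordinary commands standing in for do_something (run in a scratch directory; ls is used only to produce an error message on stderr):

```shell
cd "$(mktemp -d)"
printf 'hello\n' > input-file
cat < input-file > output-file        # stdin from a file, stdout to a file
cat output-file                       # hello

ls no-such-file > out.txt 2> err.txt || true   # stdout and stderr go to separate files
ls no-such-file > all.txt 2>&1 || true         # both streams go to one file
cat err.txt                           # the error message landed in err.txt
```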
Pipes
The UNIX/Linux philosophy is to have many simple and short programs (or commands) cooperate
together to produce quite complex results, rather than have one complex program with many possible
options and modes of operation. In order to accomplish this, extensive use of pipes is made; you can
pipe the output of one command or program into another as its input.
In order to do this we use the vertical-bar, |, (pipe symbol) between commands as in:
$ command1 | command2 | command3
The above represents what we often call a pipeline and allows Linux to combine the actions of several
commands into one. This is extraordinarily efficient because command2 and command3 do not have
to wait for the previous pipeline commands to complete before they can begin hacking at the data in
their input streams; on multiple CPU or core systems the available computing power is much better
utilized and things get done quicker. In addition there is no need to save output in (temporary) files
between the stages in the pipeline, which saves disk space and reduces reading and writing from disk,
which is often the slowest bottleneck in getting something done.
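As a small sketch of a pipeline, each stage below feeds the next with no temporary files:

```shell
# Count the distinct lines in a stream: sort groups duplicates,
# uniq drops them, wc -l counts what remains.
printf 'pear\napple\npear\nplum\n' | sort | uniq | wc -l    # 3 distinct lines
```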
In this section, you will learn how to use the locate and find utilities, and how to
use wildcards in bash.
locate
The locate utility program performs a search through a previously constructed database of files and
directories on your system, matching all entries that contain a specified character string. This can
sometimes result in a very long list.
To get a shorter, more relevant list, we can use the grep program as a filter; grep will print only the
lines that contain one or more specified strings as in:
$ locate zip | grep bin
which will list all files and directories with both "zip" and "bin" in their name. (We will cover grep in
much more detail later.) Notice the use of | to pipe the two commands together.
locate utilizes the database created by another program, updatedb. Most Linux systems run this
automatically once a day. However, you can update it at any time by just running updatedb from the
command line as the root user.
Wildcard   Result
[set]      Matches any character in the set of characters; for example, [adf] will
           match any occurrence of "a", "d", or "f"
[!set]     Matches any character not in the set of characters
To search for files using the ? wildcard, replace each unknown character with ?, e.g. if you know only
the first 2 letters are 'ba' of a 3-letter filename with an extension of .out, type ls ba?.out.
To search for files using the * wildcard, replace the unknown string with *, e.g. if you remember only
that the extension was .out, type ls *.out.
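Both wildcards can be sketched against a few scratch files:

```shell
cd "$(mktemp -d)"
touch bar.out baz.out trial.out notes.txt
ls ba?.out    # bar.out baz.out   (? matches exactly one character)
ls *.out      # bar.out baz.out trial.out
```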
Using find
When no arguments are given, find lists all
files in the current directory and all of its
subdirectories. Commonly used options to
shorten the list include -name (only list files
with a certain pattern in their name), iname (also ignore the case of file
names), and -type (which will restrict the
results to files of a certain specified type,
such as d for directory, l for symbolic link
or f for a regular file, etc).
One can also use the -ok option which behaves the same as -exec except that find will prompt you for
permission before executing the command. This makes it a good way to test your results before blindly
executing any potentially dangerous commands.
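A sketch of find with -name, -type, and -exec in a scratch tree (substituting -ok for -exec would prompt before each removal):

```shell
cd "$(mktemp -d)"
mkdir -p subdir
touch a.txt subdir/b.txt c.log
find . -name '*.txt'                  # ./a.txt ./subdir/b.txt (order may vary)
find . -type d                        # . ./subdir
find . -name '*.log' -exec rm {} \;   # run rm on every match
[ ! -e c.log ] && echo "c.log removed"
```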
created, last used, etc, or based on their size. Both are easy to accomplish.
Here, -ctime is when the inode metadata (i.e., file ownership, permissions, etc.) last changed; it is
often, but not necessarily, when the file was first created. You can also search for accessed/last read
(-atime) or modified/last written (-mtime) times. The number is the number of days and can be
expressed as either a number (n) that means exactly that value, +n, which means greater than that
number, or -n, which means less than that number. There are similar options for times in minutes (as
in -cmin, -amin, and -mmin).
Finding based on sizes:
$ find / -size 0
Note the size here is in 512-byte blocks, by default; you can also specify bytes (c), kilobytes (k),
megabytes (M), gigabytes (G), etc. As with the time numbers above, file sizes can also be exact
numbers (n), +n or -n. For details consult the man page for find.
For example, to find files greater than 10 MB in size and running a command on those files:
$ find / -size +10M -exec command {} \;
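A sketch distinguishing exact and relative size tests (the c suffix means bytes):

```shell
cd "$(mktemp -d)"
: > empty                     # a zero-byte file
printf '12345' > five-bytes   # a 5-byte file
find . -type f -size 0        # ./empty
find . -type f -size -10c     # both files: each is under 10 bytes
find . -type f -size +2c      # ./five-bytes: larger than 2 bytes
```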
5.4.
Linux provides many commands that help you in viewing the contents of a file, creating a new file or
an empty file, changing the timestamp of a file, and removing and renaming a file or directory. These
commands help you in managing your data and files and in ensuring that the correct data is available at
the correct location.
In this section, you will learn how to manage files.
Viewing Files
You can use the following utilities to view files:
Command   Usage
cat       Used for viewing files that are not very long; it does not provide any scrollback.
tac       Used to look at a file backwards, starting with the last line.
less      Used to view larger files because it is a paging program; it pauses at each
          screenful of text and provides scrolling and search capabilities.
tail      Used to print the last 10 lines of a file by default. You can change the number of
          lines by doing -n 15 or just -15 if you wanted to look at the last 15 lines instead
          of the default.
head      The opposite of tail; by default, it prints the first 10 lines of a file.
The -t option allows you to set the date and time stamp of the file.
To create a sample directory named sampdir under the current directory, type mkdir sampdir.
To create a sample directory called sampdir under /usr, type mkdir /usr/sampdir.
Removing a directory is simply done with rmdir. The directory must be empty or it will fail. To
remove a directory and all of its contents you have to do rm -rf as we shall discuss.
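The commands above, sketched in a scratch directory (the timestamp passed to -t is arbitrary):

```shell
cd "$(mktemp -d)"
touch newfile                 # create an empty file (or update timestamps)
touch -t 03201600 newfile     # set the timestamp to Mar 20, 16:00 of this year
mkdir sampdir
touch sampdir/file1
rmdir sampdir 2> /dev/null || echo "rmdir failed: sampdir is not empty"
rm -rf sampdir                # removes the directory and all of its contents
[ ! -d sampdir ] && echo "sampdir removed"
```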
Removing a File
Command   Usage
mv        Rename a file
rm        Remove a file
rm -f     Forcefully remove a file
rm -i     Interactively remove a file
If you are not certain about removing files that match a pattern you supply, it is always good to
run rm interactively (rm -i) to prompt before every removal.
The rm -rf command is extremely dangerous and should be used with the utmost care, especially when used by root (recall
that recursive means drilling down through all sub-directories, all the way down a tree). Below are the
commands used to rename or remove a directory:
Command   Usage
mv        Rename a directory
rmdir     Remove an empty directory
rm -rf    Forcefully remove a directory and all of its contents
student@quad32 $
This could prove useful if you are working in multiple roles and want to be always reminded of who
you are and what machine you are on. The prompt above could be implemented by setting the PS1
variable to: \u@\h \$
For example:
$ echo $PS1
\$
$ PS1="\u@\h \$ "
coop@quad64 $ echo $PS1
\u@\h \$
coop@quad64 $
5.5.
Installing Software
There are two broad families of package managers: those based on Debian and those which
use RPM as their low-level package manager. The two systems are incompatible, but provide the
same features at a broad level.
Most of the time, users need to work only with the high-level tool, which will take care of calling the
low-level tool as needed. Dependency tracking is a particularly important feature of the high-level tool, as
installing a single package could result in many dozens or even hundreds of dependent packages being
installed.
The Advanced Packaging Tool (apt) is the underlying package management system that manages
software on Debian-based systems. While it forms the backend for graphical package managers, such
as the Ubuntu Software Center and synaptic, its native user interface is at the command line, with
programs that include apt-get and apt-cache.
zypper is a package management system for openSUSE that is based on RPM. zypper also allows
you to manage repositories from the command line. zypper is fairly straightforward to use and
resembles yum quite closely.
Summary
You have completed this chapter. Let's summarize the key concepts covered.
Virtual terminals (VT) in Linux are consoles, or command line terminals that use the connected
monitor and keyboard.
Different Linux distributions start and stop the graphical desktop in different ways.
A terminal emulator program on the graphical desktop works by emulating a terminal within a
window on the desktop.
The Linux system allows you to either log in via text terminal or remotely via the console.
When typing your password, nothing is printed to the terminal, not even a generic symbol to
indicate that you typed.
The preferred method to shut down or reboot the system is to use the shutdown command.
An absolute pathname begins with the root directory and follows the tree, branch by branch, until
it reaches the desired directory or file.
cd remembers where you were last, and lets you get back there with cd -.
locate performs a database search to find all file names that match a given pattern.
find is able to run commands on the files that it lists, when used with the -exec option.
touch is used to set the access, change, and edit times of files as well as to create empty files.
The Advanced Packaging Tool (apt) package management system is used to manage
installed software on Debian-based systems.
You can use the Yellowdog Updater Modified (yum) open-source command-line package-management utility for RPM-compatible Linux operating systems.
The zypper package management system is based on RPM and used for openSUSE.
6.1.
Documentation Sources
GNU Info
6.2.
The man pages are the most often-used source of Linux documentation. They provide in-depth
documentation about many programs and utilities as well as other topics, including configuration files,
system calls, library routines, and the kernel.
Typing man with a topic name as an argument retrieves the information stored in the topic's man
pages. Some Linux distributions require every installed program to have a corresponding man page,
which explains the depth of coverage. (Note: man is actually an abbreviation for manual.) The man
page structure was first introduced in the early UNIX versions of the early 1970s.
The man pages are often converted to:
Web pages
Published books
Graphical help
Other formats
man
The man program searches, formats, and displays the information contained in the man
pages. Because many topics have a lot of information, output is piped through a terminal
pager program such as less to be viewed one page at a time; at the same time the information is
formatted for a good visual display.
When no options are given, by default one sees only the dedicated page specifically about the
topic. You can broaden this to view all man pages containing a string in their name by using
the -f option. You can also view all man pages that discuss a specified subject (even if the
specified subject is not present in the name) by using the -k option.
Manual Chapters
The man pages are divided into nine numbered chapters
(1 through 9). Sometimes, a letter is appended to the
chapter number to identify a specific topic. For example,
many pages describing part of the X Window API are in
chapter 3X.
With the -a parameter, man will display all pages with the given name in all chapters, one
after the other.
$ man 3 printf
$ man -a printf
6.3.
GNU Info
Key   Function

6.4.
Help Command
Most commands have an available short description which can be viewed using the --help or the h option along with the command or application. For example, to learn more about theman command,
The --help option is useful as a quick reference and it displays information faster than
the man or info pages.
6.5.
GNOME: gnome-help
KDE: khelpcenter
Package Documentation
Linux documentation is also available as part
of the package management system. Usually
this documentation is directly pulled from the
upstream source code, but it can also contain
information about how the distribution
packaged and set up the software.
Online Resources
There are many places to access online Linux documentation, and a little bit of searching will
get you buried in it.
You can also find very helpful documentation for each distribution. Each distribution has its
own user-generated forums and wiki sections. Here are just a few links to such sources:
Ubuntu: https://help.ubuntu.com/
CentOS: https://www.centos.org/docs/
OpenSUSE: http://en.opensuse.org/Portal:Documentation
GENTOO: http://www.gentoo.org/doc/en
Moreover, you can use online search sites to locate helpful resources from all over the Internet,
including blog posts, forum and mailing list posts, news articles, and so on.
Summary
You have completed this chapter. Let's summarize the key concepts covered:
The main sources of Linux documentation are the man pages, GNU Info, the help options and
command, and a rich variety of online documentation sources.
The man pages provide in-depth documentation about programs and other topics about the
system including configuration files, system calls, library routines, and the kernel.
The GNU Info System was created by the GNU project as its standard documentation. It is
robust and is accessible via command line, web, and graphical tools using info.
Short descriptions for commands are usually displayed with the -h or --help argument.
You can type help at the command line to display a synopsis of built-in commands.
There are many other help resources both on your system and on the Internet.
7.1.
Accounts
All Linux users are assigned a unique user ID (uid), which is just an integer, as well as one or more
group IDs (gid), including a default one which is the same as the user ID.
Historically, Fedora-family systems start UIDs at 500; other distributions begin at 1000.
These numbers are associated with names through the files /etc/passwd and /etc/group.
Groups are used to establish a set of users who have common interests for the purposes of access
rights, privileges, and security considerations. Access rights to files (and devices) are granted on the
basis of the user and the group they belong to.
Adding a new user is done with useradd and removing an existing user is done with userdel. In the
simplest form, an account for the new user turkey would be created with:
$ sudo useradd turkey
which by default sets the home directory to /home/turkey, populates it with some basic files (copied
from /etc/skel) and adds a line to /etc/passwd such as:
turkey:x:502:502::/home/turkey:/bin/bash
and sets the default shell to /bin/bash. Removing a user account is as easy as typing userdel
turkey. However, this will leave the /home/turkey directory intact. This might be useful if it is a
temporary inactivation. To remove the home directory while removing the account one needs to use
the -r option to userdel.
Typing id with no argument gives information about the current user, as in:
$ id
uid=500(george) gid=500(george) groups=106(fuse),500(george)
If given the name of another user as an argument, id will report information about that other user.
Adding a user to an already existing group is done with usermod. For example, you would first look at
what groups the user already belongs to:
$ groups turkey
turkey : turkey
and then add the new group (the group name audio here is purely illustrative):
$ sudo usermod -a -G audio turkey
$ groups turkey
turkey : turkey audio
These utilities update /etc/group as necessary. groupmod can be used to change group properties such
as the Group ID (gid) with the -g option or its name with the -n option.
Removing a user from a group is somewhat trickier. The -G option to usermod must be given the
complete list of groups; any group not included in the list is removed from the user's membership.
su and sudo
When assigning elevated privileges, you can use the command su (switch or substitute user) to launch
a new shell running as another user (you must type the password of the user you are becoming). Most
often this other user is root, and the new shell allows the use of elevated privileges until it is exited. It
is almost always a bad (dangerous for both security and stability) practice to use su to become root.
Resulting errors can include deletion of vital files from the system and security breaches.
Granting privileges using sudo is less dangerous and is preferred. By default, sudo must be enabled on
a per-user basis. However, some distributions (such as Ubuntu) enable it by default for at least one
main user, or give this as an installation option.
In the chapter on Security that follows shortly, we will describe and compare su and sudo in detail.
When using sudo, you need to type only your own password, and the elevated privilege applies only to
the command at hand; once the command is complete you will return to being a normal unprivileged user.
sudo configuration files are stored in the /etc/sudoers file and in the /etc/sudoers.d/ directory. By
default, the sudoers.d directory is empty.
Startup Files
In Linux, the command shell program (generally bash) uses one or more startup files to configure the
environment. Files in the /etc directory define global settings for all users, while initialization
files in the user's home directory can include and/or override the global settings.
The startup files can do anything the user would like to do in every command shell, such as
customizing the prompt, defining command-line shortcuts and aliases, and setting the default text
editor. When you log in, bash looks for the following files, in this order:
1. ~/.bash_profile
2. ~/.bash_login
3. ~/.profile
The Linux login shell evaluates whatever startup file it comes across first and ignores the
rest. This means that if it finds ~/.bash_profile, it ignores ~/.bash_login and ~/.profile. Different
distributions may use different startup files.
However, every time you create a new shell or terminal window, you do not perform a full
system login; only the ~/.bashrc file is read and evaluated. Although this file is not read and evaluated
along with the login shell, most distributions and/or users include the ~/.bashrc file from within one of
the three user-owned startup files. In the Ubuntu, openSUSE, and CentOS distributions, the user must
make appropriate changes in the ~/.bash_profile file to include the ~/.bashrc file.
The .bash_profile will have certain extra lines, which in turn will collect the required customization
parameters from .bashrc.
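As a sketch, those extra lines in ~/.bash_profile commonly look something like the following (a widespread convention rather than a requirement):

```shell
# ~/.bash_profile -- read by login shells.
# If ~/.bashrc exists, source it so that login shells and
# non-login shells share the same customizations.
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
```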
7.2. Environment Variables
Environment variables are simply named quantities that have specific values and are
understood by the command shell, such as bash. Some of these are pre-set (built-in) by the system,
and others are set by the user either at the command line or within startup and other scripts. An
environment variable is actually no more than a character string that contains information used by one
or more applications.
There are a number of ways to view the values of currently set environment variables; one can
type set, env, or export. Depending on the state of your system, set may print out many more lines
than the other two methods.
$ set
BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore
BASH_ALIASES=()
...
$ env
SSH_AGENT_PID=1892
GPG_AGENT_INFO=/run/user/me/keyring-Ilf3vt/gpg:0:1
TERM=xterm
SHELL=/bin/bash
...
$ export
declare -x COLORTERM=gnome-terminal
declare -x COMPIZ_BIN_PATH=/usr/bin/
declare -x COMPIZ_CONFIG_PROFILE=ubuntu
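To illustrate the difference between a plain shell variable and an exported one (MYVAR is just an illustrative name):

```shell
# Create a shell variable; initially it is local to this shell
MYVAR="some value"
# A child process does not yet see it (prints an empty value)
bash -c 'echo "child sees: $MYVAR"'
# export makes it part of the environment inherited by children
export MYVAR
bash -c 'echo "child sees: $MYVAR"'
```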
Task
Command
Show the value of a specific variable: echo $SHELL
Export a new variable value: export VARIABLE=value (or VARIABLE=value; export VARIABLE)
Add a variable permanently: 1. Edit ~/.bashrc and add the line export VARIABLE=value
2. Type source ~/.bashrc or just .
~/.bashrc (dot ~/.bashrc); or just start a
new shell by typing bash
Command
Explanation
$ echo $HOME        # show the value of HOME
/home/me
$ cd /bin
$ pwd               # where are we? pwd = print (or present) working directory
/bin
$ cd                # cd with no arguments . . .
$ pwd               # . . . takes us back to HOME
/home/me
:path1:path2
path1::path2
In the first example (:path1:path2), there is a null directory before the first colon (:). Similarly,
for path1::path2 there is a null directory between path1 and path2. A null directory name in PATH
stands for the current directory.
To prefix a private bin directory to your path:
$ export PATH=$HOME/bin:$PATH
$ echo $PATH
/home/me/bin:/usr/local/bin:/usr/bin:/bin
PS1 is the primary prompt variable, which controls what your command-line prompt looks like. The
following special characters can be included in PS1:
\u - User name
\h - Host name
\w - Current working directory
\! - History number of this command
\d - Date
They must be surrounded by single quotes when they are used, as in the following example:
$ echo $PS1
$
$ export PS1='\u@\h:\w$ '
me@mypc:~$ # new prompt
me@mypc:~$
Even better practice would be to save the old prompt first and then restore, as in:
$ OLD_PS1=$PS1
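Continuing the example, a sketch of saving, changing, and later restoring the prompt (the starting prompt value here is assumed):

```shell
PS1='$ '               # assume a minimal starting prompt
OLD_PS1=$PS1           # save the current prompt string
PS1='\u@\h:\w$ '       # switch to the more informative prompt
# ... work in the shell with the new prompt ...
PS1=$OLD_PS1           # restore the original prompt
```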
7.3. Command History
bash keeps a record of previously entered commands and statements, which you can recall with the Up
and Down arrow keys. The list of commands is displayed with the most recent command appearing last in
the list. This information is stored in ~/.bash_history.
Key
Usage
Up/Down arrow keys: Browse through the list of commands previously executed
!! (pronounced as bang-bang): Execute the previous command again
CTRL-R: Search previously used commands
If you want to recall a command in the history list, but do not want to press the arrow key repeatedly,
you can press CTRL-R to do a reverse intelligent search.
As you start typing the search goes back in reverse order to the first command that matches the letters
you've typed. By typing more successive letters you make the match more and more specific.
The following is an example of how you can use the CTRL-R command to search through the
command history:
$ ^R
(reverse-i-search)'s': sleep 1000
$ sleep 1000
Syntax
Task
!$: Refer to the last argument of the previous command
!n: Execute the nth command in the history list
!string: Execute the most recent command that starts with string
All history substitutions start with !. In the line $ ls -l /bin /etc /var, the substitution !$ refers
to /var, which is the last argument in the line.
$ !1
echo $SHELL
/bin/bash
$ !sl
sleep 1000
$
Keyboard Shortcuts
You can use keyboard shortcuts to perform different tasks quickly. The table lists some of these
keyboard shortcuts and their uses.
Keyboard Shortcut
Task
CTRL-L: Clears the screen
CTRL-D: Exits the current shell
CTRL-Z: Puts the current process into suspended background
CTRL-C: Kills the current process
CTRL-H: Works the same as backspace
CTRL-A: Goes to the beginning of the line
CTRL-W: Deletes the word before the cursor
CTRL-U: Deletes from the beginning of the line to the cursor position
CTRL-E: Goes to the end of the line
Tab: Auto-completes files, directories, and binaries
7.4. Command Aliases
Creating Aliases
You can create customized commands or modify the behavior of already existing ones by
creating aliases. Most often these aliases are placed in your ~/.bashrc file so they are available to any
command shells you create.
Typing alias with no arguments will list currently defined aliases.
Please note that there should not be any spaces on either side of the equal sign, and the alias
definition needs to be placed within either single or double quotes if it contains any spaces.
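A minimal sketch of defining and using an alias (the name ll is a common but arbitrary choice; the shopt line is needed only in scripts, since interactive shells expand aliases by default):

```shell
# In a script, alias expansion must be enabled explicitly
shopt -s expand_aliases
# No spaces around '='; quotes are needed because the value has a space
alias ll='ls -l'
alias        # with no arguments, lists the currently defined aliases
ll /tmp      # now runs 'ls -l /tmp'
```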
7.5. File Permissions
File Ownership
In Linux and other UNIX-based operating systems, every file is associated with a user who is
the owner. Every file is also associated with a group (a subset of all users) which has an interest in the
file and certain rights, or permissions: read, write, and execute.
The following utility programs involve user and group ownership and permission setting.
Command
Usage
chown: Used to change user ownership of a file or directory
chgrp: Used to change group ownership of a file or directory
chmod: Used to change the permissions on a file, which can be done separately for the owner, the
group, and the rest of the world (often called other)
There are a number of different ways to use chmod. For instance, to give the owner and others execute
permission and remove the group write permission:
$ ls -l a_file
-rw-rw-r-- 1 coop coop 1601 Mar 9 15:04 a_file
$ chmod uo+x,g-w a_file
$ ls -l a_file
-rwxr--r-x 1 coop coop 1601 Mar 9 15:04 a_file
where u stands for user (owner), o stands for other (world), and g stands for group.
This kind of syntax can be difficult to type and remember, so one often uses a shorthand which lets
you set all the permissions in one step. This is done with a simple algorithm, and a single digit suffices
to specify all three permission bits for each entity. This digit is the sum of:
4 if read permission is desired.
2 if write permission is desired.
1 if execute permission is desired.
Thus 7 means read/write/execute, 6 means read/write, and 5 means read/execute.
When you apply this to the chmod command you have to give three digits for each degree of freedom,
such as in
$ chmod 755 a_file
$ ls -l a_file
-rwxr-xr-x 1 coop coop 1601 Mar 9 15:04 a_file
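You can verify the effect yourself on a scratch file (stat -c is the GNU coreutils form of the command):

```shell
# Create a scratch file and set all the permission bits in one step
touch /tmp/perm_demo
chmod 755 /tmp/perm_demo     # rwx for owner, r-x for group and others
stat -c '%a %A' /tmp/perm_demo
chmod 640 /tmp/perm_demo     # rw- for owner, r-- for group, none for others
stat -c '%a %A' /tmp/perm_demo
```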
Example of chown
Here, the user ownership of file-1 has been changed to root with a command such as sudo chown
root file-1 (run with elevated privileges), as the owner column of the ls -l output shows:
$ ls -l
total 4
-rw-rw-r--. 1 root bob 0 Mar 16 19:04 file-1
-rw-rw-r--. 1 bob bob 0 Mar 16 19:04 file-2
drwxrwxr-x. 2 bob bob 4096 Mar 16 19:04 temp
Example of chgrp
Now let's see an example of changing group ownership using chgrp; afterwards, the group of file-2
has changed from bob to bin:
$ sudo chgrp bin file-2
$ ls -l
total 4
-rw-rw-r--. 1 root bob 0 Mar 16 19:04 file-1
-rw-rw-r--. 1 bob bin 0 Mar 16 19:04 file-2
drwxrwxr-x. 2 bob bob 4096 Mar 16 19:04 temp
Summary
You have completed this chapter. Let's summarize the key concepts covered.
To find the currently logged-on users, you can use the who command.
To find the current user ID, you can use the whoami command.
The root account has full access to the system. It is never sensible to grant full root access to a
user.
You can assign root privileges to regular user accounts on a temporary basis using
the sudo command.
The shell program (bash) uses multiple startup files to create the user environment. Each file
affects the interactive environment in a different way. /etc/profile provides the global settings.
Advantages of startup files include that they customize the user's prompt, set the user's terminal
type, set the command-line shortcuts and aliases, and set the default text editor, etc.
An environment variable is a character string that contains data used by one or more
applications. The built-in shell variables can be customized to suit your requirements.
The history command recalls a list of previous commands which can be edited and recycled.
In Linux, various keyboard shortcuts can be used at the command prompt instead of long
actual commands.
You can customize commands by creating aliases. Adding an alias to ~/.bashrc will make it
available for other shells.
CHAPTER - 8: Text Editors
Learning Objectives:
How to create and edit files using the available Linux text editors.
vi and emacs, two advanced editors with both text-based and graphical interfaces.
8.1.
By now you have certainly realized Linux is packed with choices; when it comes to text editors, there
are many choices ranging from quite simple to very complex, including:
- nano
- gedit
- vi
- emacs
In this section, we will learn about nano and gedit, editors that are relatively simple and easy to
learn. Before we start, let's take a look at some cases where an editor is not needed.
Earlier we learned that a single greater-than sign (>) will send the output of a command to a file. Two
greater-than signs ( >>) will append new output to an existing file.
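The exact commands shown in the original illustrations are not reproduced here, but the two techniques might look like this sketch:

```shell
# Technique 1: redirect the output of echo; > creates (or overwrites)
# the file, >> appends to it
echo line one   >  /tmp/redirect_demo
echo line two   >> /tmp/redirect_demo
echo line three >> /tmp/redirect_demo

# Technique 2: cat with a here-document, terminated by the EOF marker
cat << EOF > /tmp/heredoc_demo
line one
line two
line three
EOF
```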
Both the above techniques produce a file with the following lines in it:
line one
line two
line three
and are extremely useful when employed by scripts.
As a graphical editor, gedit is part of the GNOME desktop system (kwrite is associated with KDE).
The gedit and kwrite editors are very easy to use and are extremely capable. They are also very
configurable. They look a lot like Notepad in Windows. Other variants such as kedit and kate are also
supported by KDE.
nano
nano is easy to use, and requires very little effort to learn. To open a file in nano, type nano
<filename> and press Enter. If the file doesn't exist, it will be created.
nano provides a two-line shortcut bar at the bottom of the screen that lists the available commands.
Some of these commands are:
CTRL-G: Display the help screen
CTRL-O: Write out (save) the file
CTRL-W: Search for a phrase in the file
CTRL-X: Exit nano (prompting to save if the file was modified)
gedit
gedit (pronounced 'g-edit') is a simple-to-use graphical editor that can only be run within a Graphical
Desktop environment. It is visually quite similar to the Notepad text editor in Windows, but is
actually far more capable and very configurable and has a wealth of plugins available to extend its
capabilities further.
To open a new file in gedit, find the program in your desktop's menu system, or from the command
line type gedit <filename>. If the file doesn't exist it will be created.
Using gedit is pretty straightforward and does not require much training. Its interface is composed of
quite familiar elements.
8.2. vi and emacs
Developers and administrators experienced in working on UNIX-like systems almost always use one of
the two venerable editing options: vi and emacs. Both are present or easily available on all
distributions and are completely compatible with the versions available on other operating systems.
Both vi and emacs have a basic purely text-based form that can run in a non-graphical environment.
They also have one or more X-based graphical forms with extended capabilities; these may be
friendlier for a less experienced user. While vi and emacs can have significantly steep learning curves
for new users, they are extremely efficient when one has learned how to use them.
You need to be aware that fights among seasoned users over which editor is better can be quite intense
and are often described as a holy war.
Introduction to vi
Usually the actual program installed on your system is vim which stands for vi Improved, and is
aliased to the name vi. The name is pronounced as vee-eye.
Even if you don't want to use vi, it is good to gain some familiarity with it: it is a standard tool
installed on virtually all Linux distributions. Indeed, there may be times when there is no other
editor available on the system.
GNOME extends vi with a very graphical interface known as gvim and KDE offers kvim. Either of
these may be easier to use at first.
When using vi, all commands are entered through the keyboard; you don't need to keep moving your
hands to use a pointer device such as a mouse or touchpad, unless you want to do so when using one of
the graphical versions of the editor.
Mode
Feature
Command: By default, vi starts in Command mode. Keyboard strokes are interpreted as commands that can
modify file contents.
Insert: Type i to switch to Insert mode from Command mode; keystrokes are then inserted into the file
as text. Press the Escape (Esc) key to return to Command mode.
Line: Type : to switch to Line mode from Command mode. Each line you enter is an external command,
including operations such as writing the file contents to disk or exiting. Line mode uses line editing
commands inherited from older line editors. Most of these commands are actually no longer used, but
some line editing commands are very powerful.
vimtutor
Typing vimtutor launches a short but very comprehensive tutorial for those who want to learn their
first vi commands. This tutorial is a good place to start learning vi. Even though it provides only an
introduction and just seven lessons, it has enough material to make you a very proficient vi user
because it covers a large number of commands. After learning these basic ones, you can look up new
tricks to incorporate into your list of vi commands because there are always more optimal ways to do
things in vi with less typing.
Modes in vi
vi provides three modes, as described in the table above. It is vital to not lose track of which mode
you are in. Many keystrokes and commands behave quite differently in different modes.
Command
Usage
vi myfile: Start the vi editor and edit the file myfile
vi -r myfile: Start vi and edit myfile in recovery mode from a system crash
:r file2: Read in file2 and insert it at the current position
:w: Write to the file
:w myfile: Write out to myfile
:w! file2: Overwrite file2
:x or :wq: Exit vi and write out the modified file
:q: Quit vi
:q!: Quit vi even though modifications have not been saved
Key
Usage
arrow keys: To move up, down, left, and right
j or <ret>: To move one line down
k: To move one line up
h or Backspace: To move one character left
l or Space: To move one character right
:0 or 1G: To move to the beginning of the file
:n or nG: To move to line n
:$ or G: To move to the last line in the file
CTRL-F or Page Down: To move forward one page
CTRL-B or Page Up: To move backward one page
^l (CTRL-L): To refresh and center the screen
Command
Usage
/pattern: Search forward for pattern
?pattern: Search backward for pattern
The table describes the most important keystrokes used when searching for text in vi.
Key
Usage
n: Move to the next occurrence of the search pattern
N: Move to the previous occurrence of the search pattern
Key
Usage
o: Start a new line below the current line, insert text there; stop upon Escape key
O: Start a new line above the current line, insert text there; stop upon Escape key
R: Replace text starting with the current position; stop upon Escape key
Key
Usage
x: Delete the character at the current position
Nx: Delete N characters, starting at the current position
dw: Delete the word at the current position
dd: Delete the current line
Ndd or dNd: Delete N lines
yy: Yank (copy) the current line and put it in the buffer
Nyy or yNy: Yank (copy) N lines and put them in the buffer
p: Paste at the current position the yanked line or lines from the buffer
Typing :! executes a command from within vi. The command follows the exclamation point. This
technique is best suited for non-interactive commands, such as:
:! wc %
Typing this will run the wc (word count) command on the file; the character % represents the file
currently being edited.
The fmt command does simple formatting of text. If you are editing a file and want the file to look
nice, you can run the file through fmt. One way to do this while editing is by using :%!fmt, which
runs the entire file (the % part) through fmt and replaces the file with the results.
Introduction to emacs
The emacs editor is a popular competitor for vi. Unlike vi, it does not work with modes. emacs is
highly customizable and includes a large number of features. It was initially designed for use on a
console, but was soon adapted to work with a GUI as well. emacshas many other capabilities other
than simple text editing; it can be used for email, debugging, etc.
Rather than having different modes for command and insert, emacs uses the CTRL and Esc (Meta) keys
for special commands.
Key
Usage
emacs myfile: Start emacs and edit myfile
CTRL-x i: Insert a prompted-for file at the current position
CTRL-x s: Save all files
CTRL-x CTRL-w: Write to the file, giving a new name when prompted
CTRL-x CTRL-s: Save the current file
CTRL-x CTRL-c: Exit after being prompted to save any modified files
The emacs tutorial is a good place to start learning basic emacs commands. It is available any time
when in emacs by simply typing CTRL-h (for help) and then the letter t for tutorial.
Key
Usage
arrow keys: Use the arrow keys for up, down, left, and right
CTRL-n: One line down
CTRL-p: One line up
CTRL-f: One character forward/right
CTRL-b: One character back/left
CTRL-a: Move to the beginning of the line
CTRL-e: Move to the end of the line
Esc-f: Move to the beginning of the next word
Esc-b: Move back to the beginning of the preceding word
Esc-<: Move to the beginning of the file
Esc->: Move to the end of the file
Esc-x: Execute an extended, named command
CTRL-v or Page Down: Move forward one page
Esc-v or Page Up: Move backward one page
CTRL-l: Refresh and center the screen
Key
Usage
CTRL-s: Search forward for a prompted-for string
CTRL-r: Search backward for a prompted-for string
Key
Usage
CTRL-o: Insert a blank line
CTRL-d: Delete the character at the current position
CTRL-k: Delete the rest of the current line
CTRL-_: Undo the previous operation
CTRL-SPC (or CTRL-@): Mark the beginning of the selected region; the end will be at the cursor
position
CTRL-w: Delete (cut) the current marked region and place it in the buffer
CTRL-y: Insert (paste) at the current cursor location what was most recently deleted
Summary
You have completed this chapter. Let's summarize the key concepts covered.
Text editors (rather than word processing programs) are used quite often in Linux, for tasks such
as for creating or modifying system configuration files, writing scripts, developing source code,
etc.
The vi editor is available on all Linux systems and is very widely used. Graphical extension
versions of vi are widely available as well.
emacs is available on all Linux systems as a popular alternative to vi. emacs can support both a
graphical user interface and a text mode interface.
To access the emacs tutorial, type CTRL-h and then t from within emacs.
vi has three modes: Command, Insert, and Line; emacs has only one but requires use of special
keys such as Control and Escape.
Both editors use various combinations of keystrokes to accomplish tasks; the learning curve to
master these can be long but once mastered using either editor is extremely efficient.
CHAPTER - 9: Local Security Principles
Learning Objectives:
Have a good grasp of best practices and tools for making Linux systems as secure as possible.
Understand the powers and dangers of using the root (superuser) account.
Know how to use the sudo command to perform privileged operations while restricting enhanced
powers as much as feasible.
Know how to work with passwords, including how to set and change them.
9.1. User Accounts
The Linux kernel allows properly authenticated users to access files and applications. While each user
is identified by a unique integer (the user id or UID), a separate database associates a username with
each UID. Upon account creation, new user information is added to the user database and the user's
home directory must be created and populated with some essential files. Command line programs such
as useradd and userdel as well as GUI tools are used for creating and removing accounts.
For each user, the following seven fields are maintained in the /etc/passwd file:
Username: the login name used to access the account
Password: the user's password in encrypted format (an x in this field means the encrypted password
is actually stored in /etc/shadow)
User ID (UID): the number used by the system to identify the user and the user's processes
Group ID (GID): the number of the user's primary (default) group
User Info: an optional comment field, often holding the user's real name
Home Directory: the user's login directory
Shell: the program that runs when the user logs in, commonly /bin/bash
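One way to inspect these fields for a real account is shown below (the root entry always exists; the exact comment and shell fields vary by distribution):

```shell
# Print the seven colon-separated fields of root's entry, one per line
getent passwd root | tr ':' '\n'
```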
Types of Accounts
By default, Linux distinguishes between several account types in order to isolate processes and
workloads. Linux has four types of accounts:
root
System
Normal
Network
Keep in mind that practices you use on multi-user business systems are more strict than practices you
can use on personal desktop systems that only affect the casual user. This is especially true with
security. We hope to show you practices applicable to enterprise servers that you can use on all
systems, but understand that you may choose to relax these rules on your own personal system.
root is the most privileged account on a Linux/UNIX system. This account has the ability to carry out
all facets of system administration, including adding accounts, changing user passwords, examining
log files, installing software, etc. Utmost care must be taken when using this account. It has no security
restrictions imposed upon it.
When you are signed in as, or acting as, root, the shell prompt displays '#' (if you are using bash and
you haven't customized the prompt, as we discuss elsewhere in this course). This convention is
intended to serve as a warning to you of the absolute power of this account.
9.2.
SUID (Set owner User ID upon execution, similar to the Windows "run as" feature) is a special kind
of file permission given to a file. SUID provides temporary permissions to a user to run a program
with the permissions of the file owner (which may be root) instead of the permissions held by the
user.
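A sketch of how the SUID bit looks when set, using a harmless scratch file rather than a real program (setting SUID on real programs has serious security implications):

```shell
# Create a scratch file and give it ordinary execute permissions
touch /tmp/suid_demo
chmod 755 /tmp/suid_demo
ls -l /tmp/suid_demo      # permissions show as -rwxr-xr-x
chmod u+s /tmp/suid_demo  # set the SUID bit
ls -l /tmp/suid_demo      # an 's' replaces the owner's 'x': -rwsr-xr-x
```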
The table provides examples of operations which do not require root privileges:
Examples include running a network client to share data over the network, and accessing files that
you already have permission to access.
9.3.
su: grants elevated privileges by launching a whole new shell running as another user (usually root),
and requires that user's password; the privileges remain in effect until the shell is exited.
sudo: grants elevated privileges for a single command, requires the calling user's own password, and
must be enabled on a per-user basis.
sudo Features
sudo has the ability to keep track of unsuccessful attempts at gaining root access. Users' authorization
for using sudo is based on configuration information stored in the /etc/sudoers file and in
the /etc/sudoers.d directory.
A message such as the following would appear in a system log file (usually /var/log/secure) when
trying to execute sudo bash without successfully authenticating the user:
authentication failure; logname=op uid=0 euid=0 tty=/dev/pts/6 ruser=op rhost= user=op
conversation failed
auth could not identify password for [op]
op : 1 incorrect password attempt ;
TTY=pts/6 ; PWD=/var/log ; USER=root ; COMMAND=/bin/bash
The /etc/sudoers file has a lot of documentation in it about how to customize it. Most Linux
distributions now prefer that you add a file in the directory /etc/sudoers.d with a name the same as
the user. This file contains the individual user's sudo configuration, and one should leave the master
configuration file untouched except for changes that affect all users.
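A hypothetical /etc/sudoers.d entry for a user named turkey might look like the following (the username and rule are illustrative; such files should always be edited with visudo):

```
# /etc/sudoers.d/turkey -- illustrative only
# Allow user 'turkey' to run any command as any user,
# after authenticating with turkey's own password
turkey   ALL=(ALL)   ALL
```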
Command Logging
By default, sudo commands and any failures are logged in /var/log/auth.log under the Debian
distribution family, and in /var/log/messages or /var/log/secure on other systems. This is an
important safeguard to allow for tracking and accountability of sudo use. A typical entry of the
message contains:
Calling username
Terminal info
Working directory
User account invoked
Command executed
Running a command such as sudo whoami results in a log file entry such as:
Dec 8 14:20:47 server1 sudo: op : TTY=pts/6 PWD=/var/log USER=root
COMMAND=/usr/bin/whoami
Process Isolation
Linux is considered to be more secure than many other operating systems because processes are
naturally isolated from each other. One process normally cannot access the resources of another
process, even when that process is running with the same user privileges. Linux thus makes it
difficult (though certainly not impossible) for viruses and security exploits to access and attack
random resources on a system.
Additional security mechanisms that have been recently introduced in order to make risks even smaller
are:
Control Groups (cgroups): Allows system administrators to group processes and associate finite
resources to each cgroup.
Linux Containers (LXC): Makes it possible to run multiple isolated Linux systems (containers)
on a single system by relying on cgroups.
Virtualization: Hardware is emulated in such a way that not only processes can be isolated, but
entire systems are run simultaneously as isolated and insulated guests (virtual machines) on one
physical host.
Hard disks, for example, are represented as /dev/sd*. While a root user can read and write to the disk
in a raw fashion (for example, by doing something like $ echo hello world > /dev/sda1), the standard
permissions as shown in the figure make it impossible for regular users to do so. Writing
to a device in this fashion can easily obliterate the filesystem stored on it in a way that cannot be
to a device in this fashion can easily obliterate the filesystem stored on it in a way that cannot be
repaired without great effort, if at all. The normal reading and writing of files on the hard disk by
applications is done at a higher level through the filesystem, and never through direct access to the
device node.
Keeping Current
When security problems in either the Linux kernel or applications and libraries
are discovered, Linux distributions have a good record of reacting quickly and
pushing out fixes to all systems by updating their software repositories and
sending notifications to update immediately. The same thing is true with bug
fixes and performance improvements that are not security related.
However, it is well known that many systems do not get updated frequently
enough and problems which have already been cured are allowed to remain on computers for a long
time; this is particularly true with proprietary operating systems where users are either uninformed or
distrustful of the vendor's patching policy as sometimes updates can cause new problems and break
existing operations. Many of the most successful attack vectors come from exploiting security holes
for which fixes are already known but not universally deployed.
So the best practice is to take advantage of your Linux distribution's mechanism for automatic updates
and never postpone them. It is extremely rare that such an update will cause new problems.
9.4. Password Encryption
Protecting passwords has become a crucial element of security. Most Linux distributions rely on a
modern hashing algorithm called SHA-512 (Secure Hash Algorithm, 512 bits), developed by the U.S.
National Security Agency (NSA), to protect stored passwords.
The SHA-512 algorithm is widely used for security applications and protocols. These security
applications and protocols include TLS, SSL, PHP, SSH, S/MIME and IPSec. SHA-512 is one of the
most tested hashing algorithms.
A number of techniques exist for enforcing good password practices:
1. Password aging is a method to ensure that users get prompts that remind them to create a new
password after a specific period. This can ensure that passwords, if cracked, will only be usable
for a limited amount of time. This feature is implemented using chage, which configures the
password expiry information for a user.
2. Another method is to force users to set strong passwords using Pluggable Authentication
Modules (PAM). PAM can be configured to automatically verify that a password created or
modified using the passwd utility is sufficiently strong. PAM configuration is implemented using
a library called pam_cracklib.so, which can also be replaced by pam_passwdqc.so for more
options.
3. One can also install password cracking programs, such as John the Ripper, to audit the
password file and detect weak password entries. It is recommended that written authorization be
obtained before installing such tools on any system that you do not own.
9.5.
For the now more common GRUB version 2, things are more complicated. However, you have more
flexibility, and can do things like use user-specific passwords, which can be their normal login
passwords. Also, you never edit the configuration file, /boot/grub/grub.cfg, directly; rather, you
edit system configuration files in /etc/grub.d and then run update-grub. One explanation of this can
be found at https://help.ubuntu.com/community/Grub2/Passwords.
Hardware Vulnerability
When hardware is physically accessible, security can be compromised by:
Key logging: Recording the real-time activity of a computer user, including the keys they press
Network sniffing: Capturing and viewing the network packet level data on your network
Booting with a live or rescue disk
Remounting and modifying disk content
Your IT security policy should start with requirements on how to properly secure physical access to
servers and workstations. Physical access to a system makes it possible for attackers to easily leverage
several attack vectors, in a way that makes all operating system level recommendations irrelevant.
The guidelines of security are:
Protect your network links so that they cannot be accessed by people you do not trust
Protect your keyboards where passwords are entered to ensure the keyboards cannot be tampered
with
Ensure a password protects the BIOS in such a way that the system cannot be booted with a live
or rescue DVD or USB key
For single user computers and those in a home environment some of the above features (like
preventing booting from removable media) can be excessive, and you can avoid implementing them.
However, if sensitive information is on your system that requires careful protection, either it shouldn't
be there or it should be better protected by following the above guidelines.
Summary
You have completed this chapter. Let's summarize the key concepts covered:
root privileges may be required for tasks, such as restarting services, manually installing
packages and managing parts of the filesystem that are outside your home directory.
In order to perform any privileged operations such as system-wide changes, you need to use
either su or sudo.
Calls to sudo trigger a lookup in the /etc/sudoers file, or in the /etc/sudoers.d directory, which
first validates that the calling user is allowed to use sudo and that it is being used within
permitted scope.
One of the most powerful features of sudo is its ability to log unsuccessful attempts at gaining
root access. By default, sudo commands and failures are logged in /var/log/auth.log under
the Debian family and /var/log/messages in other distribution families.
One process cannot access another process resources, even when that process is running with
the same user privileges.
The system verifies the identity and authenticity of a user by means of the user's credentials.
The SHA-512 algorithm is typically used to protect passwords. Passwords can be hashed, but the
hash cannot feasibly be reversed to recover the original password.
Your IT security policy should start with requirements on how to properly secure physical
access to servers and workstations.
CHAPTER - 10: Network Operations
Learning Objectives:
Explain many basic networking concepts including types of networks and addressing issues.
Know how to configure network interfaces and use basic networking utilities, such
as ifconfig, ip, ping, route and traceroute.
Use graphical and non-graphical browsers, such as Lynx, w3m, Firefox, Chrome and Epiphany.
Transfer files to and from clients and servers using both graphical and text mode applications,
such as Filezilla, ftp, sftp, curl and wget.
Enable multiple users to share devices over the network, such as printers and scanners.
Most organizations have both an internal network and an Internet connection for users to communicate
with machines and people outside the organization. The Internet is the largest network in the world
and is often called "the network of networks".
IP Addresses
Devices attached to a network must have at least one unique network address identifier, known
as the IP (Internet Protocol) address. The address is essential for routing packets of
information through the network.
Exchanging information across the network requires using streams of bite-sized packets, each of which
contains a piece of the information going from one machine to another. These packets contain data
buffers together with headers which contain information about where the packet is going to and
coming from, and where it fits in the sequence of packets that constitute the stream. Networking
protocols and software are rather complicated due to the diversity of machines and operating systems
they must deal with, as well as the fact that even very old standards must be supported.
IPv6 uses 128-bits for addresses; this allows for 3.4 x 10^38 unique addresses. If you have a larger
network of computers and want to add more, you may want to move to IPv6, because it provides more
unique addresses. However, it is difficult to move to IPv6 as the two protocols do not inter-operate.
Due to this, migrating equipment and addresses to IPv6 requires significant effort and hasn't been as
fast as was originally intended.
Example:
IP address: 172.16.31.46
Bit format: 10101100.00010000.00011111.00101110
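To see how the dotted-quad form maps onto bits, here is a small shell sketch; the to_binary function name is our own invention, not a standard utility:

```shell
# Convert an IPv4 dotted-quad address to its 8-bit binary representation.
to_binary() {
    out=
    oldIFS=$IFS; IFS=.
    for octet in $1; do
        bits=
        i=7
        while [ "$i" -ge 0 ]; do
            bits=$bits$(( (octet >> i) & 1 ))   # extract bit i of the octet
            i=$((i - 1))
        done
        out=${out:+$out.}$bits
    done
    IFS=$oldIFS
    printf '%s\n' "$out"
}

to_binary 172.16.31.46   # prints 10101100.00010000.00011111.00101110
```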
Network addresses are divided into five classes: A, B, C, D, and E. Classes A, B, and C are divided
into two parts: network address (Net ID) and host address (Host ID). The Net ID is used to
identify the network, while the Host ID is used to identify a host in the network. Class D is used for
special multicast applications (information is broadcast to multiple computers simultaneously) and
Class E is reserved for future use. In this section you will learn about classes A, B, and C.
Each Class A network can have up to 16.7 million unique hosts on its network. The range of host
address is from 1.0.0.0 to 127.255.255.255.
Note: The value of an octet, or 8-bits, can range from 0 to 255.
Each Class C network can support up to 256 (8-bits) unique hosts. The range of host address is from
192.0.0.0 to 223.255.255.255.
IP Address Allocation
Typically, a range of IP addresses is requested from your Internet Service Provider (ISP) by your
organization's network administrator. Often, your choice of which class of IP address you are given
depends on the size of your network and expected growth needs.
You can assign IP addresses to computers over a network manually or dynamically. When you assign
IP addresses manually, you add static (never changing) addresses to the network. When you assign IP
addresses dynamically (they can change every time you reboot or even more often), the Dynamic Host
Configuration Protocol (DHCP) is used to assign IP addresses.
Note: The version of ipcalc supplied in the Fedora family of distributions does not behave as
described below; it is really a different program.
Assume that you have a Class C network. The first three octets of the IP address are 192.168.0. As it
uses 3 octets (i.e. 24 bits) for the network mask, the shorthand for this type of address is
192.168.0.0/24. To determine the host range of the address you can use for this new host, at the
command prompt, type: ipcalc 192.168.0.0/24 and press Enter.
From the result, you can check the HostMin and HostMax values to manually assign a static address
available from 1 to 254 (192.168.0.1 to 192.168.0.254).
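The size of that usable range follows directly from the prefix length; here is a quick sanity check using shell arithmetic (the variable names are our own):

```shell
prefix=24
# Host addresses per network = 2^(host bits), minus the all-zeros (network)
# and all-ones (broadcast) addresses.
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts"   # 254
```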
Name Resolution
Name Resolution is used to convert numerical IP address values into a human-readable format
known as the hostname. For example, 140.211.169.4 is the numerical IP address that refers to
the linuxfoundation.org hostname. Hostnames are easier to remember.
Given an IP address, you can obtain its corresponding hostname. Accessing the machine over the
network becomes easier when you can type the hostname instead of the IP address.
You can view your system's hostname simply by typing hostname with no argument.
Note: If you give an argument, the system will try to change its hostname to match it; however,
only the root user can do that.
The special hostname localhost is associated with the IP address 127.0.0.1, and describes the machine
you are currently on (which normally has additional network-related IP addresses).
Network Interfaces
Network interfaces are a connection channel between a device and a network. Physically, network
interfaces can proceed through a network interface card (NIC) or can be more abstractly
implemented as software. You can have multiple network interfaces operating at once. Specific
interfaces can be brought up (activated) or brought down (de-activated) at any time.
A list of currently active network interfaces is reported by the ifconfig utility, which you may have to
run as the superuser, or at least give the full path, i.e., /sbin/ifconfig, on some distributions.
For Debian family configuration, the basic network configuration file is /etc/network/interfaces. You
can type /etc/init.d/networking start to start the networking configuration.
For Fedora family system configuration, the routing and host information is contained
in /etc/sysconfig/network. The network interface configuration script is located
at /etc/sysconfig/network-scripts/ifcfg-eth0.
For SUSE family system configuration, the routing and host information and network interface
configuration scripts are contained in the /etc/sysconfig/network directory.
ip is a very powerful program that can do many things. Older (and more specific) utilities such
as ifconfig and route are often used to accomplish similar tasks. A look at the relevant man pages can
tell you much more about these utilities.
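Even when neither ifconfig nor ip happens to be installed, the interface names are still visible through sysfs; a minimal sketch, assuming a Linux system with /sys mounted:

```shell
# Each network interface appears as a directory under /sys/class/net;
# printing the directory names lists the interfaces (e.g. lo, eth0).
for iface in /sys/class/net/*; do
    printf '%s\n' "${iface##*/}"    # strip the leading path, keep the name
done
```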
ping
ping is used to check whether or not a machine attached to the network can receive and send
data; i.e., it confirms that the remote host is online and responding.
To check the status of the remote host, at the command prompt, type ping <hostname>.
ping is frequently used for network testing and management; however, its usage can increase network
load unacceptably. Hence, you can abort the execution of ping by typing CTRL-C, or by using the -c
option, which limits the number of packets that ping will send before it quits. When execution stops,
a summary is displayed.
route
A network requires the connection of many nodes. Data moves from source to destination by passing
through a series of routers and potentially across multiple networks. Servers maintain routing
tables containing the addresses of each node in the network. The IP routing protocols enable routers
to build up a forwarding table that correlates final destinations with next-hop addresses.
route is used to view or change the IP routing table. You may want to change the IP routing table to
add, delete or modify specific (static) routes to specific hosts or networks. The table explains some
commands that can be used to manage IP routing.
Task: Command
Show current routing table: $ route -n
Add static route: $ route add -net address
Delete static route: $ route del -net address
traceroute
traceroute is used to inspect the route which the data packet takes to reach the destination host, which
makes it quite useful for troubleshooting network delays and errors. By using traceroute, you can
isolate connectivity issues between hops, which helps resolve them faster.
To print the route taken by the packet to reach the network host, at the command prompt,
type traceroute <domain>.
Networking Tools
Tool: Description
ethtool: Queries network interfaces and can also set various parameters, such as the speed.
netstat: Displays all active connections and routing tables; useful for monitoring performance and troubleshooting.
nmap: Scans open ports on a network; important for security analysis.
tcpdump: Dumps network traffic for analysis.
iptraf: Monitors network traffic in text mode.
10.2. Browsers
Browsers are used to retrieve, transmit, and explore information resources, usually on the World
Wide Web. Linux users commonly use both graphical and non-graphical browser applications.
The common graphical browsers used in Linux are:
Firefox
Google Chrome
Chromium
Epiphany
Opera
Sometimes you either do not have a graphical environment to work in (or have reasons not to use it)
but still need to access web resources. In such a case, you can use non-graphical browsers such as the
following:
Non-Graphical Browsers
Browser: Description
lynx: A configurable text-based web browser
links (or elinks): A text-based web browser that can display frames and tables
w3m: Another text-based web browser
wget
Sometimes you need to download files and information but a browser is not the best choice, either
because you want to download multiple files and/or directories, or you want to perform the action
from a command line or a script. wget is a command line utility that can capably handle the following
types of downloads:
Recursive downloads, where a web page refers to other web pages and all are downloaded at
once
Password-required downloads
To download a webpage, you can simply type wget <url>, and then you can read the downloaded page
as a local file using a graphical or non-graphical browser.
curl
Besides downloading you may want to obtain information about a URL, such as the source code being
used. curl can be used from the command line or a script to read such information. curl also allows
you to save the contents of a web page to a file as does wget.
You can read a URL using curl <URL>. For example, if you want to
read http://www.linuxfoundation.org, type curl http://www.linuxfoundation.org.
To get the contents of a web page and store it to a file, type curl -o
saved.html http://www.mysite.com. The contents of the main index file at the website will be saved
in saved.html.
FTP Clients
FTP clients enable you to transfer files with remote computers
using the FTP protocol. These clients can be either graphical
or command line tools. Filezilla, for example, allows use of the
drag-and-drop approach to transfer files between hosts. All
web browsers support FTP; all you have to do is give a URL
like ftp://ftp.kernel.org, where the usual http:// becomes ftp://.
Some command line FTP clients are:
ftp
sftp
ncftp
sftp is a very secure mode of connection, which uses the Secure Shell (ssh) protocol, which we will
discuss shortly. sftp encrypts its data and thus sensitive information is transmitted more securely.
However, it does not work with so-called anonymous FTP (guest user credentials).
Both ncftp and yafc are also powerful FTP clients which work on a wide variety of operating systems
including Windows and Linux.
To run my_command on a remote system via SSH, at the command prompt, type ssh <remotesystem>
my_command and press Enter. ssh then prompts you for the remote password. You can also
configure ssh to securely allow your remote access without typing a password each time.
We can also move files securely using Secure Copy (scp) between two networked hosts. scp uses the
SSH protocol for transferring data.
To copy a local file to a remote system, at the command prompt, type scp <localfile>
<user@remotesystem>:/home/user/ and press Enter.
You will receive a prompt for the remote password. You can also configure scp so that it does not
prompt for a password for each transfer.
Summary
You have completed this chapter. Let's summarize the key concepts covered:
The IP (Internet Protocol) address is a unique logical network address that is assigned to a
device on a network.
IPv4 uses 32-bits for addresses and IPv6 uses 128-bits for addresses.
DNS (Domain Name System) is used for converting Internet domain and host names to IP
addresses.
The commands ip addr show and ip route show can be used to view IP address and routing
information.
You can use ping to check if the remote host is alive and responding.
You can monitor and debug network problems using networking tools.
Firefox, Google Chrome, Chromium, and Epiphany are the main graphical browsers used
in Linux.
ftp, sftp, ncftp, and yafc are command line FTP clients used in Linux.
Most of the time, such file manipulation is done at the command line, which allows users to perform
tasks more efficiently than while using a GUI. Furthermore, the command line is more suitable for
automating frequently executed tasks.
Indeed, experienced system administrators write customized scripts to accomplish such repetitive
tasks, standardized for each particular environment. We will discuss such scripting later in much
detail.
In this section, we will concentrate on command line file and text manipulation related utilities.
cat
cat is short for concatenate and is one of the most frequently used Linux command line utilities. It is
often used to read and print files, as well as for simply viewing file contents. To view a file, use the
following command:
$ cat <filename>
For example, cat readme.txt will display the contents of readme.txt on the terminal. Often the main
purpose of cat, however, is to combine (concatenate) multiple files together. You can perform the
actions listed in the following table using cat:
Command: Usage
cat file1 file2: Concatenate multiple files and display the output; i.e., the entire content of the first file is followed by that of the second file.
cat file1 file2 > newfile: Combine multiple files and save the output into a new file.
cat > file: Any subsequent lines typed will go into the file until CTRL-D is typed.
cat >> file: Any subsequent lines are appended to the file until CTRL-D is typed.
The tac command (cat spelled backwards) prints the lines of a file in reverse order. (Each line remains
the same but the order of lines is inverted.) The syntax of tac is exactly the same as for cat as in:
$ tac file
$ tac file1 file2 > newfile
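Both commands are easy to try on small throwaway files; a runnable sketch (the file names are arbitrary, created in a temporary directory):

```shell
dir=$(mktemp -d)
printf 'line1\nline2\n' > "$dir/file1"
printf 'line3\n'        > "$dir/file2"

cat "$dir/file1" "$dir/file2" > "$dir/newfile"   # concatenate into a new file
cat "$dir/newfile"    # line1, line2, line3
tac "$dir/newfile"    # line3, line2, line1
```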
cat can be used to read from standard input (such as the terminal window) if no files are specified.
You can use the > operator to create and add lines into a new file, and the >> operator to append lines
(or files) to an existing file.
To create a new file, at the command prompt type cat > <filename> and press the Enter key.
This command creates a new file and waits for the user to edit/enter the text. After you finish typing
the required text, press CTRL-D at the beginning of the next line to save and exit the editing.
Another way to create a file at the terminal is cat > <filename> << EOF. A new file is created and you
can type the required input. To exit, enter EOF at the beginning of a line.
Note that EOF is case sensitive. (One can also use another word, such as STOP.)
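For example (notes.txt is an arbitrary name, created in a temporary directory):

```shell
dir=$(mktemp -d)
# Everything up to the line containing only the EOF marker goes into the file.
cat > "$dir/notes.txt" << EOF
first line
second line
EOF
cat "$dir/notes.txt"
```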
echo
echo simply displays (echoes) text. It is used simply as in:
$ echo string
echo can be used to display a string on standard output (i.e., the terminal) or to place in a new file
(using the > operator) or append to an already existing file (using the >> operator).
The -e option, along with the following switches, is used to enable special character sequences, such as
the newline character or horizontal tab:
\n represents newline
\t represents horizontal tab
echo is particularly useful for viewing the values of environment variables (built-in shell variables).
For example, echo $USERNAME will print the name of the user who has logged into the current
terminal.
The following table lists echo commands and their usage:
Command: Usage
echo string > newfile: The specified string is placed in a new file.
echo string >> existfile: The specified string is appended to the end of an already existing file.
echo $variable: The contents of the specified environment variable are displayed.
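A short runnable sketch of the > and >> redirection operators with echo (log.txt is an arbitrary name):

```shell
dir=$(mktemp -d)
echo "first entry"  >  "$dir/log.txt"   # > creates (or overwrites) the file
echo "second entry" >> "$dir/log.txt"   # >> appends to the existing file
cat "$dir/log.txt"                      # first entry, then second entry
```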
Note that many Linux users and administrators will write scripts using more comprehensive language
utilities such as python and perl, rather than use sed and awk (and some other utilities we'll discuss
later). Using such utilities is certainly fine in most circumstances; one should always feel free to use
the tools one is experienced with. However, the utilities that are described here are much lighter; i.e.,
they use fewer system resources, and execute faster. There are times (such as during booting the
system) where a lot of time would be wasted using the more complicated tools, and the system may
not even be able to run them. So the simpler tools will always be needed.
sed
sed is a powerful text processing tool and is one of the oldest and most popular UNIX utilities. It is
used to modify the contents of a file, usually placing the contents into a new file. Its name is an
abbreviation for stream editor.
sed can filter text as well as perform substitutions in data streams, working like a churn-mill.
Data from an input source/file (or stream) is taken and moved to a working space. The entire list of
operations/modifications is applied over the data in the working space and the final contents are
moved to the standard output space (or stream).
Command: Usage
sed -e command <filename>: Specify editing commands at the command line, operate on the file, and put the output on standard output.
sed -f scriptfile <filename>: Specify a scriptfile containing sed commands, operate on the file, and put the output on standard output.
The -e command option allows you to specify multiple editing commands simultaneously at the
command line.
Command: Usage
sed s/pattern/replace_string/ file: Substitute the first string occurrence in every line.
sed s/pattern/replace_string/g file: Substitute all string occurrences in every line.
sed 1,3s/pattern/replace_string/g file: Substitute all string occurrences in a range of lines.
sed -i s/pattern/replace_string/g file: Save changes for string substitution in the same file.
You must use the -i option with care, because the action is not reversible. It is always safer to
use sed without the -i option and then replace the file yourself, as shown in the following example:
$ sed s/pattern/replace_string/g file1 > file2
The above command will replace all occurrences of pattern with replace_string in file1 and move the
contents to file2. The contents of file2 can be viewed with cat file2. If you approve, you can then
overwrite the original file with mv file2 file1.
Example: To convert 01/02/ to JAN/FEB/
sed -e 's/01/JAN/' -e 's/02/FEB/' -e 's/03/MAR/' -e 's/04/APR/' -e 's/05/MAY/' \
-e 's/06/JUN/' -e 's/07/JUL/' -e 's/08/AUG/' -e 's/09/SEP/' -e 's/10/OCT/' \
-e 's/11/NOV/' -e 's/12/DEC/'
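Applied to a stream, the first two substitutions above behave like this (the sample dates are made up):

```shell
# Each -e expression is applied, in order, to every line of the input.
printf '01/15 payday\n02/20 deadline\n' | sed -e 's/01/JAN/' -e 's/02/FEB/'
# JAN/15 payday
# FEB/20 deadline
```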
awk
awk is used to extract and then print specific contents of a file and is often used to construct reports. It
was created at Bell Labs in the 1970s and derived its name from the last names of its authors:
Alfred Aho, Peter Weinberger, and Brian Kernighan.
awk has the following features:
It works well with fields (containing a single piece of data, essentially a column) and records (a
collection of fields, essentially a line in a file).
Command: Usage
awk 'command' file: Specify a command directly at the command line.
awk -f scriptfile file: Specify a file that contains the script to be executed.
As with sed, short awk commands can be specified directly at the command line, but a more complex
script can be saved in a file that you can specify using the -f option.
The command/action in awk needs to be surrounded with apostrophes (or single-quote (')). awk can be
used as follows:
Command: Usage
awk '{ print $0 }' file: Print the entire file.
awk '{ print $1 }' file: Print the first field (column) of every line.
awk '{ print $1 $7 }' file: Print the first and seventh fields of every line.
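For instance, printing just the first field of each record (the sample data is invented):

```shell
# awk splits each line into whitespace-separated fields; $1 is the first.
printf 'alice 30 dev\nbob 25 ops\n' | awk '{ print $1 }'
# alice
# bob
```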
sort
uniq
paste
join
split
You will also learn about regular expressions and search patterns.
sort
sort is used to rearrange the lines of a text file either in ascending or descending order, according to a
sort key. You can also sort by particular fields of a file. The default sort key is the order of the ASCII
characters (i.e., essentially alphabetically).
sort can be used as follows:
Syntax: Usage
sort <filename>: Sort the lines in the specified file.
cat file1 file2 | sort: Append the two files, then sort the lines and display the output on the terminal.
sort -r <filename>: Sort the lines in reverse order.
When used with the -u option, sort checks for unique values after sorting the records (lines).
uniq
uniq is used to remove duplicate lines in a text file and is useful for simplifying text
display. uniq requires that the duplicate entries to be removed are consecutive. Therefore, one often
runs sort first and then pipes the output into uniq; if sort is passed the -u option, it can do all
this in one step. In the example shown, the file is called names and was originally Ted, Bob, Alice, Bob,
Carol, Alice.
To remove duplicate entries from some files, use the
following command: sort file1 file2 | uniq > file3
OR
sort -u file1 file2 > file3
To count the number of duplicate entries, use the following command: uniq -c filename
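Using the names example from above, the whole pipeline looks like this (the file is created in a temporary directory):

```shell
dir=$(mktemp -d)
printf 'Ted\nBob\nAlice\nBob\nCarol\nAlice\n' > "$dir/names"

sort "$dir/names" | uniq       # Alice, Bob, Carol, Ted
sort -u "$dir/names"           # same result in one step
sort "$dir/names" | uniq -c    # each name prefixed with its count
```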
paste
Suppose you have a file that contains the full name of all employees and another file that lists their
phone numbers and Employee IDs. You want to create a new file that contains all the data listed in
three columns: name, employee ID, and phone number. How can you do this effectively without
investing too much time?
paste can be used to create a single file containing all three columns. The different columns are
identified based on delimiters (spacing used to separate two fields). For example, delimiters can be a
blank space, a tab, or an Enter. In the example shown, a single space is used as the delimiter in all
files.
paste accepts the following options:
-d delimiters, which specify a list of delimiters to be used instead of tabs for separating
consecutive values on a single line. Each delimiter is used in turn; when the list has been
exhausted, paste begins again at the first delimiter.
-s, which causes paste to append the data in series rather than in parallel; that is, in a horizontal
rather than vertical fashion.
Using paste
paste can be used to combine fields (such as name or
phone number) from different files as well as combine
lines from multiple files. For example, line one from
file1 can be combined with line one of file2, line two
from file1 can be combined with line two of file2, and so
on.
To paste contents from two files one can do:
$ paste file1 file2
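A runnable sketch with two invented columns (file names are arbitrary):

```shell
dir=$(mktemp -d)
printf 'Alice\nBob\n' > "$dir/names"
printf '101\n102\n'   > "$dir/ids"

paste "$dir/names" "$dir/ids"        # tab-separated: Alice<TAB>101, Bob<TAB>102
paste -d' ' "$dir/names" "$dir/ids"  # -d uses a space as the delimiter instead
```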
Join
Suppose you have two files with some similar columns. You have saved employees' phone numbers in
two files, one with their first name and the other with their last name. You want to combine the files
without repeating the data of common columns. How do you achieve this?
The above task can be achieved using join, which is essentially an enhanced version of paste. It first
checks whether the files share common fields, such as names or phone numbers, and then joins the
lines in two files based on a common field.
Using join
To combine two files on a common field, at
the command prompt type join file1
file2 and press the Enter key.
$ cat phonebook
555-123-4567 Bob
555-231-3325 Carol
555-340-5678 Ted
555-289-6193 Alice
$ cat directory
555-123-4567 Anytown
555-231-3325 Mytown
555-340-5678 Yourtown
555-289-6193 Youngstown
The result of joining these two files is shown in the output of the following command:
$ join phonebook directory
555-123-4567 Bob Anytown
555-231-3325 Carol Mytown
555-340-5678 Ted Yourtown
555-289-6193 Alice Youngstown
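Note that join expects both files to be sorted on the join field; a minimal runnable sketch with already-sorted keys (the data is invented):

```shell
dir=$(mktemp -d)
# Both files share a common first field, and both are sorted on it.
printf '1 Bob\n2 Carol\n'      > "$dir/names"
printf '1 Anytown\n2 Mytown\n' > "$dir/towns"

join "$dir/names" "$dir/towns"
# 1 Bob Anytown
# 2 Carol Mytown
```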
split
split is used to break up (or split) a file into equal-sized
segments for easier viewing and manipulation, and is
generally used only on relatively large files. By
default split breaks up a file into 1,000-line segments.
The original file remains unchanged, and a set of new
files with the same name plus an added prefix is created. By default, the x prefix is added. To split a
file into segments, use the command split infile.
To split a file into segments using a different prefix, use the command split infile <Prefix>.
Using split
To demonstrate the use of split, we'll apply
it to an american-english dictionary file of
over 99,000 lines:
$ wc -l american-english
99171 american-english
where we have used the wc program (soon to be discussed) to report on the number of lines in the file.
Then typing:
$ split american-english dictionary
will split the american-english file into equal-sized segments named dictionaryaa, dictionaryab, etc.
$ ls -l dictionary*
-rw-rw-r-- 1 me me 8653 Mar 23 20:19 dictionaryaa
-rw-rw-r-- 1 me me 8552 Mar 23 20:19 dictionaryab
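The same behavior can be seen on a small scale, using a 10-line file and 4-line segments (the names are arbitrary, created in a temporary directory):

```shell
dir=$(mktemp -d)
seq 1 10 > "$dir/nums"
( cd "$dir" && split -l 4 nums part_ )   # produces part_aa, part_ab, part_ac

wc -l < "$dir/part_aa"   # 4
wc -l < "$dir/part_ac"   # 2 (the leftover lines)
```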
Many text editors and utilities such as vi, sed, awk, find and grep work extensively with regular
expressions.
Some of the popular computer languages that use regular expressions
include Perl, Python and Ruby. It can get rather complicated and there are whole books written about
regular expressions; we'll only skim the surface here.
These regular expressions are different from the wildcards (or "metacharacters") used in filename
matching in command shells such as bash (which were covered in the earlier Chapter on Command
Line Operations). The table lists search patterns and their usage.
Search Patterns: Usage
.(dot): Match any single character
a|z: Match a or z
$: Match end of string
*: Match preceding item 0 or more times
For example, consider applying these patterns to the sentence: the quick brown fox jumped over the lazy dog.
Command: Usage
a..: matches azy
b.|j.: matches both br and ju
..$: matches og
l.*: matches lazy dog
l.*y: matches lazy
the.*: matches the whole sentence
11.4. Grep
grep
grep is extensively used as a primary text searching tool. It scans files for specified patterns and can
be used with regular expressions as well as simple strings as shown in the table.
Command: Usage
grep [pattern] <filename>: Search for a pattern in a file and print all matching lines
grep -v [pattern] <filename>: Print all lines that do not match the pattern
grep [0-9] <filename>: Print the lines that contain the numbers 0 through 9
grep -C 3 [pattern] <filename>: Print context of lines (3 lines above and below the pattern) for matching the pattern. Here the number of lines is specified as 3.
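A small runnable sketch (the word list is invented):

```shell
dir=$(mktemp -d)
printf 'apple\nbanana\ncherry\n' > "$dir/fruit"

grep 'an' "$dir/fruit"      # lines containing "an": banana
grep -v 'an' "$dir/fruit"   # lines without "an": apple, cherry
grep 'y$' "$dir/fruit"      # regular expression: lines ending in y: cherry
```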
tr
The tr utility translates specified characters into other characters or deletes them; its general syntax
is tr [options] set1 [set2]. The items in the square brackets are optional. tr requires at least one
argument and accepts a maximum of two. The first, designated set1 in the example, lists the
characters in the text to be replaced or removed. The second, set2, lists the characters that are to be
substituted for the characters
listed in the first argument. Sometimes these sets need to be surrounded by apostrophes (or single
quotes (')) in order to have the shell ignore that they mean something special to the shell. It is usually
safe (and may be required) to use the single quotes around each of the sets, as you will see in the
examples below.
For example, suppose you have a file named city containing several lines of text in mixed case. To
translate all lower case characters to upper case, at the command prompt type cat city | tr a-z A-Z and
press the Enter key.
Command: Usage
$ tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ: Convert lower case to upper case
$ echo "This is for testing" | tr -s [:space:]: Squeeze repeated white-space characters into one
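These translations can be tried directly on a stream:

```shell
echo 'linux is fun' | tr a-z A-Z         # LINUX IS FUN
echo 'too    many   spaces' | tr -s ' '  # too many spaces
```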
tee
tee takes the output from any command, and while sending it to standard output, it also saves it to a
file. In other words, it "tees" the output stream from the command: one stream is displayed on the
standard output and the other is saved to a file.
For example, to list the contents of a directory on the screen and save the output to a file, at the
command prompt type ls -l | tee newfile and press the Enter key.
Typing cat newfile will then display the output of ls -l.
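The same idea on a small scale (newfile is arbitrary, created in a temporary directory):

```shell
dir=$(mktemp -d)
echo "shown and saved" | tee "$dir/newfile"   # prints the line AND writes it
cat "$dir/newfile"                            # shown and saved
```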
wc
wc (word count) counts the number of lines, words, and characters in a file or list of files. Options are
given in the table below:
Option: Description
-l: Displays the number of lines
-c: Displays the number of bytes
-w: Displays the number of words
By default, all three of these options are active.
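For example, on a two-line stream:

```shell
printf 'one two three\nfour five\n' | wc -l   # 2 (lines)
printf 'one two three\nfour five\n' | wc -w   # 5 (words)
```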
cut
cut is used for manipulating column-based files and is designed to extract specific columns. The
default column separator is the tab character. A different delimiter can be given as a command option.
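For example, extracting fields from a colon-delimited line in the style of /etc/passwd:

```shell
# -d sets the delimiter, -f selects which fields to keep.
echo 'root:x:0:0:root:/root:/bin/bash' | cut -d: -f1,7   # root:/bin/bash
```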
System administrators need to work with configuration files, text files, documentation files, and log
files. Some of these files may be large or become quite large as
they accumulate data with time. These files will require both
viewing and administrative updating. In this section, you will learn
how to manage such large files.
For example, a banking system might maintain one simple large
log file to record details of all of one day's ATM transactions. Due
to a security attack or a malfunction, the administrator might be
forced to check for some data by navigating within the file. In such
cases, directly opening the file in an editor will cause issues, due to high memory utilization, as an
editor will usually try to read the whole file into memory first. However, one can use less to view the
contents of such a large file, scrolling up and down page by page without the system having to place
the entire file in memory before starting. This is much faster than using a text editor.
Viewing the file can be done by typing either of the two following commands:
$ less <filename>
$ cat <filename> | less
By default, manual (i.e., the man command) pages are sent through the less command.
head
head reads the first few lines of each named file (10 by default) and displays them on standard
output. You can give a different number of lines as an option.
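For example, taking the first three lines of a generated stream:

```shell
seq 1 100 | head -n 3   # 1, 2, 3
```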
tail
tail prints the last few lines of each named file and displays it on standard output. By default, it
displays the last 10 lines. You can give a different number of lines as an option. tail is especially
useful when you are troubleshooting any issue using log files as you probably want to see the most
recent lines of output.
For example, to display the last 15 lines of atmtrans.txt, use the following command:
$ tail -n 15 atmtrans.txt
(You can also just say tail -15 atmtrans.txt.) To continually monitor new output in a growing log file:
$ tail -f atmtrans.txt
This command will continuously display any new lines of output in atmtrans.txt as soon as they
appear. Thus it enables you to monitor any current activity that is being reported and recorded.
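For a quick check of the line selection on a generated stream:

```shell
seq 1 100 | tail -n 2   # 99, 100
```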
strings
strings is used to extract all printable character
strings found in the file or files given as arguments.
It is useful in locating human readable content
embedded in binary files: for text files one can just
use grep.
Command: Description
$ zcat compressed-file.txt.gz: View a compressed file without explicitly decompressing it
$ zless <filename>.gz or $ zmore <filename>.gz: Page through a compressed file
$ zdiff filename1.txt.gz filename2.txt.gz: Compare two compressed files
Note that if you run zless on an uncompressed file, it will still work and ignore the
decompression stage. There are also equivalent utility programs for other compression methods
besides gzip; i.e., we have bzcat and bzless associated with bzip2, and xzcat and xzless associated
with xz.
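A runnable sketch, assuming gzip is installed (the file name is arbitrary, created in a temporary directory):

```shell
dir=$(mktemp -d)
printf 'hello compressed world\n' > "$dir/msg.txt"
gzip "$dir/msg.txt"          # replaces msg.txt with msg.txt.gz
zcat "$dir/msg.txt.gz"       # view the contents without decompressing on disk
```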
Summary
You have completed this chapter. Let's summarize the key concepts covered:
The command line often allows the users to perform tasks more efficiently than the GUI.
cat, short for concatenate, is used to read, print and combine files.
sed is a popular stream editor often used to filter and perform substitutions on files and text data
streams.
awk is an interpreted programming language typically used as a data extraction and reporting tool.
sort is used to sort text files and output streams in either ascending or descending order.
paste combines fields from different files and can also extract and combine lines from
multiple sources.
join combines lines from two files based on a common field. It works only if files share a
common field.
Regular expressions are text strings used for pattern matching. The pattern can be used to
search for a specific location, such as the start or end of a line or a word.
grep searches text files and data streams for patterns and can be used with regular
expressions.
tr translates characters, copies standard input to standard output, and handles special
characters.
tee saves a copy of standard output to a file while still displaying it at the terminal.
wc (word count) displays the number of lines, words and characters in a file or group of files.
less views files a page at a time and allows scrolling in both directions.
head displays the first few lines of a file or data stream on standard output. By default it
displays 10 lines.
tail displays the last few lines of a file or data stream on standard output. By default it
displays 10 lines.
The z command family is used to read and work with compressed files.
Print documents.
12.1. Configuration
Introduction to Printing
To manage printers and print directly from a
computer or across a networked environment,
you need to know how to configure and install a
printer. Printing itself requires software that
converts information from the application you
are using to a language your printer can
understand. The Linux standard for printing
software is the Common UNIX Printing
System (CUPS).
CUPS Overview
CUPS is the software that is used behind the scenes to print from
applications like a web browser or LibreOffice. It converts page
descriptions produced by your application (put a paragraph here, draw a line
there, and so forth) and then sends the information to the printer. It acts as
a print server for local as well as network printers.
Printers manufactured by different companies may use their own particular
print languages and formats. CUPS uses a modular printing system which
accommodates a wide variety of printers and also processes various data
formats. This makes the printing process simpler; you can concentrate more on printing and less on
how to print.
Generally, the only time you should need to configure your printer is when you use it for the first time.
In fact, CUPS often figures things out on its own by detecting and configuring any printers it locates.
CUPS carries out the printing process with the help of its various components:
Configuration Files
Scheduler
Job Files
Log Files
Filter
Printer Drivers
Backend
You will learn about each of these components in detail in the next few screens.
Scheduler
CUPS is designed around a print scheduler that manages print jobs, handles administrative
commands, allows users to query the printer status, and manages the flow of data through
all CUPS components.
Configuration Files
The print scheduler reads server settings from several configuration files, the two most important of
which are cupsd.conf and printers.conf. These and all other CUPS-related configuration files are stored
under the /etc/cups/ directory.
cupsd.conf is where most system-wide settings are located; it does not contain any printer-specific
details. Most of the settings available in this file relate to network security, i.e., which systems can
access CUPS network capabilities, how printers are advertised on the local network, what management
features are offered, and so on.
printers.conf is where you will find the printer-specific settings. For every printer connected to the
system, a corresponding section describes the printer's status and capabilities. This file is generated
only after adding a printer to the system and should not be modified by hand.
You can view the full list of configuration files by typing: ls -l /etc/cups/
Job Files
CUPS stores print requests as files under the /var/spool/cups directory (these can actually be
accessed before a document is sent to a printer). Data files
are prefixed with the letter d while control files are
prefixed with the letter c. After a printer successfully
handles a job, data files are automatically removed. These
data files belong to what is commonly known as the print
queue.
Log Files
Log files are placed in /var/log/cups and are used by the scheduler to record activities that have taken
place. These files include access, error, and page records.
Installing CUPS
Because printing is a relatively important and fundamental feature of any Linux distribution, most
Linux systems come with CUPS preinstalled. In some cases, especially for Linux server
setups, CUPS may have been left uninstalled. This may be fixed by installing the corresponding
package manually. To install CUPS, please ensure that your system is connected to the Internet.
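As a sketch, the usual install commands look like the following; the package is named "cups" on the major distribution families, but verify the exact name on your system. The final check only confirms the client tools are on the PATH:

```shell
# CUPS is usually preinstalled; if not, install it with your distribution's
# package manager (requires root, so these are shown as comments):
#
#   sudo apt-get install cups     # Debian/Ubuntu family
#   sudo dnf install cups         # Fedora/CentOS/RHEL family (older: yum)
#   sudo zypper install cups      # openSUSE
#
# Afterwards, check that the command-line client tools are available:
if command -v lp >/dev/null 2>&1; then
    cups_status="lp is available"
else
    cups_status="lp not found - install the cups package"
fi
echo "$cups_status"
```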
Managing CUPS
After installing CUPS, you'll need to start and manage the CUPS daemon so that CUPS is ready for
configuring a printer. Managing the CUPS daemon is simple; all management features are wrapped
around the cups init script, which can be easily started, stopped, and restarted.
Each Linux distribution has a GUI application that lets you add, remove, and configure local or remote
printers. Using this application, you can easily set up the system to use both local and network printers.
The following screens show how to find and use the appropriate application in each of the distribution
families covered in this course.
When configuring a printer, make sure the device is currently turned on and connected to the system;
if so it should show up in the printer selection menu. If the printer is not visible, you may want to
troubleshoot using tools that will determine if the printer is connected. For common USB printers, for
example, the lsusb utility will show a line for the printer. Some printer manufacturers also require
extra software to be installed in order to make the printer visible to CUPS; however, due to the
standardization these days, this is rarely required.
CUPS also comes with its own web interface, which runs on port 631 (http://localhost:631). Using this
interface, you can configure printers:
Local/remote printers
Share a printer as a CUPS server
Some pages require a username and password to perform certain actions, for example to add a printer.
For most Linux distributions, you must use the root password to add, modify, or delete printers or
classes.
The screenshots below show the GUI print dialogs (opened with CTRL-P) for (from left to
right) CentOS, Ubuntu, and openSUSE.
lp is just a command line front-end to the lpr utility that passes input to lpr. Thus, we will discuss
only lp in detail. In the example shown here, the task is to print the file called test1.txt.
Using lp
lp and lpr accept command line options that help you perform all operations that the GUI
can accomplish. lp is typically used with a file name as an argument.
Some lp commands and other printing utilities you can use are listed in the table.
Command                     Usage
lp <filename>               Print the file to the default printer
lp -d printer <filename>    Print to a specific printer
program | lp                Print the output of a program
echo string | lp            Print the output of echo
lp -n number <filename>     Print a given number of copies
lpoptions -d printer        Set the default printer
lpq -a                      Show the queue status
lpadmin                     Configure printer queues
The lpoptions utility can be used to set printer options and defaults. Each printer has a set
of tags associated with it, such as the default number of copies and authentication requirements. You
can execute the command lpoptions help to obtain a list of supported options. lpoptions can also be
used to set system-wide values, such as the default printer.
In Linux, command line print job management commands allow you to monitor the job state as well as
managing the listing of all printers and checking their status, and cancelling or moving print jobs to
another printer.
Some of these commands are listed in the table.
Command                        Usage
lpstat -p -d                   Get a list of available printers, along with their status
lpstat -a                      Check the status of all connected printers, including job queues
cancel job-id
OR                             Cancel a print job
lprm job-id
enscript is a tool that is used to convert a text file to PostScript and other formats. It also
supports Rich Text Format (RTF) and HyperText Markup Language (HTML). It can be used
with any printer that is PostScript-compatible; i.e., any modern printer. For
example, you can convert a text file to two-column (-2) formatted PostScript using the
command: enscript -2 -r -p psfile.ps textfile.txt. This command will also rotate (-r) the output
to print so the width of the paper is greater than the height (aka landscape mode), thereby
reducing the number of pages required for printing.
The commands that can be used with enscript are listed in the table below (for a file called
'textfile.txt').
Command                                  Usage
enscript -p psfile.ps textfile.txt       Convert the text file to PostScript (saved in psfile.ps)
enscript -n -p psfile.ps textfile.txt    Convert the text file to n-column PostScript (saved in psfile.ps)
enscript textfile.txt                    Print the text file directly to the default printer
Linux has many standard programs that can read PDF files:
1. Evince is available on virtually all distributions and is the most widely used program.
2. Okular is based on the older kpdf and available on any distribution that provides
the KDE environment.
3. GhostView is one of the first open source PDF readers and is universally available.
4. Xpdf is one of the oldest open source PDF readers and still has a good user base.
All of these open source PDF readers can also display files following the PostScript standard,
unlike the proprietary Adobe Acrobat Reader, which was once widely used on Linux systems but,
with the growth of these excellent programs, is used by very few Linux users today.
In short, there's very little pdftk cannot do when it comes to working with PDF files; it is indeed the
Swiss Army knife of PDF tools.
Using pdftk
You can accomplish a wide variety of tasks using pdftk including:
Command                                  Usage
pdftk 1.pdf 2.pdf cat output 12.pdf      Merge the two documents 1.pdf and 2.pdf. The output will
                                         be saved to 12.pdf.
pdftk A=1.pdf cat A1-2 output new.pdf    Write only pages 1 and 2 of 1.pdf. The output will be
                                         saved to new.pdf.
pdfinfo can extract information about PDF files, which is especially useful when the files are very
large or when a graphical interface is not available.
flpsed can add data to a PostScript document. This tool is specifically useful for filling in forms or
adding short comments into the document.
pdfmod is a simple application that provides a graphical interface for modifying PDF documents.
Using this tool, you can reorder, rotate, and remove pages; export images from a document; edit the
title, subject, and author; add keywords; and combine documents using drag-and-drop action.
The most common tools for converting between PostScript and PDF formats, pdf2ps and ps2pdf, are
part of the Ghostscript package and are available on all Linux
distributions. As an alternative,
there are pstopdf and pdftops which are usually part of the poppler package which may need to be added
through your package manager. Unless you are doing a lot of conversions or need some of the fancier
options (which you can read about in the man pages for these utilities) it really doesn't matter which
ones you use.
Command           Usage
pdf2ps file.pdf   Converts file.pdf to file.ps
ps2pdf file.ps    Converts file.ps to file.pdf
Summary
You have completed this chapter. Let's summarize the key concepts covered:
lp and lpr are used to submit a document to CUPS directly from the command line.
PostScript effectively manages scaling of fonts and vector graphics to provide quality prints.
Portable Document Format (PDF) is the standard format used to exchange documents while
ensuring a certain level of consistency in the way the documents are viewed.
pdftk joins and splits PDFs; pulls single pages from a file; encrypts and decrypts PDF files;
adds, updates, and exports a PDF's metadata; exports bookmarks to a text file; adds or
removes attachments to a PDF; fixes a damaged PDF; and fills out PDF forms.
pdfmod is a simple application with a graphical interface that you can use to modify PDF
documents.
Know how to test for properties and existence of files and other objects.
For example, typing find . -name "*.c" -ls at the command line accomplishes the same thing as
executing a script file containing the lines:
#!/bin/bash
find . -name "*.c" -ls
The #!/bin/bash in the first line should be recognized by anyone who has developed any kind of script
in UNIX environments. The first line of the script, that starts with #!, contains the full path of the
command interpreter (in this case /bin/bash) that is to be used on the file. As we will see on the next
screen, you have a few choices depending upon which scripting language you use.
Most Linux users use the default bash shell, but those with long UNIX backgrounds with other shells
may want to override the default.
bash Scripts
#!/bin/bash
# Interactive reading of variables
echo "ENTER YOUR NAME"
read sname
# Display of variable values
echo $sname
Once again, make it executable by doing chmod +x ioscript.sh.
In the above example, when the script ./ioscript.sh is executed, the user will receive a prompt ENTER
YOUR NAME. The user then needs to enter a value and press the Enter key. The value will then be
printed out.
Additional note: The hash-tag/pound-sign/number-sign (#) is used to start comments in the script and
can be placed anywhere in the line (the rest of the line is considered a comment).
Return Values
All shell scripts generate a return value upon
finishing execution; the value can be set with
the exit statement. Return values permit a process to
monitor the exit state of another process, often in a
parent-child relationship. This helps to determine
how a process terminated and what appropriate steps to take, contingent on success or failure.
$ ls /etc/passwd
/etc/passwd
$ echo $?
0
In this example, the system is able to locate the file /etc/passwd and returns a value of 0 to indicate
success; the return value is always stored in the $? environment variable. Applications often translate
these return values into meaningful messages easily understood by the user.
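The behavior described above can be seen directly in a shell; the first ls succeeds (status 0) and the second fails with a non-zero status:

```shell
# $? holds the exit status of the most recent command:
# 0 means success, non-zero means failure.
ls /etc/passwd > /dev/null
ok_status=$?              # 0: the file exists on any standard Linux system
echo "success case: $ok_status"
ls /no/such/file 2> /dev/null
fail_status=$?            # non-zero: ls could not find the file
echo "failure case: $fail_status"
```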
13.2. Syntax
Basic Syntax and Special Characters
Scripts require you to follow a standard language syntax. Rules delineate how to define variables and
how to construct and format allowed statements, etc. The table lists some special character usages
within bash scripts:
Character   Description
#           Used to add a comment, except when used as \# or as #! at the start of a script
\           Used at the end of a line to indicate continuation on to the next line
;           Used to interpret what follows as a new command
$           Indicates what follows is a variable name
Note that when # is inserted at the beginning of a line of commentary, the whole line is ignored.
# This line will not get executed.
The concatenation operator (\) is used to concatenate large commands over several lines in the shell.
scp [email protected]:\
/var/ftp/pub/userdata/custdata/read \
[email protected]:\
/opt/oradba/master/abc/
The command is divided into multiple lines to make it look readable and easier to understand. The \
operator at the end of each line combines the commands from multiple lines and executes it as one
single command.
The three commands in the following example will all execute even
if the ones preceding them fail:
$ make ; make install ; make clean
However, you may want to abort subsequent commands if one fails. You can do this using
the && (and) operator as in:
$ make && make install && make clean
If the first command fails the second one will never be executed. A final refinement is to use the || (or)
operator as in:
$ cat file1 || cat file2 || cat file3
In this case, you proceed until something succeeds and then you stop executing any further steps.
Functions
A function is a code block that implements a set of operations. Functions are useful for executing
procedures multiple times, perhaps with varying input variables. Functions are also often
called subroutines. Using functions in scripts requires two steps:
1. Declaring a function
2. Calling a function
The function declaration requires a name which is used to invoke it. The proper syntax is:
function_name () {
command...
}
For example, the following function is named display:
display () {
echo "This is a sample function"
}
The function can be as long as desired and have many statements. Once defined, the function can be
called later as many times as necessary. In the full example shown in the figure, we are also showing
an often-used refinement: how to pass an argument to the function. The first argument can be referred
to as $1, the second as $2, etc.
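Since the full example from the original figure is not reproduced here, the following minimal sketch shows argument passing (the function name greet is just an illustration):

```shell
#!/bin/bash
# A function receives its arguments positionally: inside the function,
# $1 is the first argument, $2 the second, and so on.
greet () {
    echo "Hello, $1 and $2"
}
greet Linux World        # prints: Hello, Linux and World
greeting=$(greet A B)    # the output can also be captured
echo "$greeting"
```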
A shell script can invoke several kinds of commands:
Compiled applications
bash built-in commands
Other scripts
Compiled applications are binary executable files that you can find on the filesystem. The shell script
always has access to compiled applications such as rm, ls, df, vi, and gzip.
bash has many built-in commands which can only be used to display the output within a terminal
shell or shell script. Sometimes these commands have the same name as executable programs on the
system, such as echo, which can lead to subtle problems. bash built-in commands
include cd, pwd, echo, read, logout, printf, let, and ulimit.
A complete list of bash built-in commands can be found in the bash man page, or by simply
typing help.
Command Substitution
At times, you may need to substitute the result of a command as a portion of another command. It can
be done in two ways:
Enclosing the inner command in $( )
Enclosing the inner command with backticks (`)
No matter the method, the innermost command will be executed in a newly launched shell
environment, and the standard output of the shell will be inserted where the command substitution was
done.
Virtually any command can be executed this way. Both of these methods enable command
substitution; however, the $( )method allows command nesting. New scripts should always use this
more modern method. For example:
$ cd /lib/modules/$(uname -r)/
In the above example, the output of the command "uname -r" becomes the argument for
the cd command.
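A short demonstration of $( ) substitution, including nesting (the innermost command runs first):

```shell
# Simple substitution: the output of uname -r is spliced into the path.
kernel_dir="/lib/modules/$(uname -r)"
echo "Module directory: $kernel_dir"
# Nested substitution: pwd runs first, then basename of its output.
current=$(basename "$(pwd)")
echo "Current directory name: $current"
```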
Environment Variables
Almost all scripts
use variables containing a value, which
can be used anywhere in the script. These
variables can either be user or system
defined. Many applications use
such environment variables (covered in
the "User Environment" chapter) for
186
Some examples of standard environment variables areHOME, PATH, and HOST. When referenced,
environment variables must be prefixed with the $ symbol as in $HOME. You can view and set the
value of environment variables. For example, the following command displays the value stored in
the PATH variable:
$ echo $PATH
However, no prefix is required when setting or modifying the variable value. For example, the
following command sets the value of the MYCOLOR variable to blue:
$ MYCOLOR=blue
You can get a list of environment variables with the env, set, or printenv commands.
Exporting Variables
By default, the variables created within a script are available only to the subsequent steps of that script.
Any child processes (sub-shells) do not have automatic access to the values of these variables. To
make them available to child processes, they must be promoted to environment variables using
the export statement as in:
export VAR=value
or
VAR=value ; export VAR
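The difference export makes can be demonstrated by asking a child shell what it sees (the variable names are arbitrary examples):

```shell
# A plain assignment is invisible to child processes;
# an exported variable is inherited by them.
LOCALVAR=unshared
export SHAREDVAR=shared
child_view=$(bash -c 'echo "[$LOCALVAR][$SHAREDVAR]"')
echo "$child_view"    # prints: [][shared]
```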
Script Parameters
Users often need to pass parameter values to a script, such as a filename, date, etc. Scripts will take
different paths or arrive at different values according to the parameters (command arguments) that are
passed to them. These values can be text or numbers as in:
$ ./script.sh /tmp
$ ./script.sh 100 200
Within a script, a parameter or argument is represented with a $ and a number. The table lists
some of these parameters.
Parameter
Meaning
$0
Script name
$1
First parameter
$*
All parameters
$#
Number of arguments
#!/bin/bash
echo The name of this program is: $0
echo The first argument passed from the command line is: $1
echo The second argument passed from the command line is: $2
echo The third argument passed from the command line is: $3
echo All of the arguments passed from the command line are : $*
echo
echo All done with $0
Make the script executable with chmod +x. Then run the script, giving it three arguments, as
in: ./script3.sh one two three.
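The run can be recreated as follows (the /tmp path is chosen here just for illustration):

```shell
# Write the script to a temporary file, make it executable,
# and pass three arguments.
cat > /tmp/script3.sh <<'EOF'
#!/bin/bash
echo The name of this program is: $0
echo The first argument passed from the command line is: $1
echo All of the arguments passed from the command line are : $*
echo All done with $0
EOF
chmod +x /tmp/script3.sh
run_output=$(/tmp/script3.sh one two three)
echo "$run_output"
```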
Output Redirection
Most operating systems accept input from the keyboard and display the output on the terminal.
However, in shell scripting you can send the output to a file. The process of diverting the output to a
file is called output redirection.
The > character is used to write output to a file. For example, the following command sends the output
of free to the file /tmp/free.out:
$ free > /tmp/free.out
To check the contents of the /tmp/free.out file, at the command prompt type cat /tmp/free.out.
Two > characters (>>) will append output to a file if it exists, and act just like > if the file does not
already exist.
Input Redirection
Just as the output can be redirected to a file, the input of a command can be read from a file. The
process of reading input from a file is called input redirection and uses the < character. If you create a
file called script8.sh with the following contents:
#!/bin/bash
echo Line count
wc -l < /tmp/free.out
and then execute it with chmod +x script8.sh ; ./script8.sh, it will count the number of lines from
the /tmp/free.out file and display the results.
13.3. Constructs
The if Statement
Conditional decision making, using an if statement, is a basic construct
that any useful programming or scripting language must have.
When an if statement is used, the ensuing actions depend on the evaluation of specified conditions
such as:
if condition
then
statements
else
statements
fi
if [ -f /etc/passwd ]
then
    echo "/etc/passwd exists"
fi
Notice the use of the square brackets ([]) to delineate the test condition. There are many other kinds of
tests you can perform, such as checking whether two numbers are equal to, greater than, or less than
each other and make a decision accordingly; we will discuss these other tests.
In modern scripts you may see doubled brackets as in [[ -f /etc/passwd ]]. This is not an error. It is
never wrong to do so and it avoids some subtle problems such as referring to an empty environment
variable without surrounding it in double quotes; we won't talk about this here.
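A minimal sketch of the two bracket forms side by side; note that with single brackets the variable should be quoted, while double brackets tolerate it bare:

```shell
# [[ ]] handles an unset/empty variable without quoting:
unset EMPTYVAR
if [[ -z $EMPTYVAR ]]; then
    bracket_result="empty detected"
fi
# With single brackets, quote the variable:
if [ -z "$EMPTYVAR" ]; then
    bracket_result="$bracket_result, single brackets agree"
fi
echo "$bracket_result"
```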
Condition   Meaning
-e file     Checks if the file exists
-d file     Checks if the file is a directory
-f file     Checks if the file is a regular file (i.e., not a symbolic link, device
            node, directory, etc.)
-s file     Checks if the file is of non-zero size
-g file     Checks if the file has sgid set
-u file     Checks if the file has suid set
-r file     Checks if the file is readable
-w file     Checks if the file is writable
-x file     Checks if the file is executable
You can view the full list of file conditions using the command man 1 test.
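Several of these conditions can be exercised against files that exist on any standard Linux system:

```shell
# Build up a result string from a few file tests.
results=""
[ -e /etc/passwd ] && results="exists"
[ -f /etc/passwd ] && results="$results regular"
[ -d /etc ]        && results="$results dir"
[ -r /etc/passwd ] && results="$results readable"
[ -x /bin/ls ]     && results="$results executable"
echo "$results"    # prints: exists regular dir readable executable
```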
For example, an if statement can compare input provided by the user and display an
appropriate result.
Numerical Tests
You can use specially defined operators with the if statement to compare numbers. The various
operators that are available are listed in the table.
Operator
Meaning
-eq
Equal to
-ne
Not equal to
-gt
Greater than
-lt
Less than
-ge
Greater than or equal to
-le
Less than or equal to
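A quick sketch using two of the operators from the table:

```shell
a=10
b=20
# -lt: true because 10 < 20; -ne: true because 10 != 20.
if [ $a -lt $b ]; then num_result="$a is less than $b"; fi
if [ $a -ne $b ]; then num_result="$num_result; and they differ"; fi
echo "$num_result"
```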
Arithmetic Expressions
Arithmetic expressions can be evaluated in the following
three ways (spaces are important!):
Using the expr utility: expr is a standard but somewhat deprecated program. The syntax is as follows:
expr 8 + 8
echo $(expr 8 + 8)
Using the $((...)) syntax: This is the built-in shell format. The syntax is as follows:
echo $((x+1))
Using the built-in shell command let. The syntax is as follows:
let x=1+2 ; echo $x
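The methods can be compared side by side in a short sketch:

```shell
x=7
r1=$(expr $x + 1)      # expr utility (note the spaces around +)
r2=$((x + 1))          # built-in $(( )) syntax
let r3=x*2             # built-in let command
echo "$r1 $r2 $r3"     # prints: 8 8 14
```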
Summary
You have completed this chapter. Let's summarize the key concepts covered:
Scripts are a sequence of statements and commands stored in a file that can be executed by a
shell. The most commonly used shell in Linux is bash.
Command substitution allows you to substitute the result of a command as a portion of another
command.
Functions, also often called subroutines, are groups of commands that can be invoked
repeatedly, perhaps with different arguments.
Environmental variables are quantities either pre-assigned by the shell or defined and modified
by the user.
Scripts can behave differently based on the parameters (values) passed to them.
Use Boolean expressions when working with multiple data types including strings or numbers as
well as files.
A string variable contains a sequence of text characters. It can include letters, numbers, symbols and
punctuation marks. Some examples: abcde, 123, abcde 123, abcde-123, &acbde=%123
String operators include those that do comparison, sorting, and finding the length. The following table
demonstrates the use of some basic string operators.
Operator                  Meaning
[[ string1 > string2 ]]   Compares the sorting order of string1 and string2
[ string1 == string2 ]    Compares the characters in string1 with the characters in string2
myLen1=${#mystring1}      Saves the length of mystring1 in the variable myLen1
In the first example, we compare the first string with the second string and display an appropriate
message using the if statement.
In the second example, we pass in a file name and see if that file exists in the current directory or not.
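Since those examples are not reproduced here, a minimal sketch of string comparison and length:

```shell
string1=abc
string2=abd
# Compare the two strings character by character.
if [ "$string1" == "$string2" ]; then
    cmp_result="equal"
else
    cmp_result="not equal"
fi
# ${#var} expands to the length of the string.
mylen=${#string1}
echo "$cmp_result; length of $string1 is $mylen"
```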
Parts of a String
At times, you may not need to compare or use an entire string. To extract the first character of a string,
we can specify ${string:0:1}; more generally, ${string:n:m} extracts m characters starting at
position n (positions are counted from 0), and ${string:n} extracts everything from position n onward.
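A short sketch of substring extraction:

```shell
mystring="Linux"
first=${mystring:0:1}     # first character: L
first3=${mystring:0:3}    # first three characters: Lin
rest=${mystring:2}        # from the third character on: nux
echo "$first $first3 $rest"
```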
Operator   Operation   Meaning
&&         AND         True only if both conditions are true
||         OR          True if either condition is true
!          NOT         Inverts the result of the condition
Boolean expressions return either TRUE or FALSE. We can use such expressions when working with
multiple data types including strings or numbers as well as with files. For example, to check if a file
exists, use the following conditional test:
[ -e <filename> ]
Similarly, to check if the value of number1 is greater than the value of number2, use the following
conditional test:
[ $number1 -gt $number2 ]
The case statement is used in scenarios where the value of a variable can lead to different execution
paths. Its structure is:
case expression in
pattern1) execute commands;;
pattern2) execute commands;;
pattern3) execute commands;;
pattern4) execute commands;;
*)        execute some default commands or nothing ;;
esac
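A minimal sketch of the construct, including the | alternation and the * default:

```shell
animal=dog
case "$animal" in
    cat)      sound="meow";;
    dog|wolf) sound="woof";;       # two patterns, one branch
    *)        sound="unknown";;    # default branch
esac
echo "$animal says $sound"         # prints: dog says woof
```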
Loops allow a set of statements to be executed repeatedly. bash provides three types of loops:
for
while
until
All of these loops are easily used for repeating a set of statements until the
exit condition is reached.
201
In this case, variable-name and list are substituted by you as appropriate (see examples). As with other
looping constructs, the statements that are repeated should be enclosed by do and done.
The set of commands that need to be repeated should be enclosed between do and done. You can use
any command or operator as the condition. Often it is enclosed within square brackets ([]).
Similar to the while loop, the set of commands that need to be repeated should be enclosed
between do and done. You can use any command or operator as the condition.
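The three loop types can be sketched side by side:

```shell
# for iterates over a list of items:
colors=""
for c in red green blue; do colors="$colors$c "; done
# while repeats as long as the condition is true:
i=1; wsum=0
while [ $i -le 3 ]; do wsum=$((wsum + i)); i=$((i + 1)); done
# until repeats as long as the condition is false:
j=1; usum=0
until [ $j -gt 3 ]; do usum=$((usum + j)); j=$((j + 1)); done
echo "colors: $colors/ while sum: $wsum / until sum: $usum"
```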
While working with scripts and commands, you may run into errors. These may be due to an error in
the script, such as incorrect syntax, or other causes, such as a missing file or insufficient
permission to do an operation. These errors may be reported with a specific error code, but often just
yield incorrect or confusing output. So how do you go about identifying and fixing an error?
Debugging helps you troubleshoot and resolve such errors, and is one of the most important tasks a
system administrator performs.
set -x
# turns on debugging
...
set +x
# turns off debugging
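With tracing on, each command is echoed to stderr (prefixed with +) before it runs, which can be captured for inspection:

```shell
# Run a small traced snippet in a child shell and collect the
# trace (stderr) in a temporary file.
trace_file=$(mktemp /tmp/trace.XXXXXXXX)
bash -c 'set -x
echo tracing on
set +x
echo tracing off' 2> "$trace_file"
trace_hit=$(grep -c 'echo tracing on' "$trace_file")
rm -f "$trace_file"
echo "traced lines found: $trace_hit"
```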
File stream   Description                                           File Descriptor
stdin         Standard input, by default the keyboard/terminal      0
stdout        Standard output, by default the terminal screen       1
stderr        Standard error, where error messages are displayed    2
Using redirection, we can save the stdout and stderr output streams to one file or two separate files for
later analysis after a program or command is executed.
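For example, stdout and stderr can be captured in separate files:

```shell
out_file=$(mktemp /tmp/out.XXXXXXXX)
err_file=$(mktemp /tmp/err.XXXXXXXX)
# One argument exists (goes to stdout), one does not (error to stderr).
ls /etc/passwd /no/such/file > "$out_file" 2> "$err_file"
out_content=$(cat "$out_file")
err_lines=$(wc -l < "$err_file")
echo "stdout captured: $out_content (stderr lines: $err_lines)"
rm -f "$out_file" "$err_file"
```

To combine both streams in one file instead, append 2>&1 after the > redirection.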
Temporary files (and directories) are meant to store data for a short time. Usually one arranges it so
that these files disappear when the program using them terminates. While you can also use touch to
create a temporary file, this may make it easy for hackers to gain access to your data.
The best practice is to create random and unpredictable filenames for temporary storage. One way to
do this is with the mktemp utility as in these examples:
The XXXXXXXX is replaced by the mktemp utility with random characters to ensure the name of the
temporary file cannot be easily predicted and is only known within your program.
Command                                      Usage
TEMP=$(mktemp /tmp/tempfile.XXXXXXXX)        To create a temporary file
TEMPDIR=$(mktemp -d /tmp/tempdir.XXXXXXXX)   To create a temporary directory
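Both forms can be exercised together, with cleanup at the end:

```shell
# mktemp replaces the X's with random characters and creates the entry.
TEMP=$(mktemp /tmp/tempfile.XXXXXXXX)
TEMPDIR=$(mktemp -d /tmp/tempdir.XXXXXXXX)
echo "file: $TEMP"
echo "dir:  $TEMPDIR"
echo "scratch data" > "$TEMP"   # use the file...
rm -f "$TEMP"                   # ...and clean up when finished
rmdir "$TEMPDIR"
```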
First, the danger: If someone creates a symbolic link from a known temporary file used by root to
the /etc/passwd file, like this:
$ ln -s /etc/passwd /tmp/tempfile
There could be a big problem if a script run by root has a line in it like this:
echo $VAR > /tmp/tempfile
The password file will be overwritten by the temporary file contents.
To prevent such a situation make sure you randomize your temporary filenames by replacing the
above line with the following lines:
TEMP=$(mktemp /tmp/tempfile.XXXXXXXX)
echo $VAR > $TEMP
The /dev/null pseudo-file is often called the bit bucket or black hole. It discards all data that gets
written to it and never returns a failure on write operations. Using the
proper redirection operators, it can make the output disappear from commands that would normally
generate output to stdout and/or stderr:
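For example:

```shell
# Discard stderr only: the error message vanishes, but the
# exit status still reports the failure.
ls /no/such/file 2> /dev/null
echo "exit status survives: $?"
# Discard everything (stdout and stderr):
ls /etc/passwd > /dev/null 2>&1
null_status=$?
echo "silenced ls exit status: $null_status"
```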
It is often useful to generate random numbers and other random data for tasks such as generating
passwords or encryption keys. Such random numbers can be generated by using the $RANDOM
environment variable, which is derived from the Linux kernel's built-in random number generator, or
by the OpenSSL library function, which uses the FIPS140 algorithm to generate random numbers for
encryption.
Regardless of which of these two sources is used, the system maintains a so-called entropy pool of
these digital numbers/random bits. Random numbers are created from this entropy pool.
The Linux kernel offers the /dev/random and /dev/urandom device nodes which draw on the entropy
pool to provide random numbers which are drawn from the estimated number of bits of noise in the
entropy pool.
/dev/random is used where very high quality randomness is required, such as one-time pad or key
generation, but it is relatively slow to provide values. /dev/urandom is faster and suitable (good
enough) for most cryptographic purposes.
Furthermore, when the entropy pool is empty, /dev/random is blocked and does not generate any
number until additional environmental noise (network traffic, mouse movement, etc.) is gathered
whereas /dev/urandom reuses the internal pool to produce more pseudo-random bits.
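Both sources can be sampled from the shell:

```shell
r=$RANDOM                     # an integer between 0 and 32767
die=$(( RANDOM % 6 + 1 ))     # e.g., simulate a six-sided die roll
echo "RANDOM gave $r; die roll: $die"
# Read raw random bytes from the kernel's entropy-backed device:
bytes=$(head -c 4 /dev/urandom | od -An -tx1)
echo "4 random bytes:$bytes"
```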
Summary
You have completed this chapter. Let's summarize the key concepts covered:
You can manipulate strings to perform actions such as comparison, sorting, and finding length.
You can use Boolean expressions when working with multiple data types including strings or
numbers as well as files.
Operators used in Boolean expressions include the && (AND), || (OR), and ! (NOT) operators.
We looked at the advantages of using the case statement in scenarios where the value of a
variable can lead to different execution paths.
The standard and error outputs from a script or shell commands can easily be redirected into
the same file or separate files to aid in debugging and saving results.
Linux allows you to create temporary files and directories, which store data for a short
duration, both saving space and increasing security.
Linux provides several different ways of generating random numbers, which are widely used.
Use at, cron, and sleep to schedule processes in the future or pause them.
Process Types
A terminal window (one kind of command shell) is a process that runs as long as needed. It allows
users to execute programs and access resources in an interactive environment. You can also run
programs in the background, which means they become detached from the shell.
Processes can be of different types according to the task being performed. Here are some different
process types along with their descriptions and examples.
Process Type           Description                                              Example
Interactive Processes  Started by a user, either at the command line or
                       through a graphical interface, and controlled
                       interactively                                            bash, firefox, top
Batch Processes        Automatic processes scheduled from, and then
                       disconnected from, the terminal                          updatedb
Daemons                Server processes that run continuously in the
                       background                                               httpd, xinetd, sshd
Threads                Lightweight processes: tasks that run under the
                       umbrella of a main process, sharing memory and
                       other resources                                          gnome-terminal, firefox
Kernel Threads         Kernel tasks that users neither start nor terminate;
                       they run as long as the system is up                     kswapd0, migration, ksoftirqd
There are some other, less frequent, process states, especially when a process is terminating. Sometimes
a child process completes, but its parent process has not asked about its state. Amusingly, such a
process is said to be in a zombie state; it is not really alive, but still shows up in the system's list of
processes.
New PIDs are usually assigned in ascending order as processes are born. Thus PID 1 denotes
the init process (initialization process), and succeeding processes are gradually assigned higher
numbers.
The table explains the PID types and their descriptions:
ID Type            Description
Process ID (PID)   Unique Process ID number
Thread ID (TID)    Thread ID number; this is the same as the PID for a single-threaded
                   process, while each thread of a multi-threaded process has its own TID
The user who started a process is identified by the Real User ID (RUID). The user whose
privileges determine the access rights of the process is identified by the Effective UID (EUID).
The EUID may or may not be the same as the RUID.
Users can be categorized into various groups. Each group is identified by the Real Group ID,
or RGID. The access rights of the group are determined by the Effective Group ID, or EGID. Each
user can be a member of one or more groups.
Most of the time we ignore these details and just talk about the User ID (UID).
The priority for a process can be set by specifying a nice value, or niceness, for the process. The lower
the nice value, the higher the priority. Low values are assigned to important processes, while high
values are assigned to processes that can wait longer. A process with a high nice value simply allows
other processes to be executed first. In Linux, a nice value of -20 represents the highest priority and 19
represents the lowest. (This does sound kind of backwards, but this convention, the nicer the process,
the lower the priority, goes back to the earliest days of UNIX.)
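Niceness can be observed directly: with no arguments, nice prints the current niceness, and with -n it runs a command at an adjusted value, so nesting the two shows the effect:

```shell
# base is the default niceness (usually 0); the inner nice reports
# the value the child process actually runs at.
base=$(nice)
lowered=$(nice -n 10 nice)
echo "base niceness: $base; after nice -n 10: $lowered"
```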
You can also assign a so-called real-time priority to time-sensitive tasks, such as controlling
machines through a computer or collecting incoming data. This is just a very high priority and is not to
be confused with what is called hard real time which is conceptually different, and has more to do
with making sure a job gets completed within a very well-defined time window.
ps has many options for specifying exactly which tasks to examine, what information to display about
them, and precisely what output format should be used.
Without options, ps will display all processes running under the current shell. You can use the -u option to display information about processes for a specified username. The command ps -ef displays all
the processes in the system in full detail. The command ps -eLf goes one step further and displays one
line of information for every thread (remember, a process can contain multiple threads).
Command: ps aux
Output (abridged):
USER  PID %CPU %MEM   VSZ  RSS TTY STAT START TIME COMMAND
root    1  0.0  0.0 19356 1292 ?   Ss   Feb27 0:08 /sbin/init
root    2  0.0  0.0     0    0 ?   S    Feb27 0:00 [kthreadd]
...
Command: ps -o stat,priority,pid,pcpu,comm
Output (abridged):
STAT PRI   PID %CPU COMMAND
S     20     2  0.0 kthreadd
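The -o column selection can be combined with -p to inspect one specific process; here, the current shell itself ($$ expands to its PID):

```shell
# Show only the PID and command name of the current shell process.
ps_out=$(ps -o pid,comm -p $$)
echo "$ps_out"
```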
To terminate a process, you can type kill -SIGKILL <pid> or kill -9 <pid>. Note, however, that you
can only kill your own processes: those belonging to another user are
off limits unless you are root.
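A safe way to try this is against a disposable background process of your own (SIGTERM is the polite request; SIGKILL/-9 is the forceful, uncatchable variant):

```shell
sleep 60 &                  # start a disposable background process
victim=$!                   # $! holds the PID of the last background job
kill -SIGTERM "$victim"
wait "$victim" 2> /dev/null
# kill -0 sends no signal; it only checks whether the PID still exists.
if kill -0 "$victim" 2> /dev/null; then
    kill_result="still running"
else
    kill_result="terminated"
fi
echo "process $victim: $kill_result"
```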
top
While a static view of what the system is doing is useful,
monitoring the system performance live over time is also valuable.
One option would be to run ps at regular intervals, say, every two
minutes. A better alternative is to use top to get constant real-time
updates (every two seconds by default) until you exit by
typing q. top clearly highlights which processes are consuming the
most CPU cycles and memory (using appropriate commands from
within top).
The load average is a measure of how busy the system is. A load average of 1.00 per CPU indicates a
fully subscribed, but not overloaded, system. If the load average goes above this value, it indicates that
processes are competing for CPU time. If the load average is very high, it might indicate that the
system is having a problem, such as a runaway process (a process in a non-responding state).
In the top display, the memory (Mem) and swap (Swap) lines each show total memory, used memory, and free space.
You need to monitor memory usage very carefully to ensure good system performance. Once the
physical memory is exhausted, the system starts using swap space (temporary storage space on the
hard drive) as an extended memory pool, and since accessing disk is much slower than accessing
memory, this will negatively affect system performance.
If the system starts using swap often, you can add more swap space. However, adding more physical
memory should also be considered.
If we had more than one CPU, say a quad-CPU system, we would divide the load average numbers by
the number of CPUs. In this case, for example, seeing a 1 minute load average of 4.00 implies that the
system as a whole was 100% (4.00/4) utilized during the last minute.
Short-term increases are usually not a problem. A high peak is likely a burst of activity, not a
new level. For example, at start up, many processes start and then activity settles down. If a high peak
is seen in the 5- and 15-minute load averages, it may be cause for concern.
interactive tasks, and you can type other commands in the terminal window while the background job
is running. By default, all jobs are executed in the foreground. You can put a job in the background by
suffixing & to the command, for example: updatedb &
You can either use CTRL-Z to suspend a foreground job or CTRL-C to terminate a foreground job
and can always use the bg and fg commands to run a process in the background and foreground,
respectively.
Managing Jobs
The jobs utility displays all jobs running in the background. The display shows the job ID, state, and
command name.
jobs -l provides the same information as jobs, plus the PID of each background job.
The background jobs are connected to the terminal window, so if you log off, the jobs utility will not
show the ones started from that window.
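A minimal sketch of this workflow, using sleep as a stand-in for a long-running job:

```shell
# Start a stand-in long-running job in the background with &
sleep 30 &
# The shell stores the PID of the most recent background job in $!
echo "Started background job with PID $!"
# List background jobs; -l also shows each job's PID
jobs -l
# Clean up the example job (normally you would let it finish,
# or bring it to the foreground with: fg %1)
kill $!
```

The %1 notation refers to job number 1 as shown by jobs; it can be used with fg, bg, and kill.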
cron
cron is a time-based scheduling utility program. It can launch routine background jobs at specific
times and/or days on an ongoing basis. cron is driven by a configuration file
called /etc/crontab (cron table) which contains the various shell commands that need to be run at the
properly scheduled times. There are both system-wide crontab files and individual user-based ones.
Each line of a crontab file represents a job, and is composed of a so-called CRON expression,
followed by a shell command to execute.
The crontab -e command opens the crontab editor to edit existing jobs or to create new ones. Each
line of the crontab file contains six fields:
Field   Description    Values
MIN     Minutes        0 to 59
HOUR    Hours          0 to 23
DOM     Day of Month   1 to 31
MON     Month          1 to 12
DOW     Day of Week    0 to 6 (0 = Sunday)
CMD     Command        Any command to be executed
Examples:
1. The entry "* * * * * /usr/local/bin/execute/this/script.sh" will schedule a job to execute script.sh
every minute of every hour, every day of the month, every month, and every day of the week.
2. The entry "30 08 10 06 * /home/sysadmin/full-backup" will schedule a full backup at 8:30 a.m. on June 10, irrespective of the day of the week.
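A few more illustrative crontab lines, following the same six-field layout (the script paths here are hypothetical; edit your own table with crontab -e):

```
# MIN HOUR DOM MON DOW  CMD
0     2    *   *   *    /home/sysadmin/nightly-backup        # 2:00 a.m. every day
15    8    *   *   1    /usr/local/bin/weekly-report.sh      # 8:15 a.m. every Monday
0     0    1   *   *    /usr/local/bin/monthly-cleanup.sh    # midnight on the 1st of each month
```

Lines beginning with # are comments and are ignored by cron.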
sleep
Sometimes a command or job must be delayed or suspended. Suppose, for example, an application has
read and processed the contents of a data file and then needs to save a report on a backup system. If
the backup system is currently busy or not available, the application can be made to sleep (wait) until
it can complete its work. Such a delay might be used to mount the backup device and prepare it for
writing.
sleep suspends execution for at least the specified period of time, which can be given as the number of
seconds (the default), minutes, hours or days. After that time has passed (or an interrupting signal has
been received) execution will resume.
Syntax:
sleep NUMBER[SUFFIX]...
where SUFFIX may be:
1. s for seconds (the default)
2. m for minutes
3. h for hours
4. d for days
sleep and at are quite different; sleep delays execution for a specific period while at starts execution at
a later time.
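A minimal sketch of the backup scenario described above, with sleep standing in for the wait (the mount step is simulated with echo):

```shell
# Simulate waiting for a backup device to become ready
echo "Backup system busy; waiting..."
sleep 2              # a number without a suffix means seconds
echo "Mounting backup device and writing report"
sleep 1s             # the same kind of delay with an explicit 's' suffix
echo "Done"
```

GNU sleep also accepts fractional values (for example, sleep 0.5) and the m, h, and d suffixes listed above.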
Summary
You have completed this chapter. Let's summarize the key concepts covered:
Every process has a unique identifier (PID) to enable the operating system to keep track of it.
You can use top to get constant real-time updates about overall system performance as well as
information about the processes running on the system.
Load average indicates the amount of utilization the system is under at particular times.
Web browsers
Email clients
Other applications
Web Browsers
As discussed in the earlier chapter on Network Operations, Linux offers a wide variety of web
browsers, both graphical and text-based, including:
Firefox
Google Chrome
Chromium
Epiphany
Konqueror
w3m
lynx
Email Applications
Email applications allow for sending, receiving, and
reading messages over the Internet. Linux systems offer
a wide number of email clients, both graphical and
text-based. In addition many users simply use their
browsers to access their email accounts.
Graphical email clients, such as Thunderbird (produced by Mozilla), Evolution, and Claws Mail
Text mode email clients, such as mutt and mail
FileZilla: Intuitive graphical FTP client that supports FTP, Secure File Transfer Protocol (SFTP), and FTP Secured (FTPS). Used to transfer files to/from FTP servers.
Pidgin: To access GTalk, AIM, ICQ, MSN, IRC, and other messaging networks.
Ekiga: Video conferencing and IP telephony (VoIP) application.
XChat: Popular Internet Relay Chat (IRC) client.
Office Applications
Most day-to-day computer systems have productivity applications (sometimes called office suites)
available or installed. Each suite is a collection of closely coupled programs used to create and edit
different kinds of files, such as:
Spreadsheet
Presentation
Graphical objects
Most Linux distributions offer LibreOffice, an open source office suite that started in 2010 as an
evolution of OpenOffice.org. While other office suites are available, as we have
listed, LibreOffice is the most mature, widely used, and actively developed.
The component applications included in LibreOffice are:
Components of LibreOffice
Writer: Word processing
Calc: Spreadsheets
Impress: Presentations
Draw: Create and edit graphics and diagrams
Development Applications
Linux distributions come with a complete set of applications and tools that are needed by those
developing or maintaining both user applications and the kernel itself.
These tools are tightly integrated and include:
Compilers (such as gcc for programs in C and C++) for virtually every computer language.
Debuggers such as gdb and various graphical front ends to it and many other debugging tools (such
as valgrind).
Performance measuring and monitoring programs, some with easy-to-use graphical interfaces, others
more arcane and meant for experienced development engineers.
Complete Integrated Development Environments (IDEs), such as Eclipse, that put all these tools
together.
On other operating systems these tools have to be obtained and installed separately, often at a high
cost, while on Linux they are all available at no cost through standard package installation systems.
Multimedia applications are used to listen to music, view videos, etc., as well as to present and view
text and graphics.
Sound Players
Linux systems offer a number of sound player applications, including:
Amarok: Mature music player and music collection manager.
Audacity: Audio editor and recorder for working with sound files.
Rhythmbox: Music player and organizer, the default audio player on many GNOME desktops.
Of course, Linux systems can also connect with commercial online music streaming services, such
as Pandora and Spotify, through web browsers.
Movie Players
Movie (video) players can portray input from many different sources, either local to the machine or
on the Internet.
Linux systems offer a number of movie players including:
VLC
MPlayer
Xine
Totem
Movie Editors
Movie editors are used to edit videos or movies. Linux systems offer a number of movie editors
including:
Kino: Acquire and edit camera streams. Kino can merge and separate video clips.
Cinepaint: Retouch movies frame by frame.
Blender: Create 3D animation and design. Blender is a professional tool that uses modeling as a starting point.
Cinelerra: Capture, compose, and edit audio and video.
FFmpeg: Record, convert, and stream audio/video. FFmpeg is a format converter, among other things, and includes companion tools such as ffplay and ffserver.
There are complex and powerful tools for camera capture, recording, editing, enhancing, and creating video, each having its own focus.
GIMP (GNU Image Manipulation Program) is a feature-rich image retouching and editing tool
similar to Adobe Photoshop and is available on all Linux distributions. Some features of GIMP are:
It provides extensive information about the image, such as layers, channels, and histograms.
Graphics Utilities
In addition to the GIMP, there are other graphics utilities that help perform various image-related
tasks, including:
eog: Eye of Gnome (eog) is an image viewer that provides slide show capability and a few image editing tools, such as rotate and resize. It can also step through the images in a directory with just a click.
Inkscape: Inkscape is a vector graphics editor with many editing features. It works with layers and transformations of the image. It is sometimes compared to Adobe Illustrator.
convert: convert, part of the ImageMagick suite, performs command-line image transformations such as resizing, rotating, and converting between file formats.
Scribus: Scribus is used for creating documents for publishing, providing a What You See Is What You Get (WYSIWYG) environment. It also provides numerous editing tools.
Summary
You have completed this chapter. Let's summarize the key concepts covered:
Linux offers a wide variety of Internet applications such as web browsers, email clients, online
media applications, and others.
Web browsers supported by Linux can be either graphical or text-based, such as Firefox, Google
Chrome, Epiphany, w3m, lynx, and others.
Linux supports graphical email clients, such as Thunderbird, Evolution, and Claws Mail, and
text mode email clients, such as mutt and mail.
Linux systems provide many other applications for performing Internet-related tasks, such
as FileZilla, XChat, Pidgin, and others.
Most Linux distributions offer LibreOffice to create and edit different kinds of documents.
Linux systems offer entire suites of development applications and tools, including compilers
and debuggers.
Linux systems offer a number of movie players, including VLC, MPlayer, Xine, and Totem.
Linux systems offer a number of movie editors, including Kino, Cinepaint, and Blender, among
others.
The GIMP (GNU Image Manipulation Program) utility is a feature-rich image retouching
and editing tool available on all Linux distributions.