Linux - David A Williams
2 BOOKS IN 1
The Ultimate Bible to Learn Linux
Command Line, Linux
Administration and
Shell Scripting Step by Step
David A. Williams
© Copyright 2020 - All rights reserved.
The contents of this book may not be reproduced, duplicated or transmitted
without direct written permission from the author.
Legal Notice:
This book is copyright protected. It is only for personal use. You cannot
amend, distribute, sell, use, quote or paraphrase any part of the content within
this book without the consent of the author.
Disclaimer Notice:
Please note the information contained within this document is for educational
and entertainment purposes only. Every attempt has been made to provide
accurate, up-to-date, reliable and complete information. No warranties of any
kind are expressed or implied. Readers acknowledge that the author is not
engaging in the rendering of legal, financial, medical or professional advice.
The content of this book has been derived from various sources. Please
consult a licensed professional before attempting any techniques outlined in
this book.
Linux Administration
The Ultimate Beginners Guide to Learn Linux Step by Step
Introduction
Chapter One: Installing Red Hat Enterprise Linux on your Computer
Creating Red Hat Enterprise Linux 7 Installation Media on Windows
Creating Red Hat Enterprise Linux 7 Installation Media on Mac OS X
Installing Red Hat Enterprise Linux 7
Chapter Two: The Linux Command Line
The Bash Shell
Basics of Shell
Executing Commands on the Bash Shell
Shortcuts to Edit the Command Line
Managing Files using Commands on the Command Line
Directory Creation
Deleting Files and Directories
File Globbing
Chapter Three: Managing Text Files
Redirecting the Output from a File to another File or Program
Rearranging the Existing Content in Vim
Using the Graphical Editor to Edit Text Files in Red Hat Enterprise Linux 7
Chapter Four: User and Group Management
Users and Groups
Getting Superuser Access
Using Su to Switch Users
Managing User Accounts
User Password Management
Access Restriction
Chapter Five: Accessing Files in Linux and File System Permissions
Linux File System Permissions
Managing File System Permissions using the Command Line
Chapter Six: Linux Process Management
Processes
Controlling Jobs
Running Background Jobs
Killing Processes
Process Monitoring
Chapter Seven: Services and Daemons in Linux
Identifying System Processes Started Automatically
Service States
Controlling System Services
Enabling System Daemons to Start or Stop at Boot
Chapter Eight: OpenSSH Service
Using SSH to Access the Remote Command Line
SSH Based Authentication
Customizing the SSH Configuration
Chapter Nine: Log Analysis
Architecture of System Logs
Syslog File Review
Reviewing Journal Entries for Systemd
Systemd Journal Preservation
Maintaining Time Accuracy
The Chronyd Service
Chapter Ten: Archiving Files
Managing Compressed Archives
Conclusion
Linux Command
Beginners Guide to Learn Linux Commands and Shell Scripting
Introduction
Chapter 1: Starting with the Linux Shell
Chapter 2: Exploring the Realm of Commands
Chapter 3: The Linux Environment
Chapter 4: Package Management & Storage on Linux Systems
Chapter 5: Linux Environment Variables
Chapter 6: The Basics of Shell Scripting
Chapter 7: Moving On To The Advanced Level In Shell Scripting
Conclusion
Resources
Linux Administration
The Ultimate Beginners Guide to
Learn Linux Step by Step
David A. Williams
Introduction
The term Linux refers to a kernel, and by extension an operating system, that
was developed by Linus Torvalds along with a few other contributors. It was
first released publicly in September 1991. In a world where Microsoft was
charging consumers for an operating system like Windows, the advantage of
Linux was that it was an open-source software meaning that programmers
had the option to customize it, create their own operating system out of it and
use it as per their requirement. The Linux operating system code was written
mostly in the C programming language.
There are hundreds of operating systems available today that use
the Linux kernel, and the most popular among them are Ubuntu, Debian,
Fedora and Knoppix. This is not the end of the list, as new operating systems
come up almost every year, built on the kernel from the original Linux
system.
Linux was a milestone in computing and technology and most of the mobile
phones, web servers, personal computers, cloud-servers, and supercomputers
today are powered by Linux. The job profile of a Linux System Administrator
involves maintaining operations on a Linux based system and ensuring
maximum uptime from the system; in short, making sure that the system is
used in the most optimal way possible. In the modern world, most of
your devices run on a Linux powered server or are associated with a Linux
system in some way or the other because of its high stability and open source
nature.
Owing to this, the job of a Linux Administrator revolves around the following.
Linux File System
Managing the superuser on the Linux system known as Root
Command Line using the Bash Shell
Managing users, files and directories
You can think of it as maintaining your own personal computer, which you
would do at home, but on a larger scale, in this case for an entire
organization. Linux system administration is a critical requirement for
organizations in the modern world and, therefore, there is a huge demand in
the market for this profile. The job description may vary from one
organization to another, but the fundamentals of Linux Administration
remain the same for every organization. The following responsibilities are
associated with the profile of a system administrator.
Maintaining regular backups of the data of all users on the system.
Analyzing logs for errors to continuously improve the system.
Maintaining and enhancing the existing tools for users of the system and for
the Linux environment.
Detecting problems and solving them, which range from user login
issues to disaster recovery.
Troubleshooting other problems on the system.
Perhaps the most important skill in a system admin is the ability to perform
under a lot of load and stress. A system admin usually works a seven-day
week, is on call a couple of days a week, and has to come online as
soon as there is an issue with the system, resolving it quickly so
that the system is back online immediately. A system or a server, if kept down
for a long time can lead to losses worth thousands of dollars for an
organization. As an example, take the website of the shopping giant Amazon.
If the Amazon website went down even for one hour, all sales would suffer,
leading to huge losses in revenue. This is where a system admin steps in and
saves the day. The role of a system admin is nothing short of a superhero who
saves an organization whenever the need arises.
Chapter One: Installing Red Hat Enterprise Linux
on your Computer
In this chapter, we will learn how to install Red Hat Enterprise Linux 7 on
your computer. It is advisable to have a Linux operating system installed
before you begin with the other chapters, as performing the activities in
the upcoming chapters will give you hands-on experience on a Linux
system, which will help you understand Linux Administration better.
We will first need to create installation media to install Red Hat Enterprise
Linux 7 on your computer. Depending on whether your current computer has
Windows as an operating system or Mac OS X as an operating system, the
steps to create the installation media for Red Hat Enterprise Linux 7 will
differ. Let us go through the steps to create installation media on both
Windows and Mac OS X one by one.
Before creating the installation media, you will need to download a Red Hat
Enterprise Linux 7 installation image, which you will later load onto the
installation media. You can download the installation ISO for Red Hat
Enterprise Linux 7 from the following URL.
https://developers.redhat.com/products/rhel/download
Make sure that you download from the link that says DVD ISO as that is the
image that contains the complete installation for Red Hat Enterprise Linux 7.
Note: You will need to register on the website before you can download the
Red Hat Enterprise Linux 7 installation ISO. You can do this when you are
prompted to create an account to download the ISO.
Once you have downloaded the ISO required for installation, you can
proceed with creating the installation media. Installation media can be a
DVD, SD card or a USB drive. The following steps will help you create
installation media on a USB drive on Windows and Mac OS X.
Creating Red Hat Enterprise Linux 7 Installation Media on
Windows
The process of creating installation media for Red Hat Enterprise Linux 7 on
a Windows machine is simple and straightforward. There are many tools
available on the Internet using which you can create the installation media by
writing the Red Hat Enterprise Linux 7 ISO image to the USB drive. We
have tested many tools and we recommend that you create the installation
media using a tool called Fedora Media Writer. It can be downloaded from
the following URL.
https://github.com/FedoraQt/MediaWriter/releases
You can download the .EXE file available on this GitHub repository to install
it on your Windows machine. You can then proceed with the steps given
below to complete the creation of installation media for Red Hat Enterprise
Linux 7.
1. Download Fedora Media Writer from
https://github.com/FedoraQt/MediaWriter/releases
2. By this time you would have already downloaded the ISO for Red Hat
Enterprise Linux 7 from their website as mentioned earlier. If not, you
can download it from
https://developers.redhat.com/products/rhel/download
3. Plug the USB drive into your computer, which you intend to use as the
bootable media for installing Red Hat Enterprise Linux 7
4. Launch Fedora Media Writer
5. When the main window pops up, click on Custom Image and then
navigate to the path where you have downloaded the Red Hat
Enterprise Linux 7 ISO and select it
6. You will see a drop-down menu, which lets you select the drive on
which you want to write the ISO. You should be able to see the USB
drive that you have plugged in. Select the USB drive
7. If your USB drive is not available in the drop-down, unplug it,
plug it in again, and launch Fedora Media Writer again
8. Click on Write To Disk, which will initiate the media creation. Kindly
ensure that the USB drive is plugged in until the process is complete.
The installation media creation time will depend on the size of the ISO
image and the USB drive’s write speed.
9. You will see a popup saying Complete when the media creation process
is finished.
10. You can now click on the Safely Remove Hardware icon in the
taskbar of your Windows computer and physically remove the USB
drive.
And with that, you have successfully created installation media for Red Hat
Enterprise Linux 7 on a USB drive using your Windows machine.
Creating Red Hat Enterprise Linux 7 Installation Media on
Mac OS X
The installation media for Red Hat Enterprise Linux 7 is created using the
command line utility on Mac OS X. We will be using the dd command to
create the installation media. The steps to create the installation media are as
follows.
1. Note that on Mac OS X you will not need Fedora Media Writer; the
built-in dd command line utility will be used to write the image
2. By this time you would have already downloaded the ISO for Red Hat
Enterprise Linux 7 from their website as mentioned earlier. If not, you
can download it from
https://developers.redhat.com/products/rhel/download
3. Plug the USB drive into your computer, which you intend to use as the
bootable media for installing Red Hat Enterprise Linux 7
4. Launch the terminal
5. First, we need to identify the path of the USB drive using the diskutil
list command. The path of the device will have a format like
/dev/disknumber, where number denotes the number of the disk. In
the example used in the next step, the disk is disk2
6. You should see something listed like
/dev/disk2
   #:   TYPE           NAME          SIZE     IDENTIFIER
   0:   Windows_NTFS   SanDisk USB   8.0 GB   disk2s1
You can identify your USB drive by comparing the type, name and size
columns, which should give you a fair idea if it is the USB drive you plugged
in or not.
7. To make sure that you have the correct device name for your USB
drive, you can unmount the USB drive using the command diskutil
unmountDisk /dev/disknumber.
Note that you will be prompted with an error message that states failed
to unmount if you try to unmount a system drive.
8. The dd command of the Linux terminal can be used to start writing the
image to the USB drive.
$ sudo dd if=/pathtoISO of=/dev/rdisknumber bs=1m
We are using rdisknumber instead of just disknumber because the raw disk
device bypasses buffering, making it a faster way to write the ISO to the USB drive.
Note that you will need to replace the pathtoISO part with the actual path of
the downloaded Red Hat Enterprise Linux 7 ISO on your Mac OS X
machine.
9. This will now begin the writing process. You will need to wait until the
writing completes, as you will not see anything until the process
actually completes.
The status of the writing progress can be displayed on the terminal by
sending the Ctrl+t input from your keyboard.
load: 2.01 cmd: dd 3675 uninterruptible 0.00u 1.92s
114+0 records in
114+0 records out
116451517 bytes transferred in 125.843451 secs (1015426 bytes/sec)
10. The installation media creation time will depend on the size of
the ISO image and the USB drive’s write speed.
11. Once the transfer of data is complete, the USB drive can be
unplugged.
This is it. You have successfully created installation media for Red Hat
Enterprise Linux 7 on a USB drive using your Mac OS X machine.
Installing Red Hat Enterprise Linux 7
There are various options for installing Red Hat Enterprise Linux 7 but the
steps we have provided in this section will help you install Red Hat
Enterprise Linux 7 with minimal software and no graphical interface. You do
not need to panic because of the absence of a graphical interface as most
tasks that need to be performed as a Linux System Administrator are done
using the command line.
Let us begin with the Red Hat Enterprise Linux 7 installation using the USB
media drive that you have created in the previous section.
1. Plug the USB drive into your computer’s USB port and start your
computer. You should make sure that you have enabled USB boot in
your computer’s BIOS settings
2. Once the computer boots up, you should be able to see a list of bootable
devices and one of them would be your USB drive titled Red Hat
Enterprise Linux 7
3. Once the system is up, you will get an option to choose your language.
Click on Continue after you have selected the required language
4. You will now be presented with the Installation Summary screen where
you can customize the installation of Red Hat Enterprise Linux 7 as per
your needs. Select the Date and Time to configure the locale for your
system using the world map that is provided and then click on Done
5. On the next screen, choose the language for your system and for your
keyboard. We would recommend that you use the English language
6. The installation source for your Red Hat Enterprise Linux 7 will be the
USB drive primarily, but you can add other sources for the repositories
by specifying locations on the Internet or on your local network using
protocols such as FTP, HTTP, HTTPS, etc. Once you have defined all
your sources, click on Done. You can just leave it on the default source
if you do not have any other sources to be used
7. Now, you can select the software that has to be installed along with the
operating system. As we have discussed, we will only be installing
software that is essential. To do this, select Minimal Install along
with Compatibility Libraries Add-ons and click on Done
8. In the next step, we will configure partitions for the system. Click
Installation Destination and then choose the LVM partitioning scheme,
which gives optimized management of the disk space, and then
click on “Click here to create them automatically”
9. You will be presented with the default partition scheme, which you can
edit as per your requirements. As we are going to use the Red Hat
Enterprise Linux 7 operating system to learn server administration, you
can use the essential partition scheme as given below
/boot partition, which should have disk space of 500 MB and should be a
non-LVM partition
/ (root) partition with a minimum disk space of 20 GB and an LVM partition
/home partition, which should be an LVM partition
/var partition with a minimum disk space of 20 GB and an LVM partition
The filesystem that you need to use is XFS, which is the default
filesystem for Red Hat Enterprise Linux 7. Once you have specified the edits for the
partitions, click on Update Settings and then click Done followed by Accept
Changes, which will apply your edits to the system.
10. This is the final step before initiating the Red Hat Enterprise
Linux 7 installation. You need to set up the network. Select Network
and Hostname, which will allow you to specify a hostname for your
system. You can use a short hostname for the system or use a Fully
Qualified Domain Name (FQDN)
11. Once you have specified the network, you can toggle the
Ethernet button on top to ON to switch on the network. If you have a
router, which has a DHCP that allots IPs to devices, your IP will now
be visible. If not, you can click on the Configure button to manually
specify the settings for your network
12. Once you have configured the Ethernet settings, click on Done
and you will be presented with the Installation screen. You will get one
last chance to review your installation settings before the setup starts
writing files to your disk. After reviewing, click on the Begin
Installation option to start the installation.
13. The installation will now start writing files to your hard disk.
Meanwhile, you will be prompted to create a new user for your system
along with a password. Click on Root Password and supply a password
that is strong and has at least 8 characters with a combination of
letters and numbers. Click on Done
14. Next, you can create a new user other than root and provide the
credentials for this user. We recommend that you make this new user a
system administrator who will have privileges similar to the root user
through the sudo command. So check the box that says “Make this
user administrator” and then click Done. Give the installation some
time to complete
15. Once the installation is complete, you will see a message
confirming the same and that you can now reboot the system and get
ready to use it
Voila! You can now unplug your installation media, which is the USB drive
and restart your computer. You will be presented with the login screen for a
minimal installation of Red Hat Enterprise Linux 7. You can either use the
root user or the additional user that you created to login into the system.
Chapter Two: The Linux Command Line
In this chapter, we will learn about the Linux Command Line and how to use
it to perform various tasks on your Linux operating system. By the end of this
chapter, you will be well versed with the basic commands that are used on the
command line and also commands that are used to manage files and folders
on a Linux system.
The Bash Shell
The command line utility available on Linux operating systems is known as
BASH, which is short for Bourne-Again Shell. The command line utility is
basically an interface, which allows you to give instructions in text-mode to
the computer and its operating system. There are different types of shell
interfaces that are available in the many Unix-like systems that have been
developed over the years, but Red Hat Enterprise Linux 7 uses the bash shell.
The bash shell is an evolved and improved version of the former popular
shell known as Bourne Shell.
The bash shell displays a string, which indicates that it is waiting for a
command to be input by the user. This string that you see is called the shell
prompt. For a user that is not a root user, the shell prompt ends with a $
symbol.
[student@desktop ~]$
If the root user is logged into the system, the shell prompt ends with a #
symbol. The change in this symbol quickly lets the user know if he is logged
in as root or a regular user so that mistakes and accidents can be avoided.
[root@desktop ~]#
If you need an easy comparison, the bash shell in Linux operating systems is
similar to the command prompt utility that is available in Windows operating
systems, but the scripting language used in bash is far more
sophisticated than the scripting that can be done in command prompt.
Bash is also comparable to the PowerShell utility, which was made available
from Windows 7 and Windows Server 2008 R2. The bash shell has been
called a very powerful tool for administration by many professionals. You
can automate a lot of tasks on your Linux system using the scripting language
that is provided by the bash shell. Additionally, the bash shell also has the
option to perform many other tasks, which are complicated or even
impossible if tried via a graphical interface.
The bash shell is accessed through a tool known as the terminal. The input to
the terminal is your keyboard and the output device is your display monitor.
Linux operating systems also provide something known as virtual consoles,
which can be used to access the bash shell as well. So, you will have multiple
virtual consoles on the base physical console and each virtual console can act
as a separate terminal. Also, you can use each virtual console to create login
sessions for different users.
Basics of Shell
There are three parts to commands that are entered on the shell prompt.
1. Command that you want to run
2. Options that will define the behavior of the command
3. Arguments, which are the command’s targets
The command names the program that you want to execute. The
options that follow the command can be none, one or more. The options
govern the behavior of the command and define what the command will do.
Options usually begin with one dash or two dashes. This is done so that we
can differentiate options from arguments. Example: -a or --all
Arguments also follow the command on the command line and can be one or
more like options. Arguments indicate the target on which a command is
supposed to operate.
Let us take an example command
usermod -L John
The command in this example is usermod
The option is -L
The argument is John
This command locks the password of the user John on the
system.
To be effective with the commands on the command line, it is essential for a
user to know what options can be used with a particular command. If you run
any command with the --help option, you will get a list of options that can
be passed to that particular command, so it is not necessary to
know all the options for every command by heart. The
list will also tell you what each option does.
Usage statements can sometimes seem very difficult and complicated to
read. Once you get used to certain conventions, reading a statement
becomes much easier.
Options are surrounded by square brackets [ ]
If an item is followed by … it indicates an arbitrary-length list of
items of that type
If multiple items are separated by the pipe character | it implies that
you can specify only one of them
Variables are represented using text, which is in angle brackets <>. So
if you see <filename>, you have to replace it with the filename that you
wish to pass
For example, check the below command
[student@desktop ~]$ date --help
date [OPTION]... [+FORMAT]
This indicates that the date command takes options represented by
[OPTION]... and an optional format argument [FORMAT], which must be
prefixed with the + symbol.
Executing Commands on the Bash Shell
The Bourne Again Shell, known as bash, does the job of interpreting
commands that are input by the user on the shell prompt. We have already
learned how the string that you type in at shell prompt is divided into three
parts: command, options and arguments. The words that you type into the
shell are separated by blank spaces. Every command that you type refers to a
program installed on the system, and every command
has options and arguments associated with it.
When you have typed a command on the shell prompt along with the options
and arguments and are ready to run it, you can press the Enter key on the
keyboard, which will execute that command. You will then see the output of
that command displayed on the terminal and when the output completes, you
will be presented with the shell prompt again, which is an indication that the
previous command has finished executing. If you wish to type more
than one command on a single line, you can use a semicolon to separate the
commands.
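As a quick sketch, here are two commands run from a single line of input, separated by a semicolon (the directory name used is just an illustration):

```shell
# Create a directory and then list it, all on one line of input.
# The shell runs the commands left to right, one after the other.
mkdir -p /tmp/demo; ls -d /tmp/demo
```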
Let us go through some simple commands that are used daily on
the command line in Red Hat Enterprise Linux 7 and other Linux based
operating systems.
The date command will display the current date and time of the system. You
can also use this command to set the time of the system. If you pass an
argument prefixed with the + sign to the date command, it indicates the format
in which you want the date to be displayed in the output.
[student@desktop ~]$ date
Mon Aug 5 08:15:30 GMT 2019
[student@desktop ~]$ date +%R
08:15
[student@desktop ~]$ date +%x
08/05/2019
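date supports many other format specifiers besides %R and %x; a couple of common ones are sketched below (these are standard specifiers on GNU date, which Red Hat Enterprise Linux uses):

```shell
# %F prints the full date as YYYY-MM-DD
date +%F
# %A prints the full weekday name, %B the full month name
date "+%A, %d %B %Y"
```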
The passwd command can be used to change the password of a user. You
will, however, need to specify the original password for the given user before
you can set a new password. The command requires you to specify a strong
password, which makes it necessary to include lowercase
and uppercase letters, numbers, and symbols. You also need to ensure that the
password being specified is not a word in the dictionary. The root user has
the option to change the password of any other user on the system.
[student@desktop ~]$ passwd
Changing password for user student.
Changing password for student.
(current) UNIX password: type old password here
New password: Specify new password here
Retype new password: Type new password again
passwd: all authentication tokens updated successfully.
In Linux, a file's type is not determined by its name or extension. The
file command can scan any file and tell you the kind of file it is. The file you
want to classify needs to be passed as an argument to the file command.
[student@desktop ~]$ file /etc/passwd
/etc/passwd: ASCII text
If you are passing a folder/directory as an argument to the file command, it
will tell you that it is a directory.
[student@desktop ~]$ file /home
/home: directory
The next set of commands are head and tail, which print the first ten lines
and the last ten lines of a file respectively. Both these commands can be
combined with the option -n, which can be used to specify the number of
lines that you want to be displayed.
[student@desktop ~]$ head /etc/passwd
This will print the first 10 lines that are there in the passwd file.
[student@desktop ~]$ tail -n 3 /etc/passwd
This will print the last 3 lines that are there in the passwd file.
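As a self-contained sketch, using a small sample file instead of /etc/passwd (whose contents vary from system to system):

```shell
# Build a five-line sample file, then print its first and last two lines
printf '%s\n' line1 line2 line3 line4 line5 > /tmp/sample.txt
head -n 2 /tmp/sample.txt    # prints line1 and line2
tail -n 2 /tmp/sample.txt    # prints line4 and line5
```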
The wc command is used to count lines, words and characters in a file that is
passed as an argument. It supports the options -l, -w and -c, which stand for
lines, words, and characters respectively.
[student@desktop ~]$ wc /etc/passwd
30 75 2009 /etc/passwd
This shows that there are 30 lines, 75 words and 2009 characters in the file
passwd.
If you pass the -l, -w or -c option along with the wc command, it will only
display the count of lines, words or characters respectively, based on which
option you have passed.
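A short sketch with a small sample file (the file name is just for illustration):

```shell
# Two lines and three words in total
printf 'one two\nthree\n' > /tmp/wc_demo.txt
wc -l /tmp/wc_demo.txt    # line count: 2
wc -w /tmp/wc_demo.txt    # word count: 3
```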
The history command displays all the commands that you have typed
previously along with the command number. You can type the ! mark
followed by the command number to re-execute the command that was typed
at that number and see its output.
[student@desktop ~]$ history
1 clear
2 who
3 pwd
[student@desktop ~]$ !3
pwd
/home/student
This shows that we used !3 to re-run the pwd command, which prints the
present working directory. The output was /home/student, which is the home
directory of the student user.
You can use the arrow keys to navigate through the output given by the
history command. Up Arrow will take you to the commands on top and
Down Arrow will take you to the commands below. Using the Right Arrow
and the Left Arrow keys, you can move within the current command and edit it.
Shortcuts to Edit the Command Line
There is an editing feature available on the command line when you are
interacting with bash. It helps you move around the current
command that you are typing so that you can make edits.
seen how we can use the arrow keys with the history command to move
through commands. The following list will help you use edits when you are
working on a particular command.
Ctrl+a This will take your cursor to the start of the command line
Ctrl+e This will take your cursor to the end of the command line
Ctrl+u Clear the line from where the cursor is to the start of the
command line
Ctrl+k Clear the line from where the cursor is to the end of the
command line
Ctrl+Left Arrow Take the cursor to the start of the previous word
Ctrl+Right Arrow Take the cursor to the start of the next word
Ctrl+r Search for patterns in the history list of commands used
Managing Files using Commands on the Command Line
In this section, we will learn to execute commands that are needed to manage
files and directories in Linux. You will learn how to move, copy, create,
delete and organize files and directories using commands on the bash shell
prompt.
Linux File System Hierarchy
Let us first understand the hierarchy of the file system in a Linux operating
system. The Linux file system is a tree in which directories form the
file system hierarchy. However, it is an inverted tree, as the root sits at
the top of the hierarchy and the branches, which are directories
and subdirectories, extend below the root.
The root directory denoted by / sits at the top of the file system hierarchy.
Although the root is denoted by /, do note that the slash character / is also
used to separate directories and filenames. As an example, take etc, which is
a subdirectory of the root and is denoted by /etc. Similarly, if there is a file
named logs in the /etc directory, we will reference it as /etc/logs
The subdirectories of root / are standard and store files based on their specific
purpose. For example, the /boot directory contains the files that are needed
to execute the boot process of the Linux operating system.
We will now go through the important directories in the Red Hat Enterprise
Linux 7 file system hierarchy.
/usr
Installed software and the shared libraries that ship with it are stored in this
directory. It has further important subdirectories such as
/usr/bin: User command files
/usr/sbin: Commands used in system administration
/usr/local: Files of software that has been customized locally
/etc
System configuration files are stored here.
/var
Files that change dynamically such as databases, log files, etc. are stored in
this directory.
/run
This directory contains runtime data that has been created since the last
boot. The files here are recreated on the next
boot.
/home
This is the home directory for all users that are created on the system. The
users get to store their personal data and configurations under their specific
home path.
/root
This is the home directory of the root user who is also the superuser of the
system.
/tmp
This is a directory used to store temporary files. Files that have not been
accessed or modified for 10 days get deleted automatically. There is another
directory at /var/tmp, where files not accessed or modified in 30 days get
deleted automatically.
/boot
The files required to start the boot process are stored here.
/dev
Contains files that reference hardware devices on the system.
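The directory layout described above can be explored directly from the shell. The following quick sketch, which assumes a conventional Linux layout, simply checks that a few of these standard top-level directories exist:

```shell
# Sketch: confirm a few standard top-level directories are present.
# Assumes a conventional Linux file system layout.
for dir in /etc /var /tmp /boot /dev; do
    if [ -d "$dir" ]; then
        echo "$dir exists"
    fi
done
```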
Let us now learn how we can locate and access files on the Linux file system.
In this section, we will learn to use absolute file paths, change the directory
in which we are currently working, and learn commands that will help us
determine the location and contents of a directory.
Absolute Paths
An absolute path indicates a name that is fully qualified. It begins from the
root at /, which is followed by each subdirectory that is traversed through
until you reach a specific file. There is an absolute path defined for every file
that exists on the Linux file system, which can be identified using a simple
rule. If the first character of the path is / then it implies an absolute path
name.
For example, system messages are logged in a file called messages. The
absolute path name for the messages file is /var/log/messages.
There are relative path names in place as well, since absolute path names can
sometimes become very long to type.
When you first log in to the Linux system, you are automatically placed in
your home directory. There is an initial directory in place for system
processes as well. After that, both users and system processes navigate
through other directories based on their requirements. The current location of
a user or a system process is known as the working directory or current
working directory.
Relative Paths
A relative path refers to the unique path required to reach a file, but only
from the current directory that you are in; it does not start with the root /.
The rule, as mentioned, is simple. If the path does not begin with a / symbol,
it is a relative path for the file.
For example, if you are already working in the /var directory, then the
relative path for the messages file will be log/messages.
Navigating through paths
You can use the pwd command, which will output the path of the current
working directory you are in. Once you know this information, you can use it
to traverse to different directories using relative paths.
The ls command, when used with a directory or directory path, lists the
content of that particular directory. If you do not specify a directory with it,
it will list the content of the directory that you are currently working in.
[student@desktop ~]$ pwd
/home/student
[student@desktop ~]$ ls
Documents Music Downloads Pictures
You can use the cd command to change directories. For example, if you are
in the directory /home/student and you want to go to the directory Music, you
can use relative path to get there. However, if you would want to get into the
Downloads directory, you will then need to use the absolute path.
[student@desktop ~]$ cd Music
[student@desktop Music]$ pwd
/home/student/Music
[student@desktop Music]$ cd /home/student/Downloads
[student@desktop Downloads]$ pwd
/home/student/Downloads
As you can see, the shell prompt will display the last part of the current
directory that you are working in for convenience. If you are in
/home/student/Music only Music is displayed at the shell prompt.
The cd command is used to navigate through directories. If you use cd with
an argument of a relative path or an absolute path, your current working
directory will switch to that path’s end directory. You can also use cd -,
which will take you back to the previous directory you were working in and
if you use it again, you will be back to the directory that you switched from.
If you just keep using it, you will keep alternating between two fixed
directories.
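The alternating behavior of cd - can be seen in a short practice run. Note that the directory names used below are made up for practice and are not part of the system:

```shell
# Create two hypothetical practice directories and bounce between them.
mkdir -p /tmp/demo_a /tmp/demo_b
cd /tmp/demo_a
cd /tmp/demo_b
cd -            # back to /tmp/demo_a (cd - also prints the new directory)
pwd             # prints /tmp/demo_a
cd -            # back to /tmp/demo_b
pwd             # prints /tmp/demo_b
```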
The touch command is another simple command which, if applied to an
existing file, updates its timestamp to the current time without actually
modifying the content of the file. If used with an argument of a filename that
does not exist, it will create an empty file with that filename. This allows
new users to touch and create files for practice, since these files will not
harm the system in any manner.
[student@desktop ~]$ touch Documents/test.txt
This will create an empty text file called test in the Documents subdirectory.
The ls command lists down the files and directories of the directory that you
are in. If you pass the path of a directory with the ls command, it will list
down the files and directories in that path.
The ls command can be used with options, which further help in listing files.
-l will list down all the files with timestamps and permissions of the files and
directories. It will also list down the owner and the group of that file.
-a can be combined with -l to additionally list down hidden files and
directories.
-R used with the above two options will list down files and directories
recursively for all subdirectories.
When you list the files and directories, you will see that the first two
listings are . and ..
. denotes the current directory and .. denotes the parent directory; these are
present in every directory on the system.
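The . and .. entries can be used directly in paths. Here is a small practice sketch; the directory names are hypothetical:

```shell
# .. refers to the parent directory, . refers to the current one.
mkdir -p /tmp/parent/child   # hypothetical practice directories
cd /tmp/parent/child
cd ..                        # move up to the parent
pwd                          # prints /tmp/parent
ls -a                        # every listing includes the . and .. entries
```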
File Management using Command Line
When we talk about file management, we are discussing how to create,
delete, copy, and move files. The same set of actions can be performed on
directories as well. It is very important to know your current working
directory so that when you are managing files and directories, you know if
you need to specify relative paths or absolute paths.
Let us go through a few commands that can be used for file management.
mv file1 file2
The result of this is a rename.
cp -r dir1 dir2
rm -r dir1
The -r option is used to process the source directory recursively.
mv dir1 dir2
If dir2 exists, the content of dir1 will be moved into dir2. If it does not
exist, dir1 will be renamed to dir2.
cp file_1 file_2 file_3 directory
mv file_1 file_2 file_3 directory
cp -r directory1 directory2 directory3 directory4
mv directory1 directory2 directory3 directory4
Make sure that the last argument passed in the command is a directory.
rm -f file_1 file_2 file_3
rm -rf directory1 directory2 directory3
Use these carefully, as the -f force option deletes everything without any
confirmation prompt.
mkdir -p par1/par2/dir
Use this carefully, as the -p option will keep creating directories starting
from the parent, irrespective of typing errors.
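The commands summarized above can be combined into a short practice session. All file and directory names below are made up for practice:

```shell
# Practice run with the file management commands above.
cd "$(mktemp -d)"            # work in a fresh temporary directory
touch file_1 file_2          # create two empty files
mkdir dir1                   # create a directory
cp file_1 file_1_copy        # copy a file
mv file_2 dir1               # move a file into dir1
cp -r dir1 dir2              # copy a directory recursively
rm -f file_1_copy            # force-delete a file, no prompt
rm -r dir2                   # delete a directory and its contents
ls                           # only file_1 and dir1 remain
```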
Let us now go through these file management commands one by one to see
how they work.
Directory Creation
You can use the mkdir command to create a directory, or even
subdirectories. If a filename already exists or if the parent directory that you
have specified does not exist, you will see errors generated. When you use
the mkdir command along with the option -p it will create all parent
directories along the path that do not exist. You need to be careful while
using the -p option or you will end up creating directories that are not
required, as it does not check for any spelling errors.
[student@desktop ~]$ mkdir Drawer
[student@desktop ~]$ ls
Drawer
As you can see, the new directory called Drawer is created in the home
directory of the user.
[student@desktop ~]$ mkdir -p Thesis/Chapter1
[student@desktop ~]$ ls -R
.:
Drawer Thesis

./Drawer:

./Thesis:
Chapter1
As you can see, a new directory called Thesis was created and its
subdirectory called Chapter1 was created at the same time.
Copy Files
The cp command is used to copy files. You can copy one or more files, and
the syntax gives you the option to copy a file within the same directory or
copy a file from one directory to another file in another directory.
Note: The file that you specify at the destination should be a unique file. If
you specify an existing file, you will end up overwriting the content of that
existing file.
[student@desktop ~]$ cd Documents
[student@desktop Documents]$ cp one.txt two.txt
This will copy the content of one.txt to two.txt.
Similarly, you can copy from the current directory to a file in another
directory as shown below.
[student@desktop ~]$ cp one.txt Documents/two.txt
This will copy the content of one.txt in the home directory to two.txt in the
Documents subdirectory.
Move Files
The mv command can be used for two operations. If you use it in the same
directory, it will rename the file. If you specify another directory, it will
move the file to the destination directory. The content of the file is retained
when you rename or move it. Also note that if the file size is huge, it may
take longer to move it from one file system to another.
[student@desktop ~]$ ls
Hello.txt
[student@desktop ~]$ mv Hello.txt Bye.txt
[student@desktop ~]$ ls
Bye.txt
You can see that in this example, since we were operating in the same
directory, the mv command just renamed the Hello.txt file to Bye.txt
[student@desktop ~]$ ls
Hello.txt
[student@desktop ~]$ mv Hello.txt Documents
[student@desktop ~]$ ls Documents
Hello.txt
In this example, you can see that the mv command moved the Hello.txt file
from the home directory to the Documents subdirectory.
Deleting Files and Directories
The rm command can be used to delete files. To delete directories, you will
need to use rm -r, which will delete a directory, subdirectories and files in
the whole path.
Note: There is nothing such as trash or recycle bin while operating from the
command line. If you delete something, it is deleted permanently.
[student@desktop ~]$ ls
File1.txt Directory1
[student@desktop ~]$ rm File1.txt
[student@desktop ~]$ ls
Directory1
[student@desktop ~]$ rm -r Directory1
[student@desktop ~]$ ls
[student@desktop ~]$
The above commands have demonstrated how you can delete a file and a
directory using the rm command with the -r option.
Also note that there is an rmdir command, which can be used to delete a
directory as well, provided that the directory is completely empty.
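The difference between rm -r and rmdir can be demonstrated in a scratch directory; the directory and file names below are hypothetical:

```shell
# rmdir only removes empty directories; rm -r removes everything.
cd "$(mktemp -d)"
mkdir full_dir
touch full_dir/file.txt
rmdir full_dir 2>/dev/null || echo "rmdir refused: directory not empty"
rm -r full_dir               # this succeeds, removing the file as well
ls                           # the directory is gone
```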
File Globbing
Management of files can become hectic if you are dealing with a large
number of files. To overcome this hurdle, Linux offers a feature called file
globbing, also known as path name expansion. It uses a technique called
pattern matching, also known as wildcards, which is achieved with the use of
meta-characters that expand and allow operations to be performed on
multiple files at the same time.
Pattern matching using * and ?
[student@desktop ~]$ ls a*
alpha apple
As you can see, the * is used as a wildcard to match a pattern; here, it
matches any files that begin with a.
You can try this pattern by placing the star at different locations, such as *a
and *a*
[student@desktop ~]$ ls ???
are tab map
The number of question marks defines the number of characters in a file
name. The output will show all files, which have a filename of 3 characters.
You can try it with additional question marks as well.
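Both patterns can be tried together in a scratch directory. The file names below match the examples used in this section:

```shell
# * matches any number of characters; each ? matches exactly one.
cd "$(mktemp -d)"
touch alpha apple are tab map
ls a*        # alpha apple are  (everything starting with a)
ls ???       # are map tab      (every three-character filename)
```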
Tilde Expansion
The tilde symbol ~ followed by a slash / will point to the active user’s home
directory. It can be followed by a directory name and can be used with
commands such as cd and ls
[student@desktop ~]$ ls ~/Documents
file.txt hello.txt
[student@desktop ~]$ cd Documents
[student@desktop Documents]$
[student@desktop Documents]$ cd ~/
[student@desktop ~]$
Brace Expansion
Brace expansion is used when files have something in common and you do
not want to type it repetitively. It can be used with comma-separated strings
and with expressions that have a sequential nature. You can have nested
braces as well.
[student@desktop ~]$ echo {sunday,monday,tuesday}.log
sunday.log monday.log tuesday.log
[student@desktop ~]$ echo file{1..3}.txt
file1.txt file2.txt file3.txt
[student@desktop ~]$ echo file{a{1,2},b,c}.txt
filea1.txt filea2.txt fileb.txt filec.txt
This is where we end this chapter and you have learned how to manage files
and directories using simple commands using the command line interface on
a Linux system. Most of these commands are generic to any flavor of a Linux
operating system and not just Red Hat Enterprise Linux 7.
Chapter Three: Managing Text Files
In this chapter, we will learn how to create, view and edit text files on a
Linux operating system. We will also learn how to redirect the output from
one text file to another text file. We will learn to edit existing text files on the
command shell prompt using a tool known as ‘Vim’.
Redirecting the Output from a File to another File or Program
In this section, we will be discussing the terms such as standard input,
standard output and standard error. We will further learn how to redirect
outputs from a file to another file and redirect the output from a file to
another program.
Standard Input, Standard Output, and Standard Error
When a program or a process is in running state, it will take inputs from
somewhere and then write the output to a file or display it on the screen.
When you are using a terminal on the Linux operating system, the input is
usually taken from the keyboard and the output is sent to be displayed on the
screen.
A process uses a number of channels known as file descriptors, which take
some input and send some output. There are at least three file descriptors in
every process.
1. Standard Input, also known as Channel 0, which takes inputs from the
keyboard
2. Standard Output, also known as Channel 1, which sends outputs to the
screen
3. Standard Error, also known as Channel 2, which sends error messages
The channel names for these file descriptors are stdin (channel 0), stdout
(channel 1) and stderr (channel 2). Let us now go through some examples of
redirection.
1. Redirect the output of a command to a file
[student@desktop ~]$ date > ~/time
This will output the current timestamp and redirect it to the file named
time in the student's home directory.
2. Copy the last 100 lines from a log file and save it in another file
[student@desktop ~]$ tail -n 100 /var/log/messages > ~/last100
This will save the last 100 lines of the log file in the file last100 in the
user's home directory.
3. Concatenate multiple files into a single file
[student@desktop ~]$ cat file1 file2 file3 > ~/onefile
This will concatenate contents of file1, file2 and file3 and save it in a
single file called onefile in the user's home directory.
4. List the hidden directories in the home directory and save the file
names in a file
[student@desktop ~]$ ls -d .* > ~/hiddenfiles
This will list the hidden directories in the user's home directory and
save the directory names in the file called hiddenfiles in the user's
home directory.
5. Append a string to a file
[student@desktop ~]$ echo "Hello World" >> ~/file
This will append the string Hello World to the file in the user's home
directory.
6. Direct the standard output to one file and standard error to another file
[student@desktop ~]$ find /etc -name passwd > ~/output 2> ~/error
This will redirect the output to the output file and the errors to the error
file in the student's home directory.
7. Discarding the error messages
[student@desktop ~]$ find /etc -name passwd > ~/output 2> /dev/null
This will redirect the errors to /dev/null, a special device file that
discards everything written to it.
8. Direct both standard output and standard error to the same file
[student@desktop ~]$ find /etc -name passwd &> ~/onefile
This will redirect the standard output and standard error to the file
onefile in the student's home directory.
Using the Pipeline
A pipeline is a sequence of one or more commands separated by the pipe
operator |
The pipe basically takes the standard output of the first command and passes
it as standard input to the second command.
The output will keep passing through various commands, which are separated
using the pipe and only the final output will be displayed on the terminal. We
can visualize it as a flow of data through a pipeline from one process to
another process and that data is being modified on its way by every command
it passes through.
Let us go through some examples of the pipeline, which are useful in day to
day tasks of system administration.
[student@desktop ~]$ ls -l /var/log | less
This will list the files and directories located at /var/log and display it on the
terminal one screen at a time because of the less command.
[student@desktop ~]$ ls | wc -l
This command will pass the output through the pipe and the wc -l command
will count the number of lines in the output and just display the number of
lines and not the actual output of the ls command.
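A quick way to see this counting behavior is to feed the pipe input whose line count is known in advance:

```shell
# printf emits three lines; wc -l counts them and prints only the count.
printf 'red\ngreen\nblue\n' | wc -l     # prints 3
```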
Pipelines, Redirections and the Tee command
As already discussed, when you are using a pipeline, the pipeline makes
sure that all the data is processed and passed through every command and
only the final output is displayed on the terminal screen. This means that if
you were to use output redirection before a pipe, the output would be
redirected to the file and not to the next command in the pipeline.
[student@desktop ~]$ ls > ~/file | less
In this example, the output of the ls command was redirected to the file in the
student’s home directory and never passed to the less command and the final
output never appeared on the terminal screen.
This is exactly where the tee command comes into the picture to help you
work around such scenarios. If you are using a pipeline and use the tee
command in it, tee will copy its standard input to standard output and at the
same time will also redirect the standard output to the specified files named
as arguments to the command. If you visualize data as water flowing through
a pipe, tee command will be the T joint of that data, which will direct the
output in two directions.
Let us go through some examples, which will help us understand how to use
the tee command with pipelines.
[student@desktop ~]$ ls -l | tee ~/Documents/output | less
Using tee in this pipeline, firstly redirects the output of the ls command to the
file at Documents/output in the student’s home directory. After that, it also
feeds the output of the ls command to the pipe as input to the less command,
which is then displayed on the terminal screen.
[student@desktop ~]$ ls -l | less | tee ~/Documents/output
In this case, we see that tee is used at the end of the command. What this does
is it displays the output of the commands in the pipeline on the terminal
screen and saves the same output to the file at Documents/output in the
student’s home directory as well.
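The tee behavior described above can be verified with a known input; the output file name below is hypothetical:

```shell
# tee writes its input both to the named file and to standard output.
out=$(mktemp)                                # hypothetical output file
printf 'one\ntwo\n' | tee "$out" | wc -l     # prints 2
cat "$out"                                   # the same two lines were saved
```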
Note: You can redirect standard error while using the pipe, but you will not
be able to use the merging operators &> and &>>
Therefore, if you wish to redirect both standard output and standard error
while using the pipe, you will have to use it in the following manner.
[student@desktop ~]$ find / -name passwd 2>&1 | less
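To see that 2>&1 really sends both channels into the pipe, here is a small sketch that writes one line to each channel:

```shell
# One line goes to standard output, one to standard error;
# 2>&1 merges them so the pipe (and wc -l) sees both lines.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | wc -l    # prints 2
```

Without the 2>&1, the error line would bypass the pipe and appear directly on the terminal, and wc -l would count only one line.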
Using the Shell Prompt to Edit Text Files
In this section, we will learn how to use the shell prompt to create new files
and edit existing files. We will also learn about Vim, which is a very popular
editor used to edit files from the shell prompt.
Using Vim to edit files
One of the most interesting things about Linux is that it is designed and
developed in a way where all information is stored in text-based files. There
are two types of text files used in Linux. Flat files, in which text is stored in
rows containing similar information, which you will find in the /etc
directory, and Extensible Markup Language (XML) files, which have text
stored using tags, which you will find in the /etc and /usr directories. The
biggest advantage of text files is that they can be transferred from one
system or platform to another without the need to convert them, and they
can also be viewed and edited using simple text editors.
Vim is the most popular text editor across all Linux flavors and is an
improved version of the previously popular vi editor. Vim can be configured
as per the needs of a user and includes features like color formatting, split
screen editing, and highlighting text for editing.
Vim works in 4 modes, which are used for different purposes.
1. Insert mode
2. Command mode
3. Visual mode
4. Extended command mode
When you first launch Vim, it will open in the command mode. The
command mode is useful for navigation, cut and paste jobs, and other tasks
related to manipulation of text. To enter the other modes of Vim, you need to
enter single keystrokes, which are specific to every mode.
If you use the i keystroke in the command mode, you will be taken to
the insert mode, which lets you edit the text file. All content you type in
the insert mode becomes a part of the file. You can return to the
command mode by pressing the Esc key on the keyboard
If you use the v keystroke in the command mode, you will be taken to
the visual mode, where you can manipulate text by selecting multiple
characters. You can use V and Ctrl+V to select multiple lines and
multiple blocks, respectively. You can exit the visual mode by using
the same keystroke that is v, V or Ctrl+V.
The : keystroke takes you to the extended command mode, which lets
you save the content that you typed to the file and exit the vim editor.
There are more keystrokes available in Vim for advanced tasks related to
text editing. Although it is known to be one of the best text editors in the
Linux world, it can get overwhelming for new users. We will go through the
minimum keystrokes that are essential for anyone using Vim to accomplish
editing tasks in Linux.
Let us go through the steps given below to get some hands-on experience of
vim for new users.
1. Open a file on the shell prompt using the command vim filename.
2. Repeat the text entry cycle given below as many times as possible until
you get used to it.
Use the arrow keys on the keyboard to position the cursor
Press i to go to insert mode
Enter some text of your choice
You can use u to undo steps taken on the current line that you are
editing
Press the Esc key on the keyboard to return to the command mode
3. Repeat the following cycle, which teaches you to delete text, as many
times as possible until you get the hang of it.
Position the cursor using the arrow keys on the keyboard
Delete a selection of text by pressing x on the keyboard
You can use u to undo steps taken on the current line that you are
editing
4. You can use the following keystrokes next to save, edit, write or
discard the file.
Enter :w to save/write the changes you have made to the file and stay in
the command mode
Enter :wq to save/write the changes to the file and exit Vim
Enter :q! to discard the changes that you have made to the file and exit
Vim
Rearranging the Existing Content in Vim
The tasks of copy and paste are known as yank and put in Vim. This can be
achieved using the keystrokes of y and p. To start, place the cursor at the first
character where you wish to copy from and then enter the visual mode. You
can now use the arrow keys to expand your selection. You can then press y to
copy the text to the clipboard. Next place your cursor where you wish to
paste the selected content and press p.
Let us go through the steps given below to understand how to use the copy
and paste feature using yank and put in Vim.
1. Open a file on the shell prompt using the command vim filename.
2. Repeat the text selection cycle given below as many times as possible
until you get used to it.
Position your cursor to the first character using the arrow keys on the
keyboard
Enter the visual mode by pressing v
Position your cursor to the last character using the arrow keys on the
keyboard
Copy the selection by using yank y
Position your cursor to the location where you want to paste the content
using the arrow keys on the keyboard
Paste the selection by using put p
3. You can use the following keystrokes next to save, edit, write or
discard the file.
Enter :w to save/write the changes you have made to the file and stay in
the command mode
Enter :wq to save/write the changes to the file and exit Vim
Enter :q! to discard the changes that you have made to the file and exit
Vim
Note: Before you take tips from the advanced vim users, it is advisable that
you get used to the basics of vim as performing advanced functions in vim
without proper knowledge may lead to modification of important files and
result in permanent loss of information. You can learn more about the basics
of vim by looking up the Internet for vim tips.
Using the Graphical Editor to Edit Text Files in Red Hat
Enterprise Linux 7
In this section, we will learn to access, view and edit text files using a tool in
Red Hat Enterprise Linux 7 known as gedit. We will also learn how to copy
text between two or more graphical windows.
Red Hat Enterprise Linux 7 comes with a utility known as gedit, which is
available in the graphical desktop environment of the operating system. You
can launch gedit by following the steps given below.
Applications > Accessories > gedit
You can also launch gedit without navigating through the menu. You can
press Alt+F2, which will open the Enter A Command dialog box. Type gedit
in the text box and hit Enter.
Let us go through the basic keystrokes that are available in gedit. The menu
in gedit will allow you to perform numerous tasks related to file management.
Creating a new file: Navigate through File > New (Ctrl+n) on the menu
bar or click the blank paper icon on the toolbar to start a new file
Saving a file: Navigate through File > Save (Ctrl+s) on the menu bar or
click the disk icon on the toolbar to save a file
Open an existing file: Navigate through File > Open (Ctrl+o) on the
menu bar or click on the Open icon on the toolbar. A window will open
up showing you all the files on your system. Locate the file that you
wish to open and select it and click on open
If you select multiple files and click on Open, they will all open up, each in
a separate tab under the menu bar. The tabs will display a filename for
existing files or when you save a new file with a new name.
Let us now learn how to copy text between two or more graphical windows
in Red Hat Enterprise Linux 7. Using the graphical environment in Red Hat
Enterprise Linux 7, you can copy text between text windows, documents, and
command windows. You can select the text that you want to duplicate using
copy and paste, or you can move text using the cut and paste options. In
either case, the text is stored in the clipboard memory so that you can paste it
to a destination.
Let us go through the steps to perform these operations.
Selecting the text:
Left click and hold the mouse button at the first character of the text
Drag the mouse until you have selected all the desired content and then
release the button. Make sure that you do not press any mouse button
again as that will result in deselection of all the text
Pasting the selected text: There are multiple methods to achieve this. This is
the first one.
Right click the mouse on the selected text at any point
A menu will be displayed, and you will get the option to either cut or
copy
Next, open the window where you want to paste the text and place the
cursor in the desired location where you wish to paste the text. Right
click the mouse again and select paste on the menu that appears
There is a shorter method to achieve the same result as well.
Firstly, select the text that you need
Go to the window where you wish to paste the text and place the cursor
at the desired location in the window. Middle click the mouse just once
and it will paste the selected text
This method will help you copy and paste but not cut and paste. However, to
emulate a cut and paste, you can delete the original text as it remains selected.
The copied text remains in the clipboard memory and can be pasted again and
again.
The last method is the one using shortcut keys on the keyboard:
After selecting the text, you can press Ctrl+x to cut or Ctrl+c to copy
Go to the window where you want to paste the text and place the cursor
at the desired location
Press Ctrl+v
Chapter Four: User and Group Management
In this chapter, we will learn about users and groups in Linux and how to
manage them and administer password policies for these users. By the end of
this chapter, you will be well versed with the role of users and groups on a
Linux system and how they are interpreted by the operating system. You will
learn to create, modify, lock and delete user and group accounts, which have
been created locally. You will also learn how to manually lock accounts by
enforcing a password-aging policy in the shadow password file.
Users and Groups
In this section, we will understand what users and groups are and what is
their association with the operating system.
Who is a user?
Every process or running program on the operating system runs as a user.
The ownership of every file lies with a user in the system. Access to files
and directories is restricted by user. Hence, if a process is running as a user,
that user determines the files and directories the process will have access to.
You can find out about the currently logged-in user using the id command.
If you pass another username as an argument to the id command, you can
retrieve basic information about that user as well.
If you want to know the user associated with a file or a directory, you can use
the ls -l command and the third column in the output shows the username.
You can also view information related to processes by using the ps
command. The default output of this command will show only the processes
running in the current shell. If you use the a option with the ps command,
you will see all the processes that have a terminal. If you wish to know the
user associated with a process, you can also pass the u option and the first
column of the output will show the user.
The outputs that we have discussed show the users by their names, but the
system uses a user ID called UID to track users internally. The usernames
are mapped to numbers using a database in the system. There is a flat file
stored at /etc/passwd, which stores the information of all users. There are
seven fields for every user in this file.
username:password:UID:GID:GECOS:/home/dir:shell
username:
Username is simply a mapping of the user ID (UID) to a name so that
humans can remember it better.
password:
This field is where passwords of users used to be saved in the past, but now
they are stored in a different file located at /etc/shadow
UID:
It is a user ID, which is numeric and used to identify a user by the system at
the most fundamental level
GID:
This is the primary group number of a user. We will discuss groups in a while
GECOS:
This is a field using arbitrary text, which usually is the full name of the user
/home/dir:
This is the location of the home directory of the user where the user has their
personal data and other configuration files
shell:
This is the program that runs after the user logs in. For a regular user, this
will mostly be the program that gives the user the command line prompt
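Since the fields in /etc/passwd are separated by colons, they are easy to pull apart with standard tools. A small sketch, assuming the usual root entry exists on the system:

```shell
# Print the username, UID and shell fields (fields 1, 3 and 7)
# of the root user's entry in /etc/passwd.
awk -F: '$1 == "root" { print $1, $3, $7 }' /etc/passwd
```

The exact shell shown in the output depends on the system, but the UID of root is always 0.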
What is a group?
Just like users, there are names and group ID GID numbers associated with a
group. Local group information can be found at /etc/group
There are two types of groups, primary and supplementary. Let us
understand the features of each one by one.
Primary Group:
There is exactly one primary group for every user
The primary group of local users is defined by the fourth field in the
/etc/passwd file where the group number GID is listed
New files created by the user are owned by the primary group
The primary group of a user by default has the same name as that of the
user. This is a User Private Group (UPG) and the user is the only
member of this group
Supplementary Group:
A user can be a member of zero or more supplementary groups
Supplementary group membership of local users is defined by the last
field of the group's entry in the /etc/group file, where membership is
identified by a comma-separated list of users
groupname:password:GID:list,of,users,in,this,group
The concept of supplementary groups is in place so that users can be
part of more groups and in turn have access to resources and services
that belong to other groups in the system
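You can inspect your own primary and supplementary groups with the id command, and look up a group's entry with getent; the exact names printed depend on the system:

```shell
# id -gn prints the current user's primary group name;
# id -Gn prints all groups, primary and supplementary.
id -gn
id -Gn
# getent group prints a group's entry in the /etc/group format.
getent group root
```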
Getting Superuser Access
In this section, we will learn about what the root user is and how you can be
the root or superuser and gain full access over the system.
The root user
There is one user in every operating system that is known as the super user
and has all access and rights on that system. In a Windows based operating
system, you may have heard about the superuser known as the administrator.
In Linux based operating systems, this superuser is known as the root user.
The root user has the power to override any normal privileges on the file
system and is generally used to administer and manage the system. If you
want to perform tasks such as installing new software or removing an
existing software, and other tasks such as manage files and directories in the
system, a user will have to escalate privileges to the root user.
Most devices on an operating system can be controlled only by the root user,
but there are a few exceptions. A normal user gets to control removable
devices such as a USB drive. A non-root user can, therefore, manage and
remove files on a removable device but if you want to make modifications to
a fixed hard drive, that would only be possible for a root user.
But as we have heard, with great power comes great responsibility. Given the
unlimited powers that the root user has, those powers can be used to damage
the system as well. A root user can delete files and directories, remove or
modify user accounts, create backdoors in the system, etc. Someone else can
gain full control over the system if the root user account gets compromised.
Therefore, it is always advisable that you login as a normal user and escalate
privileges to the root user only when absolutely required.
As already mentioned, the root account on Linux operating system is the
equivalent of the local Administrator account on Windows operating systems.
It is a practice in Linux to login as a regular user and then use tools to gain
certain privileges of the root account.
Using Su to Switch Users
You can switch to a different user account in Linux using the su command. If
you do not pass a username as an argument to the su command, it is implied
that you want to switch to the root user account. If you are invoking the
command as a regular user, you will be prompted to enter the password of the
account that you want to switch to. However, if you invoke the command as a
root user, you will not need to enter the password of the account that you are
switching to.
su - <username>
[student@desktop ~]$ su -
Password: rootpassword
[root@desktop ~]#
If you use the command su username, it will start a session in a non-login
shell. But if you use the command as su - username, a login shell will be
initiated for the user. This means that using su - username sets up a new
and clean login environment for the new user, whereas su username retains
all the settings of the current shell. To get the new user's default
settings, administrators usually use the su - form of the command.
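You can observe this login versus non-login distinction without switching users at all. The sketch below uses bash's login_shell shell option, which reports whether the current shell was started as a login shell, the same property that differs between su username and su - username:

```shell
# bash -c starts a non-login shell, the kind "su username" gives you.
bash -c 'shopt -q login_shell && echo login || echo non-login'

# bash -lc starts a login shell, the kind "su - username" gives you.
bash -lc 'shopt -q login_shell && echo login || echo non-login'
```

The first command prints non-login and the second prints login, mirroring the clean-environment behavior of su -.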
sudo and the root
There is a very strict model implemented in Linux operating systems for users.
The root user has the power to do everything while the other users can do
nothing that is related to the system. The common solution, which was
followed in the past was to allow the normal user to become the root user
using the su command for a temporary period until the required task was
completed. This, however, has the disadvantage that a regular user literally
would become the root user and gain all the powers of the root user. They
could then make critical changes to the system like restarting the system and
even delete an entire directory like /etc. Also, gaining access to become the
root user would involve another issue that every user switching to the root
user would need to know the password of the root user, which is not a very
good idea.
This is where the sudo command comes into the picture. The sudo command
lets a regular user run commands as if they were the root user, or another
user, as per the settings defined in the /etc/sudoers file. While other
tools like su require you to know the password of the root user, the sudo
command requires only your own password for authentication, and not the
password of the account that you are trying to gain access to. This allows
the administrator of the system to grant a defined list of privileges to
regular users so that they can perform system administration tasks without
actually needing to know the root password.
Let us see an example where the student user has been granted access
through sudo to run the usermod command. With this access, the student user
can now modify any other user account and lock that account.
[student@desktop ~]$ sudo usermod -L username
[sudo] password for student: studentpassword
Another benefit of using the sudo access is that all commands that any user
runs using sudo are logged to /var/log/secure.
Managing User Accounts
In this section, you will learn how to create, modify, lock and delete user
accounts that are defined locally in the system. There are a lot of tools
available on the command line, which can be invoked to manage local user
accounts. Let us go through them one by one and understand what they do.
useradd username is a command that creates a new user with the
username that has been specified and creates default parameters for
the user in the /etc/passwd file when the command is run without
any options. However, the command does not set any password for the
new user, and therefore the user will not be able to log in until a
password has been set for them.
useradd --help will give you a list of options that can be
specified for the useradd command; using these will override the
default parameters of the user in the /etc/passwd file. For a few
of these options, you can also use the usermod command to modify
existing users.
There are certain parameters for the user, such as the password aging
policy or the range of UID numbers, which are read from the
/etc/login.defs file. This file comes into the picture only when
creating new users; modifying it will not make any changes to existing
users on the system.
● usermod --help will display all the basic options that you can use with
this command to manage user accounts. Let us go through these in brief:
-g, --gid GROUP: specify the primary group of the user
-G, --groups GROUPS: associate one or more supplementary groups with the user
-m, --move-home: move the user's home directory to a new location (used
together with the -d option)
-s, --shell SHELL: change the login shell of the user
● userdel username deletes the user from the /etc/passwd file but does not
delete the home directory of that user.
userdel -r username deletes the user from /etc/passwd and deletes their
home directory along with its content as well.
● id displays the user details of the current user, which includes the UID
of the user and group memberships.
id username will display the details of the user specified, which
includes the UID of the user and group memberships.
● passwd username is a command that can be used to set the user’s initial
password or modify the user’s existing password.
The root user has the power to set the password to any value. If the
criteria for password strength is not met, a warning message will
appear, but the root user can retype the same password and set the
password for a given user anyway.
If it is a regular user, they will need to select a password that is at
least 8 characters long and is not based on the username, a previous
password, or a word found in the dictionary.
● UID Ranges are ranges that are reserved for specific purposes in Red
Hat Enterprise Linux 7
UID 0 is always assigned to the root user.
UID 1-200 are assigned by the system to system processes in a static
manner.
UID 201-999 are assigned to system processes that do not own any files
on the system. They are assigned dynamically whenever installed
software requests a process.
UID 1000+ are assigned to regular users of the system.
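These ranges can be checked against the local account database. The following sketch classifies every entry in /etc/passwd according to the RHEL 7 conventions above (the labels printed are our own shorthand):

```shell
# Classify each local account in /etc/passwd by its UID range.
awk -F: '{
    uid = $3
    if      (uid == 0)                 kind = "root"
    else if (uid >= 1 && uid <= 200)   kind = "static system"
    else if (uid >= 201 && uid <= 999) kind = "dynamic system"
    else                               kind = "regular user"
    printf "%-20s %6d  %s\n", $1, uid, kind
}' /etc/passwd
```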
Managing Group Accounts
In this section, we will learn about how to create, modify, and delete group
accounts that have been created locally.
It is important that the group already exists before you can add users to a
group. There are many tools available on the Linux command line that will
help you to manage local groups. Let us go through these commands used for
groups one by one.
● groupadd groupname is a command that, if used without any options,
creates a new group and assigns it the next available GID from the group
range defined in the /etc/login.defs file.
You can specify a GID by using the option -g GID.
The -r option will create a system group and assign it a GID belonging
to the system range, which is defined in the /etc/login.defs file.
● groupmod is used to modify an existing group. The -g option is passed
along with the command if you want to assign a new GID to the group.
[student@desktop ~]$ sudo groupmod -g 6000 ateam
● groupdel groupname is the command used to delete the group.
groupdel may not work on a group that is the primary group of a user.
Just like with userdel, you need to be careful with groupdel and check
that no files owned by the group remain on the system after the group
is deleted.
You can add a user to the supplementary group using the usermod -aG
groupname username command.
Using the -a option ensures that modifications to the user are done in
append mode. If you do not use it, the user will be removed from all
their other supplementary groups and added only to the new group.
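You can check a user's group memberships before and after such a change with the id command. A quick sketch using your own account (no root needed; the group name wheel mentioned in the comment is only an example):

```shell
# The primary group of the current user:
id -gn

# All groups (primary plus supplementary) of the current user:
id -Gn

# After "sudo usermod -aG wheel username", running "id -Gn username"
# would additionally show wheel in this list.
```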
User Password Management
In this section, we will learn about the shadow password file and how you
can use it to manually lock accounts or set password-aging policies to an
account. In the initial days of Linux development, the encrypted password for
a user was stored in the file at /etc/passwd, which was world-readable. This
was considered secure until attackers started using dictionary attacks
on the encrypted passwords. It was then decided to move the encrypted
password hash to a more secure location, the /etc/shadow file. The
current implementation also allows you to set password-aging policies
and expiration features using this file.
The modern password hash has three pieces of information in it. Consider the
following password hash:
$1$gCLa2/Z$6Pu0EKAzfCjxjv2hoLOB/
1. 1: This part specifies the hashing algorithm used. The number 1
indicates that an MD5 hash has been used; the number 6 appears here
when a SHA-512 hash is used.
2. gCLa2/Z: This indicates the salt used to encrypt the hash. It is a
randomly chosen salt at first. The combination of the unencrypted
password and salt together form the encrypted hash. The advantage of
having a salt is that two users who may be using the same password
will not have identical hash entries in the /etc/shadow file.
3. 6Pu0EKAzfCjxjv2hoLOB/: This is the encrypted hash.
In the event of a user trying to log in to the system, the system looks up for
their entry in the /etc/shadow file. It then combines the unencrypted password
entered by the user with the salt for the user and uses the hash algorithm
specified to encrypt this combination. If this hash matches the hash in the
/etc/shadow file, it is implied that the password typed by the user is correct.
Otherwise, the user has just typed in the wrong password and their login
attempt fails. This method is secure as it allows the system to determine if a
user typed in the correct password without having to store the actual
unencrypted password in the file system.
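You can reproduce this salt-plus-password behavior with the openssl passwd tool, which supports SHA-512 crypt hashes via the -6 flag on OpenSSL 1.1.1 and later. The salts and the password below are made-up examples:

```shell
# Same password, same salt: the hashes are identical.
openssl passwd -6 -salt gCLa2Zxx secret123
openssl passwd -6 -salt gCLa2Zxx secret123

# Same password, different salt: a completely different hash, which is
# why two users with the same password get different /etc/shadow entries.
openssl passwd -6 -salt otherslt secret123
```

Each output line starts with $6$, the SHA-512 algorithm marker discussed above, followed by the salt and the encrypted hash.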
The format of the /etc/shadow file is as below. There are 9 fields for every
user as follows.
name:password:lastchange:minage:maxage:warning:inactive:expire:blank
name: This must be a valid username on the system, through which a user
logs in.
password: This is where the encrypted password of the user is stored. If the
field starts with an exclamation mark, it means that password is locked.
lastchange: This is the timestamp of the last password change done for the
account.
minage: This defines the minimum number of days before a password needs
to be changed. If it is the number 0, it means there is no minimum age for the
account.
maxage: This defines the maximum number of days before a password needs
to be changed.
warning: This is the number of days of warning the user receives before
their password expires. If the number is 0, no warning is given before
password expiry.
inactive: This is the number of days after password expiry during which the
account stays active. During this period, the user can still log into the
system with the expired password and change it. If the user fails to do so
within the specified number of days, the account gets locked and becomes
inactive.
expire: This is the date when the account is set to expire.
blank: This is a blank field, which is reserved for future use.
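The nine colon-separated fields can be split with ordinary shell tools. This sketch parses a fabricated shadow entry (the hash and day counts are invented; reading the real /etc/shadow requires root):

```shell
# A hypothetical /etc/shadow entry for a user "alice".
entry='alice:$6$salt$hash:17000:0:90:7:14::'

# Split the nine fields into named variables.
IFS=: read -r name password lastchange minage maxage warning inactive expire blank <<<"$entry"

echo "user=$name maxage=$maxage warning=$warning inactive=$inactive"
```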
Password Aging
Password aging is a technique employed by system administrators to
safeguard against weak passwords set by the users of an organization. The
policy sets a number of days, 90 by default, after which a user is forced
to change their password. The advantage of forcing a password change is
that even if someone has gained access to a user's password, they will
have it only for a limited amount of time. The downside of this approach
is that users tend to write their passwords down somewhere, since they
cannot memorize passwords that keep changing.
In Red Hat Enterprise Linux 7, there are two ways through which password
aging can be enforced.
1. Using the chage command on the command line
2. Using the User Management application in the graphical interface
The chage command with the -M option lets a system administrator specify
the number of days for which the password is valid. Let us look at an example.
[student@desktop ~]$ sudo chage -M 90 alice
In this command, the password validity for the user alice will be set to 90
days, after which the user will be forced to reset their password. If you
want to disable password aging, you can specify the -M value as 99999,
which is equivalent to about 273 years.
You can set password aging policies by using the graphical user interface as
well. There is an application called User Manager, which you can access
from the Main Menu Button > System Settings > Users & Groups.
Alternatively, you can type the command system-config-users in the
terminal window. The User Manager window will pop up. Navigate to the
Users tab, select the required user from the list, and click on the Properties
button where you can set the password aging policy.
Access Restriction
You can set the expiry for an account using the chage command. The user
will not be allowed to login to the system once that date is reached. You can
use the usermod command with the -L option to lock a particular user
account.
[student@desktop ~]$ sudo usermod -L alice
[student@desktop ~]$ su - alice
Password: alice
su: Authentication failure
The usermod command is useful to lock and expire an account at the same
time in a case where the employee might have left the company.
[student@desktop ~]$ sudo usermod -L -e 1 alice
A user may not be able to authenticate into the system using a password once
their account has been locked. It is one of the best practices to prevent
authentication of an employee to the system who has already left the
organization. You can use the usermod -U username command later to
unlock the account, in the event that the employee has rejoined the
organization. While doing this, if the account was in an expired state, you
will need to ensure that you set a new expiry date for the account as well.
The nologin shell
There will be instances where you want to create a user who can
authenticate with a password but does not need an interactive shell on the
system. For example, a mail server may require a user to have an email
account so that the user can log in and check their emails, but that user
does not need a shell on the mail server itself.
This is where the nologin shell comes in as a solution. What we do is
point the shell for this user to /sbin/nologin. Once this is done, the user
cannot log into the system using the direct login procedure.
[root@desktop ~]# usermod -s /sbin/nologin student
[root@desktop ~]# su - student
Last login: Tue Mar 5 20:40:34 GMT 2015 on pts/0
The account is currently not available.
By using the nologin shell for a user, you are denying that user
interactive login to the system, but not all access to the system. The user
will still be able to use certain services, such as web applications or
file transfer services, to upload or download files.
Chapter Five: Accessing Files in Linux and File
System Permissions
In this chapter, we will learn about the working of the Linux file system
permissions model and how the permissions and ownership of files can be
changed using the command line tools available to us. By the end of the
chapter, we will be well versed with how file permissions affect files in Linux
and how permissions can be used to employ security to files and directories.
Linux File System Permissions
File permissions are a feature in Linux through which access to a file by a
user can be controlled. Although the Linux file permissions model is
simple, it is flexible, which makes it easy for a new user to understand,
apply, and handle file permissions.
There are three categories of users to which permissions apply with respect
to a file. The three categories of file users are as follows.
1. user
2. group
3. other
The hierarchy is such that user permissions will override group permissions,
which will override other permissions.
The permissions that apply to a file or directory also belong to three
categories.
1. read
2. write
3. execute
Let us see their effects on a file or a directory.
Read and execute access to a file allow a user to read the contents of the
file and to execute the file if it is an executable. For directories, the
meanings differ slightly: with only read access to a directory, a user can
list the names of the files in it, but no other information such as
permissions or timestamps. With only execute access to a directory, a user
cannot list the filenames in it, but if they already know the name of a
file, they will be able to access that file.
If a user has write permissions to a directory, they have all rights to delete
any file in that directory irrespective of the actual permissions on the file
itself. There is, however, an exception to this where this can be overridden by
special permission, known as the sticky bit, which we will discuss later in this
chapter.
Let us now see how you can view the permissions, which are assigned to a
file or a directory along with the ownership. You can use the ls command
with the -l option to list down the files and directories along with their
permissions and ownerships.
[student@desktop ~]$ ls -l test
-rw-rw-r--. 1 student student 0 Feb 5 15:45 test
Using the ls -l directoryname command, you can list all the contents of
the directory with additional information such as permissions, ownerships,
and timestamps. If you wish to see the listing of the directory itself
rather than descending into its contents, you can use the -d option.
[student@desktop ~]$ ls -ld /home
drwxr-xr-x. 5 root root 4096 Feb 8 17:45 /home
If you have been using a Windows system, you will realize that the List
Folder Contents permission from Windows is the equivalent of the Linux
Read permission, and
the Windows Modify permission is the equivalent of the Linux Write
permission.
Windows has a feature called Full Control which is the equivalent of the
power that the Root user has with respect to files and directories in Linux.
Managing File System Permissions using the Command Line
In this section, we will learn how we can use the command line in Linux to
manage the permissions and ownerships for a file.
Changing the permissions of files and directories
The command we can use to change the permissions of files in Red Hat
Enterprise Linux 7 from the command line is chmod, which is the short form
for change mode since permissions are also referred to as the mode of a file
or directory. The command is followed by an instruction describing what
needs to be changed and then the name of the file or directory on which
the operation needs to be executed. You can provide the instruction in two
ways: numerically or symbolically.
Let us first go through the symbolic method. The syntax looks like this:
chmod WhoWhatWhich files|directory
Who is u for user, g for group, o for other, and a for all
What is + to add, - to remove, and = to set exactly
Which is r for read, w for write, and x for execute
You will use letters to specify the different groups that you wish to
change the permissions for: u is for the user, g is for the group, o is
for other, and a is for all.
When you are setting permissions using the symbolic method, you do not
need to specify a complete new set of permissions for the file. You can
instead make changes to the existing permissions. You achieve this using
the three symbols +, - and =, which add permissions to a set, remove
permissions from a set, or replace the entire set for a group of
permissions, respectively.
Lastly, the permissions are represented by using the letters where r is for
read, w is for write, and x is for execute. Note that if you are using the chmod
command with the symbolic method, and use a capital X as a permission flag,
it will add the execute permission only if the file is a directory or already has
an execute permission set for user u, group g, or other o.
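The symbolic forms above can be tried safely on a scratch file; the directory and file names here are arbitrary:

```shell
# Work in a throwaway directory so nothing real is touched.
dir=$(mktemp -d)
touch "$dir/demo"

chmod u=rw,g=r,o= "$dir/demo"   # = sets each class exactly: rw- r-- ---
chmod g+w "$dir/demo"           # + adds write for the group:  rw- rw- ---

ls -l "$dir/demo"               # shows -rw-rw----
rm -r "$dir"
```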
Let us now go through the numeric method. The syntax looks like this:
chmod ### files|directory
Every position of # represents an access level: user, group, and other
Each # is the sum of read r=4, write w=2, and execute x=1
In this method, we set up permissions for files and directories using 3
digits (and sometimes 4 for special permissions), known as an octal number.
A single digit can represent any number between 0 and 7, which is exactly
the number of combinations possible with the read, write and execute values.
If we understand the mapping between symbolic and numeric values, we can
do the conversion between the two as well. In the numeric representation,
which uses three digits, each digit represents the permissions for one
group. Going from left to right, the first digit is for user, the second
is for group and the third is for other. For each of these groups, we use
a combination of the read, write and execute values, which are 4, 2 and 1
respectively.
Let us look at a symbolic representation of permissions, which is -rwxr-x---
In this representation, the user has the permissions of rwx, which is read
write and execute. If we convert this to numeric form, it will be read r 4 +
write w 2 + execute x 1, which is a total of 7.
Next for the group, we see that the permissions in symbolic form are r-x,
which is to only read and execute. If we convert this to numeric form, it will
be read r 4 + execute x 1, which is a total of 5.
The permission for others is ---, which means read, write and execute are
all 0. This means the other bit will be 0.
We can now say that the complete permission for all groups to this file in
numeric format is represented as 750.
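The 750 worked out above can be confirmed on disk with chmod and stat; the file name is arbitrary:

```shell
dir=$(mktemp -d)
touch "$dir/script"

chmod 750 "$dir/script"          # 7 = rwx (user), 5 = r-x (group), 0 = --- (other)
stat -c '%a %A' "$dir/script"    # prints: 750 -rwxr-x---

rm -r "$dir"
```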
We can also do the converse operation and convert from the numeric format
to the symbolic format.
Consider the permission 640.
We know that the user bit is the left most bit, which is 6. The only
combination of read, write and execute that will give us a 6 is that of read r 4
+ write w 2. This means that the permission for the user in symbolic format is
rw-.
Next the group bit is 4 and the only combination of read, write and execute
that will give us a 4 is that of read r 4. This means that the permission for the
group in symbolic format is r--.
Next, the other bit is 0, and the only combination of read, write and
execute that will give us a 0 is one where the values for read, write and
execute are all 0. This means that the permission for others in symbolic
format is ---.
As a whole, the permission for this file in symbolic format will look like -rw-
r-----.
Note: You can use the -R option with the chmod command if you want to set
the same permissions recursively for all files under a directory tree.
While doing this, you can use the X flag symbolically so that directories
are given execute permission and remain accessible, while regular files
are skipped.
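The effect of the capital X flag under -R can be demonstrated on a scratch tree; the names are arbitrary:

```shell
root=$(mktemp -d)
mkdir "$root/sub"
chmod 700 "$root" "$root/sub"   # directories start as rwx------
touch "$root/sub/file"
chmod 600 "$root/sub/file"      # the file starts non-executable

# X grants execute only to directories (and already-executable files).
chmod -R u+rwX,go+rX "$root"

stat -c '%a %n' "$root/sub"       # 755: the directory gained execute
stat -c '%a %n' "$root/sub/file"  # 644: the plain file did not
rm -r "$root"
```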
Changing the user and group ownership of files and directories
By default, if a file is newly created, it is owned by the user who created the
file. The default group ownership of that file is also associated by default to
the primary group of the user who created it. Since Red Hat Enterprise Linux
7 has the concept of user private groups, the group will mostly be a group,
which has only one member who is the user themselves. Access to files and
directories can be granted by changing their owner and group.
You can use the chown command to change the ownership of a file or a
directory. Let us see an example.
[root@desktop ~]# chown student newfile
In the example above, we are changing the user owner of the file newfile to
student.
You can also use the option -R with the chown command, which will
recursively change the ownership of a directory and all its files and
subdirectories. The command can be used as shown below.
[root@desktop ~]# chown -R student parentdirectory
We can also use the chown command to change the group ownership of files
and directories. In this case, the command is followed by the group name
preceded by a colon (:).
Let us look at an example.
[root@desktop ~]# chown :admins newfile
This will change the group ownership of the file newfile to admins.
You can also use the chown command to change the user ownership and
group ownership at the same time. The syntax for it is as follows.
[root@desktop ~]# chown student:admins newfile
This will change the user ownership to student and group ownership to
admins for the file newfile.
The user ownership of files and directories can only be changed by the
root user. However, the group ownership of a file can be changed by both
root and the user who owns the file. Non-root users can only assign group
ownership to groups that they are members of.
Note: An alternate command to the chown command is the chgrp command,
which can be used to change the group ownership of a file or directory. The
chgrp command works exactly the same way as the chown command and
also works with the -R option to recursively change group ownership.
Managing File Access and Default Permissions
In this section, we are going to learn about special permissions. We will
create a directory under, which files that will be created will have write
access for users of the group that owns the directory by default. This will be
achieved using the special permissions known as sticky bits.
Let us see how we can use the special permissions and apply them. There
are bits known as setuid and setgid in the permissions, which allow an
executable file to run as the user or the group that owns the file, and
not as the user that ran the actual command.
One such example is the passwd command. Let us have a look at it.
[student@desktop ~]$ ls -l /usr/bin/passwd
-rwsr-xr-x. 1 root root 34598 Jul 15 2011 /usr/bin/passwd
The sticky bit, when set on a directory, sets a restriction on file
deletion: only the user who owns a file (and the root user) can delete it
from that directory. An example of this is /tmp.
[student@desktop ~]$ ls -ld /tmp
drwxrwxrwt 39 root root 4096 Jul 10 2011 /tmp
Lastly, the setgid bit is a bit that allows all files created within a directory to
inherit the permissions of the directory rather than getting it set by the user
who created the file. Group collaborative directories mostly use this feature to
change file permissions from the default private group to shared groups.
Let us go through the effects of special permissions on files and directories.
u+s (setuid): the file executes as the user that owns the file; it has no
special effect on directories
g+s (setgid): the file executes as the group that owns the file; in a
directory, the group owner of newly created files will match the group
owner of the directory
o+t (sticky): it has no special effect on files; in a directory, users
with write access can only delete files that they themselves own
Let us see how we can set special permissions on files and directories.
Symbolically, setuid is u+s, setgid is g+s and sticky is o+t
Numerically, the special permissions use a fourth digit that precedes
the user digit: setuid is 4, setgid is 2 and sticky is 1.
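The numeric fourth digit can be exercised without root on a directory you own. Here is a sketch setting the setgid bit (2) on a collaboration directory, with arbitrary names:

```shell
dir=$(mktemp -d)
mkdir "$dir/shared"

# Leading 2 = setgid; 770 = rwx for user and group, nothing for other.
chmod 2770 "$dir/shared"

stat -c '%a %A' "$dir/shared"   # prints: 2770 drwxrws---
rm -r "$dir"
```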
Let us now try and understand what default file permissions are. The
permissions set for files by default are the ones that are set by the processes
that created those files. For example, if you are using a text editor like Vim to
create a file, the file will have read and write access for everyone but no
execute access. Shell redirection follows the same rule. Additionally,
compilers can create files that are binary executable in nature. Therefore, the
files will have executable permissions. The mkdir command is used to create
directories and these directories have all permissions for read, write and
execute.
In practice, files and directories do not end up with the full set of
permissions requested at creation, because the umask of the shell process
clears some of those permission bits. If you use the umask command without
any arguments, it will show the current umask value of the shell.
[student@desktop ~]$ umask
0002
There is a umask for every process on the system. The umask is basically an
octal bitmask that clears the permissions of newly created files and
directories that are created by a process. If the umask has a bit set, the
corresponding permission is cleared in newly created files.
Let us take an example. The umask value shown above is 0002, where the
digit for other users is 2. From this we know that the special, user and
group permissions will not be cleared, since those digits are all 0, but
the write permission (2) for other users will be cleared, since the
corresponding umask digit is 2. If a umask is shown with fewer than four
digits, leading zeroes are assumed.
The default umask values in the system for users of bash shell are defined at
/etc/profile and /etc/bashrc file. The system defaults can be overridden by the
users in their .bash_profile and .bashrc files.
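You can watch the umask clear permission bits in a subshell, which leaves the umask of your real shell untouched; all names here are arbitrary:

```shell
(
  umask 0002                 # clear the write bit for other users
  dir=$(mktemp -d)
  touch "$dir/file"          # requested 666, umask clears other-write: 664
  mkdir "$dir/subdir"        # requested 777, umask clears other-write: 775
  stat -c '%a %n' "$dir/file" "$dir/subdir"
  rm -r "$dir"
)
```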
Chapter Six: Linux Process Management
In this chapter, we will learn how to monitor and manage processes that run
on Red Hat Enterprise Linux 7. By the end of this chapter, we will be able to
list processes and interpret basic information about them on the system, use
bash job control to control processes, use signals to terminate processes, and
monitor system resources and system load caused by processes.
Processes
In this section, we will define the cycle of a typical process and understand
the different states of a process. We will also learn to view and interpret
processes.
What is a process?
An executable program in a state where it is running after being launched is
called a process. A process has the following features.
Allocated memory that points to an address space
Properties with respect to security, which include ownership
privileges and credentials
Program code that contains one or more executable threads
The state of the process
The process environment has the following features
Variables that are both local and global in nature
A current scheduling context
System resources allocated to it, which include network ports and
file descriptors
An existing process is known as a parent process, which splits and duplicates
its address space to create a child process. For security and tracking, a unique
process ID known as PID is assigned to every new process. The PID and the
parent process’s ID known as PPID together make the environment for the
child process. A child process can be created by any process. All the
processes in the system descend from the very first process of the system,
which is known as systemd on Red Hat Enterprise Linux 7.
As the child process forks from a parent process, properties such as file
descriptors, security identities, port and resource privileges, program code,
and environment variables are all inherited by the child process. The child
process may then execute its own program code. While the child process runs,
the parent process commonly sleeps, setting a wait request until the child
completes. When the child process exits, it releases the system resources and
environment it previously held, and what remains of it in the process table is
known as a zombie. The parent process then wakes up, cleans up the
remaining process table entry, and resumes running its own program code.
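The parent/child relationship can be observed from the shell itself. The sketch below is a minimal illustration and is not RHEL-specific: a child bash process inherits the environment and records the parent shell's PID as its PPID.

```shell
# The current shell's own process ID:
echo "parent PID: $$"

# A child bash process reports the invoking shell's PID as its PPID:
bash -c 'echo "child PID: $$, parent (PPID): $PPID"'
```

The value printed after "parent (PPID):" matches the parent shell's PID, demonstrating the inheritance described above.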
Process States
Consider an operating system capable of multitasking. If the hardware has a
CPU with multiple cores, every core can be dedicated to one process at a
given point in time. During runtime, the CPU and other resource requirements
of a given process keep changing. This leads to processes being in a state
that changes according to the current circumstances.
Let us go through the states of a process one by one by looking at the table
given below.
Listing processes
The current processes in the system can be listed using the ps command at
the shell prompt. The command provides detailed information about processes,
which includes:
The user identification (UID), which determines the privileges of the
process
The unique process ID (PID)
The real-time usage of the CPU
The memory the process has allocated in various locations of the
system
The location of the process's standard output (STDOUT), known as the
controlling terminal
The current state of the process
The aux option can be used with the ps command to display detailed
information about all processes. It includes columns useful to the user and
also shows processes without a controlling terminal. The long listing option
lax provides more technical detail and may display faster by skipping the
username lookup.
If you run the ps command without any options, it displays only processes
that have the same effective user ID (EUID) as the current user and that are
associated with the same terminal where ps was invoked.
The ps listing also shows zombies, which are either exiting or
defunct
ps command only shows one display. You can alternatively use the
top command, which will keep repeating the display output in
realtime
Processes whose names appear in square brackets are usually kernel
threads. They show up at the top of the listing
The ps command can display a tree format so that you can
understand the parent and child process relationships
By default, the listed processes are not sorted; they simply appear in
the order in which they started. The output may look chronological, but
there is no guarantee of that unless you explicitly use options such as
-O or --sort
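A few of the options mentioned above can be sketched as commands. These assume the procps version of ps shipped with Red Hat Enterprise Linux; the exact output will of course vary per system.

```shell
# Detailed listing of all processes, including those without a
# controlling terminal (first few lines only):
ps aux | head -5

# Show only this shell's own process, selected by PID:
ps -o pid= -o comm= -p $$

# Sort the listing explicitly, here descending by memory usage:
ps aux --sort=-%mem | head -5

# Tree view showing parent/child relationships:
ps --forest
```

The trailing = in -o pid= suppresses the column header, which is convenient when the output is fed to another command.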
Controlling Jobs
In this section, we will learn about the terms such as foreground, background
and the controlling terminal. We will also learn about using job control,
which will allow us to manage multiple command line tasks.
Jobs and Sessions
Job control is a shell feature through which multiple commands can be
managed by a single shell instance.
Every pipeline that you enter at the shell prompt is associated with a job. All
processes in this pipeline are a part of the job and are members of the same
process group. A minimal pipeline is when only a single command is entered
on the shell prompt. In such a case, that command ends up being the only
member of the job.
At any given time, only one job can read input from a terminal's keyboard.
That terminal is known as the controlling terminal of the job, and the
processes that are part of that job are known as foreground processes.
Any other job associated with that controlling terminal is a background job
of that terminal. Background processes cannot read keyboard input from the
terminal, but they can still write to it. A background
job can be in a stopped state or a running state. If a background process tries
to read from the terminal, the process gets automatically suspended.
Every terminal that is running is a session of its own and can have processes
that are in the foreground and the background. A job is a part of one session
only, the session that belongs to its controlling terminal.
If you use the ps command, the listing shows the device name of the
controlling terminal of a process in a column named TTY. Some processes
started by the system, such as system daemons, were not started from a shell
prompt. These processes are not part of any job and have no controlling
terminal, so they will never come to the foreground. When listed with ps,
such processes show a question mark (?) in the TTY column.
Running Background Jobs
You can add an ampersand & to the end of a command line, which will run
the command in the background. There will be a unique job number assigned
to the job, and a process ID PID will be assigned to the child process, which
is created in bash. The shell prompt will show up again after the command is
executed as the shell will not wait for the child process to complete since it is
running in the background.
[student@desktop ~]$ sleep 10000 &
[1] 5683
[student@desktop ~]$
Note: When you put a pipeline in the background with an ampersand (&), the
process ID (PID) that shows up in the output is that of the last command in
the pipeline. All other commands in the pipeline are still part of that job.
[student@desktop ~]$ example_command | sort | mail -s "sort output" &
[1] 5456
Jobs are tracked per session by the bash shell, in a table that is displayed
by the jobs command.
[student@desktop ~]$ jobs
[1]+ Running sleep 10000 &
[student@desktop ~]$
You can use the fg command with a job ID(%job number) to bring a job from
the background to the foreground.
[student@desktop ~]$ fg %1
sleep 10000
In the example above, we brought the sleep command, which was running in
the background, to the foreground on the controlling terminal. The shell
again waits until this child process completes, which is why the shell prompt
will not show up again until the sleep command finishes.
You can send a process from the foreground to the background by pressing
Ctrl+z on the keyboard, which will send a suspend request.
sleep 10000
^Z
[1]+ Stopped sleep 10000
[student@desktop ~]$
The job will get suspended and will be placed in the background.
Information about jobs can be displayed using the ps j command. The display
shows a PGID, which is the PID of the process group leader, that is, the
first process in the job's pipeline. The SID is the PID of the session
leader, which for a job is the interactive shell running on the controlling
terminal.
[student@desktop ~]$ ps j
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
2434 2456 2456 2456 pts/0 5677 T 1000 0:00 sleep 10000
The status of the sleep command is T because it is in the suspended state.
You can resume a suspended process in the background, putting it into the
running state, by using the bg command with the same job ID.
[student@desktop ~]$ bg %1
[1]+ sleep 10000 &
[student@desktop ~]$
If there are suspended jobs and you try to exit the shell, you will get a
warning letting you know that there are suspended jobs in the background. If
you confirm that you want to exit, the suspended jobs are killed
immediately.
Killing Processes
In this section, we will learn how to use commands to communicate with
processes and kill them. We will understand what a daemon process is and
what its characteristics are. We will also learn how to end processes and
sessions owned by a user.
Using signals to control processes
A signal is a software interrupt delivered to a process. Events are reported
to a program with the help of signals. The events that generate a signal can
be external events, errors, or explicit requests, such as commands sent using
the keyboard.
Let us go through a few signals, which are useful for system admins in their
routine day to day system management activities.
Note: Signal numbers can vary between hardware architectures running the
Linux operating system, but the signal names and their purposes are
standardized. It is therefore advisable to use signal names instead of signal
numbers on the command line. The signal numbers discussed above apply only to
systems based on the Intel x86 architecture.
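Signaling by name rather than by number can be sketched as follows. This minimal bash example starts a disposable background sleep and terminates it with SIGTERM; nothing here is specific to any particular program.

```shell
# Start a long-running child in the background, then terminate it
# by signal name rather than by number:
sleep 60 &
pid=$!
kill -SIGTERM "$pid"

# In bash, a process killed by a signal exits with 128 + signal number,
# so waiting on it reports 143 (128 + 15) for SIGTERM:
wait "$pid"
echo "exit status: $?"

# kill -l translates between signal names and numbers:
kill -l TERM    # prints 15 in bash
```

The same kill invocation works with -HUP, -SIGKILL, and the other standard names.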
There is a default action associated with every signal, which corresponds to
one of the following.
Term - The program is asked to exit or terminate at once.
Core - The program is asked to terminate but is asked to also save a memory
image or a core dump before terminating.
Stop - The program is suspended (stopped) and will have to wait to be
resumed.
Programs can handle expected event signals by implementing handler routines
that replace, ignore, or extend the default action of a signal.
Commands used to send signals through explicit requests
Processes that are running in the foreground can be signaled using the
keyboard by users, wherein control signals are sent to the process using keys
like Ctrl+z for suspend, Ctrl+c for kill, and Ctrl+\ for getting a core dump. If
you want to send signals to processes that are running in the background or
are running in a different session altogether, you will need to use a command
to send signals.
You can use either signal names (-HUP or -SIGHUP) or signal numbers (-1) to
specify a signal. Processes owned by a user can be killed by that user, but
killing processes owned by others requires root user privileges.
● The kill command sends a signal to a process specified by its process ID
(PID). Despite its name, the kill command can be used to send any signal
to a process, not just the signal that terminates it.
● Just like the killall command, the pkill command can be used to signal
multiple processes at the same time. The selection criteria supported by
pkill are more advanced than those of killall and include the following
combinations.
● You can divide the load average values by the number of logical CPUs
present in the system. A result below 1 implies that resource
utilization and wait times are minimal. A value above 1 indicates that
resources are saturated and that there is waiting time.
● If the CPU queue is idle, the load number will be 0. Each thread that is
running or waiting adds a count of 1 to the queue. With a total count of
1, the CPU, disk, and network resources are busy, but there is no waiting
time for other requests. With every additional request the count
increases by 1, but since many requests can be executed simultaneously,
resource utilization goes up while there is still no wait time for other
requests.
● The load average is increased by processes that are in the sleeping
state because they are waiting for disk or network input or output.
Although this does not mean the CPU is being utilized, it does mean that
processes and users are waiting for system resources.
● The load average will stay below 1 until all the resources begin to get
saturated as tasks are seldom found to be waiting in the queue. It is only
when requests start getting queued and are counted by the calculation
routine that the load average starts spiking up. Every additional request
coming in will start experiencing wait time when the resource
utilization touches 100 percent.
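The per-CPU calculation described above can be done directly from the kernel's /proc/loadavg file. This sketch is Linux-specific and the printed values will vary from system to system.

```shell
# The 1-, 5- and 15-minute load averages are the first three fields
# of /proc/loadavg:
read one five fifteen rest < /proc/loadavg
cpus=$(nproc)
echo "1-minute load: $one across $cpus logical CPU(s)"

# Divide the load by the CPU count, as described in the text:
awk -v l="$one" -v c="$cpus" \
    'BEGIN { printf "per-CPU load: %.2f (%s)\n", l / c,
             (l / c < 1 ? "headroom available" : "saturated") }'
```

The same three values appear in the output of the uptime and top commands.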
Process monitoring in Real time
Much like the ps command, the top command gives a view of the processes in
the system, showing a summary header and a list of threads and processes.
The difference is that the output of ps is static, a one-time snapshot,
while the output of top is dynamic and keeps refreshing in real time. The
interval at which the values refresh can be customized, and you can also
configure sorting, column reordering, highlighting, and so on; these user
configurations can be saved and are persistent.
The default output columns are as follows.
● The process ID PID
● The process owner that is the user name USER
● All the memory used by a process (VIRT), which includes the resident
set, shared libraries, and memory pages that may be mapped or swapped.
● The physical memory used by a process, known as resident memory (RES),
which includes memory used by shared objects.
● The state of the process (S), displayed as one of:
D: Uninterruptible Sleeping
R: Running or Runnable
S: Sleeping
T: Traced or Stopped
Z: Zombie
● The total processing time since the process began, known as CPU time
(TIME). It can be toggled to show the cumulative time including all
previous child processes.
● The process command name (COMMAND)
Let us now go through some keystrokes that are helpful for system admins
while using the top display.
Key Purpose
q Quit
Chapter Seven: Services and Daemons in Linux
In this chapter, we will learn how to control and monitor network related
services and system daemons using the systemd utility. By the end of this
chapter, you will be able to list system daemons and network services, which
are started by the systemd service and socket units. You will also be able to
control networking services and system daemons using the systemctl
command line utility.
Identifying System Processes Started Automatically
In this section, we will learn about system processes such as system daemons
and network services that are automatically invoked by the Linux system
when it initiates the systemd service and socket units.
What is systemd?
When your Linux system boots, all the processes invoked at startup are
managed by systemd, the System and Service Manager. The program activates
system resources, server daemons, and other relevant processes, both at boot
time and later while the system is running.
Processes that run or wait in the background, performing various tasks, are
known as daemons. Generally, daemons are invoked automatically during the
boot process, and they shut down only when the system shuts down or when
they are stopped explicitly. By naming convention, the name of a daemon
usually ends with the letter d.
A socket is used by a daemon to listen for connections. It is the primary
channel for communication with both local and remote clients. A daemon can
create its socket itself, or the socket can be created separately by another
process, such as systemd, and handed over to the daemon. Once a client
connection is established, the daemon takes control of the socket.
A service usually refers to one or more daemons. However, starting or
stopping a service may instead make a one-time change to the state of the
system, without keeping any daemon process running afterward. This is known
as oneshot.
Let us go through some history of how systemd was created. For many years,
process ID 1 on a Linux system was dedicated to a process known as init.
This was the first process invoked during boot and was responsible for
starting all other processes and services on the system; the term "init
system" originates from this process. Frequently needed daemons were started
at boot using LSB init scripts. These are shell scripts, and you can expect
variations between Linux distributions. Daemons that were seldom used would
instead be started on demand by other services such as inetd or xinetd,
which listen for connections from clients. These older systems had many
limitations, which were later addressed by the introduction of systemd.
In Red Hat Enterprise Linux 7, process ID 1 is assigned to systemd, the
modern replacement for init. Let us go through the features of systemd one
by one.
The boot speed of the system was increased because services start in
parallel.
Daemons could be started on demand without needing other services
to start them.
Linux control groups, which made way for tracking related
processes together.
Service dependency management was automated, which helped reduce
timeouts by preventing a service from starting when a dependency such
as the network was not available.
systemd and systemctl
Different types of systemd objects, known as units, can be managed using the
systemctl command. You can use the systemctl -t help command to list all
the unit types. Let us go through a few common unit types.
● Service units, which represent system services, have the .service
extension. This unit type is used to start frequently accessed daemons,
such as a web server.
● Inter-process communication (IPC) sockets are represented by socket
units, which have the .socket extension. When a connection is made by a
client, control of the socket is transferred to the daemon. Socket units
can be used to delay the start of a service at boot time, or to start
infrequently used services only on demand.
● Path units, which have the extension .path, are used to delay the
activation of a service until a specific change occurs in the file
system. This is used for services that use a spool directory, such as
the printing service.
Service states
The command systemctl status name.type can be used to view the status of a
service. If the type of the unit is not provided, systemctl will show the
status of a service unit.
The output of the command has certain keywords that are interesting to a
system admin, which will indicate the state of a service. Let us go through
these keywords one by one.
Keyword Description
active (running) The service is running and has one or more processes
running
Note: The systemctl status NAME command replaces the command service
NAME status, which was used in the previous version of Red Hat Enterprise
Linux.
Let us go through an example where we will list files using the systemctl
command
1. List the active state of all units that are currently loaded. You can
also filter by unit type. Additionally, you can use the --all option
to show inactive units as well.
2. View the enabled or disabled setting for all units. Optionally, filter
by unit type.
[root@desktop ~]# systemctl list-unit-files --type=service
Chapter Eight: Configuring and Securing OpenSSH
In this chapter, we will learn how to configure and secure the OpenSSH
service. The OpenSSH service is used to access Linux systems over the
command line. By the end of this chapter, we will learn how to log into a
remote system using SSH and how to run a command on the shell prompt of
the remote system. We will also learn how to configure the SSH service to
implement password free login between two systems by using a private key
file for authentication. We will learn how to make SSH secure by configuring
it to disable root logins and to disable password based authentication as well.
Using SSH to Access the Remote Command Line
In this section, we will learn how we can log in to a remote system using ssh
and run commands on the shell prompt of the remote system.
What is OpenSSH Secure Shell (SSH)?
The term OpenSSH refers to the software implementation of Secure Shell in
Linux systems. The terms OpenSSH, ssh, and Secure Shell are used
synonymously to refer to an implementation that lets you run a shell on a
remote system in a secure manner. If a user account is configured for you on
a remote Linux system that runs the SSH service, you can log in to that
system remotely using ssh. You can also run a single command on a remote
Linux system using the ssh command.
Let us go through some examples of the secure shell. They will give you an
idea of the syntax used for remote logins and for running commands on the
remote shell.
● Login using ssh on a remote shell with the current user and then use the
exit command to return to your original shell
The w command that we learned about previously displays all the users that
are currently logged into the system. The FROM column of the output of this
command will let you know if the user who is logged in is from the local
system or from a remote system.
SSH host keys
Communication between two systems via ssh is secured through public key
encryption. When the ssh client connects to the server, the server sends the
client a copy of its public key. This is used to authenticate the server to
the client and to set up an encrypted connection.
When you ssh into a remote system for the first time, the ssh command
stores the public key of the server in your ~/.ssh/known_hosts file. Every
subsequent time you log in to that remote system, the server sends its
public key, and the client compares it to the key stored in your
~/.ssh/known_hosts file. If the keys match, a secure connection is
established. If the keys do not match, it is assumed that the connection
attempt was tampered with by someone hijacking the connection, and the
connection is closed immediately.
If the public key of the server changes for a genuine reason, such as data
loss on the hard drive or a drive replacement, you will need to remove the
old entry for the server's public key from the ~/.ssh/known_hosts file and
replace it with the new public key.
● The host keys of remote systems are stored on your local system in
~/.ssh/known_hosts. You can cat this file to see all the public keys of
remote hosts stored on your local system.
● The host's own keys are stored on the SSH server at
/etc/ssh/ssh_host_key*
SSH Based Authentication
In this section, we will learn how to set up secure logins via ssh without
password based authentication, by enabling key based logins using a private
key authentication file.
SSH key based authentication
There is a way to authenticate ssh logins without using passwords, known as
public key authentication. Users can authenticate their ssh logins with a
private-public key pair. Two keys are generated: a private key and a public
key. The private key file must be kept secret and in a secure location, as
it acts like a password credential.
The public key is copied to the system the user wants to log in to and is
used to verify the matching private key. The public key does not need to be
secret. An SSH server that has your public key stored on it can issue a
challenge that can only be answered by a system holding your private key.
When you log in from your system to such a server, your private key answers
the challenge posed against the public key on the server, resulting in a
secure authentication. This method is secure and does not require you to
type your password at every login.
You can use the ssh-keygen command on your local system to generate your
private and public key pair. The private key is stored at ~/.ssh/id_rsa and
the public key at ~/.ssh/id_rsa.pub.
Note: When generating your keys, you are given the option to set a
passphrase as well. If someone steals your private key, they will not be
able to use it without the passphrase, which only you know. This additional
security measure buys you enough time to set up a new key pair before the
attacker can crack your stolen private key.
Once you have generated the SSH keys, they are stored in the user's home
directory under ~/.ssh. You then need to copy your public key to the
destination system with which you want to establish key based
authentication. You can do this with the ssh-copy-id command.
[student@system1 ~]$ ssh-copy-id student@system2
When you use the command ssh-copy-id to copy your public key from your
system to another system, it automatically copies your public key from
~/.ssh/id_rsa.pub
Customizing the SSH Configuration
In this section, we will learn how to customize the sshd configuration such
that we can restrict password based logins or direct logins.
It is not strictly necessary to configure the OpenSSH service, but options
are available to customize it. All the parameters of the sshd service can be
configured in the file located at /etc/ssh/sshd_config.
Prohibiting root user logins from SSH
For security reasons, it is advisable to prevent the root user from logging
in directly via the ssh service.
The user name root exists on every Linux system. If you allow root
logins, an attacker only needs to know the root user's password to log
in as root via ssh. It is therefore good practice not to allow root
logins via ssh at all.
The root user is a super user with unlimited privileges, and
therefore, it makes sense to not allow the root user to login using
ssh.
The ssh configuration file has a line that controls root logins. You need to
edit the file /etc/ssh/sshd_config and change the following line:
#PermitRootLogin yes
If you uncomment the line and set it to no, the root user will no longer be
able to log in using the ssh service once the sshd service has been
restarted.
PermitRootLogin no
For the changes in the configuration file to take effect, you will need to
restart the sshd service.
[root@desktop ~]# systemctl restart sshd
Another option available is to allow only key based logins for root, by
setting the following line in the file.
PermitRootLogin without-password
Prohibiting password authentication during ssh
There are many advantages of allowing only key based logins to a remote
system.
An SSH key is longer than a password and is therefore more secure.
Once the initial setup is complete, future logins take hardly any
time.
You need to edit the file /etc/ssh/sshd_config where there is a line that allows
password authentication by default.
PasswordAuthentication yes
To stop password authentication, you need to edit this line to no and then
restart the sshd service.
PasswordAuthentication no
Remember that after you have modified the sshd service configuration file at
/etc/ssh/sshd_config, you will need to restart the sshd service:
[root@desktop ~]# systemctl restart sshd
Chapter Nine: Log Analysis
In this chapter, we will learn how to locate logs in the Linux system and
interpret them for system administration and troubleshooting purposes. We
will describe the basic architecture of syslog in Linux systems and learn to
maintain synchronization and accuracy for the time zone configuration such
that timestamps in the system logs are correct.
Architecture of System Logs
In this section, we will learn about the architecture of system logs in Red Hat
Enterprise Linux 7 system.
System logging
Events that take place as a result of the kernel and of processes running in
the system need to be logged. Logs help with system audits and with
troubleshooting issues on the system. By convention, logs on Linux based
systems are stored under the /var/log directory.
Red Hat Enterprise Linux 7 has a system built for standard logging by
default. This logging system is used by many programs. There are two
services systemd-journald and rsyslog, which handle logging in Red Hat
Enterprise Linux 7.
The systemd-journald service collects and stores logs from a series of
sources, which are listed below.
Kernel
Early stages of the boot process
Syslog
Standard output and errors of various daemons when they are in the
running state
All these events are logged in a structured form in a centrally managed
database. All syslog-relevant messages are also forwarded by
systemd-journald to rsyslog for further processing.
rsyslog then sorts the messages based on facility (type) and priority and
writes them to persistent files in the /var/log directory.
Let us go through the types of logs stored in /var/log by the system and its
services.
Log file Purpose
/var/log/messages Most of the syslog messages are stored in this file with
the exception of messages related to email processing
and authentication, cron jobs and debugging related
errors
How rsyslog handles a log message is determined by the priority and facility
of the message. This is configured in the file /etc/rsyslog.conf and by
other .conf files in /etc/rsyslog.d. As a system admin, you can override
this default configuration and customize how rsyslog handles log messages to
suit your requirements. A message handled by the rsyslog service can show up
in many different log files. To prevent this, you can set the severity field
to none, so that messages directed to that facility will not be appended to
the specified log file.
Log file rotation
The logrotate utility in Red Hat Enterprise Linux 7 and other Linux variants
prevents log files from piling up in the /var/log file system and exhausting
disk space. When a log file is rotated, it is renamed with the date of
rotation appended. For example, an old file named /var/log/messages becomes
/var/log/messages-20161023 if it was rotated on October 23, 2016. After the
old log file is rotated, a new log file is created and the relevant service
is notified. The old log file is discarded after a certain period, four
weeks by default, to free up disk space. A cron job rotates the log files.
Log files are typically rotated weekly, but this may happen faster or slower
depending on the size of the log file.
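The rotation policy described above is controlled by small per-log configuration files. The following is a minimal sketch of one such file; the service name and log path are hypothetical, while the directives themselves are standard logrotate options.

```
# /etc/logrotate.d/example  (hypothetical service)
/var/log/example.log {
    # rotate once a week, keeping four old rotations
    weekly
    rotate 4
    # append the rotation date to old files, e.g. -20161023
    dateext
    # compress rotated files, and tolerate missing or empty logs
    compress
    missingok
    notifempty
}
```

Real policies shipped with the system can be inspected under /etc/logrotate.d/ and in the global /etc/logrotate.conf.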
Syslog entry analysis
In the system logs written by the rsyslog program, the oldest log message
appears at the top of the file and the latest message at the end. rsyslog
uses a standard format for log entries. Let us go through the format of the
/var/log/secure log file.
Feb 12 11:30:45 localhost sshd[1432]: Failed password for user from
172.25.0.11 port 59344 ssh2
The first column shows the timestamp of the log entry
The second column shows the host from which the log message was
generated
The third column shows the program or process that logged the event
The final column shows the actual message that was sent
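The four fields described above can be pulled apart with standard text tools. The sketch below feeds the sample line from the text through awk; the line itself is only an example, not live log data.

```shell
line='Feb 12 11:30:45 localhost sshd[1432]: Failed password for user from 172.25.0.11 port 59344 ssh2'

echo "$line" | awk '{ print "timestamp:", $1, $2, $3 }'
echo "$line" | awk '{ print "host:     ", $4 }'
echo "$line" | awk '{ print "process:  ", $5 }'
# Everything from field 6 onward is the message itself:
echo "$line" | awk '{ for (i = 6; i <= NF; i++) printf "%s ", $i; print "" }'
```

The same field positions apply to any rsyslog-formatted line, which makes awk and cut convenient for quick ad-hoc log analysis.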
Using the tail command to monitor log files
It is common practice for system admins to reproduce an issue so that error
logs for it are generated in real time. The tail -f /path/to/file command
can be used to monitor logs as they are generated. The command displays the
last 10 lines of the log file and then continues to print new log lines as
they are written. For example, to watch failed login attempts in real time,
you can use the following tail command.
[root@desktop ~]# tail -f /var/log/secure
…
Feb 12 11:30:45 localhost sshd[1432]: Failed password for user from
172.25.0.11 port 59344 ssh2
Using logger to send a syslog message
You can send messages to the rsyslog service using the logger command. The
command is useful when you have made changes to the rsyslog configuration
file and want to test them. You can execute the following command, which
sends a message that gets logged to /var/log/boot.log.
[root@desktop ~]# logger -p local7.notice “Log entry created”
Reviewing Journal Entries for Systemd
In this section, we will learn to review the status of the system and
troubleshoot problems by analyzing the logs in the systemd journal.
Using journalctl to find events:
The systemd journal logs data to a structured binary file. This data includes
extra information about each logged event. For syslog events, this includes
the facility and the priority of the original message.
When you run journalctl as the root user, the complete system journal is
shown, starting from the oldest log entry in the file.
Messages of priority notice or warning are highlighted in bold by the
journalctl command, and messages of higher priority are highlighted in red.
The key to using the journalctl command successfully for troubleshooting and
auditing is to limit its output to only the relevant entries.
Let us go through the various methods available to limit output of the
journalctl command to show only desired output.
You can display the last 5 entries of the journal by using the following
command.
[root@server ~]# journalctl -n 5
You can filter journalctl output by priority, which helps while
troubleshooting issues. You can use the -p option with the journalctl
command to specify the name or number of a priority level, which shows all
entries at that level or higher. journalctl knows the priority levels debug,
info, notice, warning, err, crit, alert, and emerg.
For example, you can use the following command to show only entries of
priority err or higher.
[root@server ~]# journalctl -p err
There is a journalctl equivalent of tail -f, which is journalctl -f. It lists
the last 10 journal entries and then keeps printing new entries in real time.
[root@server ~]# journalctl -f
You can also use other filters to narrow down the journalctl entries as per
your requirement. You can pass the --since and --until options to filter
journal entries by timestamp. Each option takes an argument such as today,
yesterday, or an actual timestamp in the format "YYYY-MM-DD hh:mm:ss".
Let us look at a few examples below.
[root@server ~]# journalctl --since today
[root@server ~]# journalctl --since "2015-04-23 20:30:00" --until "2015-05-23 20:30:00"
There are more fields attached to the log entries, which are visible only if
you use the verbose output option, -o verbose, with the journalctl command.
[root@server ~]# journalctl -o verbose
This will print out detailed journalctl entries. The following keywords are
important for you to know as a system admin.
● _COMM is the name of the command
● _EXE is the path to the executable of the process
● _PID is the PID of the process
● _UID is the UID of the user associated with the process
● _SYSTEMD_UNIT is the systemd unit that started the process
You can combine one or more of these fields to get the output you need from
the journalctl command. Let us look at the example below, which prints the
journal entries belonging to the systemd unit sshd.service with the PID 1183.
[root@server ~]# journalctl _SYSTEMD_UNIT=sshd.service _PID=1183
Systemd Journal Preservation
In this section, we will learn how to make changes to the systemd-journald
configuration such that the journal is stored on the disk instead of memory.
Permanently storing the system journal
The system journal is kept at /run/log/journal by default, which means that
when the system reboots, the entries are cleared. The journal is a new
implementation in Red Hat Enterprise Linux 7.
If we create the directory /var/log/journal, the journal entries will be
logged there instead. This gives us the advantage that historical data
remains available even after a reboot. However, even with a persistent
journal, data is not kept forever: the journal triggers a log rotation on a
monthly basis. Also, by default, the journal is not allowed to grow beyond a
disk accumulation of more than 10% of the file system it occupies, nor to
leave less than 15% of the file system free. You can change these values as
per your needs in the configuration file at /etc/systemd/journald.conf; once
the systemd-journald process restarts, the new values come into effect and
are logged.
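As a sketch, here are two journald.conf settings that control this behavior. The values shown are illustrative examples, not the defaults.

```
[Journal]
# Keep the journal on disk rather than in /run
# (assumes the /var/log/journal directory exists).
Storage=persistent
# Example cap on how much disk space the journal may use.
SystemMaxUse=500M
```

Storage= and SystemMaxUse= are standard journald.conf options; any value you set here only takes effect after systemd-journald restarts, as described above.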
As discussed previously, the journal entries can be made permanent by
creating a directory at /var/log/journal.
[root@server ~]# mkdir /var/log/journal
You will need to make sure that the owner of the /var/log/journal directory is
root and the group owner is systemd-journal, and the directory permission is
set to 2755.
[root@server ~]# chown root:systemd-journal /var/log/journal
[root@server ~]# chmod 2755 /var/log/journal
For this to take effect, you will need to reboot the system or, as the root
user, send the special signal USR1 to the systemd-journald process.
[root@server ~]# killall -USR1 systemd-journald
This makes the systemd journal entries persist across system reboots. You can
now use the command journalctl -b to limit the output to entries since the
latest boot.
[root@server ~]# journalctl -b
If you are investigating a system crash, you will need to filter the journal
output to show only entries from before the crash happened, which will
usually mean the last boot before the crash. In such cases, you can combine
the -b option with a negative number indicating how many boots to go back.
For example, to show output from the previous boot, you can use
journalctl -b -1.
Maintaining time accuracy
In this section, we will learn how to make sure that the system time is
accurate so that all the event logs that are logged in the log files show
accurate timestamps.
Setting the local time zone and clock
If you want to analyze logs across multiple systems, it is important that the
clocks on all those systems are synchronized. Systems can fetch the correct
time from the Internet using the Network Time Protocol (NTP). There are
publicly available NTP services on the Internet, like the NTP Pool Project,
which allow a system to fetch the correct time. The other option is to
maintain a high quality hardware clock to serve time to all the local
systems.
To view the current settings for date and time on a Linux system, you can use
the timedatectl command. This command will display information such as
the current time, the NTP synchronization settings and the time zone.
[root@server ~]# timedatectl
Red Hat Enterprise Linux 7 maintains a database of known time zones, which
can be listed using the following command.
[root@server ~]# timedatectl list-timezones
The names of time zones are based on the zoneinfo database that IANA
maintains. The naming convention uses the continent or ocean, followed by
the largest city in that time zone or region. For example, the US Mountain
Time zone is represented as "America/Denver".
It is critical to select the correct city, because regions within the same
time zone may maintain different daylight saving rules. For example, the US
mountain state of Arizona does not observe daylight saving time and
therefore falls under the time zone "America/Phoenix".
The tzselect command is used to identify the correct zoneinfo time zone
name. It prompts the user with questions about their current location and
suggests the most appropriate time zone. It does not automatically make any
changes to the system's settings. Once you know which time zone you should
be using, you can set it with the following command.
[root@server ~]# timedatectl set-timezone America/Phoenix
[root@server ~]# timedatectl
If you wish to change the current date and time of your system, you can use
the set-time option with the timedatectl command. The time and date can be
specified in the format "YYYY-MM-DD hh:mm:ss". If you just want to set the
time, you can omit the date.
[root@server ~]# timedatectl set-time 9:00:00
[root@server ~]# timedatectl
You can enable or disable automatic time synchronization over the Network
Time Protocol using the set-ntp option with the timedatectl command. It
takes an argument of true or false, which turns the feature on or off.
[root@server ~]# timedatectl set-ntp true
The Chronyd Service
The local hardware clock of a system is usually inaccurate. The chronyd
service keeps the local clock on track by synchronizing it with the
configured Network Time Protocol (NTP) servers. If the network is not
available, it adjusts the local clock using the clock drift it has
calculated and recorded in the drift file, whose location is specified in
the configuration file /etc/chrony.conf.
The default behavior of the chronyd service is to use servers from the NTP
Pool Project to synchronize the time, so no additional configuration is
needed. If your system happens to be on an isolated network, however, it is
advisable to change the NTP servers.
There is a value known as the stratum, which is reported by an NTP time
source and determines its quality. The stratum value refers to the number of
hops between the system and a high performance reference clock. The
reference clock itself has a stratum value of 0, an NTP server attached
directly to it has a stratum value of 1, and a system synchronizing with
that NTP server has a stratum value of 2.
You can use the /etc/chrony.conf file to configure two types of time sources,
server and peer. The stratum level of the server is one level above the local
NTP server. The stratum level of the peer is the same as that of the local NTP
server. You can specify one or more servers and peers in the configuration
file, one per line.
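A minimal sketch of such a source section in /etc/chrony.conf; the host names here are placeholders, not real servers.

```
# A time source one stratum above this machine.
server ntp1.example.com iburst

# Another chronyd machine at our own stratum, treated as a peer.
peer ntp2.example.com
```

The iburst option simply speeds up the initial synchronization by sending the first few requests in a quick burst.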
For example, if your chronyd service synchronizes with the default NTP
servers, you can make changes in the configuration file to change the NTP
servers as per your need. Every time you change the source in the
configuration file, you will need to restart the service for the change to take
effect.
[root@server ~]# systemctl restart chronyd
The chronyd service comes with a client utility known as chronyc. Once you
have set up NTP synchronization, you may want to verify that the system
clock synchronizes correctly with the NTP server. You can use the chronyc
sources command, or, for more detailed output, the command chronyc
sources -v with the verbose option.
[root@server ~]# chronyc sources -v
Chapter Ten: Archiving Files
In this chapter, we will learn how to compress files and archive them. We
will also learn how to extract compressed files. We will learn the different
compression techniques available in Linux, why compression is necessary,
and how it makes the life of a system admin easier.
Managing Compressed Archives
In this section, we will learn about the tar command and how it is used to
compress and archive files. We will also learn how to use the tar command to
extract data from existing archived files.
What is tar?
Compression and archiving of files is useful for taking backups of data and
for transferring huge files from one system to another over a network. The
tar command, one of the oldest and most common archiving tools on the Linux
system, can be used to achieve this. With the tar command, you can compress
an archive using gzip, xz, or bzip2 compression.
The tar command is accompanied by one of the following three actions.
● c, which is used to create an archive
● t, which is used to list the contents of an archive
● x, which is used to extract from an existing archive
The options that are commonly used along with the tar command are as
follows.
● f filename, which specifies the name of the archive file to operate on
● v, which stands for verbose and shows the list of files being archived
or extracted
Using tar to archive files and directories
If you are looking to create a tar archive, you need to ensure that there is no
existing archive with the same file name as the one that you intend to create
because the tar command will not give you any prompt and will overwrite the
existing archive file.
You will need to use the c option to create a new archive, followed by the f
option and the archive file name.
[root@server ~]# tar cf archive.tar file1 file2 file3
This command will create an archive file named archive.tar, which will
contain the files file1, file2, and file3.
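As a sketch, the whole sequence can be run end to end in a scratch directory; the file names here are placeholders.

```shell
# Work in a throwaway directory so no real files are touched.
mkdir -p /tmp/tardemo && cd /tmp/tardemo

# Create three sample files (hypothetical names).
echo one > file1
echo two > file2
echo three > file3

# c = create, f = name of the archive file to write.
tar cf archive.tar file1 file2 file3

# t + f lists what went into the archive.
tar tf archive.tar
```

Note that the archive path comes immediately after the f option, followed by the files to include.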
Listing contents of a tar file
The t and f options are used with the tar command to list the contents of a tar
file.
[root@server ~]# tar tf /root/etc/etc.tar
etc/
etc/fstab
etc/mtab
…
Using tar to extract an existing archive
The x and f options are used with the tar command to extract the contents of
a tar file.
[root@server ~]# tar xf /root/etc/etc.tar
Adding the p option ensures that all file permissions are preserved after
extraction.
[root@server ~]# tar xpf /root/etc/etc.tar
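A minimal round trip, using a scratch directory and a placeholder file, shows that x restores the files and that p keeps their permission bits.

```shell
# Build a small archive to extract (placeholder names).
mkdir -p /tmp/xdemo && cd /tmp/xdemo
echo secret > creds.txt
chmod 640 creds.txt
tar cf backup.tar creds.txt
rm creds.txt

# x = extract, p = preserve permissions, f = archive to read.
tar xpf backup.tar

ls -l creds.txt   # mode should still be rw-r-----
```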
Creating a compressed tar archive
There are three types of compression techniques that can be used with the tar
command. They are as follows.
● z, which is used for gzip compression, with a filename.tar.gz or
filename.tgz extension
● j, which is used for bzip2 compression, with a filename.tar.bz2 extension
● J, which is used for xz compression, with a filename.tar.xz extension
You can pass one of these options with the regular tar create command to get
the required compressed archive.
Example: If you wish to create a tar archive with a gzip compression, you can
use the following command.
[root@server ~]# tar czf new.tar.gz /etc
This will create a compressed archive of the content of the /etc directory and
name it new.tar.gz.
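Here is a sketch of the same idea against a scratch directory, so that access to /etc is not needed; the names are placeholders.

```shell
# Prepare some sample content to compress.
mkdir -p /tmp/gzdemo/conf
echo "demo setting" > /tmp/gzdemo/conf/app.conf
cd /tmp/gzdemo

# c = create, z = gzip-compress, f = output archive name.
tar czf new.tar.gz conf

# gzip -t checks that the compressed data is intact.
gzip -t new.tar.gz && echo "archive is valid gzip"
```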
Extracting a compressed tar archive
You can use the x option with the tar command and pass one of the
compression options along with it to extract the contents of a tar archive.
Example: If you wish to extract a tar archive file, which has a gzip
compression, you can use the following command.
[root@server ~]# tar xzf /root/etc/etc.tar.gz
This will extract all the contents from the compressed archive at
/root/etc/etc.tar.gz and place all its files in the home directory of the root user
since that is the present working directory of the root user.
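As a self-contained sketch of the extraction step (placeholder paths; the archive is created first so the example can run anywhere):

```shell
# Create a compressed archive containing one file.
mkdir -p /tmp/exdemo/src /tmp/exdemo/dest
echo payload > /tmp/exdemo/src/data.txt
tar czf /tmp/exdemo/data.tar.gz -C /tmp/exdemo/src data.txt

# Extract it; the files land in the current working directory.
cd /tmp/exdemo/dest
tar xzf /tmp/exdemo/data.tar.gz
cat data.txt   # prints: payload
```

This illustrates the point above: tar always extracts into whatever directory you are standing in when you run it.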
Conclusion
User data is among the most valuable assets in the world today. Compromised
data can result in huge losses for an organization. You can maintain a
computer at home with a reasonable level of security, such as simple
antivirus software. However, given the amount of data present on business
systems connected to the Internet, they are far more prone to attacks, and
therefore the effort needed to secure business machines is much greater than
for a personal computer. Linux operating systems have proved to be a secure
platform of choice for the server systems of big organizations. Given the
open-source nature of Linux development, security patches come out faster
for Linux than for commercial operating systems, making Linux an ideal
platform with respect to security.
With all this said, what comes into the spotlight is the job profile of a
Linux system administrator. There is a huge demand for this profile in all
the major organizations worldwide that work on Linux systems. This book
provides a beginner’s course to the Linux system and we hope that it will
encourage you to learn advanced Linux system administration in the future.
Linux Command Line
David A. Williams
Introduction
I decided to write this book to ease the hurdles that newbie Linux users face.
I want you to understand your computer in a better way than you did before.
This book contains a complete package for beginners to understand what the
Linux operating system is and how it differs from other operating systems.
Linux offers you a great learning experience.
This is the age of ubiquitous connectivity, and with it come dangers. It is
the age when you need a computer on which you have customized applications
and complete control. Would you hand over control of your computer system to
companies that keep making a profit by marketing how easy they have made the
use of computers? Or would you
rather have more freedom and control over your own computer? You deserve
the freedom to customize and control your computer. You deserve to build
your own software for your computer systems. That’s why you should prefer
Linux over all the other operating systems. With Linux on your computer,
you can direct your computer to do as you wish. It will act on your
commands.
You don’t have to be a master of programming to read this book. This book
is for beginners who are thinking of dipping their toes into the world of
Linux. What I expect from you before you go on to the next chapters is a
basic understanding of computers. It means that you should be able to tinker
with the graphical user interface and finish some key tasks. You should know
about booting, startup, logging in, files and directories. You should know
how to create, save and edit documents and how to browse the web. But the
most important thing of all is your will to learn new things. I’ll take you from
there.
This book also is for those who are fed up with other operating systems and
want to switch to a smarter operating system. I’ll tell you about Sylvia, my
friend, in later chapters, who was forced by her boss to learn Linux for
execution of key office tasks. I’ll tell the tale of how hard it was for her and
how she achieved such a gigantic goal.
If you have just come to know about Linux and want to switch to this unique
operating system, this book is definitely for you. Read it, learn the command
line and get started.
Before starting the journey, you should bear in mind that there is no
shortcut to mastering Linux. Like all great and exciting things, learning
Linux takes time. It may turn out to be challenging at times and may take
great effort on your part. The command line itself, however, is simple, and
the commands and their syntax are easy to remember.
This book is filled with material that is easy to read and practice, and
perfectly suits starters. It is like learning from a tutor. Every step is explained
with the help of a command line exercise and a solution for you to understand
the process.
● The first section explains what the shell is. You will learn about the
different components of the Linux system. In addition, there will be
various commands to enter in the shell window for your immediate
practice. This section also explains how to navigate the filesystem and
different directories in a Linux operating system.
● The second section will take you further into the command line by
explaining more about the commands. You will also be able to create
your own commands.
● The third section is packed with more details about the Linux
environment and system configuration. It will explain how GRUB
works on Linux. You will learn about the system startup, the init and
different runlevels of Linux.
● In the sixth section, you will learn the basics of shell scripting. This is
like writing your own software. You will learn about key scripting
techniques by which you can control your computer, like teaching your
computer about decision-making. It is almost akin to artificial
intelligence. Your computer will act on its own following a set of
instructions. Furthermore, you will learn about some important
constructs like the case statement, the break statement and the
continue statement.
● The final section will take you to the advanced level of shell scripting.
You will need an operational Linux system on your computer before starting
to read this book because it carries practical exercises and their solutions for
you.
You can install Linux on your computer. There are a number of distributions,
like Fedora, openSUSE and Ubuntu. Installing a distribution is very easy if
done the right way. Of course, there are some minimum specifications your
computer must meet, for instance at least 256 MB of RAM and around 6 GB of
free space on the hard disk. The higher the specifications of the system,
the better it is for the operations. It is recommended that you use a
personal computer and a wired internet connection. Wireless connections are
not as suitable for the job.
You can also run a Linux distribution from a live CD, so there is no need to
install it on the system. That is the less messy and easier route. On
startup, enter the setup menu and change the boot settings. Switch your boot
option to boot from the CD instead of the hard disk. After that, insert the
CD and you are all ready to use the Linux distribution. You can run both
Ubuntu and Fedora from a live CD.
What is Linux?
Before moving on to the complex parts, it is better that you sail through
the basic world of Linux first. For beginners, understanding Linux can be
perplexing if they don’t know what Linux actually is and what it offers.
Though it may appear complex and daunting at the start, in reality it is
easier to learn than you may think. The Linux operating system consists of
the following parts.
Bootloader: This manages the boot process of a computer. You might have
seen a splash screen coming and going in the blink of an eye before you boot
into the operating system.
The Kernel: This is often dubbed the core of the system. It manages the
entire system, allocating hardware resources to software when they are
needed. In addition, the kernel manages system memory, as well as the file
system.
The Shell: You can give directions to your computer by typing commands as
text. This command line is called the shell, and it is the part of Linux
that many find daunting. People just don’t want to venture into it. My
friend Sylvia never liked Linux until her boss pushed her into this uncanny
tech world. Once she got into it, she loved it more than Windows and Mac.
The control the command line gave her made her quite comfortable with it at
her office. The shell allowed her to copy, move and rename files with the
help of the command prompt. All the struggle she had to go through was
memorizing the text commands, and the rest of it was the easy part. The
command prompt does everything.
You can type in a program name to start it. The shell sends the relevant info
to the kernel, and it does the rest of the execution.
Getting Started
After you have installed Linux, create a regular user to run as your personal
account. Log in to the account.
The shell is an integral part of the command line. The command line, in
reality, is the shell itself. In simple words, it takes keyboard commands as
input and passes them to the operating system for execution.
Different Graphical User Interfaces (GUIs) use different terminal emulators
which allow you to put in text commands through the shell. Some newbies
get confused by the large number of emulators available online. To clear your
mind, just remember that the basic job of all terminal emulators is to give you
a gateway to the shell. One more thing: a terminal emulator is also known as
the shell window.
Unix also uses a shell; Linux ships an enhanced version of the same idea.
This enhanced shell is dubbed “bash” (the Bourne Again Shell), and it comes
as the default shell on most Linux distributions.
After you have launched the emulator, the screen will show you something
like the following:
[aka@localhost ~]$
This text is known as the shell prompt, in which aka is the username while
‘localhost’ denotes the machine name. Their values may differ. Usually, the $
sign accompanies you all the time you are in the shell window. If you see the
sign of #, it means you are logged in as a root user or the emulator you are
using offers superuser privileges. If this is your first time with Linux,
just think of the DOS command prompt on Windows. You will have a similar
experience while using the shell window. There is not much difference except
for the commands; the environment is more or less alike.
Now try your first command. Type date and press Enter. The shell will
analyze the command and respond by printing the current date and time.
You cannot use a mouse on the emulator. Instead, the keyboard arrows will
help you do some magic. Press the up arrow key to scroll back through the
history of your commands. The down arrow brings you back toward the latest
command, each earlier command disappearing as you move to the newer one.
Isn’t it like the DOS command prompt? The right and left arrows help you
position the cursor in order to edit the command if need be.
If you type ‘cal’ in place of ‘date’, a full month’s calendar will be
displayed on the screen.
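As a quick sketch, date also accepts a format string, so you can print just the fields you need:

```shell
# Default output: the full current date and time.
date

# +FORMAT prints selected fields, e.g. the four-digit year
# or an ISO-style date.
date +%Y
date +%Y-%m-%d
```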
Basic Commands
If you expect to operate Linux just like Windows, you are in for a surprise.
Linux is smarter. You get to work with a command set. It is just like
coding: enter a command to get the job done, and that’s it. Let’s see which
of these you can operate. The fun starts here. Get ready!
The # sign indicates that I am logged in as a root user. I think we are done
with that. Now let’s try the cat command and see what it can do. Pass it a
file name and a huge number of rows and columns may appear below the
command: cat displays the contents of the specified file.
Do you want to move files from one place to another? Check out the mv
command. It can also move multiple files at the same time into a particular
directory, such as one named lib.
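A minimal sketch with made-up file names: mv renames a single file, and can also sweep several files into a directory at once.

```shell
mkdir -p /tmp/mvdemo && cd /tmp/mvdemo
touch notes.txt a.txt b.txt
mkdir -p lib

# Rename (move) a single file.
mv notes.txt notes-old.txt

# Move several files into the lib directory in one go.
mv a.txt b.txt lib/

ls lib   # a.txt and b.txt now live here
```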
The cp command is used to copy files in Linux from one place to another, for
example from downloads to documents. Listing the copied file afterwards will
show its modified time.
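Here is a small sketch, with placeholder directories standing in for downloads and documents:

```shell
mkdir -p /tmp/cpdemo/downloads /tmp/cpdemo/documents
echo report > /tmp/cpdemo/downloads/report.txt

# Copy the file; the original stays where it was.
cp /tmp/cpdemo/downloads/report.txt /tmp/cpdemo/documents/

# ls -l shows the modification time of the copy.
ls -l /tmp/cpdemo/documents/report.txt
```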
Finally, the rm command is used to delete files from the system. Is the
recycle bin going through your mind? Forget it. You don’t have to search for
the file manually to dispatch it to the recycle bin. Just type rm and the
file name, and it will be removed.
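A minimal sketch; note that the deletion is immediate and permanent.

```shell
mkdir -p /tmp/rmdemo && cd /tmp/rmdemo
touch scratch.txt

# Delete the file; there is no recycle bin to recover it from.
rm scratch.txt

# -r removes a directory and everything inside it; use with care.
mkdir olddir && touch olddir/leftover
rm -r olddir
```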
The ln command creates new links. You can use it in the following forms:
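The two common forms are a hard link (a second name for the same file data) and, with -s, a symbolic link (a pointer that refers to the target by name). A sketch with placeholder names:

```shell
mkdir -p /tmp/lndemo && cd /tmp/lndemo
echo original > target.txt

# Hard link: another directory entry for the same data.
ln target.txt hard.txt

# Symbolic link: a small file that points at target.txt by name.
ln -s target.txt soft.txt

cat hard.txt   # prints: original
cat soft.txt   # prints: original
```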
Now the question on your mind may be whether the kernel only swaps when it
is out of memory. Well, no, it doesn’t. It swaps pages out ahead of time, so
the swap space stays in use. If you run a program that needs a block which
the kernel has swapped out, the kernel starts creating space for it. Yes,
you understood this: the kernel swaps out some other page to bring in the
required page.
If you want to check the memory on your Linux system, there are specific
commands to do that, covering both the swap and the RAM memory. Some of
them are shown below.
The free command: this displays a table summarizing total, used, and free
memory for both RAM and swap.
The /proc/meminfo file: another popular method to check the memory status
is reading the /proc/meminfo file.
Type in the command ‘$ cat /proc/meminfo’ and you will see a long list of
entries along with details of the memory consumed by each of them.
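For instance, you can pull individual lines out of /proc/meminfo with grep; MemTotal and MemFree are two of the standard fields:

```shell
# Show only the total and free physical memory lines.
grep -E '^(MemTotal|MemFree):' /proc/meminfo
```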
The MemTotal line shows the total physical memory the Linux server has,
while the other lines show how much is used and where it is used. The exact
set of memory pages can differ from system to system; your Linux system
might not show some of them.
You will easily get bored with Linux if you get stuck in a single spot. You
must be able to navigate through different files and directories to enjoy your
experience with this one of a kind operating system. You need to get an idea
of where you are, what things you have access to, and which commands will
get you there. First of all, you should know where you currently stand. To
learn this, use the pwd command.
[aka@localhost ~]$ pwd
/home/aka
In most cases, people land in the home directory of their accounts. Now, what
is the home directory? Let me explain it all. A home directory is a place
where you can store your files in addition to building up directories. You are
the boss here. It is like a real home for you. Feel free to store your data here
and also create multiple directories. In the command ‘pwd’, p means print, w
means working, and d is for directory.
Is it a tree?
It doesn’t look like one, but it is one, for sure. As in the Windows
operating system, Linux arranges all its files and folders into a
hierarchical structure. Yes, it would look like a tree if you could somehow
materialize it into something concrete. Hence the word ‘tree.’ You may be
fascinated to know
that things are mostly the same among different operating systems such as
Windows and Linux. The creators changed some names to differentiate these
systems for the ease of users. For example, what is called a directory in
Linux, is commonly known as a folder in Windows.
Let me take you further into the world of Linux directories. First of all, these
directories become a bit different from Windows’ folders in a way that they
can carry a large number of files as well as subdirectories. The very first
directory you are confronted with is the ‘root’ directory.
Well, disagreements happen in the world. If you are a Windows user, you are
most used to an individual filing system for the hard drive, USB storage
device and DVD drives. But Linux has defied this system. It offers a single
treelike filing system for all the external and internal storage devices.
Windows made things easier for the average user, detaching us from the
relish of complex technical commands. Windows presents the filing system, in
the form of a tree, in a graphical fashion. There are plus signs and minus
signs: with the pluses you can further expand the branches of the tree,
while the minuses let you collapse those branches and simplify the view. The
root always remains at the top.
If you are thinking about something as cool as a graphical outlook, stop
thinking right now. Linux makes it fun by letting you navigate through
different files and folders by typing text commands. In the Linux terminal
or shell window, you are always in one directory or another, called the
current directory. To know your exact position, the ‘pwd’ command helps you,
as I have explained earlier.
Do you want to know about the contents of a directory? Everybody does. I
believe you have already learned, and practiced, how to find the directory
you are currently in. Time to delve deeper into the system. You
should know how to explore the contents of the directory with a single
command. Either you can stay in the home directory and explore it, or you
can move to your favorite one. To explore the directory you are currently
working in, try the following command.
[aka@localhost ~]$ ls
Pictures Videos Games Documents
My friend Sylvia loves to jump from one folder to another at the speed of
light. While it is fun to do this in Windows, it is faster and easier to do in
Linux. Just remember the ‘cd’ factor. Sylvia learned it, though the hard way,
by jotting it down in her notebook and memorizing the entire text. Now she
just types ‘cd’ and the pathname of the directory she wants to jump into.
Let me confess it is not as simple as it looks. The pathname is like a magic
circle in which we jump to reach the directory we want to be in. Pathnames
fall into two categories: relative and absolute. Shall we deal with the
‘absolute’ first?
You can at any time confirm if you have actually moved to the desired
directory by applying the ‘pwd’ command.
Relative Pathnames: You can start from the current directory and move to
the parent directory with a simple command. Do you love dots? A single dot
(.) refers to the current working directory while the double dot (..) leads
you to the parent directory.
I want to go to /usr. You can do that either the long way or the shorter way. Let's first do this the longer but safer way, using the absolute pathname.
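As a quick sketch of the long way (the /usr directory exists on every Linux system), the absolute pathname starts from the root /:

```shell
cd /usr    # absolute pathname: begins with /, works from anywhere
pwd        # confirm where we landed
```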
Done. You have switched to the parent directory. Wait, we are not all done with this. Let's get back to the working directory once again, by two methods. We are currently in the parent directory; remember that. Your screen is currently showing the following.
[aka@localhost usr]$
Let’s do this in a shorter and faster way. Do you remember what single dot
was supposed to do?
Sylvia just hated the dots. She always put doubles where she had to insert a single, and singles where a double was needed. One day, while taking a quick walk by her office, I dropped in and found her in a mess over these simple commands. A friend in need is a friend indeed: I told her a magic formula for switching from the parent directory back to the working directory. Let's try that. Not only are the dots omitted, but so are the slashes.
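Here is a minimal sketch of the dot shortcuts, assuming we start out in /usr:

```shell
cd /usr
cd ..          # double dot: up to the parent directory
pwd            # now in /
cd usr/lib     # no dots, no leading slash: a relative path from here
pwd            # now in /usr/lib
```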
A trick of the trade: cd is an abbreviation of change directory. If you try out just the cd command on its own, it will bounce you back to the home directory in which you first started after you logged in.
[aka@localhost lib]$ cd
[aka@localhost ~]$
More on the ‘cd’ command: Let’s try out the (-). It is supposed to change the
working directory to the one you left behind. Suppose you are currently in lib
after switching from the ‘usr’.
[aka@localhost lib]$ cd -
/usr
[aka@localhost usr]$
It is now time to learn more commands as you already know the basics. You
now know that ls is used to list directories. Also, you can view the content of
the directories.
[aka@localhost ~]$ ls
bin games include lib lib64 libexec local sbin share src tmp
In the above output, you can see the different directories that live inside this directory. Now let's go through some useful options for ls. (By the way, whenever you see # as the prompt in place of $, that is the face of the superuser.)
The first option is -a, with the long form --all. It also lists the hidden entries, whose names begin with a dot.
[aka@localhost ~]$ ls -a
. .. bin games include lib lib64 libexec local sbin share src tmp
The second option is -d, which can be extended to --directory. It makes ls report on the directory itself rather than listing its contents. You can pair it up with -l in order to view the directory's own details.
The third option is -1 (the digit one). It prints the results one entry per line. Don't confuse it with the lowercase -l, which produces the detailed long listing.
[aka@localhost ~]$ ls -1
bin
games
include
lib
lib64
libexec
local
sbin
share
src
tmp
The fourth option is -S. It sorts the results by file size; here S denotes the word size. If you remember each option by assigning it a full, relevant and familiar word, it will help you memorize the commands faster than normal.
The fifth option is -h, with the long form --human-readable. It displays the sizes of directories and files in a human-readable form, such as K, M or G, instead of plain bytes.
The sixth option is -r, with the long form --reverse. It displays your results in reverse, which here means descending alphabetical order.
[aka@localhost ~]$ ls -r
tmp src share sbin local libexec lib64 lib include games bin
The seventh option is -t. As the letter 't' suggests, this option sorts the results by modification time.
[aka@localhost ~]$ ls -t
sbin lib lib64 libexec share include src local games tmp bin
You can see that the order of the entries has changed: they are now listed by modification time.
How to make out a File’s Type
Now that we are going deeper into the system, it is time to learn how to determine the type of a particular file. Linux does not make this as easy for users as Windows does: there are no familiar file name extensions you could easily guess from, which makes it rough and tough. To determine a file's type, type in the file command followed by the file's name. When you execute it, it prints a short description of what the file contains.
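A small sketch of the file command at work, on two files that exist on nearly every Linux system:

```shell
file /etc/passwd    # reports a text file
file /bin/ls        # reports an executable binary
```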
In Linux there are lots of files. As you explore the system, you get to know
more and more types of the files. So, stay tuned! Keep reading.
The Less Command
There is another important command in the arsenal of Linux: the less command. It is a full program for anyone interested in viewing text files. This command lets you page through the human-readable files on your system, tracking them down in no time and letting you view them screen by screen.
Wait! As I write, Sylvia knocks on the door and jumps into the lounge where I am working on my laptop. She has been curious about learning the Linux command line since the day her boss made it mandatory for her. I must admit that what was once a big burden on her nerves and shoulders has now turned into a passion. She loves the way Linux lets her wrap up office work in less time; the time she saves at the office allows her to drop in at her friends' homes. I hope you have stopped wondering why she is here in my home. Let's welcome her.
Sylvia, when learning the less command, was always curious: why do we need this command in the first place? In fact, why do we need to see text files at all? The answer is that many files are not just ordinary files. They are system files that hold configuration settings, and they are stored on your system in plain-text format.
If you are thinking like Sylvia, you might say that's not a big deal: why would I ever have to view the configuration files? There is an answer to that as well. This command shows you not only the system files but also lots of real programs that your system is using. These programs are technically called scripts, and they too are always stored in this plain-text format.
I have been using the word text for quite some time now. Information can be displayed on a computer in a variety of forms: pictures, videos, numbers and letters. Internally, though, the language the computer understands is numerical, such as binary digits.
If we look at text as a representation system, it is simpler and easier than all the other methods: numbers are converted into characters one by one, and the human brain processes information in this form quickly. The result is also very compact. This plain text is different from the text produced by word processors. I am telling you this because in Linux a large number of files are stored on the system as plain text files. Let's see how we can use the less command.
Things to Remember
You can use the Page Up key to scroll back a page. Use 'b' instead if you don't like to use Page Up.
Use the Page Down key to scroll on to the next page. For the same purpose, you can also use the spacebar.
The up arrow and down arrow keys are used for moving to one line above
and below respectively.
If you are in the middle of the text and want to jump to the end of the text
file, press G. To reach the start of the text file, press 1G or g.
‘h’ can be used to get the help screen displayed.
‘q’ is always there to quit the less command.
Common Directory Names in Linux
‘/’ denotes the root. There are usually no files in this directory.
‘/lib’ denotes the library directory, which stores library files.
/root is the home directory of the root user. The root user is also called the super user.
/etc contains configuration files
/bin stores GNU user-level utilities. Also known as the binary directory.
/opt is used to store optional software.
/dev is known as the device directory.
/usr is the user installed software directory
/var is the variable directory. It changes frequently.
Chapter 2: Exploring the Realm of Commands
I hope that by now, you have gained sufficient knowledge about how you can
navigate through the shell window using simple commands. That’s easy, you
see. A little bit of extra consideration will help you get through most of the
work in no time which in the Windows operating system might have taken
hours of exhaustive work. One more interesting thing is that the commands
are easy to memorize. You can also create your own commands and that’s
what we will learn in this chapter. Also, by getting acquainted with more
commands, you will feel more comfortable in using them in the real shell
window.
So what exactly is a command? It can be an executable program, just like the ones software engineers produce when working in C, Python or Ruby. It can be a shell builtin, or a shell function: a kind of shell script incorporated into the environment. Or a command can be an alias, which is your self-made command.
Do you know how to find out a command's type? The type command tells you exactly that, and the which command shows the location of the executable that would run. Let's see how it is done.
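A quick sketch; type is itself a shell builtin, and which searches the directories in your PATH:

```shell
type cd      # reports: cd is a shell builtin
type ls      # an alias or an executable file, depending on your setup
which ls     # prints the path of the ls executable
```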
You have learned lots of commands and their uses. It is time to know the commands that can get you out of the mess if you are stuck in the middle of the course and don't know what to do. Sylvia loved this feature while she was in the learning phase. One day, when I was planning to celebrate my daughter's birthday at a beach-side resort, Sylvia phoned me. She had panicked after messing up some command at her office computer: she tried the help command, but to no avail. Actually, the help command is only made for the shell built-ins; what it offers is a quick look at what a builtin does, right inside the shell window. For everything else there is the man command. Its syntax is simply man program.
Here, program refers to the command that you want to read about. A man page is laid out with a title, a description that explains why you should use the particular command, and lists of the command's options along with explanations of each. Let's dissect man printf.
There will be the name of the document, below which comes the SYNOPSIS heading, and then the description. This section carries in-depth details about the options available for the particular program. For printf, --help and --version are available.
That was the full page of the manual. It takes a lot of time to skim through an entire page to locate a particular piece of information. Don't worry: you can type a targeted command to read a particular section of the manual, because the man command lets you specify the section number. Let's see how to do it.
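For example (assuming the manual pages are installed on your system):

```shell
man printf      # the first match, usually section 1: the shell utility
man 3 printf    # section 3: the C library function printf(3)
```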
When you execute the command, you will reach the specific section of the
man page.
If you don't know what a specific command has to offer, you can run the whatis command to get a brief description of it. In most cases the description is a one-liner, but a self-explanatory one, so that you learn the function of the command faster.
As I have already told you, Linux stores files in a tree-like hierarchical structure. The info files in Linux are also arranged as a tree, built out of individual nodes, each containing a single topic. There are also hyperlinks for easy navigation to other nodes. They are not dyed in blue as in Windows; instead, they can be identified by their leading asterisk. Move the cursor over one and strike the Enter key to open it.
The info command is very interesting to use, and you can use it from multiple angles. If you want to know the physical location of a command's info file, the -w (--where) option prints it: info -w program.
For a normal human being, using Linux is not a child’s play. It is not that it is
something ethereal which the earthmen don’t understand, but it is its
commands that make it nothing less than a horrible thought. It is almost a
nightmare for me to even think that I had to memorize each command to be a
Linux pro. If you are just like me and cannot remember a ton of commands
that Linux has to offer, perhaps you need a comprehensive lecture on this
very command. It just saves me from lots of hassle of remembering so many
commands.
The skilled hands behind the development of Linux also understand this.
That’s why they have created the apropos command. Just a keyword of a
command is enough for apropos to transport it in front of your eyes. This
command helps users to be more efficient and effective when they are at the
shell window. Shall we see the syntax now?
It will display all the entries related to the keyword email. To add more spice, you can use the -d option, which will make the terminal also return the paths and man directories pertaining to the keyword you enter.
The list goes on. I have just written a little less than half of the details that the
window returned in response to the apropos command.
Are you a Google user? If you are, you know that Google does not leave you struggling to think of the exact keywords you need to search: you enter a word, and a list drops down containing the most relevant search phrases. Most often, I go with one of the phrases from the list. Just imagine if you could avail yourself of a similar facility in Linux. Thankfully, you can, with the -e option. Let's jump to the syntax right away.
You will have a list of commands starting from the word ‘compress.’
We can do that with the help of the command we have just tried and tested.
Think about a possible name and confirm if it has not been taken.
So, xin is not there. Here is the syntax to create the alias.
[aka@localhost ~]$ alias xin='cd /usr; ls; cd -'
Just remember the string, and pick a name that has not already been taken. Execute the command. Great! It seems you have created your own command.
Let’s try the new command.
If you desire to take a look at all the aliases in the environment, the alias command on its own does the job.
Your screen may show a slightly different result, depending on the environment you are using. Just be sure you can see the aliases that are present in your system.
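The whole alias life cycle fits in three lines; xin is just our made-up name from above:

```shell
alias xin='cd /usr; ls; cd -'    # create the command
alias                            # list every alias defined in this shell
unalias xin                      # remove it when you are done
```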
The I/O Saga
We are going to learn about input and output in the shell window. You will be using metacharacters: they are responsible for rerouting the I/O between files, and in addition they are used to create command pipelines.
Standard input: it is given the number 0. By default, your keyboard provides the input. In short form it is written (stdin).
Standard output: it has the number 1. Your guess is right: the output is displayed on the screen before your eyes. No doubt about that, you see. In short it is written (stdout).
Standard error: give it the number 2. If you mistype something or a command fails, the bash shell prints an error message on this stream. It is written (stderr) in short form.
There is nothing complex about the redirection process: it simply changes where input comes from and where output goes, in place of the default keyboard and screen. You will understand what it is all about.
Can redirection overwrite an existing file? Yes, it can. Commands in Linux definitely are like magic wands: with a single, simple command you can overwrite a file. Let's do that.
You can see that even though the command complained that its file was not found, the output file was still overwritten and erased, because redirection truncates the file before the command even runs. When I ran the cat command after that, there was no output.
I don't want my files to be erased like that: do you mean it? Of course you mean it, and there is a way to prevent it. We have the noclobber option to guard against deletion due to overwriting of files. Let's learn the syntax.
[aka@localhost ~]$ echo I love blue sky. > mydocuments.txt
[aka@localhost ~]$ cat mydocuments.txt
I love blue sky.
[aka@localhost ~]$ set -o noclobber
[aka@localhost ~]$ echo the sky is azure. > mydocuments.txt
sh: mydocuments.txt: cannot overwrite existing file
[aka@localhost ~]$ set +o noclobber
[aka@localhost ~]$ echo the sky is azure. > mydocuments.txt
[aka@localhost ~]$ cat mydocuments.txt
the sky is azure.
The set -o noclobber command kept the file from being overwritten, while set +o noclobber reverted things to the prior position.
The noclobber option can also be overruled for a single command: the >| sign does the magic, and the file is overwritten despite noclobber.
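A sketch of forcing a write past noclobber; notes.txt is a hypothetical file name:

```shell
set -o noclobber
echo first  > notes.txt     # fine: the file did not exist yet
echo second >| notes.txt    # >| overrules noclobber and overwrites anyway
cat notes.txt               # prints: second
set +o noclobber
```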
If you want to redirect output into a file without overwriting the file's existing content, there is an operator for this too. As I said, we have magic signs for the purpose: the >> operator. This process is known as appending, which makes sense, as the new content is joined onto the end of what is already there.
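A minimal sketch of appending, reusing the mydocuments.txt example from above:

```shell
echo I love blue sky.   > mydocuments.txt    # > creates or overwrites
echo the sky is azure. >> mydocuments.txt    # >> appends to the end instead
cat mydocuments.txt                          # both lines survive
```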
| is the pipe operator. It is like a conduit that passes one command's output to another place, just as if you were transferring some physical commodity. Executing a pipeline pushes one command's output into the next command as its input. Nothing has to appear on the display screen in between, nor is the dull exchange through temp files required; a pipe does all the work in the background in complete silence. One important thing to remember is that the information flows in just one direction: you can only send data down a pipe, never receive it back from the same pipe. I am using the less command because it can display the standard output of almost any command. Test it and see the results; the results may differ from user to user.
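A small sketch: the listing produced by ls flows straight into less through the pipe, with no temporary file in between:

```shell
ls -l /usr/bin | less    # page through a long listing one screen at a time
```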
It is important to mention here that some users consider cat to be similar to the DOS TYPE command, but the reality is different. Files are displayed by the cat command without being paged first. The above command displays the contents of the ls-output.txt file as one continuous stream of text rather than page by page, which speeds up the process; paging takes considerable time and is more complex to display on the screen. You can pass multiple files to the cat command as arguments, and the command can also be used to join multiple files together. Suppose we have multiple files named snazzy.jpeg.01 snazzy.jpeg.02 ………… snazzy.jpeg.99
Now you want to join all these files into one. Let's try the following command, and we will have the pieces joined in the right order. The cat command also welcomes standard input, which by default is linked to whatever we type on the keyboard.
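A sketch of the join; the shell's wildcard expands the numbered names in the right order before cat ever sees them:

```shell
cat snazzy.jpeg.* > snazzy.jpeg    # pieces .01 through .99, joined in sequence
```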
You have just created a file by using the cat command, and you have put some content into it. When you are done with the above two steps, press Ctrl+D to mark the end of the file. You can bring the content of the file back with the help of the same command.
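As a sketch, the two steps look like this; in a script, a here-document stands in for the text you would type before pressing Ctrl+D at the keyboard (mytext.txt is a hypothetical name):

```shell
cat > mytext.txt <<'EOF'
The sky is blue.
EOF
cat mytext.txt    # bring the content back: The sky is blue.
```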
[
[
ack
ack
addr2line
addr2line
ag
ag
alias
alias
animalsay
animalsay
animate
animate
annocheck
annocheck
appliance-creator
appliance-creator
applydeltarpm
applydeltarpm
appstreamcli
appstreamcli
apropos
apropos
ar
ar
arch
arch
aria2c
aria2c
arpaname
arpaname
as
as
aserver
aserver
aulast
aulast
aulastlog
aulastlog
ausyscall
ausyscall
authvar
:
[aka@localhost ~]$
This is how filters can help you produce a single sorted list. It is the magic of the sort command that we put into the pipeline; otherwise, there would have been a separate list for each directory. It saves time and is a more efficient way to view multiple directories.
The grep
This is also one of the most powerful programs, used to find text patterns inside files. Let's look at how it is used.
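A sketch, assuming you have first captured a listing into a file called ls-output.txt:

```shell
ls /usr/bin > ls-output.txt
grep zip ls-output.txt       # print only the lines containing the pattern zip
```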
The head/tail
These commands are used to retrieve a specific part of a file. As their names suggest, you can print the head or the tail of a particular file: the first few lines or the last few lines, ten from the start or ten from the end by default. The number of lines can be altered with the -n option. Let's scroll to the syntax of the command.
[aka@localhost ~]$ head -n 8 ls-filename.txt
With the help of the tail command's -f option, you can view files in real time and review the progress of log files while they are in the midst of being written. The following command helps you review the messages file.
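A sketch of both ends of a file, plus the real-time follow mode; /var/log/messages is the classic Red Hat log file, and Ctrl+C stops the following:

```shell
head -n 8 ls-filename.txt    # the first eight lines
tail -n 8 ls-filename.txt    # the last eight lines
tail -f /var/log/messages    # keep printing new lines as they are written
```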
This program, tee, has a special job: it reads the standard input and copies it both to the standard output and to a file. As the data flows downwards, you can capture what is being passed through the pipeline. Let's see the syntax.
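Assuming the program in question is tee (named after a T-shaped pipe fitting), a sketch:

```shell
ls /usr/bin | tee ls.txt | grep zip    # save the full listing to ls.txt and still search it
```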
The uniq command is used to weed out duplicate lines from sorted input, which is why uniq and sort are usually applied together. To read more about uniq, please take a look at its man page; I have already briefed you on man pages. With the -d option, the following command does the opposite: instead of removing the duplicates, it reports them. You can see this by running it on the terminal.
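The listing that follows was most likely produced by a pipeline of this shape, where -d tells uniq to print only the duplicated lines, each one once:

```shell
ls /bin /usr/bin | sort | uniq -d | less
```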
[
ack
addr2line
ag
alias
animalsay
animate
annocheck
appliance-creator
applydeltarpm
appstreamcli
apropos
ar
arch
aria2c
arpaname
as
aserver
aulast
aulastlog
ausyscall
authvar
auvirt
awesome
awesome-client
awk
axel
To drop the duplicates from the listing, remove the -d option from the syntax. Now try it on.
[root@localhost ~]# ls /bin /usr/bin | sort | uniq | less
The wc Command
As the abbreviation suggests (word count), this command counts the words in a file. It also counts how many lines there are in the file, in addition to the bytes.
It will tell you the number of lines, the number of words and the total bytes of the mydocuments.txt file. Unlike in the Windows operating system, you don't have to open each file to check its weight in bytes; a single command, or a pipeline ending in wc, retrieves the information you need.
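A sketch of both uses, standalone and at the end of a pipeline:

```shell
wc mydocuments.txt     # lines, words and bytes of the file
ls /usr/bin | wc -l    # how many entries came down the pipeline
```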
Now that you are well aware of the shell, its usage, and some of the basic commands for executing different tasks, I will move on to the kernel. You will get to know how the kernel works: how it boots and how it starts.
Let’s take a look at the normal booting process of the kernel. I have divided it
into some basic simplified steps.
1. The first step is the BIOS. In this phase, the integrity of the system is thoroughly analyzed. The BIOS locates the boot loader and executes it; the boot loader can be on a CD, a floppy or a hard drive. After the BIOS loads it up, the MBR boot loader has full control. Wondering what MBR is? Let's jump to the next step to find out.
2. MBR is the short form of Master Boot Record. The MBR lives in the 1st sector of the hard drive or any other bootable disk; that drive is titled /dev/hda or /dev/sda. The MBR is a very light program when it comes to size: its major component is the primary boot loader, which is just shy of 512 bytes, and those few bytes are mainly about GRUB. Sylvia learned it as a chain-like system: the BIOS loads the MBR, then the MBR loads GRUB.
3. What is GRUB? It is the short form of Grand Unified Bootloader. GRUB is a loader for kernel images. If there is one kernel image on your system, GRUB takes it as the default image and loads it right away. If there are multiple images, GRUB brings up a splash screen listing them, and you have got to choose one for loading. If you don't go for one, GRUB loads the default image as per the settings in its configuration file.
4. The fourth step is the kernel itself. The kernel runs the /sbin/init program, which gets 1 as its process id. Alongside init there is the initrd, which expands to initial RAM disk: the kernel uses it for a short window of time, from being booted until the mounting of the real root filesystem.
5. After init starts running, you reach the user space start. The init takes
over from there by running the system.
The Startup
A wide range of diagnostic messages is produced at boot time, originating first from the kernel and then from the processes once init takes over the operations of the system. The messages don't run in a neat sequence, and they are not consistent either; for a new user they can be confusing and even misleading. Some Linux distributions try to hide them with splash screens and changes to the boot options. You can see and read these messages with the dmesg command. Let's see; I have logged in as a superuser. Let's look at the result of the command.
It makes use of the ring buffer of the kernel. The above is a short version of
what you will see on the page. It may not be exactly the same as above, but it
will be nearly the same. Let’s explore the dmesg command with different
options to get more information.
[aka@localhost ~]$ dmesg | less
You can use other options with the dmesg command. Let’s take a look at
different options that can be explored in Linux. I will not go into the details
of what you will see on your screens. But I will definitely tell you what can
you expect by entering each command. Let’s see the syntax. This time I have
switched the user.
Kernel initialization in Linux consists of some key steps. It starts with the examination of the Central Processing Unit (CPU). After that, memory is examined; then devices are discovered, the root filesystem goes operational, and in the end user space starts.
The Parameters: When the kernel starts running, it receives its parameters from the boot loader. These parameters guide the kernel on how it should start; they carry specifications on the diagnostic output, as well as options for the device drivers. The parameters can be viewed with a dedicated command. Let's look at the syntax.
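On a running system, the parameters the kernel received can be read back from /proc/cmdline; a sketch:

```shell
cat /proc/cmdline    # e.g. BOOT_IMAGE=... root=UUID=... ro quiet
```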
Not all parameters are critical, but one is: the root parameter. The kernel is unable to operate without it, because without it it simply cannot locate init. What happens if you add something to the parameters that the kernel doesn't understand? Well, Sylvia did, for sure: she added -s to the parameters. The kernel saves any phrase it does not recognize and passes it on to init, and init interprets -s as the instruction to start in single-user mode.
The bootloaders: A bootloader is a program responsible for the tasks necessary at boot time; above all, it loads the operating system. In the world of technology, a bootloader is also called a boot manager. As I have already explained, this program is kickstarted after the BIOS is done performing the initial checks on the hardware. Remember that the kernel and its parameters lie on the root filesystem.
Let’s take a brief look at what is the job of a bootloader.
A bootloader has to select one kernel from multiple options. Secondly, with
the aid of a bootloader, users can manually override names of kernel images,
as well as parameters. It also allows you to switch between different kernel
parameters.
Bootloaders are now not the same as before. They have become more
advanced with added features like the option of history, as well as menu.
However, experts still believe that the basic job of the bootloader is to offer
different options regarding the kernel image as well as selection of
parameters.
Let’s take a look at multiple bootloaders that you may come across when you
enter the field. GRUB, LOADLIN, LILO, coreboot are some of the most
popular bootloaders of this age. Let’s explore GRUB.
Linux takes the tough course. It makes users learn the intricate details about
complex things in the operating system. It reroutes our minds toward the
background technology of the operating system. Let’s move on to the
bootloader itself to understand how it processes. So, how does it work?
You make your computer boot and as I have explained, the BIOS, after
running some checks, passes the control of the machine to the boot device
that differs from user to user. It can be a floppy or a hard disk drive. We
already know that the hard disk has different sectors. The first of which is
known as Master Boot Record (MBR).
GRUB replaces the MBR code with its own. It can navigate the file system, which lets users select the desired kernel image as well as its configuration. GRUB is the short form of Grand Unified Boot Loader. Just explore the menu of GRUB to learn more about it. Mostly the bootloader is hidden from the users; in order to access it, hold the SHIFT key immediately after the BIOS startup screen shows up. There is also the automatic boot timeout: to stop it, press ESC to disable the countdown, which starts right after the GRUB menu appears. If you want to view the configuration commands of the boot loader, press e on the default boot option.
GRUB can search all partitions for a UUID to locate the kernel. If you want
to see how the GRUB works on the Linux operating system, just hit C while
you are at the editor and you will reach the GRUB prompt. Something similar
to the following will appear on the screen of the editor.
grub>
This is the GRUB command line. Get started by entering any command here.
In order to get started, you need to run a diagnostic command.
grub> ls
(hd0) (h0,msdos1)
The result will be a list of devices that are already known to the GRUB. You
can see that there is just one disk, the hd0, and one partition, (hd0,msdos1).
The information we can get from this is that the disk contains an MBR
partition table.
To delve deeper into more information, enter ls -l on the GRUB command line.
grub> ls -l
This command tells us more about the partitions on the disk.
You can easily navigate files on the GRUB system. Let’s take a look at its
filesystem navigation. First of all, you need to find out about the GRUB root.
You should use the ls command in order to list the files as well as directories
in the root. Let’s see how to do that.
grub> ls ($root)/
You may expect a list of directory names that are on that particular partition. The filesystem's directories can be listed in forms such as etc/, bin/ and dev/.
The files in the directory /etc/grub.d are all shell script programs. Combined, they make up the central configuration file, grub.cfg. That is the default shape of the GRUB configuration. Now let's move on to how you can alter it. The short answer is that you can do so easily and customize it according to your needs. You know that a central file for configuration exists; you just have to create another file for customization purposes. You can name the file custom.cfg, and it will be added to the configuration directory of GRUB. You can find it at this path: /boot/grub/custom.cfg.
If you want to write a fresh configuration and install it on the system, you can do so by writing the configuration to the directory with the help of the -o option. Let's see the syntax to conclude the configuration process.
# grub-mkconfig -o /boot/grub/grub.cfg
For users of the Ubuntu Linux distribution, things are pretty simple and easy. All you need is to run the following command.
update-grub
Before you move on to the installer, please read what this bootloader requires
for installation purpose. Please determine the following.
GRUB directory, as seen by the current system, should be properly
analyzed. /boot/grub is the directory’s name usually. The name can be
something else if you are installing GRUB on another system.
You have to determine what device the GRUB target disk is using.
As GRUB is a modular system, it needs to read the filesystem which is
contained in the GRUB directory.
#grub-install /dev/sda
In this command, grub-install is the command used to install the GRUB, and
/dev/sda is the current hard disk that I am using.
If you do it wrong, it can cause many problems for you, because of its power to alter the bootup sequence of the system; this increases the importance of this command. You need to have an emergency bootup plan in case something actually goes wrong along the way.
You can also install GRUB on an external device. You have to specify the boot directory on that device first. For example, if the external drive's filesystem is mounted under /mnt, your system will see the GRUB files in /mnt/boot/grub. You have to guide the system by entering a command like the following.
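A sketch of the shape that command takes; the device name /dev/sdc and mount point /mnt are hypothetical, and pointing this at the wrong disk can leave a system unbootable, so treat it with care:

```shell
# hypothetical target: external disk /dev/sdc, its filesystem mounted at /mnt
grub-install --boot-directory=/mnt/boot /dev/sdc
```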
We now know the drill of how the Linux operating system starts. The BIOS kicks off at the start. When it has run the necessary checks on the device, it initializes the device's hardware. After that, it walks through the boot order to find the boot code.
When the BIOS or firmware has found the boot code, it loads and executes it. Then it is time for the start of GRUB.
GRUB starts to load on your Linux operating system.
First of all, the core kickstarts. Now GRUB is in a better position to
access your hard disks and the filesystems that you have stored on the
system.
Then GRUB moves on to identify the partition from which it has to
boot. The next step is to load the configuration.
Now, as a user, this is the time when you have the power to alter the
configuration if you need to.
If you don’t want to alter it, the timeout will end shortly, and GRUB
will execute the default or altered configuration.
While trying to execute the configuration, GRUB loads some additional
code in the form of modules in the partition from which it has booted.
Now GRUB runs the boot command and executes the kernel.
After the kernel starts, the following steps take place:
init starts.
Low-level services such as udevd and syslogd start.
The network is configured.
Services like printing go on.
Next come the login shells and the Graphical User Interface (GUI).
Other apps that you might have installed on the Linux operating
system start running.
The init
This is a user space program which can be located in the /sbin directory. You
can locate the directory if you run the PATH command. PATH is a variable.
I’ll discuss this variable in-depth in the upcoming chapters. System V init,
Upstart and systemd are the common implementations of init across Linux
distributions. Android has its own unique init.
System V init triggers a startup sequence that runs just one task at a time
while the system starts up. This system appears easy, but it seriously hampers
boot performance. In addition, advanced users abhor this kind of
simplified startup. The system starts up following a fixed set of services, so if
you want any new service to run on the system, no standardized way is
available here in order to accommodate any new components. So that’s a
minus.
On the other hand, Upstart as well as systemd rectify the performance issue.
They accommodate several services to take a start in parallel to pace up the
boot. The systemd is for people who are looking for a goal-oriented
system: you define a target, and systemd resolves the dependencies needed
to reach it. Another attractive option is that you can delay the start of a
particular service until you want it to load.
Upstart receives different events and runs different tasks that result in the
production of more events. Consequently, Upstart runs more jobs. As a user,
you can get an advanced way to track services. These latest init systems are
free of scripts.
Runlevels in Linux
When you boot a Linux operating system, it moves into the default runlevel
and also tends to run the scripts that are attached to the specific runlevel.
Runlevels are of different types and you can also switch between them off
and on. To quote an example, there is a runlevel specifically designed for
system recovery and tasks related to maintenance of the operating system.
Distributions that use System V init, as Ubuntu historically did, run the
scripts attached to the default runlevel at boot.
We are well aware of the fact by now that init, that is launched by Linux in
the start, in turn, launches other system processes. If you want to control what
init should launch and what not, you should edit the startup script which the
init reads. An interesting thing about init is that it has the ability to run
different runlevels like one for networking, another for the graphical user
interface. Switching between the runlevels is also easy. All it takes is a single
command to jump from the graphical desktop to the text console. It saves lots
of time. Let’s move further into the runlevels to understand them better.
System V Runlevels
A runlevel determines which set of processes, such as crond, runs on the
system. Runlevels are numbered from 0 to 6. A system starts with a specific
runlevel but shuts down using another runlevel that has a particular set of
scripts in order to close all the programs properly and also to stop the
kernel. If you have a Linux system in place, you can enter the following
command and check which runlevel you are in at the moment.
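On most System V style systems, the check can be done with either of these commands:

```shell
# Print the previous and current runlevel
runlevel

# Or show the runlevel together with the date and time it was entered
who -r
```

who -r is the one that also prints the date and time, matching the output described here.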
Your screen will tell you the stage of your runlevel on a scale from 0 to 6,
and also the date and time at which the runlevel was created on the system.
Runlevels in the Linux operating system have many jobs to do, such as
differentiating among shutdown, startup, the text console mode and the
different user modes. The seven numbers from 0 to 6 can be allocated to
different purposes. Linux operating systems that are based on Fedora allocate
5 to the graphical user interface and 3 to the text console.
I know you are finding these runlevels quite interesting, but they are now
going obsolete.
systemd
The systemd can help you better handle the system bootup and also
aids you in handling services like cron and inetd. One feature that users love
is its ability to delay the start of services until they are imperative. The name
systemd follows the Unix convention of naming daemons with a trailing “d”.
The systemd is now replacing init, and there are some pretty solid
reasons behind this shakeup. Let’s see how init starts the system.
With System V init, each task starts only when the previous one has returned
successfully and been loaded into the memory of the operating system. This
causes undue delay in the booting time of the system, which resulted in
frustration for the users. Now it is quite logical to think that systemd adds
some fuel to speed up the process, but sorry, you are wrong. Well, it does
decrease the booting time, not by putting in some extra gas, but by removing
the speed breakers: it starts independent tasks in parallel and defers work
that is not needed right away.
It does so much on your system that its responsibilities are somewhat difficult
to grasp in a single discussion, but I’ll try my best. Let’s see what it does on
your system.
It improves efficiency by properly ordering the tasks, which makes the
booting process simpler.
It loads up the configuration of your system.
The default boot goal, along with its dependencies, is determined.
Boot goal is activated.
The difference between systemd and init is the flexibility which the former
has to offer to the user. The systemd just avoids unnecessary startup
sequences.
It was the fall. Yellowish red leaves were falling from the trees like the
angels who were banished from the heavens. Pale and defeated, they rested
on the pathway and the greenbelt along the road. An old woman with a cane
in hand was trying to cross the road. Luckily, she got help from a young
blondie lady who happened to be passing by. It was Sunday. The sun was far
into the thick clouds that were hovering over the city’s skyscrapers, but it was
balmy, the kind of weather you like to visit a beach. To the beach did I go.
But wait a minute. Did I say something about the young blondie lady who
was helping the lady? Oooops! That’s Sylvia. And yes, she did accompany
me to the beach.
“Lovely weather today? Mind a drink?” and she pulled out a bottle of peach
pulp. “I don’t mind that.” I replied with a smile. “Well, well, well, finally I
have started getting acquainted myself with Linux,” she started. I stared at her
with a little frown that hadn’t yet matured on my face. “For God’s sake. We
are on the weekend. If by a stroke of ill fate you have joined me on this
beach, please don’t spoil the moment.” But she was unstoppable, as newbie
learners of Linux and programming languages tend to be.
I gave in. She was curious about the Linux environment on that particular
day. Well, the subject was a juicy one, so I went on trying to educate her
more about it. Imagining the environment of Linux alongside the beach
where I was out sunbathing, the image before your eyes might be something
interesting. Perhaps your brain is trying to structure an analogy between the
environment for Linux and the general environment that surrounds us.
Let me explain. Yes, it is more or less the same. Just like we live in the
environment; the shell window is also surrounded by an environment of its
own. Whatever data that you store in the Linux environment is accessible to
programs for the determination of facts about the configuration of the system.
It is pertinent here to mention that the environment can be used to customize
the shell experience. You can find the environment and shell variables in the
environment. We will be using two commands to examine the environment;
the set command and the printenv command. The latter command shows only
the environment variables. I will not expand it here as you are going to see
lots of it in the next chapter. So, let’s run the set command. In order to do
away with a long list of variables and other info, I’ll pipe the output by
pairing up the set command with the less command.
The set command distinguishes itself from the rest of the lot by displaying the
information in near alphabetical order, which is quite user-friendly. In
addition to that, you can use the echo command to pull out information on the
variable you need.
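A quick sketch of these commands side by side (the exact output will differ from machine to machine):

```shell
# printenv shows only environment variables; pipe through head to
# keep the listing short (the text pipes set through less instead)
printenv | head -n 5

# echo with a leading $ expands a single variable to its value
echo $HOME
```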
When it comes to Linux distribution, most newbie or casual users are more
concerned about the color schemes and different other unnecessary features
when selecting which distribution best suits them. What they forget to take
into consideration is the fact that the most important thing is the packaging
system of Linux in combination with the support community of each
distribution system. We know it is open source. The software is dynamic,
not static like Windows. It just keeps on changing through contributions
from the community members.
With the help of package management, we can install and maintain software
on our personal computer systems. The following section will tell you about
the tools to deal with package management. All these tools are linked to the
command line. Some people may question why, if Linux distributions offer a
graphical user interface, we should be forced onto the command line yet
again. The answer is simple. Command line tools can complete some tasks
that the graphical user interface finds impossible to conclude.
The packaging systems differ from various distribution systems, and they are
also distribution specific. You cannot use one packaging system on multiple
distributions. In general, there are two main packaging systems: the Debian
style (.deb packages) and the Red Hat style (.rpm packages).
How it Works
Usually software is bought from the market and then installed from a disk.
You run the installation wizard and get the app on your system. This is a
standard process on a Windows system. Linux is different. You don’t have to
buy anything as almost all software can be found on the web space for free.
The distribution vendors offer the code compiled and stuffed in package files
or they put it as source code which you have to install manually. The latter
demands pro-level skills.
A package file can be dubbed as the basic unit of Linux software. It can be
further explored into sets of files that together make up the software package.
A bigger question is, what does a package have? It has multiple programs in
addition to including metadata which shows the text description of the file as
well as the contents. There are package maintainers who create the package
files. These package maintainers can be but not always are employees of the
vendors who distribute the Linux system. The package maintainers receive
the source code for the software and produce the metadata and installation
scripts. They can opt to contribute to the source code or leave it as it is
received.
Repositories
The Dependencies
You can use the following commands to install a package in the Debian style.
Run apt-get update first so that the package index is current.
apt-get update
apt-get install package_name
For Red Hat style, use the following command.
yum install package_name
Also, you can install a package right from a particular package file. Let’s see
how to do that. This is for the package files that are not part of a repository
and are installed directly. Let’s see the commands:
For the users of the Debian style, the following command is used.
dpkg --install package_file
For the users of the Red Hat style, the following command is used.
rpm -i package_file
You can also delete a particular package from the system by using these
simple tools. Let’s take a look at the commands
For the Debian style users, the following command tool is the best.
apt-get remove package_name
For the Red Hat style users, the following command tool works well.
yum erase package_name
You can do more with the help of command tools. Suppose you have
installed a package system and now you want to update it. You can do that
with the following tools. As with the above-mentioned tasks, the tools for the
Debian and the Red Hat were different, the same is the case with the updating
tools.
For the Debian style users, the command tools for updating the package
system are apt-get update and apt-get upgrade. They do not do the same
job: update refreshes the package index, and upgrade then installs the
newer versions, so run them in that order.
The Red Hat style users can update with the command yum update.
If you have downloaded an updated version of a package from a non-
repository source, you can install it by replacing the previous version. The
commands for the job are as under:
The Debian style users should enter dpkg --install package-file.
The Red Hat style users should enter rpm -U package-file.
Replace package-file with the name of the file that you want to update. When
you have installed more than one package over a particular course of time,
and you want to view them in the form of a list, you can use the following
tool to see the record.
For the Debian style users dpkg --list is the tool to view the list.
For the Red Hat style users rpm -qa should be entered.
When you have installed a package, you can check its status at any time to
make sure it is there on your Linux operating system. There are some handy
tools available for the purpose. Let’s check them out.
The Debian style users should enter dpkg --status package-name.
The Red Hat style users may use rpm -q package-name.
In addition to checking the status of an installed package, you also can
explore more information about it with the help of the following commands.
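Commonly used query commands for this are apt-cache show for the Debian style and yum info for the Red Hat style; package_name below is a placeholder you replace with a real package:

```shell
# Debian style: show a package's description, version and dependencies
apt-cache show package_name

# Red Hat style: show summary information about a package
yum info package_name
```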
Storage Media
We know about hard disk drives, floppy disks, and CD drives from the
Windows operating system. If you think Windows makes it easy for you to
handle them, Linux has more to offer. You can handle all the storage devices
like the hard disk drive, the virtual storage such as the RAID, the network
storage, and Logical Volume Manager (LVM). I’ll explain some key
commands that you can use to handle all the storage devices easily as well as
efficiently.
The commands for mounting and dismounting storage devices
You can mount and dismount a storage device with the help of some simple
commands. If you are using a non-desktop system, you will have to do that
manually owing to the fact that servers have some complex requirements for
system configuration.
There are some steps involved to accomplish the objective, like linking the
storage device with the filesystem tree. Programmers call it mounting. After
this, the storage device starts participating in the functions of the
operating system. We have already learned about the filesystem tree earlier
on. All the storage devices are connected to the filesystem tree. This is where
it differs from the Windows system, which has separate trees for each device.
As an example, see the C:\ and D:\ drives. You can explore the list of the
devices mounted at boot by viewing the file /etc/fstab, for example with
cat /etc/fstab.
If you want to view the list of the filesystems that you already have mounted
on the Linux computer system, use the mount command. By entering it
on the command line, you will get the list of the mounted filesystems.
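A short sketch; the devices and mount points listed will depend on your system:

```shell
# Each line of output has the form:
#   device on mount-point type filesystem-type (options)
mount | head -n 5
```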
The listing will show the mount point as well as the filesystem type.
You can determine the names of the devices:
/dev/hd* : These are PATA disks on the older systems. The typical
motherboards have a couple of IDE connectors. Each of the IDEs had a cable
and a couple of points for drives to get attached to them. The first drive is
dubbed as the master drive while the other one is named as the slave drive.
You can find them by referring them as /dev/hda for the master drive and as
/dev/hdb for the slave drive. These names are given to the drives in the first
channel. For the second channel, you can refer the master drive as /dev/hdc
and the list goes on. If you see a digit at the end of the name, remember that it
refers to the partition. To quote an example, /dev/hda1 will be considered
the first partition on the first hard drive. In a scenario where you have
partitions, the name /dev/hda will refer to the full drive.
You can use the fdisk program to interact with the hard disk drive and other
flash drives on your Linux computer system. The fdisk program allows you to
edit, create or delete partitions on one particular device. In this case, I’ll be
dealing with the flash drive.
When you see that on the screen, enter ‘m’ on the keyboard, which will
display a menu of command actions. By pressing b, you can edit a nested
BSD disklabel. By pressing d, you can delete a particular partition of
the device. In this particular case, we are working on the flash drive, so the d
key will delete a partition from the flash drive. You can press l to list the
partition types that fdisk knows. You can print the menu again at any time by
pressing m. Also, you can add a new partition to the device by pressing n. By
pressing q, you can quit the menu without saving your changes, and by
pressing t, you can change the system id of a partition.
First of all, you are required to see the layout for a particular partition. Press p
to see the partition table for the flash drive device.
Now that you have learned to edit the flash drive device partition, we should
move on to create a new filesystem with the help of mkfs command. Read it
as make filesystem. An interesting thing is that you can create filesystems in
multiple formats.
If you want to create ext3, add -t option to the command mkfs. Let’s see the
syntax for the command.
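A minimal sketch, assuming the flash drive partition appears as /dev/sdb1 (an assumed name; confirm it with fdisk -l first, because this command erases the partition’s contents):

```shell
# Create an ext3 filesystem on the (assumed) flash drive partition
mkfs -t ext3 /dev/sdb1
```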
There will be plenty of information displayed on your screen. You can switch
to the other filesystem types with similar commands. To reformat the device to
the FAT32 filesystem, you can draft the following command and enter it on
the command line.
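Assuming the same /dev/sdb1 partition as before, the FAT32 version would look like this (vfat is the Linux name for the FAT family of filesystems):

```shell
# Reformat the (assumed) partition as FAT32; again, this erases it
mkfs -t vfat -F 32 /dev/sdb1
```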
You can see that editing a partition, formatting it, and creating a new
filesystem are considerably easier on the Linux operating system than on the
Windows system. You can repeat the above-mentioned simple
steps whenever you want to add a new device to your computer system. This
kind of freedom of altering the type of a storage device and editing it is
unavailable in the Windows operating system. In the above example, I
described the entire process with the help of a dummy flash drive. You have
the liberty to practice the above commands on a hard disk drive or any other
removable storage. Whatever suits you, you can go for it.
Chapter 5: Linux Environment Variables
When talking about the bash shell, it is relevant to know about its features
like the environment variables. These variables come in handy for storing
information about a particular shell session as well as the environment in
which you have been working. With the help of these variables, you can feed
information in the memory which can be accessed through running a script or
simply by running a program.
Global Variables
This is simple to learn. These are the variables that can be accessed globally.
In simple words, you can access them at any phase of the program. When you
have declared a variable, it is fed into the memory of the system while you
run the program. You can offer alterations in any function that may affect the
entire program. Global variables are always displayed in capital letters.
[aka@localhost ~]$ printenv
You have had a full list of global environment variables. Most of them are set
during the login process. If you want to track down values of individual
variables, you can do that with the help of echo command. Just don’t forget
to add the $ sign before the variable to get its value. Let’s look at the syntax.
[aka@localhost ~]$ echo $PWD
/home/aka
[aka@localhost ~]$
The above example is fine for pulling out simple values. In order to assign a
string value with spaces between words, you need to try something different.
Please note that it is good practice to use lower case letters when you create a
new variable, so that your own variables are not confused with the
environment variables, which are in the capital case. Let’s take a look at the
following syntax:
[aka@localhost ~]$ test=theskyisazure
[aka@localhost ~]$ echo $test
theskyisazure
[aka@localhost ~]$ test=the sky is azure
bash: sky: command not found
[aka@localhost ~]$ test='the sky is azure'
[aka@localhost ~]$ echo $test
the sky is azure
You can see that the difference lies in the use of single quotation marks.
How to remove environment variables
You can remove the variables with a simple step. Let’s see the syntax of the
unset command.
[aka@localhost ~]$ echo $test
theskyisazure
[aka@localhost ~]$ unset test
[aka@localhost ~]$ echo $test
[aka@localhost ~]$
If you remove the environment variable from the child process, it still
remains in the parent process. You have to remove it from the parent process
separately.
[aka@localhost ~]$ test=azure
[aka@localhost ~]$ export test
[aka@localhost ~]$ bash
[aka@localhost ~]$ echo $test
azure
[aka@localhost ~]$ unset test
[aka@localhost ~]$ echo $test
You can clearly see that the environment variable was first exported so that it
became a global variable. The unset command was applied while I was still in
the child process, so the variable disappeared there. Back in the parent shell,
however, the variable would still be set. That’s why you need to delete it
from the parent shell as well.
Check out the default shell variables
The bash shell contains environment variables that originate from the shell
itself. Let’s check out some of the variables.
If you display the PATH variable, you can see different directories. The shell
looks for commands in these directories. More directories can be added to the
variable just by inserting a colon and then adding the directory. A default
value is assigned to each of these variables. Let’s take a look at the following
table.
PATH: As I have shown you by an example, it carries the colon-separated
list of directories in which the shell looks for commands.
HOME: This refers to the home directory of the user.
IFS: This holds the list of characters the shell uses as field separators (the
internal field separator).
CDPATH: A colon-separated list of directories that the cd command
searches.
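As a small runnable sketch of how CDPATH behaves (the directory names here are made up for the example):

```shell
# Build a throwaway directory tree to search through
mkdir -p /tmp/demo/projects/website
cd /tmp

# cd looks for 'website' in each CDPATH entry in turn; when the match
# comes from an entry other than '.', cd prints the resolved directory
export CDPATH=.:/tmp/demo/projects
cd website
pwd
```

The printed path is a handy confirmation that cd jumped somewhere outside the current directory.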
Importance of the PATH Variable
This is perhaps the most important variable in the Linux system because it
guides the shell to locate the commands for execution right after you enter
them on the command line. In case it fails to locate a command, an error
message will be displayed which can look similar to the one given as under:
[aka@localhost ~]$ newfile
-bash: newfile: command not found
[aka@localhost ~]$
On the other hand, the PATH variable holds a series of directory paths
separated by colons, stored as plain text. For revision and explanation
purposes, let’s take a look at how it is displayed.
[aka@localhost ~]$ echo $PATH
/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
You can add new directories to this string. Let’s try it.
[aka@localhost ~]$ echo $PATH
/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
[aka@localhost ~]$ PATH=$PATH:/home/rich/azuresky
[aka@localhost ~]$ echo $PATH
/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/home/rich/azuresky
[aka@localhost ~]$
It is important to mention that each user in the Linux operating system has a
different PATH variable than others. Upon installation of an operating
system, a default variable for your PATH is set. After that, you can add more
to it or completely change it as it suits you.
Note: It is important to mention that the root user’s PATH variable has more
directories than any other user. An interesting thing about the PATH variable
is that either you can change it just for the current session or on a permanent
basis.
There are multiple variables that a system uses for identification purposes in
the shell scripts. With the help of system variables, you can easily procure
important information for the programs. There are different startup methods
in the Linux system, and for each method, the bash shell executes the startup
files in a different way.
The Login Shell
The login shell locates four distinct startup files from which it processes its
commands. The very first file that is executed is /etc/profile. You can dub
it the default startup file for the bash shell. Each time you log in, this
file will be executed.
I have run this command while logged in as a superuser. The result on your
window can be slightly different.
Just take a look at the export command stated as under:
The $HOME
This one is a user-specific variable. Let’s run it to know what it has got.
[aka@localhost ~]$ cat .bash_profile
# .bash_profile
PATH=$PATH:$HOME/bin
export PATH
Interactive Shell
If you happen to start the bash shell without first logging into the system, you
kickstart an interactive shell. You have the command line to enter commands
here. When it starts as an interactive shell, the system does not load the
/etc/profile file. Instead it locates another file, called .bashrc, in the home
directory of the user. Let’s see how this startup file looks in the Linux
operating system.
[aka@localhost ~]$ cat .bashrc
It conducts a couple of key tasks; first is checking for the bashrc file in the
/etc directory. The second one is to allow the user to enter aliases.
[aka@localhost ~]$ cat /etc/bashrc
Variable Arrays
You can use variables as arrays. The fun thing with arrays is their capacity to
hold multiple values, which you can reference individually or as a complete
array. You can simply cram multiple values into a single variable by listing
them, separated by single spaces, inside parentheses. Let’s try it.
[aka@localhost ~]$ myworld=(one two three four five)
[aka@localhost ~]$ echo $myworld
one
[aka@localhost ~]$ echo ${myworld[2]}
three
[aka@localhost ~]$ echo ${myworld[1]}
two
[aka@localhost ~]$ echo ${myworld[4]}
five
You can see how easy it is to build up a variable array. You can bring out
each value in the array with a special command. Now try the following to
bring out the entire array.
[aka@localhost ~]$ echo ${myworld[*]}
one two three four five
In addition to this you can also unset a specific value from the array.
[aka@localhost ~]$ unset myworld[3]
[aka@localhost ~]$ echo ${myworld[*]}
one two three five
If you want to get rid of the entire array, you can do that by unsetting it.
[aka@localhost ~]$ unset myworld
[aka@localhost ~]$ echo ${myworld[*]}
[aka@localhost ~]$
Variable arrays are not portable to other shell environments, which is why
they are not the foremost option for Linux operating system users.
Chapter 6: The Basics of Shell Scripting
Sylvia was very excited to learn the basics of the Linux operating system
and the command line. In fact, she found it quite amazing and satisfying to
get the job done by entering a short command, getting immediate results, and
just then I told her that the major part of the Linux system has yet to come.
This is the toughest part for newbies. “So, will I have to code?” she asked
when I alluded to the hard part of Linux. I replied in the affirmative. I will
walk you through the world of shell scripting in this chapter. Shell scripting
is much like coding. It is exciting, thrilling and creative.
You have learned by now to use the command line interface (CLI). You have
entered commands and viewed the command results. Now the time is ripe for
an advanced level of activity on Linux that is shell scripting in which you
have to put in multiple commands and get results from each command. You
can also transfer the result of one command to another. In short, you can pair
up multiple commands in a single step.
Let’s see how it is done.
Now run this command. You will have the date and time first and the details
of the user who is currently logged into the system right under it.
You can add commands up to 255 characters. That’s the final limit.
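A typical pair for this demonstration is date followed by who, joined with a semicolon:

```shell
# Two commands on one line: the date runs first, then the list of
# currently logged-in users appears right under it
date ; who
```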
A tip: a problem will definitely get in your way when you try to combine
multiple commands: you have to remember them each time you enter them.
An easy way around this issue is to save the multiple commands into a text
file so that you can copy and paste them whenever you need them. You can
also run the text file.
How to create a text file: You can create your customized file in the shell
window and place shell commands in the file. Open a text editor like vim to
create a file. We have already practiced how to create a text file on the Linux
command line. Let’s revise it.
[aka@localhost ~]$ cat > file.txt
The Statue Of Liberty Was Gifted By France.
So, get ready to draft your first shell script. Let’s roll on. Open a bash text
editor and draft the script as under, or something similar.
#!/bin/bash
# Script writing is interesting.
echo ‘The sky is azure!’
The second line, starting with #, is a comment. In the third line, we can
see the echo command. Remember that the shell ignores everything after
the # sign.
Let’s try it on in the command line.
[aka@localhost ~]$ echo ‘The sky is azure!’ #I have entered a comment
The sky is azure!
Well, by now you might be thinking about comments and their relation to
the # sign, and you may be perplexed that the first line of the script also
starts with the # sign. No, that’s not a comment but rather a special line
commonly known as the shebang. This should
come at the start of each script. You can save your file now with any name
you desire. I saved it as azure_sky.
Displaying Messages
Sometimes a command goes wrong, and the display is not what we expected.
So we have to rectify it with the help of quotation marks.
It is time to execute the file you have just saved. Let’s run the command.
[aka@localhost ~]$ azure_sky
bash: azure_sky: command not found
$
You have to guide the bash shell in locating your shell script file.
[aka@localhost ~]$ ls -l azure_sky
-rw-r--r-- 1 me me 63 2019-08-26 03:17 azure_sky
[aka@localhost ~]$ chmod 755 azure_sky
[aka@localhost ~]$ ls -l azure_sky
-rwxr-xr-x 1 me me 63 2019-08-26 03:17 azure_sky
But how will the command line find the script that you have just written? The
script should have an explicit path name. In case you fail to do that, you are
likely to see the following:
[aka@localhost ~]$ azure_sky
bash: azure_sky: command not found
You must tell the command line the exact location of your script. Otherwise,
you won’t be able to execute it. There are different directories in the Linux
system, and all of them are in the PATH variable. I have already shown you
how to see which directories are on the system; explore the PATH variable.
[aka@localhost ~]$ echo $PATH
/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/home/rich/azuresky
[aka@localhost ~]$
Now that we have the list of the directories, you can put your script in the
directory you want. Follow these steps to save your script in a specific
directory.
[aka@localhost ~]$ mkdir bin
[aka@localhost ~]$ mv azure_sky bin
[aka@localhost ~]$ azure_sky
The sky is azure!
If your PATH variable misses the directory, you can add it by the following
method:
[aka@localhost ~]$ export PATH=~/bin:"$PATH"
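Putting the whole flow together in one self-contained sketch (it uses /tmp/bin instead of your real home directory, so the paths are illustrative):

```shell
# 1. Make a personal scripts directory
mkdir -p /tmp/bin

# 2. Create the script (a here-document stands in for the text editor)
cat > /tmp/bin/azure_sky <<'EOF'
#!/bin/bash
# Script writing is interesting.
echo 'The sky is azure!'
EOF

# 3. Make the script executable
chmod 755 /tmp/bin/azure_sky

# 4. Add the directory to PATH so the shell can find the script by name
export PATH=/tmp/bin:"$PATH"

# 5. Run it
azure_sky
```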
More ways to format scripts: The ultimate goal of a good script writer
should be to write a good script and maintain it smoothly. You should be able
to easily edit the script like adding more to a particular script or removing
from it. Also, it should be formatted in a way that it should be easy to read as
well as understand.
Go for long option names
We have seen many commands with both long and short option names. The
ls command, which is used to list directories in your Linux system, has lots of
options, such as the following:
[aka@localhost ~]$ ls -ad
[aka@localhost ~]$ ls --all --directory
Both these commands are equivalent when it comes to the results. Users
prefer short options for ease of typing, so that they do not have to spell out
long option names. Almost every option that I have stated in previous
chapters has a long form. Though short form options are quicker to enter, it
is always recommended to go for the full names when it comes to script
writing because they are more reader-friendly.
Indentation
Long-form commands are not always easy to handle either. You have to take care to write them in the right format so that they do not confuse the reader. You can split a long command across several lines by ending each line with a backslash.
Let’s see what problems may come your way when you are dealing with literals. They can be more complicated than you might think. Assume you are searching the /etc/passwd file for entries that match r.*t. This will help you locate usernames such as root and robot. Here is the command you would use:
[aka@localhost ~]# grep r.*t /etc/passwd
root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
systemd-coredump:x:999:996:systemd Core Dumper:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
polkitd:x:998:995:User for polkitd:/:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
pesign:x:994:991:Group for the pesign signing daemon:/var/run/pesign:/sbin/nologin
Well, it worked perfectly: grep highlights the matching text in the output, so you have found the information you were looking for. Yet a command that seems to work fine can fail at other times for no obvious reason, and that tends to trigger panic. If you are on the verge of panicking as Sylvia did when she got it wrong, stop right there. The problem lies in your current directory. If it contains file names such as r.output and r.input, the shell expands the unquoted r.*t against those names before grep ever sees it, so grep ends up searching for the wrong thing.
You can see that double quotation marks allow $PATH to expand but keep the (*) from doing the same. Whatever has a $ sign before it will be expanded when you run the command.
Food for thought: If you want only variable and command substitution to run, go for double quotation marks. Keep in mind that wildcards will still remain blocked.
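Applied to the earlier grep example, single-quoting the pattern keeps the shell from expanding r.*t against file names in the current directory. A quick sketch; the scratch directory from mktemp is just a stand-in for the directory that caused the trouble:

```shell
# Reproduce the troublesome setup: a directory containing
# file names that an unquoted r.*t would glob-match.
cd "$(mktemp -d)"
touch r.input r.output

# Quoted, the pattern reaches grep intact and the search works.
grep 'r.*t' /etc/passwd
```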
The Backslash
The third option is the backslash. You can use it to alter the meaning that a character carries. You can also use it to escape any special character within the text, including the quote marks.
Most users find it tricky to pass characters such as the dollar sign or a single quote to a command literally. To avoid unexpected results, insert a backslash before the character you want the shell to leave alone.
[aka@localhost ~]$ echo "Path is \$PATH"
Path is $PATH
[aka@localhost ~]$ echo "Path is $PATH"
Path is /usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/home/rich/azuresky
[aka@localhost ~]$
The important thing to keep in mind is that the backslash, as well as the
quote, should remain outside the single quotes.
[aka@localhost ~]$ echo I don\'t like Italian food
I don't like Italian food
[aka@localhost ~]$ echo "I don't like Italian food"
I don't like Italian food
[aka@localhost ~]$ echo 'I don'\''t like Italian food'
I don't like Italian food
[aka@localhost ~]$
Now you know how to get away from a syntax error with the help of the backslash.
First comes the if command. The shell runs the command you give it and collects its exit code.
Remember the 1 and the 0. If the exit code is 0, the command succeeded and the shell executes the statements after the then keyword, stopping when it reaches the else or fi keyword. The whole construct ends in fi.
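Before the bracket tests below, it helps to see if working directly on a command's exit code. A small sketch; grep -q prints nothing and only reports success or failure:

```shell
# if runs the command and branches on its exit code:
# 0 takes the then branch, anything else falls through to else.
if grep -q root /etc/passwd
then
    echo "root has an entry"
else
    echo "no entry found"
fi
```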
Let’s take a look at the expanded version of the if command. I’ll add elif
(else-if) in the expanded version.
NAME="AHSAN"
if [ "$NAME" = "SYLVIA" ]; then
echo "SYLVIA ADAMS"
elif [ "$NAME" = "AHSAN" ]; then
echo "AHSAN SHAH"
else
echo "It boils down to Adam and Jack"
fi
&& and ||
The first one, as expected, means and, while the second construct means or. Let’s see how you can run these commands.
firstcommand && secondcommand
If the shell runs the first command and the exit code turns out to be zero, the shell moves on to the second command in the construct and runs it.
The || construct is the opposite of &&: the shell runs the second command only if the first command returns a nonzero exit code.
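A quick sketch of both constructs outside of an if statement; the directory path comes from mktemp, so it is just a throwaway name:

```shell
# The && chain runs the second command only when the first succeeds;
# the || chain runs it only when the first fails.
dir=$(mktemp -d)/demo
mkdir "$dir" && echo "created"              # first mkdir succeeds
mkdir "$dir" 2>/dev/null || echo "exists"   # second mkdir fails, so || fires
```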
[aka@localhost ~]# if [ ahsan ] && [ sylvia ]; then
> echo "both can go to the concert"
> elif [ ahsan ] || [ sylvia ]; then
> echo "only one can go to the concert"
> fi
If the pattern inside the double brackets matches the $USER variable, testing whether the name starts with the letter r, which rae does, then the shell executes the then section of the script.
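The double-bracket test described here might look like this sketch. USER_NAME stands in for $USER so the result is predictable, and [[ ]] is a bash feature:

```shell
#!/bin/bash
# [[ ]] supports glob-style pattern matching that [ ] does not.
USER_NAME=rae    # stand-in for $USER
if [[ $USER_NAME == r* ]]
then
    echo "the name starts with r"
fi
```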
The Loops
Sylvia happens to be a Marvel fan. She never misses a single movie. One of her favorite characters is Doctor Strange, the accidental wizard. I don’t care much for these fantasy things, but I bring it up because one day Sylvia half narrated and half acted out a scene from one of his movies, in which he conjures a time loop by his magic to defeat the villain Dormammu. I loved the tale because of its sprightliness. Sylvia loved it more because she connected it with the loops in Linux. Doctor Strange traps Dormammu in a time loop in which he is to remain endlessly until he agrees to Strange’s bargain. Again and again Dr. Strange appears and is killed by Dormammu, and the time loop keeps repeating itself. Sylvia linked the two loops and fixed the lesson in her mind quite successfully. A smart trick, though she could have understood it without Marvel. Let’s see how it goes. Jump right away to the script.
[aka@localhost ~]$ #!/bin/sh
[aka@localhost ~]$ for str in earth sun mars moon saturn;do
> echo $str
> done
earth
sun
mars
moon
saturn
[aka@localhost ~]$
You can see that the script is a combination of different words. While some may seem familiar to you, others are harder to interpret. Of these, for, in, do, and done are shell keywords. Let me explain in words how the loop works.
There is a variable str in the script. When you enter the text in the shell window, the shell fills the variable with the first of the space-delimited values that follow the shell keyword in. The first word here is earth. Then the shell executes the echo command. After that, the shell returns to the for line and fills the variable with the next value, in this case sun, and repeats the same exercise. The loop goes on until no value is left after the keyword in.
The for command fills the variable with whatever the next value in the list is. We can use the str variable like any other variable in shell scripting. When the last iteration is done, the str variable keeps the last value it was filled with. Interestingly, you can change that afterwards. Let’s see how.
$ cat testfile
#!/bin/bash
# testing the for variable after the looping
for str in earth sun mars saturn
do
echo "The next planet is $str"
done
echo "The next planet to visit is $str"
str=Uranus
echo "We are going to $str"
$ ./testfile
The next planet is earth
The next planet is sun
The next planet is mars
The next planet is saturn
The next planet to visit is saturn
We are going to Uranus
$
The for loop is not always that easy to use. Simple problems in the syntax can push you over the edge and leave you in utter darkness with no clue what to do. Do you want to see an example? Let’s do that.
$ cat testfile
#!/bin/bash
# example of the wrong use of the for command
for test in I was pretty sure it'll work but I don't know what went wrong
do
echo "word:$test"
done
$ ./testfile
word:I
word:was
word:pretty
word:sure
word:itll work but I dont
word:know
word:what
word:went
word:wrong
$
The single quote marks did the damage in the above script. The shell pairs them up, so the words between them are merged into a single value and the apostrophes themselves disappear. This kind of error really catches the programmer off guard. To resolve the problem, you can use a backslash or double quotation marks to properly define the values. Another problem is the use of multiword values.
[aka@localhost ~]$ #!/bin/bash
[aka@localhost ~]$ #another wrong use of the for command
[aka@localhost ~]$ for str in The Netherlands Australia The United States Of America Sc
> do
> echo "I will visit $str"
> done
I will visit The
I will visit Netherlands
I will visit Australia
I will visit The
I will visit United
I will visit States
I will visit Of
I will visit America
I will visit Scotland
I will visit The
I will visit British
[aka@localhost ~]$
Now let’s see how to do it right by making use of the double quotation marks
in the script. You will be astonished to see how simple it is to rectify it.
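A corrected version, with each multiword value wrapped in double quotation marks so the shell treats it as a single item. A sketch with a shortened country list standing in for the original one:

```shell
#!/bin/bash
# Double quotes keep each multiword value together as one loop item.
for country in "The Netherlands" Australia "The United States Of America" Scotland
do
    echo "I will visit $country"
done
```

Now the loop prints one line per country instead of one line per word.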
So, what have we learned so far with different scripts? The above commands
are also known as structured commands. We can alter the normal program
flow in the shell script with the help of these commands. The most basic of
these commands is the if-then statement. You can evaluate a command and
move on to execute other commands on the basis of the result of the
evaluated command.
You have the option of connecting different if-then statements with the help of the elif statement. The elif is short for else-if, which means it is another if-then statement. The case command can be dubbed the shortest route to achieving the same results as a lengthy chain of if-then statements.
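To make that concrete, here is the earlier elif example rewritten as a case statement, with the same names and the same output:

```shell
#!/bin/bash
# The case command replaces a chain of elif tests with one lookup.
NAME="AHSAN"
case $NAME in
SYLVIA)
    echo "SYLVIA ADAMS" ;;
AHSAN)
    echo "AHSAN SHAH" ;;
*)
    echo "It boils down to Adam and Jack" ;;
esac
```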
Nesting loops
Now this is simple. You can combine different loops into one, or nest loops inside an already established one. The good thing is that you can add as many levels of nesting as you need.
[aka@localhost ~]$ #!/bin/bash
[aka@localhost ~]$ #Testing nested loops
[aka@localhost ~]$ for ((b=1; b<=6; b++))
> do
> echo "Starting loop $b:"
> for ((d=1; d<=6; d++))
> do
> echo "Inside loop: $d"
> done
> done
Starting loop 1:
Inside loop: 1
Inside loop: 2
Inside loop: 3
Inside loop: 4
Inside loop: 5
Inside loop: 6
Starting loop 2:
Inside loop: 1
Inside loop: 2
Inside loop: 3
Inside loop: 4
Inside loop: 5
Inside loop: 6
Starting loop 3:
Inside loop: 1
Inside loop: 2
Inside loop: 3
Inside loop: 4
Inside loop: 5
Inside loop: 6
Starting loop 4:
Inside loop: 1
Inside loop: 2
Inside loop: 3
Inside loop: 4
Inside loop: 5
Inside loop: 6
Starting loop 5:
Inside loop: 1
Inside loop: 2
Inside loop: 3
Inside loop: 4
Inside loop: 5
Inside loop: 6
Starting loop 6:
Inside loop: 1
Inside loop: 2
Inside loop: 3
Inside loop: 4
Inside loop: 5
Inside loop: 6
[aka@localhost ~]$
So you can see that the nested loop inside the main loop iterates through all of its values each time the outer, or main, loop iterates once. Don’t get confused between the dos and dones of the script: the bash shell matches them up and treats them as belonging to the inner and outer loops. In the above script I blended together two for loops. Now let’s move on to pairing up a for and a while loop.
#!/bin/bash
# filling in a for loop inside a while loop
a=5
while [ $a -ge 0 ]
do
echo "Outer loop: $a"
for (( b = 1; b < 3; b++ ))
do
c=$[ $a * $b ]
echo " Inner loop: $a * $b = $c"
done
a=$[ $a - 1 ]
done
$ ./test15
Outer loop: 5
Inner loop: 5 * 1 = 5
Inner loop: 5 * 2 = 10
Outer loop: 4
Inner loop: 4 * 1 = 4
Inner loop: 4 * 2 = 8
Outer loop: 3
Inner loop: 3 * 1 = 3
Inner loop: 3 * 2 = 6
Outer loop: 2
Inner loop: 2 * 1 = 2
Inner loop: 2 * 2 = 4
Outer loop: 1
Inner loop: 1 * 1 = 1
Inner loop: 1 * 2 = 2
Outer loop: 0
Inner loop: 0 * 1 = 0
Inner loop: 0 * 2 = 0
$
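Besides for and while, the shell also has an until loop, the mirror image of while: it keeps looping as long as its test command fails and stops once the test succeeds. A minimal sketch:

```shell
#!/bin/bash
# until loops while the test fails; here it stops once a exceeds 3.
a=1
until [ $a -gt 3 ]
do
    echo "count: $a"
    a=$(( a + 1 ))
done
```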
The break command, as is evident from the name, is used to break out of a loop by terminating it. It works with the for, while, and until loops. It can also be dubbed the escape command.
#!/bin/bash
# Time to break out of a loop
for a in 1 2 3 4 5 6 7 8 9 10
do
if [ $a -eq 8 ]
then
break
fi
echo "Iteration number: $a"
done
echo "The for loop is completed"
$ ./testfile
Iteration number: 1
Iteration number: 2
Iteration number: 3
Iteration number: 4
Iteration number: 5
Iteration number: 6
Iteration number: 7
The for loop is completed
$
The loop broke at the very point I asked it to. Now I’ll break out of the inner loop I have written.
$ cat testfile
#!/bin/bash
# I am breaking out of an inner loop
for (( x = 1; x < 4; x++ ))
do
echo "Outer loop: $x"
for (( y = 1; y < 100; y++ ))
do
if [ $y -eq 5 ]
then
break
fi
echo " Inner loop: $y"
done
done
$ ./testfile
Outer loop: 1
Inner loop: 1
Inner loop: 2
Inner loop: 3
Inner loop: 4
Outer loop: 2
Inner loop: 1
Inner loop: 2
Inner loop: 3
Inner loop: 4
Outer loop: 3
Inner loop: 1
Inner loop: 2
Inner loop: 3
Inner loop: 4
$
A similar technique can be used to pipe the output of your loop script to whatever command you want. Let’s experiment with this.
[aka@localhost ~]$ #!/bin/bash
[aka@localhost ~]$ #Piping out the output
[aka@localhost ~]$ for country in "The Netherlands" Australia Pakistan Spain "The Unite
> do
> echo "This summer I'll visit $country"
> done | sort
This summer I'll visit Australia
This summer I'll visit Pakistan
This summer I'll visit Spain
This summer I'll visit The Netherlands
This summer I'll visit The United States of America
[aka@localhost ~]$
Chapter 7: Moving On To The Advanced Level In
Shell Scripting
When you get used to writing shell scripts, you will be able to reuse your own scripts elsewhere to execute a program. A small piece of code can be integrated into another script to get the desired result. Writing large scripts can be an exhausting exercise, which is why the shell offers programmers a convenient shortcut: user-defined functions that make script writing easy and fun.
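For context, the basic pattern looks like this sketch: define the function once, then call it by name as often as you like. functest1 is the name used in the discussion that follows:

```shell
#!/bin/bash
# Define the function once...
function functest1 {
    echo "This is an example of a function"
}
# ...then call it by name as many times as needed.
functest1
functest1
```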
You can see that each time you refer back to functest1, the name you have given the function, the bash shell goes back to it and executes the commands you left in there. That means you are saved the hassle of repeating the script on the command line. Just remember the unique name you assign to the function. I hope you understand by now why the name is so important.
You don’t have to write the shell function at the start. You can also write it in the middle of the script. But you have to define the function before using it, otherwise you will get an error message. Let’s see an example.
$ cat testfile
#!/bin/bash
# put the shell function in the middle of the script
count=1
echo "This line is placed before the function definition"
function functest1 {
echo "Now this is just a function"
}
while [ $count -le 5 ]
do
functest1
count=$[ $count + 1 ]
done
echo "The loop has reached its end"
functest2
echo "This heralds the end of the script"
function functest2 {
echo "Now this is just a function example"
}
$ ./testfile
This line is placed before the function definition
Now this is just a function
Now this is just a function
Now this is just a function
Now this is just a function
Now this is just a function
The loop has reached its end
./testfile: functest2: command not found
This heralds the end of the script
$
I defined functest1 before the loop that calls it, so those calls ran fine. But I called functest2 before defining it, so it results in an error message. Another
thing which you should be careful about is the name of the functions. You
should not assign the same name to different functions. Each function must
have a unique name, otherwise, the shell will not be able to identify them
separately, and will override the previous definition with the new one.
Remember, there won’t be any error messages to alert you about the mistake
you are committing.
$ cat testfile
#!/bin/bash
# using the same name for different functions
function functest1 {
echo "I am defining the function with a unique name"
}
functest1
function functest1 {
echo "I have repeated the function name assigning it to another function"
}
functest1
echo "The script ends here"
$ ./testfile
I am defining the function with a unique name
I have repeated the function name assigning it to another function
The script ends here
$
If we talk about the default exit status, it is simply the exit status returned by the last command in the function.
$ cat testfile
#!/bin/bash
# learning about the exit status of a shell function
functest1() {
echo "I am attempting to display a file which is non-existent"
ls -l badfile
}
echo "it is time to test the function:"
functest1
echo "The exit status is: $?"
$ ./testfile
it is time to test the function:
I am attempting to display a file which is non-existent
ls: badfile: No such file or directory
The exit status is: 1
$
The exit status turns out to be 1 since the last command has failed.
$ cat testfile
#!/bin/bash
# time to test the exit status of a shell function
func1() {
ls -l badfile
echo "We should now test a bad command"
}
echo "shall we test the function now:"
func1
echo "The exit status is: $?"
$ ./testfile
shall we test the function now:
ls: badfile: No such file or directory
We should now test a bad command
The exit status is: 0
$
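If you do not want to rely on the last command, bash also provides the return command to set a function's exit status explicitly. A sketch; the status must be a number from 0 to 255:

```shell
#!/bin/bash
# return hands a specific exit status back to the caller.
function check {
    if [ -r /etc/passwd ]
    then
        return 0
    fi
    return 1
}
check
echo "exit status: $?"
```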
$ cat testfile
#!/bin/bash
# I will use the echo command to return a value
function dbl {
read -p "Enter a value: " value
echo $[ $value * 5 ]
}
result=`dbl`
echo "The new value is $result"
$ ./testfile
Enter a value: 300
The new value is 1500
$ ./testfile
Enter a value: 500
The new value is 2500
$
The echo displays the result of the mathematical calculation. The script captures the output of the dbl function rather than looking at its exit status for the final answer. So we have directed the shell to capture the output of the function.
Functions can use the standard positional parameter variables to access any parameters passed to them on the command line. The parameters $1, $2, $3, and so on hold the arguments supplied to the function, and the special variable $# holds the total number of parameters. (Inside a function, $0 still refers to the script name rather than to the function.) The parameters must be written on the same command line on which you call the function.
$ cat testfile
#!/bin/bash
# how to pass parameters to a function
function addem {
if [ $# -eq 0 ] || [ $# -gt 2 ]
then
echo -1
elif [ $# -eq 1 ]
then
echo $[ $1 + $1 ]
else
echo $[ $1 + $2 ]
fi
}
echo -n "Adding 20 and 30: "
value=`addem 20 30`
echo $value
echo -n "Shall we try to add only one number: "
value=`addem 20`
echo $value
echo -n "This time try to add no numbers: "
value=`addem`
echo $value
echo -n "Let’s add three numbers this time: "
value=`addem 20 30 40`
echo $value
$ ./testfile
Adding 20 and 30: 50
Shall we try to add only one number: 40
This time try to add no numbers: -1
Let’s add three numbers this time: -1
$
The shell acted as it was told. Where there were no parameters, or more than two, it returned -1. Where there was one parameter, it added the figure to itself. Where there were two parameters, it added them together to get the result.
Be careful with the variables you use inside functions. By default, variables are global: a variable such as $temp defined in the main script is also visible, and changeable, inside the function.
$ cat testfile
#!/bin/bash
# let’s see how things can go real bad by the wrong use of variables
function functest1 {
temp=$[ $value + 5 ]
result=$[ $temp * 2 ]
}
temp=4
value=6
functest1
echo "We have the result as $result"
if [ $temp -gt $value ]
then
echo "temp is larger"
else
echo "temp is smaller"
fi
$ ./testfile
We have the result as 22
temp is larger
$
Local variables can also be used in functions, mostly for a function’s internal work. Put the keyword local in front of the variable declaration; you can do this at the same time you assign a value to the variable. The keyword tells the shell to keep the variable local to the function script. If a variable with the same name appears outside the function, the shell treats it as a separate value.
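A minimal sketch of the local keyword in action; the variable names here are just for illustration:

```shell
#!/bin/bash
# local confines $temp inside the function; the outer $temp is untouched.
function functest {
    local temp=10
    echo "inside the function: $temp"
}
temp=4
functest
echo "outside the function: $temp"
```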
To pass an array, you must expand the array variable into its individual values and then use those values as function parameters.
$ function multem {
> echo $[ $1 * $2 ]
> }
$ multem 100 100
10000
$
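Passing an array works the same way once you expand it on the call line; inside the function, "$@" collects the elements again. A sketch in which sumarray and the numbers array are made-up names:

```shell
#!/bin/bash
# Expanding the array passes each element as a separate parameter.
function sumarray {
    local total=0
    for value in "$@"
    do
        total=$(( total + value ))
    done
    echo $total
}
numbers=(10 20 30)
sumarray "${numbers[@]}"
```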
Shell functions are a great way to place script code in a single place so that you can reuse it whenever needed, which eliminates the practice of rewriting. If you have to use a lot of functions to deal with a heavy workload, you also have the option of creating function libraries.
With the -en option on the last line, no newline character is printed at the end of the display, which leaves the cursor on the same line for the user’s input. You can retrieve and read the input with the following command.
read -n 1 option
You can assign functions to the menu, and these functions are pretty fun to write. The key is to create a separate function for each item in your menu. To save yourself some hassle, create stub functions first, so you know what has to go into each one while you keep working smoothly in the shell; an unfinished function then cannot keep the script from running and interrupt your work. Things work even more smoothly if you put the entire menu in a function of its own.
function menu {
clear
echo
echo -e "\t\t\tWelcome to the Admin Menu\n"
echo -e "\t1. Here will be displayed the disk space"
echo -e "\t2. Here will be displayed the logged on users"
echo -e "\t3. Here will be displayed the memory usage"
echo -e "\t0. Exit\n\n"
echo -en "\t\tEnter your option: "
read -n 1 option
}
As I have already discussed, this will help you to view the menu anytime by
just recalling the function command. Now you need the case command to
integrate the layout and the function to make the menu work in real time.
$ cat menu1
#!/bin/bash
# simple script menu
function diskspace {
clear
df -k
}
function whoseon {
clear
who
}
function memusage {
clear
cat /proc/meminfo
}
function menu {
clear
echo
echo -e "\t\t\tWelcome to the Admin Menu\n"
echo -e "\t1. Here will be displayed the disk space"
echo -e "\t2. Here will be displayed the logged on users"
echo -e "\t3. Here will be displayed the memory usage"
echo -e "\t0. Exit\n\n"
echo -en "\t\tEnter your option: "
read -n 1 option
}
while [ 1 ]
do
menu
case $option in
0)
break ;;
1)
diskspace ;;
2)
whoseon ;;
3)
memusage ;;
*)
clear
echo "Sorry, you went for the wrong selection";;
esac
echo -en "\n\n\t\t\tHit any key on the keyboard to continue"
read -n 1 line
done
clear
$
Conclusion
Sylvia is an avid learner by now. She admits that what she had hated all her life has become her lifeline. The ease of use, the speed, the power over her computer, and the fun that Linux gave her were matchless. She has now installed Linux on her personal computer at home. Sylvia is on her way to becoming a Linux pro, and I wonder whether she will soon be advising me on the command line, given the way she practices in the shell. A trick she would never tell anyone is that she kept a diary in which she wrote down all the important commands so she could easily look them up when she needed them. Eventually, it helped her memorize the commands.
It is pertinent to cite an example. What if you buy a high-end phone but are unable to see what is inside it and how it operates? The Windows operating system is just like that phone. You can use it and enjoy it, but you cannot see how it works, how it is powered, and how it is wired. Linux, on the contrary, is open source. You can get into its source code any time you like.
The world is moving very fast when it comes to technology. While there are good people across the world adding positive things to the cyber world, such as operating systems and applications that genuinely help with day-to-day activities, there is no shortage of bad actors who are constantly trying to sneak into your personal computer to rob you of any piece of information that could earn them easy money. It is now your foremost priority to protect yourself from people who harbor such heinous designs. The first step toward that protection is an operating system that is secure against such attacks.
Linux gives you no such worries. All you need is to meet the minimum system requirements, and you can run the operating system without any fear of it expiring. Linux has the power to revive old computer systems: a machine with 256 MB of RAM and an old processor can be enough to run a lightweight Linux distribution. Compare that with Windows 10, which wants far more memory for smooth functioning. Give the same modest specifications to a Linux operating system and it will surely have an edge over the Windows OS.
Linux offers a wide range of software updates, which are faster to install, unlike Windows, which restarts multiple times just to install the latest updates. Apart from all these benefits, the greatest of all is customization. Whatever you get in Windows is what you are stuck with; you cannot change it for your ease of use. With Linux, things change dramatically. Because it is an open-source system, you can customize different features, adding or deleting parts of the operating system at will: keep what you need and delete what you don’t like. You can also add or swap wallpapers and icon themes to customize its look. Perhaps the greatest benefit of all is that you can get Linux for free online. Unlike the Windows OS, you don’t have to purchase it.
This book helps users learn the basics of the Linux command line interface, which sets Linux apart from all the other operating systems. Now that you have made it to the end, I hope you have learned what makes Linux fun to use, and that the dread of the command line interface, the dark screen I talked about in the introduction, has vanished into thin air.
It is not that boring, either. You can do more work with Linux in a short timeframe, which makes it fun to use. Linux gives you flexibility with respect to the components you install: you can take what you require and leave what you don’t. For example, you can do away with programs you find unnecessary, like paint programs and boring calculators in Windows. Instead, you can take any important and relevant open-source program, install it from the command line, and run it on your system. You can add and delete as many programs and applications as you like.
I hope this book has provided you with the basic knowledge you need to
move further on the ladder in the world of Linux. I cannot promise that a
single read of this book will make you an expert on Linux, but it will
definitely equip you with the knowledge base needed to become a pro in the
Linux operating system.
Resources
https://www.linux.com/what-is-linux/
https://blog.learningtree.com/how-to-practice-linux-skills-for-free/
https://www.binarytides.com/linux-command-check-memory-usage/
https://www.digitalocean.com/community/tutorials/basic-linux-navigation-
and-file-management
https://www.javatpoint.com/linux-error-redirection
https://www.geeksforgeeks.org/basic-shell-commands-in-linux/
https://www.guru99.com/hacking-linux-systems.html
https://www.tecmint.com/how-to-hack-your-own-linux-system/
https://opensource.com/business/16/6/managing-passwords-security-linux
https://www.computerhope.com/jargon/i/init.htm
https://www.thegeekstuff.com/2011/02/linux-boot-process/
https://www.linuxtechi.com/10-tips-dmesg-command-linux-geeks/
https://www.techopedia.com/definition/3324/boot-loader
https://www.dedoimedo.com/computers/grub.html#mozTocId616834
https://www.tecmint.com/understand-linux-shell-and-basic-shell-scripting-
language-tips/
https://medium.com/quick-code/top-tutorials-to-learn-shell-scripting-on-
linux-platform-c250f375e0e5
https://bash.cyberciti.biz/guide/Quoting
https://askubuntu.com/questions/591787/how-can-display-a-message-on-
terminal-when-open-it
https://www.learnshell.org/en/Shell_Functions
https://linuxacademy.com/blog/linux/conditions-in-bash-scripting-if-
statements/
https://www.cyberciti.biz/faq/bash-while-loop/
https://www.geeksforgeeks.org/break-command-in-linux-with-examples/
https://likegeeks.com/bash-scripting-step-step-part2/#Nested-Loops
https://www.javatpoint.com/linux-init
https://www.howtogeek.com/119340/htg-explains-what-runlevels-are-on-
linux-and-how-to-use-them/
Linux Command Line and Shell Scripting Bible by Richard Blum
THE LINUX COMMAND LINE by William E. Shotts, Jr.
How Linux Works 2nd Edition What Every Superuser Should Know by
Brian Ward