
UNIX

File/Directory operation related Unix Commands

• cp – copy a file
• mv – move or rename files or directories
• tar – create and use archives of files
• gzip – compress a file
• ftp – file transfer program
• lpr – print out a file
• mkdir – make a directory
• rm – remove files or directories
• rmdir – remove a directory
• mount – attaches a file system to the file system hierarchy at the mount point, which is the pathname of a directory
• umount – unmounts a currently mounted file system

Navigational type Unix Commands

• cd – change directory
• pwd – display the name of your current directory
• ls – list names of files in a directory

Disk, File and Folder Size/Usage

• du – Use this command to see the size/usage of the folder you are in. Example usage: du -sk *
• df – Report file system disk space usage. Example usage: df -k

Display file content

• cat – concatenate and display files.


• more – The more utility is a filter that displays the contents of a text file on the terminal, one screenful at a time.
• less – less is a program similar to more(1), but it allows backward movement in the file as well as forward movement. Also, less does not have to read the entire input file before starting, so with large input files it starts up faster than text editors like vi.

File Editing

• vi – The vi (visual) utility is a display-oriented text editor.


• nano – nano is a small, free and friendly editor.

Search
• find – find files of a specified name or type.
• grep – searches files for a specified string or expression.

Administration

• top – top displays the most CPU-intensive processes on the system and periodically updates this information. Raw CPU percentage is used to rank the processes.
• chmod – change the permissions of a file or a directory.
• ps – The ps command prints information about active processes.
• kill – kill a process.

Information

• date – display the current date and time.
• cal – The cal utility writes a Gregorian calendar to standard output.
• diff – display differences between text files.

Help Related

• man – The man command displays information from the reference manuals.
• help – The help utility retrieves information to further explain error messages and warnings from SCCS commands.

UNIX BASIC COMMANDS WITH EXAMPLES

BASIC UNIX COMMANDS:

#1) cal: Displays the calendar.


Syntax: cal [[month] year]
Example: display the calendar for April 2018
$ cal 4 2018
#2) date: Displays the system date and time.
Syntax: date [+format]
Example: Display the date in dd/mm/yy format
$ date +%d/%m/%y
#3) banner: Prints a large banner on the standard output.
Syntax: banner message
Example: Print “Unix” as the banner
$ banner Unix
#4) who: Displays the list of users currently logged in
Syntax: who [option] … [file][arg1]
Example: List all currently logged in users
$ who

#5) whoami: Displays the user id of the currently logged in user.


Syntax: whoami [option]
Example: List currently logged in user
$ whoami

UNIX FILE SYSTEM COMMANDS:

#1) touch: Create a new file or update its timestamp.


Syntax: touch [OPTION]…[FILE]
Example: Create empty files called ‘file1’ and ‘file2’
$ touch file1 file2
#2) cat: Concatenate files and print to stdout.
Syntax: cat [OPTION]…[FILE]
Example: Create file1 with entered content
$ cat > file1
Hello
^D
#3) cp: Copy files
Syntax: cp [OPTION]source destination
Example: Copy the contents of file1 to file2; the contents of file1 are retained
$ cp file1 file2
#4) mv: Move files or rename files
Syntax: mv [OPTION]source destination
Example: Rename (move) file1 to file2
$ mv file1 file2
#5) rm: Remove files and directories
Syntax: rm [OPTION]…[FILE]
Example: Delete file1
$ rm file1
#6) mkdir: Make directory
Syntax: mkdir [OPTION] directory
Example: Create directory called dir1
$ mkdir dir1
#7) rmdir: Remove a directory
Syntax: rmdir [OPTION] directory
Example: Remove the empty directory dir1
$ rmdir dir1
#8) cd: Change directory
Syntax: cd [OPTION] directory
Example: Change working directory to dir1
$ cd dir1
#9) pwd: Print the present working directory
Syntax: pwd [OPTION]

Example: Print the absolute path of the current working directory (it ends in dir1 if dir1 is the current directory)


$ pwd
UNIX PROCESSES CONTROL COMMANDS:

Control Commands
These commands are a two-key combination where a letter is pressed simultaneously with the
‘Ctrl’ key.

• Control-C: This command terminates the currently running foreground process.
• Control-D: This command terminates the currently running login or terminal session.
• Control-Z: This command suspends the currently running foreground process; it can then be resumed in the background with the bg command.

Other commands:
#1) ps: displays a snapshot of all current processes
Syntax: $ ps [options]
Example: $ ps -ef
Show every process running, formatted as a table
#2) top - displays a live status of current processes
Syntax: $ top [options]
Example: $ top
Show a live view of all current processes
#3) bg - resume a suspended job in the background
Syntax: $ bg [job_spec …]
Example: $ xterm
Ctrl-Z
$ bg
Continue running a job that was previously suspended (using Ctrl-Z) in the background
#4) fg - bring a background job to the foreground
Syntax: $ fg [job_spec]
Example: $ xterm
Ctrl-Z
$ bg
$ fg
Bring a previous background job to the foreground
#5) clear – clear a terminal screen
Syntax: $ clear
Example: $ clear
Clear all prior text from the terminal screen
#6) history – print history of commands in the current session
Syntax: $ history [options]
Example: $ history
Show list of previous commands that were entered

Unix Utilities Programs Commands:


#1) ls: List directory contents
Syntax: ls [OPTION] [FILE]
Example: List all directory contents (including hidden files), in long format, sorted by time
$ ls -alt
#2) which: Locate a command
Syntax: which [-a] filename
Example: List all paths from where ‘cat’ can run
$ which -a cat
#3) man: Interface for working with the online reference manuals.
Syntax: man [-s section] item
Example: Show manual page for the ‘cat’ command
$ man cat
#4) su: Change user-id or become super-user.
Syntax: su [options] [username]
Example: Change user-id to ‘user1’ (if it exists)
$ su user1
#5) sudo: Execute a command as some other user or super-user
Syntax: sudo [options] [command]
Example: Get a file listing of an unlisted directory
$ sudo ls /usr/local/protected
#6) find: Used to search for files and directories as specified by the ‘expression’
Syntax: find [starting-point] [expression]
Example: In ‘/usr’ folder, find character device files, of name ‘backup’
$ find /usr -type c -name backup
#7) du: Estimate disk usage in blocks
Syntax: du [options] [file]
Example: Show number of blocks occupied by files in the current directory
$ du
#8) df: Show number of free blocks for mounted file system
Syntax: df [options] [file]
Example: Show number of free blocks in local file systems
$ df -l

File Manipulation in Unix:


In order to enable all types of information to be stored as files, Unix supports a number of file
types:

#1) Ordinary Files


These files contain binary or text information and are stored in a directory on a disk drive.

#2) Directory Files



These are used to organize a group of files – the contained files may be of any type.
#3) Special Files
Special files, also known as device files, are used to represent physical devices such as a printer,
a disk drive, or a remote terminal.

#4) Named Pipes


Named pipes are used to allow one process to send information to another. A named pipe acts as a temporary holding place for data written by one process until it is read by another process.

#5) Symbolic Links


These are the files that reference some other file or directory with an absolute or relative path.

The ‘ls’ command is used to list filenames and other associated data. With the options ‘ls -il’, the command lists file details in long format along with each file’s inode number.

Unix File Access Permissions:

File Manipulation
#1) chmod: Change file access permissions.
• Description: This command is used to change the file permissions. The permissions are read, write and execute, for the owner, the group, and others.
• Syntax (symbolic mode): chmod [ugoa][[+-=][mode]] file
• The first optional parameter indicates who – this can be (u)ser, (g)roup, (o)thers or (a)ll. The second optional parameter indicates the opcode – this can be adding (+), removing (-) or assigning (=) a permission. The third optional parameter indicates the mode – this can be (r)ead, (w)rite, or e(x)ecute.
Example: Add write permission for user, group and others for file1.
$ chmod ugo+w file1

• Syntax (numeric mode): chmod [mode] file
• The mode is a combination of three digits – the first digit indicates the permission for the user, the second digit for the group, and the third digit for others. Each digit is computed by adding the associated permissions: read permission is ‘4’, write permission is ‘2’ and execute permission is ‘1’.
Example: Give read/write/execute permission to the user, read/execute permission to the group, and execute permission to others.
$ chmod 751 file1
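The numeric mode can be checked quickly in a scratch directory; this is a minimal sketch (file1 here is a throwaway file created just for the demonstration):

```shell
# Create a scratch file, apply the numeric mode, then verify it with ls -l.
touch file1
chmod 751 file1    # user: rwx (4+2+1=7), group: r-x (4+1=5), others: x (1)
ls -l file1        # the permissions column reads -rwxr-x--x
```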

#2) chown: Change ownership of the file.


• Description: Only the owner of the file has the rights to change the file ownership.
• Syntax: chown [owner] [file]
• Example: Change the owner of file1 to user2, assuming it is currently owned by the current user
• $ chown user2 file1
#3) chgrp: Change the group ownership of the file
• Description: Only the owner of the file has the rights to change the file’s group ownership
• Syntax: chgrp [group] [file]
• Example: Change the group of file1 to group2, assuming it is currently owned by the current user
• $ chgrp group2 file1

File Comparison Commands:

#1) cmp: This command is used to compare two files character by character.
Syntax: cmp [options] file1 file2
Example: Compare file1 and file2 character by character.
$ cmp file1 file2
#2) comm: This command is used to compare two sorted files.
Syntax: comm [options] file1 file2
One set of options allows selection of ‘columns’ to suppress.
-1: suppress lines unique to file1 (column 1)
-2: suppress lines unique to file2 (column 2)
-3: suppress lines common to file1 and file2 (column3)
Example: Only show column-3 that contains lines common between file1 and file2
$ comm -12 file1 file2
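As a quick sketch (file contents invented for illustration), comm on two sorted files looks like this:

```shell
# comm expects sorted input; -12 suppresses columns 1 and 2,
# leaving only column 3: the lines common to both files.
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n'  > file2
comm -12 file1 file2    # prints: banana, cherry
```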
#3) diff: This command is used to compare two files line by line.
Syntax: diff [options] file1 file2
Example: Compare file1 and file2 line by line
$ diff file1 file2

#4) dircmp: This command is used to compare the contents of directories.


Description: This command works on older versions of Unix. In order to compare the
directories in the newer versions of Unix, we can use diff -r
Syntax: dircmp [options] dir1 dir2
Example: Compare contents of dir1 and dir2
$ dircmp dir1 dir2
#5) uniq: This command is used to filter the repeated lines in a file which are adjacent to each
other
Syntax: uniq [options] [input [output]]
Example: Omit repeated lines which are adjacent to each other in file1 and print the
repeated lines only once
$ uniq file1

Change commands (diff output format)
< lines from file1

> lines from file2
The change commands are in the format [range][acd][range]. The range on the left may be a line number or a comma-separated range of line numbers referring to file1, and the range on the right similarly refers to file2. The character in the middle indicates the action, i.e. add, change or delete.

 ‘LaR’ – Add lines in range ‘R’ from file2 after line ‘L’ in file1.
 ‘FcT’ – Change lines in range ‘F’ of file1 to lines in range ‘T’ of file2.
 ‘RdL’ – Delete lines in range ‘R’ from file1 that would have appeared at line ‘L’ in file2
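The change commands above can be seen in a small worked example (file contents invented for illustration):

```shell
printf 'one\ntwo\nthree\n' > file1
printf 'one\nTWO\nthree\nfour\n' > file2
diff file1 file2
# 2c2    -- change line 2 of file1 to line 2 of file2
# < two
# ---
# > TWO
# 3a4    -- after line 3 of file1, add line 4 of file2
# > four
```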

Unix Special Characters or Metacharacters for File Manipulation

Unix Filename Wildcards – Metacharacters


#1) ‘*’ – any number of characters:
This wild-card selects all the files that match the expression by replacing the asterisk with any set of zero or more characters.
Example1: List all files whose names start with ‘file’. E.g. file, file1, file2, filenew.
$ ls file*
Example2: List all files whose names end with ‘file’. E.g. file, afile, bfile, newfile
$ ls *file

#2) ‘?’ – single character:


This wild-card selects all the files that match the expression by replacing the question-mark with any one character.
Example1: List all files that have one character after ‘file’. E.g. file1, file2, filea
$ ls file?
Example2: List all files that have two characters before ‘file’. E.g. dofile, tofile, a1file
$ ls ??file

#3) ‘[’ range ‘]’ – single character from a range:


This wild-card selects all the files that match the expression by replacing the marked range with any one character from the range.
Example1: List all files that have a single digit after ‘file’. E.g. file1, file2
$ ls file[0-9]
Example2: List all files that have any one letter before ‘file’. E.g. afile, zfile
$ ls [a-z]file

#4) ‘[’ range ‘]*’ – multiple characters from a range:

This wild-card selects all the files that match the expression by replacing the marked range with one character from the range followed by any number of characters.
Example1: List all files that have digits after ‘file’. E.g. file1, file2, file33
$ ls file[0-9]*
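The wildcards above can be tried in an empty scratch directory with a few invented filenames:

```shell
# In an otherwise empty directory:
touch file file1 file22 afile dofile
ls file*         # file  file1  file22
ls file?         # file1
ls ??file        # dofile
ls file[0-9]*    # file1  file22
```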

Unix Regular Expressions


Regular expressions can be used with text processing commands like vi, grep, sed, awk, and
others. Note that although some regular-expression patterns look similar to filename-matching
patterns – the two are unrelated.

#1) ‘^’ – anchor character for start of line:


If the caret is the first character in an expression, it anchors the remainder of the expression to the start of the line.
Example1: Match all lines that start with ‘A’. E.g. “A plane”
Pattern: ‘^A’
Example2: Match all lines that start with ‘hello’. E.g. “hello there”
$ grep “^hello” file1

#2) ‘$’ – anchor character for end of line:


If the dollar sign is the last character in an expression, it anchors the preceding expression to the end of the line.
Example1: Match all lines that end with ‘Z’. E.g. “The BUZZ”
Pattern: ‘Z$’
Example2: Match all lines that end with ‘done’. E.g. “well done”
$ grep “done$” file1

#3) ‘.’ – any single character:


The ‘.’ character matches any single character except the end-of-line.
Example1: Match all lines that consist of exactly one character. E.g. “a”
Pattern: ‘^.$’
Example2: Match all lines that contain ‘d’, then any character, then ‘ne’. E.g. “done”, “dine”
$ grep “d.ne” file1

#4) ‘[’ range ‘]’ – a range of characters:


This pattern matches any single character from the set specified between the square brackets.
Example1: Match all lines that consist of a single digit. E.g. “8”
Pattern: ‘^[0-9]$’
Example2: Match all lines that contain any of the letters ‘a’, ‘b’, ‘c’, ‘d’ or ‘e’
$ grep “[abcde]” file1
Example3: The same match written as a range
$ grep “[a-e]” file1

#5) ‘[^’ range ‘]’ – a range of characters to be excluded:


This pattern matches any single character except those specified between the square brackets.

Example1: Match all lines that contain a non-digit character. E.g. “hello”
Pattern: ‘[^0-9]’
Example2: Match all lines that contain a non-vowel character
$ grep “[^aeiou]” file1

#6) ‘*’ – ‘zero or more’ modifier:


This modifier matches zero or more instances of the preceding character or character-set.
Example1: Match all lines that contain ‘ha’ followed by zero or more instances of ‘p’ and then followed by ‘y’. E.g. “happpy” or “hay”
Pattern: ‘hap*y’
Example2: Match all lines that start with a digit, allowing leading spaces. E.g. “ 1” or “2.”
$ grep “^ *[0-9]” file1

#7) ‘?’ – ‘zero or one’ modifier:


This modifier matches zero or one instance of the preceding character or character-set. Note that ‘?’ is an extended-regular-expression operator, so use grep -E (or escape it as ‘\?’ in basic grep).
Example1: Match all lines that contain ‘hap’ followed by zero or one instance of ‘p’ and then followed by ‘y’. E.g. “hapy” or “happy”
Pattern: ‘happ?y’
Example2: Match all lines that start with a digit followed by zero or one ‘:’ characters. E.g. “1” or “2:”
$ grep -E “^[0-9]:?” file1
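The anchors and modifiers above can be tried against a small sample file (contents invented for illustration):

```shell
printf 'hello there\nwell done\nhay\nhapppy\n' > sample.txt
grep '^hello' sample.txt    # hello there      (anchored to start of line)
grep 'done$'  sample.txt    # well done        (anchored to end of line)
grep 'hap*y'  sample.txt    # hay and happpy   (zero or more p's)
```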

Commands using various options:

ls command
The ls command is used to get a list of files and directories. Options can be used to get
additional information about the files.

ls Syntax:
ls [options] [paths]
The ls command supports the following options:
• ls -a: list all files including hidden files (files whose names start with “.”).
• ls -A: list all files including hidden files, except for “.” and “..” – these refer to the entries for the current directory and for the parent directory.
• ls -R: list all files recursively, descending down the directory tree from the given path.
• ls -l: list the files in long format, i.e. with permissions, owner name, group name, size and modification time.
• ls -o: list the files in long format but without the group name.
• ls -g: list the files in long format but without the owner name.
• ls -i: list the files along with their index (inode) number.
• ls -s: list the files along with their size.

• ls -t: sort the list by time of modification, with the newest at the top.
• ls -S: sort the list by size, with the largest at the top.
• ls -r: reverse the sorting order.

grep command:
The grep filter searches a file for a particular pattern of characters and displays all lines that contain that pattern. The pattern searched for in the file is referred to as the regular expression (grep stands for “globally search for a regular expression and print”).

Syntax:
grep [options] pattern [files]
Options Description
-c : Print only a count of the lines that match the pattern.
-h : Display the matched lines, but do not display the filenames.
-i : Ignore case when matching.
-l : Display a list of matching filenames only.
-n : Display the matched lines and their line numbers.
-v : Print all the lines that do not match the pattern.
-e exp : Specify an expression with this option. Can be used multiple times.
-f file : Take patterns from a file, one per line.
-E : Treat the pattern as an extended regular expression (ERE).
-w : Match whole words only.
-o : Print only the matched parts of a matching line, with each such part on a separate output line.
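A few of the options above, demonstrated on an invented log file:

```shell
printf 'Error: disk full\nok\nerror: retry\n' > log.txt
grep -ci 'error' log.txt    # 2     (-c count, -i ignore case)
grep -n  'ok'    log.txt    # 2:ok  (-n prefixes the line number)
grep -v  'ok'    log.txt    # prints the two lines that do not contain "ok"
```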

df Command:
The ‘df’ command stands for “disk free”; it is used to get a full summary of available and used disk space on the file systems of a Linux system.

Using the ‘-h’ option (df -h) shows the file system disk space statistics in “human readable” format, i.e. it gives the sizes in kilobytes, megabytes and gigabytes.

1. Check File System Disk Space Usage


df : The “df” command displays the device name, total blocks, total disk space, used disk space, available disk space and mount point of each file system.

2. Display Information of all File System Disk Space Usage


df -a : The same as above, but it also includes dummy (pseudo) file systems in the output, along with all the file system disk usage.
11
Page
3. Show Disk Space Usage in Human Readable Format
df -h : The df command provides the ‘-h’ option to display sizes in human readable format (e.g., 1K, 2M, 3G).

4. Display Information of /home File System


df -hT /home : The -T option displays the type of each file system listed

5. Display Information of File System in 1024-byte Blocks


df -k : To display all file system information and usage in 1024-byte blocks, use the option ‘-k’ (equivalent to --block-size=1K)

6. Display Information of File System in MB


df -m : To display information of all file system usage in MB (megabytes), use the option ‘-m’.

7. Display Information of File System in GB


df -h : Sizes in GB (gigabytes) are shown by the human readable option ‘-h’ for sufficiently large file systems; there is no separate GB-only option.

8. Display File System Inodes


df -i : Using the ‘-i’ switch will display the number of used inodes and their percentage for each file system.

9. Display File System Type


df -T : To check the file system type of your system use the option ‘-T’. It will display the file system type along with the other information.

10. Include Certain File System Type


df -t ext3 : If you want to display only a certain file system type, use the ‘-t’ option. For example, the following command will only display ext3 file systems.

11. Exclude Certain File System Type


df -x ext3 : If you want to display the file systems that don’t belong to the ext3 type, use the ‘-x’ option.

12. Display Help for the df Command


df --help : Using the ‘--help’ switch will display a list of the available options that can be used with the df command.

Options for df command

-a, --all : Include all the dummy file systems (those that actually have zero block sizes) in the output.
-B, --block-size=SIZE : Scale sizes by SIZE before printing; e.g. -BM prints sizes in units of 1,048,576 bytes.
--total : Display a grand total for size.
-h, --human-readable : Print sizes in human readable format.
-H, --si : Same as -h, but uses powers of 1000 instead of 1024.
-i, --inodes : Display inode information instead of block usage.
-k : Like --block-size=1K.
-l, --local : Display the disk usage of local file systems only.
-P, --portability : Use the POSIX output format.
-t, --type=TYPE : Only show output for file systems of type TYPE.
-T, --print-type : Print each file system’s type in the output.
-x, --exclude-type=TYPE : Exclude all file systems of type TYPE from the output.
-v : Ignored; included for compatibility reasons.
--no-sync : The default setting, i.e. do not invoke sync before getting usage info.
--sync : Invoke a sync before getting usage info.
--help : Display a help message and exit.
--version : Display version information and exit.

du Command

The “du” (disk usage) command is a standard Unix/Linux command, used to check the disk usage of files and directories on a machine. The du command has many options that can be used to format the results in many ways, and it displays file and directory sizes recursively.

1. To find out the disk usage summary of the /home/niki directory tree and each of its subdirectories, enter the command:
du /home/niki

2. Using the “-h” option with the “du” command provides results in “Human Readable Format”, i.e. you can see sizes in bytes, kilobytes, megabytes, gigabytes, etc.
du -h /home/niki

3. To get the summary of the grand total disk usage size of a directory, use the option “-s” as follows.
du -sh /home/niki

4. Using the “-a” flag with the “du” command displays the disk usage of all files and directories.
du -a /home/niki
5. Using the “-a” flag along with “-h” displays the disk usage of all files and folders in human readable format. This output is easier to understand, as it shows sizes in kilobytes, megabytes, etc.
du -ah /home/niki

6. Find out the disk usage of a directory tree and its subtrees in kilobyte blocks, using the “-k” option (displays size in 1024-byte units).
du -k /home/niki

7. To get the summary of the disk usage of a directory tree and its subtrees in megabytes (MB) only, use the option “-mh” as follows. The “-m” flag counts the blocks in MB units and “-h” stands for human readable format.
du -mh /home/niki

8. The “-c” flag provides a grand total of used disk space on the last line. If your directory takes 674MB of space, then the last line of the output shows that grand total.
du -ch /home/niki

9. The below command calculates and displays the disk usage of all files and directories, but excludes files that match the given pattern. The command below excludes “.txt” files while calculating the total size of the directory; this way you can exclude any file format by using the “--exclude” flag. Note that the output contains no entries for txt files.
du -ah --exclude="*.txt" /home/niki

10. To display the disk usage based on modification time, use the “--time” flag as shown below.
du -ha --time /home/niki
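The options above can be tried on a scratch directory with a file of known size (the paths here are invented for the sketch):

```shell
mkdir -p demo
dd if=/dev/zero of=demo/data bs=1024 count=100 2>/dev/null   # ~100 KB file
du -sk demo     # one summary line: the tree's total size in 1K blocks
du -ah demo     # every file and directory, sizes in human readable form
```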

Options :

-0, --null : end each output line with NUL instead of a newline
-a, --all : write counts for all files, not just directories
--apparent-size : print apparent sizes rather than disk usage
-B, --block-size=SIZE : scale sizes to SIZE before printing
-c, --total : produce a grand total
-d, --max-depth=N : print the total for a directory only if it is N or fewer levels below the command line argument
-h, --human-readable : print sizes in human readable format
-S, --separate-dirs : for directories, do not include the size of subdirectories
-s, --summarize : display only a total for each argument
--time : show the time of last modification of any file or directory
--exclude=PATTERN : exclude files that match PATTERN
1. How to Find the Biggest Files and Directories
Run the following command to find the top biggest directories under the /home partition.
du -a /home | sort -n -r | head -n 5

2. Find Largest Directories


If you want to display the biggest directories in the current working directory, run:
du -a | sort -n -r | head -n 5

Let us break down the command and see what each parameter does.

1. du command: Estimate file space usage.


2. -a : Displays all files and folders.
3. sort command : Sort lines of text files.
4. -n : Compare according to string numerical value.
5. -r : Reverse the result of comparisons.
6. head : Output the first part of files.
7. -n : Print the first ‘n’ lines (in our case, we displayed the first 5 lines).

Some of you may want to display the above result in human readable format, i.e. to see the largest files in KB, MB, or GB.
du -hs * | sort -rh | head -5
The above command shows the top directories that are taking up the most disk space. If some directories are not important, you can simply delete a few sub-directories or delete the entire folder to free up some space.

3. To display the largest folders/files including the sub-directories, run:


du -Sh | sort -rh | head -5
The meaning of each option used in the above command:

1. du command: Estimate file space usage.


2. -h : Print sizes in human readable format (e.g., 10MB).
3. -S : Do not include size of subdirectories.
4. -s : Display only a total for each argument.
5. sort command : sort lines of text files.
6. -r : Reverse the result of comparisons.
7. -h : Compare human readable numbers (e.g., 2K, 1G).
8. head : Output the first part of files.
4. Find Out Top File Sizes Only

If you want to display the biggest file sizes only, then run the following command:
find -type f -exec du -Sh {} + | sort -rh | head -n 5
5. To find the largest files in a particular location, just include the path after the find command:
find /home/niki/Downloads/ -type f -exec du -Sh {} + | sort -rh | head -n 5
OR
find /home/niki/Downloads/ -type f -printf "%s %p\n" | sort -rn | head -n 5
The above command will display the largest file from /home/niki/Downloads directory.
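The pipeline can be verified on invented files of different sizes:

```shell
mkdir -p demo
dd if=/dev/zero of=demo/small  bs=1024 count=1   2>/dev/null
dd if=/dev/zero of=demo/medium bs=1024 count=10  2>/dev/null
dd if=/dev/zero of=demo/large  bs=1024 count=100 2>/dev/null
find demo -type f -exec du -Sh {} + | sort -rh | head -n 5
# demo/large is listed first, then demo/medium, then demo/small
```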

Find command
The find command is one of the most important and widely used commands in Linux systems. It is used to search and locate a list of files and directories based on conditions you specify. Find can be used in a variety of ways: you can find files by permissions, users, groups, file type, date, size and other criteria.

Basic Find Commands for Finding Files with Names


1. Find Files Using Name in Current Directory
Find all the files whose name is niki.txt in the current working directory.
# find . -name niki.txt

2. Find Files Under Home Directory


Find all the files under /home directory with name niki.txt.
# find /home -name niki.txt

3. Find Files Using Name and Ignoring Case


Find all the files whose name is niki.txt, matching both capital and small letters, in the /home directory.
# find /home -iname niki.txt

4. Find Directories Using Name


Find all directories whose name is Niki under the / directory.
# find / -type d -name Niki

5. Find PHP Files Using Name


Find all php files whose name is niki.php in the current working directory.
# find . -type f -name niki.php

6. Find all PHP Files in Directory


Find all php files in a directory.
# find . -type f -name "*.php"
Find Files Based on their Permissions
7. Find Files With 777 Permissions
Find all the files whose permissions are 777.
# find . -type f -perm 0777 -print

8. Find Files Without 777 Permissions


Find all the files without permission 777.
# find / -type f ! -perm 777

9. Find SGID Files with 644 Permissions


Find all the SGID bit files whose permissions are set to 644.
# find / -perm 2644

10. Find Sticky Bit Files with 551 Permissions


Find all the sticky-bit set files whose permissions are 551.
# find / -perm 1551

11. Find SUID Files


Find all SUID set files.
# find / -perm /u=s

12. Find SGID Files


Find all SGID set files.
# find / -perm /g=s

13. Find Read Only Files


Find all files whose owner has read permission.
# find / -perm /u=r

14. Find Executable Files


Find all Executable files.
# find / -perm /a=x

15. Find Files with 777 Permissions and Chmod to 644


Find all 777 permission files and use chmod command to set permissions to 644.
# find / -type f -perm 0777 -print -exec chmod 644 {} \;

16. Find Directories with 777 Permissions and Chmod to 755


Find all 777 permission directories and use chmod command to set permissions to 755.
# find / -type d -perm 777 -print -exec chmod 755 {} \;
17. Find and Remove a Single File
To find a single file called niki.txt and remove it:
# find . -type f -name "niki.txt" -exec rm -f {} \;

18. Find and Remove Multiple Files


To find and remove multiple files, such as .mp3 or .txt files, use:
# find . -type f -name "*.txt" -exec rm -f {} \;
OR
# find . -type f -name "*.mp3" -exec rm -f {} \;
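A safe way to see -exec rm at work is a scratch directory (filenames invented for the sketch):

```shell
mkdir -p demo
touch demo/a.txt demo/b.txt demo/keep.log
find demo -type f -name "*.txt" -exec rm -f {} \;
ls demo    # only keep.log remains
```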

19. Find all Empty Files


To find all empty files under a certain path.
# find /tmp -type f -empty

20. Find all Empty Directories


To find all empty directories under a certain path.
# find /tmp -type d -empty

21. Find all Hidden Files


To find all hidden files, use below command.
# find /tmp -type f -name ".*"

Search Files Based On Owners and Groups


22. Find Single File Based on User
To find a single file called niki.txt under the / (root) directory, owned by the user root.
# find / -user root -name niki.txt

23. Find all Files Based on User


To find all files that belong to the user niki under the /home directory.
# find /home -user niki

24. Find all Files Based on Group


To find all files that belong to the group developer under the /home directory.
# find /home -group developer

25. Find Particular Files of User


To find all .txt files of the user niki under the /home directory.
# find /home -user niki -iname "*.txt"
18
Page
Find Files and Directories Based on Date and Time
26. Find Last 50 Days Modified Files
To find all the files which were modified 50 days ago.
# find / -mtime 50

27. Find Last 50 Days Accessed Files


To find all the files which were accessed 50 days ago.
# find / -atime 50

28. Find Last 50-100 Days Modified Files


To find all the files which were modified more than 50 days ago and less than 100 days ago.
# find / -mtime +50 -mtime -100

29. Find Changed Files in Last 1 Hour


To find all the files which were changed in the last hour.
# find / -cmin -60

30. Find Modified Files in Last 1 Hour


To find all the files which were modified in the last hour.
# find / -mmin -60

31. Find Accessed Files in Last 1 Hour


To find all the files which were accessed in the last hour.
# find / -amin -60

Find Files and Directories Based on Size


32. Find 50MB Files
To find all 50MB files, use.
# find / -size 50M

33. Find Size between 50MB – 100MB


To find all the files which are greater than 50MB and less than 100MB.
# find / -size +50M -size -100M
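A sketch with files of known sizes (note that GNU find rounds sizes up to the given unit, so a 10 KB file does not match -size -1M but does match -size -100k):

```shell
mkdir -p demo
dd if=/dev/zero of=demo/tiny bs=1024 count=10      2>/dev/null   # 10 KB
dd if=/dev/zero of=demo/big  bs=1048576 count=2    2>/dev/null   # 2 MB
find demo -type f -size +1M      # matches demo/big only
find demo -type f -size -100k    # matches demo/tiny only
```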

34. Find and Delete 100MB Files


To find all 100MB files and delete them using one single command.
# find / -size +100M -exec rm -rf {} \;

35. Find Specific Files and Delete



Find all .mp3 files larger than 10MB and delete them using one single command.
# find / -type f -name "*.mp3" -size +10M -exec rm {} \;
36.How to Use ‘find’ Command to Search for Multiple Filenames (Extensions)

The simplest and general syntax of the find utility is as follows:

# find directory options [ expression ]

1. Assuming that you want to find all files in the current directory with .sh and .txt file
extensions, you can do this by running the command below:
# find . -type f \( -name "*.sh" -o -name "*.txt" \)

Interpretation of the command above:

1. . means the current directory


2. -type option is used to specify file type and here, we are searching for regular files as
represented by f
3. -name option is used to specify a search pattern in this case, the file extensions
4. -o means “OR”

It is recommended that you enclose the grouped expressions in brackets, escaped with the \ (backslash) character as in the command.

2. To find three filename patterns with .sh, .txt and .c extensions, issue the command below:

# find . -type f \( -name "*.sh" -o -name "*.txt" -o -name "*.c" \)

3. Here is another example where we search for files with .png, .jpg, .deb and .pdf extensions:

# find /home/aaronkilik/Documents/ -type f \( -name "*.png" -o -name "*.jpg" -o -name "*.deb" -o -name "*.pdf" \)
If you look closely at all the commands above, the little trick is the -o option in the find command; it enables you to add more filenames to the search array, provided you know the filenames or file extensions you are searching for.
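As a quick sanity check of this -o trick, the following sketch builds a small temporary directory and counts the files matching either extension (all file names below are made up for illustration):

```shell
# Build a throwaway directory with a few sample files.
dir=$(mktemp -d)
touch "$dir/a.sh" "$dir/b.txt" "$dir/c.txt" "$dir/d.log"

# Count files matching either extension; d.log is excluded by the pattern.
find "$dir" -type f \( -name "*.sh" -o -name "*.txt" \) | wc -l   # prints 3

rm -r "$dir"   # clean up
```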

How to Find Number of Files in a Directory and Subdirectories

Following are the options that we use with the find command here:

1. -type – specifies the file type to search for, in the case above, the f means find all regular
files.
2. -print – an action to print the absolute path of a file.
3. -l – the wc option that prints the total number of newlines, which equals the total number of absolute file paths output by the find command.


The general syntax of the find command:
# find . -type f -print | wc -l
$ sudo find . -type f -print | wc -l

Important: Use sudo command to read all files in the specified directory including those in the
subdirectories with superuser privileges, in order to avoid “Permission denied” errors.


You can see that in the first command above, not all files in the current working directory are read by the find command.

The following are extra examples to show total number of regular files in /var/log and /etc
directories respectively:

$ sudo find /var/log/ -type f -print | wc -l


$ sudo find /etc/ -type f -print | wc -l

1. How to Add a New User in Linux


To add/create a new user, use the command ‘useradd‘ or ‘adduser‘ followed by the ‘username’. The ‘username’ is the login name that is used by the user to log in to the system.
Only one user can be added at a time, and the username must be unique (different from any username that already exists on the system).
# useradd cshimona

When we add a new user in Linux with the ‘useradd‘ command, it is created in a locked state; to unlock the user account, we need to set a password for the account with the ‘passwd‘ command.
# passwd cshimona
Changing password for user cshimona.
New UNIX password:
Retype new UNIX password:

passwd: all authentication tokens updated successfully.


Once a new user is created, its entry is automatically added to the ‘/etc/passwd‘ file. The file is used to store user information, and the entry looks like:
cshimona:x:504:504:cshimona:/home/cshimona:/bin/bash

The above entry contains a set of seven colon-separated fields; each field has its own meaning.
Let’s see what these fields are:

1. Username: User login name used to log in to the system. It should be between 1 and 32 characters long.
2. Password: User password (or x character), stored in the /etc/shadow file in encrypted format.
3. User ID (UID): Every user must have a User ID (UID), a User Identification Number. By default, UID 0 is reserved for the root user, and UIDs ranging from 1-99 are reserved for other predefined accounts. Further UIDs ranging from 100-999 are reserved for system accounts and groups.
4. Group ID (GID): The primary Group ID (GID), a Group Identification Number, stored in the /etc/group file.
5. User Info: This field is optional and allows you to define extra information about the user, for example the user's full name. This field is read by the ‘finger’ command.
6. Home Directory: The absolute location of the user’s home directory.
7. Shell: The absolute location of the user’s shell, i.e. /bin/bash.
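As a small illustration, the seven fields can be pulled apart with awk; the sample entry below is the one shown above, not read from the live system:

```shell
# Split a sample /etc/passwd entry into its colon-separated fields with awk.
entry='cshimona:x:504:504:cshimona:/home/cshimona:/bin/bash'
echo "$entry" | awk -F: '{ printf "user=%s uid=%s gid=%s home=%s shell=%s\n", $1, $3, $4, $6, $7 }'
# prints: user=cshimona uid=504 gid=504 home=/home/cshimona shell=/bin/bash
```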

2. Create a User with Different Home Directory


By default ‘useradd‘ command creates a user’s home directory under /home directory with
username. Thus, for example, we’ve seen above the default home directory for the user
‘cshimona‘ is ‘/home/cshimona‘.
However, this action can be changed by using ‘-d‘ option along with the location of new home
directory (i.e. /data/projects). For example, the following command will create a user ‘niki‘
with a home directory ‘/data/projects‘.
# useradd -d /data/projects niki

You can see the user home directory and other user related information like user id, group id,
shell and comments.
# cat /etc/passwd | grep niki
niki:x:505:505::/data/projects:/bin/bash

3. Create a User with Specific User ID


In Linux, every user has their own UID (Unique Identification Number). By default, whenever we create a new user account in Linux, it assigns userids 500, 501, 502 and so on…
But we can create users with a custom userid using the ‘-u‘ option. For example, the following command will create a user ‘niki‘ with the custom userid ‘999‘.
# useradd -u 999 niki

Now, let’s verify that the user was created with the defined userid (999) using the following command.
# cat /etc/passwd | grep niki
niki:x:999:999::/home/niki:/bin/bash

NOTE: Make sure the value of the user ID is unique, different from that of any user already created on the system.
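One way to honor this note is to check /etc/passwd before picking a UID; the snippet below is a sketch using awk (UID 999 is just an example value):

```shell
# Report whether a given UID already appears in /etc/passwd.
uid=999
if awk -F: -v u="$uid" '$3 == u { found=1 } END { exit !found }' /etc/passwd; then
    echo "UID $uid is taken"
else
    echo "UID $uid is free"
fi
```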

4. Create a User with Specific Group ID


Similarly, every user has their own GID (Group Identification Number). We can create users with specific group IDs as well, with the -g option.
Here in this example, we will add a user ‘niki‘ with a specific UID and GID simultaneously with the help of the ‘-u‘ and ‘-g‘ options.
# useradd -u 1000 -g 500 niki

Now, see the assigned user id and group id in ‘/etc/passwd‘ file.


# cat /etc/passwd | grep niki
niki:x:1000:500::/home/niki:/bin/bash

5. Add a User to Multiple Groups


The ‘-G‘ option is used to add a user to additional groups. Each group name is separated by a
comma, with no intervening spaces.
Here in this example, we are adding a user ‘cshimona‘ into multiple groups: admins, webadmin and developers.



# useradd -G admins,webadmin,developers cshimona


Next, verify the multiple groups assigned to the user with the id command.
# id cshimona

uid=1001(cshimona) gid=1001(cshimona)
groups=1001(cshimona),500(admins),501(webadmin),502(developers)
context=root:system_r:unconfined_t:SystemLow-SystemHigh

6. Add a User without Home Directory


In some situations we don’t want to assign home directories to users, for security reasons. In such a situation, when a user logs in to a system that has just restarted, the user's home directory will be root (/). When such a user uses the su command, the login directory will be the previous user's home directory.

To create users without home directories, ‘-M‘ is used. For example, the following command will create a user ‘niki‘ without a home directory.
# useradd -M niki

Now, let’s verify that the user is created without home directory, using ls command.
# ls -l /home/niki
ls: cannot access /home/niki: No such file or directory

7. Create a User with Account Expiry Date


By default, when we add users with the ‘useradd‘ command, the user account never expires, i.e. its expiry date is set to 0 (meaning never expires).
However, we can set an expiry date using the ‘-e‘ option, which takes a date in YYYY-MM-DD format. This is helpful for creating temporary accounts for a specific period of time.
Here in this example, we create a user ‘niki‘ with an account expiry date of 27th March 2014, in YYYY-MM-DD format.
# useradd -e 2014-03-27 niki

Next, verify the age of the account and password with the ‘chage‘ command for user ‘niki‘ after setting the account expiry date.

# chage -l niki
Last password change : Mar 27, 2014
Password expires : never
Password inactive : never
Account expires : Mar 27, 2014
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

8. Create a User with Password Expiry Date


The ‘-f‘ argument is used to define the number of days after a password expires until the account is disabled. A value of 0 disables the account as soon as the password has expired. By default, the value is -1, meaning this feature is disabled.
Here in this example, we will set an account expiry date and a 45-day password inactivity period on a user ‘cshimona’ using the ‘-e‘ and ‘-f‘ options.
# useradd -e 2014-04-27 -f 45 cshimona
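Rather than hard-coding the date, the expiry can be computed relative to today; this sketch assumes GNU date and only prints the command it would run:

```shell
# Compute a date 45 days from now in the YYYY-MM-DD format that useradd -e expects.
expiry=$(date -d '+45 days' +%F)
echo "would run: useradd -e $expiry -f 45 cshimona"
```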

9. Add a User with Custom Comments


The ‘-c‘ option allows you to add custom comments, such as the user's full name, phone number, etc., to the /etc/passwd file. Enclose the comment in quotes if it contains spaces.
For example, the following command will add a user ‘niki‘ and would insert that user’s full
name, Nikila Neethi, into the comment field.
# useradd -c "Nikila Neethi" niki

You can see your comments in ‘/etc/passwd‘ file in comments section.


# tail -1 /etc/passwd

niki:x:1006:1008:Nikila Neethi:/home/niki:/bin/sh

10. Change User Login Shell:


Sometimes we add users that have nothing to do with a login shell, or we need to assign different shells to our users. We can assign a different login shell to each user with the ‘-s‘ option.
Here in this example, we will add a user ‘cshimona‘ without a login shell, i.e. with the ‘/sbin/nologin‘ shell.
# useradd -s /sbin/nologin cshimona

You can check assigned shell to the user in ‘/etc/passwd‘ file.


# tail -1 /etc/passwd
cshimona:x:1002:1002::/home/cshimona:/sbin/nologin

Advance Usage of useradd Commands

11. Add a User with Specific Home Directory, Default Shell and Custom Comment
The following command will create a user ‘niki‘ with home directory ‘/var/www/cshimona‘,
default shell /bin/bash and adds extra information about user.
# useradd -m -d /var/www/niki -s /bin/bash -c "CShimona Owner" -U niki

In the above command, the ‘-m -d‘ options create a user with the specified home directory and the ‘-s‘ option sets the user’s default shell, i.e. /bin/bash. The ‘-c‘ option adds extra information about the user, and the ‘-U‘ argument creates a group with the same name as the user.

12. Add a User with Home Directory, Custom Shell, Custom Comment and UID/GID
The command is very similar to the one above, but here we define the shell as ‘/bin/zsh‘ and a custom UID and GID for the user ‘niki‘, where ‘-u‘ defines the new user's UID (i.e. 1000) and ‘-g‘ defines the GID (i.e. 1000).
# useradd -m -d /var/www/niki -s /bin/zsh -c "CShimona Technical Writer" -u 1000 -g 1000
niki

13. Add a User with Home Directory, No Shell, Custom Comment and User ID
The following command is very similar to the above two commands; the only difference here is that we disable the login shell for a user called ‘niki‘ with a custom User ID (i.e. 1019).

Here the ‘-s‘ option would normally set the default shell /bin/bash, but in this case we set the login shell to ‘/usr/sbin/nologin‘. That means the user ‘niki‘ will not be able to log in to the system.

# useradd -m -d /var/www/niki -s /usr/sbin/nologin -c "CShimona Sr. Technical Writer" -u 1019 niki
14. Add a User with Home Directory, Shell, Custom Skeleton/Comment and User ID
The only change in this command is that we used the ‘-k‘ option to set a custom skeleton directory, i.e. /etc/custom.skell instead of the default /etc/skel. We also used the ‘-s‘ option to define a different shell, i.e. /bin/tcsh, for user ‘niki‘.
# useradd -m -d /var/www/niki -k /etc/custom.skell -s /bin/tcsh -c "No Active Member of
CShimona" -u 1027 niki

15. Add a User without Home Directory, No Shell, No Group and Custom Comment
The following command is quite different from the other commands explained above. Here we used the ‘-M‘ option to create the user without a home directory, and the ‘-N‘ argument tells the system to create only the username (without a user-private group). The ‘-r‘ argument creates a system user.
# useradd -M -N -r -s /bin/false -c "Disabled CShimona Member" clayton

For more information and options about useradd, run ‘useradd‘ command on the terminal to
see available options.

After creating user accounts, in some scenarios we need to change the attributes of an existing user, such as the user's home directory, login name, login shell, password expiry date, etc.; in such cases the ‘usermod’ command is used.

When we execute ‘usermod‘ command in terminal, the following files are used and affected.

1. /etc/passwd – User account information.


2. /etc/shadow – Secure account information.
3. /etc/group – Group account information.
4. /etc/gshadow – Secure group account information.
5. /etc/login.defs – Shadow password suite configuration.

Basic syntax of command is:


usermod [options] username
Requirements
*We must have existing user accounts to execute usermod command.
*Only superuser (root) is allowed to execute usermod command.

Options of Usermod

1. -c = Add a comment field for the user account.
2. -d = Modify the home directory of an existing user account.
3. -e = Make the account expire on a specific date.
4. -g = Change the primary group for a user.
5. -G = Add supplementary groups.
6. -a = Append the user to the supplementary group(s) given with -G.
7. -l = Change the user's login name.
8. -L = Lock the user account. This locks the password, so we can’t use the account.
9. -m = Move the contents of the home directory from the existing home directory to the new one.
10. -p = Use an un-encrypted password for the new password. (NOT secure).
11. -s = Set a new login shell for the user account.
12. -u = Assign a new UID to the user account.
13. -U = Unlock the user account. This will remove the password lock and allow us to use the user account.

1. Adding Information to User Account


The ‘-c‘ option is used to set a brief comment (information) about the user account. For
example, let’s add information on ‘cshimona‘ user, using the following command.
# usermod -c "This is CShimona" cshimona

After adding information on user, the same comment can be viewed in /etc/passwd file.
# grep -E 'cshimona' /etc/passwd

2. Change User Home Directory


In the above step we can see that our home directory is under /home/cshimona. If we need to change it to some other directory, we can change it using the -d option with the usermod command.
For example, I want to change the home directory to /var/www/, but before changing it, let’s check the current home directory of the user, using the following command.
# grep -E --color '/home/cshimona' /etc/passwd
# usermod -d /var/www/ cshimona
# grep -E --color '/var/www/' /etc/passwd

3. Set User Account Expiry Date


The option ‘-e‘ is used to set an expiry date on a user account, in YYYY-MM-DD date format.
Before setting up an expiry date for a user, let’s first check the current account expiry status using the ‘chage‘ (change user password expiry information) command.
# chage -l cshimona

Last password change : Nov 02, 2014


Password expires : never
Password inactive : never
Account expires : Dec 01, 2014
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

The expiry date of the ‘cshimona‘ user is Dec 1, 2014; let’s change it to Nov 1, 2014 using the ‘usermod -e‘ option and confirm the expiry date with the ‘chage‘ command.

# usermod -e 2014-11-01 cshimona


# chage -l cshimona

Last password change : Nov 02, 2014


Password expires : never
Password inactive : never
Account expires : Nov 01, 2014
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

4. Change User Primary Group


To set or change a user's primary group, we use the ‘-g‘ option with the usermod command. Before changing the primary group, first make sure to check the current group of the user cshimona_test.

# id cshimona_test
uid=501(cshimona_test) gid=502(cshimona_test) groups=502(cshimona_test)

Now, set the niki group as the primary group of user cshimona_test and confirm the changes.
# usermod -g niki cshimona_test
# id cshimona_test
uid=501(cshimona_test) gid=502(niki) groups=502(cshimona_test)

5. Adding Group to an Existing User


If you want to add the ‘cshimona‘ user to a new group called ‘cshimona_test0‘, you can use the ‘-G‘ option with the usermod command as shown below.
# usermod -G cshimona_test0 cshimona
# id cshimona

Note: Be careful, adding new groups to an existing user with the ‘-G’ option alone will remove all existing supplementary groups the user belongs to. So, always add the ‘-a‘ (append) option with ‘-G‘ to append new groups.

6. Adding Supplementary and Primary Group to User


If you need to add a user to a supplementary group, you can use the options ‘-a‘ and ‘-G‘. For example, here we are going to add the user account cshimona_test0 to the wheel group.
# usermod -a -G wheel cshimona_test0
# id cshimona_test0

So, user cshimona_test0 remains in its primary group and is also in the secondary group (wheel). This allows the normal user account to execute root-privileged commands in a Linux box, e.g.:

# sudo service httpd restart
7. Change User Login Name
To change any existing user's login name, we can use the ‘-l‘ (new login) option. In the example below, we change the login name cshimona to cshimona_admin, so the username cshimona is renamed to cshimona_admin.
# usermod -l cshimona_admin cshimona

8. Lock User Account


To lock any system user account, we can use the ‘-L‘ (lock) option. After the account is locked, we can’t log in using the password, and you will see a ! added before the encrypted password in the /etc/shadow file, meaning the password is disabled.
# usermod -L niki

Check for the locked account.


# grep -E --color 'niki' /etc/shadow
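What -L does to the shadow entry can be sketched with sed on a sample line (the hash below is made up, not a real entry):

```shell
# A hypothetical /etc/shadow entry before locking:
entry='niki:$6$abc123$hashedpw:18300:0:99999:7:::'
# usermod -L prepends '!' to the password field; emulate that edit with sed:
echo "$entry" | sed 's/^\([^:]*\):/\1:!/'
# prints: niki:!$6$abc123$hashedpw:18300:0:99999:7:::
```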

9. Unlock User Account


The ‘-U‘ option is used to unlock any locked user; this will remove the ! before the encrypted password.
# grep -E --color 'niki' /etc/shadow
# usermod -U niki

Verify the user after unlock.


# grep -E --color 'niki' /etc/shadow

10. Move User Home Directory to New location


Let’s say you have a user account ‘niki‘ with home directory ‘/home/niki‘, and you want to move it to a new location, say ‘/var/niki‘. You can use the options ‘-d‘ and ‘-m‘ to move the existing user files from the current home directory to a new home directory.

 Check for the account and its current home directory.



# grep -E --color 'niki' /etc/passwd


 Then list the files which are owned by user niki.
# ls -l /home/niki/

 Now we have to move the home directory from /home/niki to /var/niki.


# usermod -d /var/niki/ -m niki

 Next, verify the directory change.


# grep -E --color 'niki' /etc/passwd

Check for the files under ‘/home/niki‘. Since we moved the files using the -m option, there will be no files there; the niki user's files will now be under /var/niki.
# ls -l /home/niki/
# ls -l /var/niki/

11. Create Un-encrypted Password for User


To create an un-encrypted password, we use the ‘-p‘ (password) option. For demonstration purposes, I’m setting a new password, say ‘shinik’, on the user niki.
# usermod -p shinik niki

After setting the password, now check the shadow file to see whether it's in encrypted or un-encrypted format.

# grep -E --color 'niki' /etc/shadow



Note: The password is stored in clear text, so this option is not recommended, because anyone who can read the shadow file can see the password.

12. Change User Shell


The user's login shell can be defined during user creation with the useradd command, or changed later with the ‘usermod‘ command using the ‘-s‘ (shell) option. For example, the user ‘niki‘ has the /bin/bash shell by default; now I want to change it to /bin/sh.
# grep -E --color 'niki' /etc/passwd
# usermod -s /bin/sh niki

After changing user shell, verify the user shell using the following command.
# grep -E --color 'niki' /etc/passwd

13. Change User ID (UID)


In the example below, you can see that the user account ‘niki‘ holds a UID of 502; now I want to change it to 888. The new UID must not be in use by another account.
# grep -E --color 'niki' /etc/passwd
OR
# id niki
Now, let’s change the UID for user niki using ‘-u‘ (uid) option and verify the changes.
# usermod -u 888 niki
# id niki

14. Modifying User Account with Multiple Options


Here we have a user niki, and now I want to modify the home directory, shell, expiry date, comment, UID and group at once using one single command with all the options discussed above.

The user niki has the default home directory /home/niki. Now I want to change it to /var/www/html, assign /bin/bash as the shell, set the expiry date to September 3rd 2019, add the new comment This is Niki, change the UID to 555, and make the user a member of the apple group.

 Let’s see how to modify the niki account using multiple options now.
# usermod -d /var/www/html/ -s /bin/bash -e 2019-09-03 -c "This is Niki" -u 555 -aG apple niki

 Then check for the UID & home directory changes.



# grep -E --color 'niki' /etc/passwd


 Check the account expiry.
# chage -l niki

 Check which groups niki is a member of.
# grep -E --color 'niki' /etc/group

15. Change UID and GID of a User


We can change the UID and GID of an existing user. To change to a new GID, we need an existing group. Here there is already a group named orange with a GID of 777.
Now the niki user account is to be assigned a UID of 666 and the GID of orange (777).

 Check for the current UID and GID before modifying.


# id niki

 Modify the UID and GID.


# usermod -u 666 -g 777 niki

 Check for the changes.


# id niki

Commands in alphabetical order:

Find the actual description of each Linux command in their manual page:
$ man command-name

1.adduser/addgroup Command:
The adduser and addgroup commands are used to add a user and group to the system
respectively according to the default configuration specified in /etc/adduser.conf file.

$ sudo adduser niki


2.agetty Command:
agetty is a program which manages physical or virtual terminals and is invoked by init. Once it
detects a connection, it opens a tty port, asks for a user’s login name and calls up the /bin/login
command. Agetty is a substitute for Linux getty:
$ agetty -L 9600 ttyS1 vt100

3.alias Command
alias is a useful shell built-in command for creating aliases (shortcut) to a Linux command on a
system. It is helpful for creating new/custom commands from existing Shell/Linux commands
(including options):
$ alias home='cd /home/niki/public_html'

The above command will create an alias called home for the /home/niki/public_html directory, so whenever you type home at the terminal prompt, it will put you in the /home/niki/public_html directory.

4.anacron Command
anacron is a Linux facility used to run commands periodically with a frequency defined in days,
weeks and months.

Unlike its sister cron; it assumes that a system will not run continuously, therefore if a
scheduled job is due when the system is off, it’s run once the machine is powered on.

anacron and cron:

To schedule a task on given or later time, you can use the ‘at’ or ‘batch’ commands and to set
up commands to run repeatedly, you can employ the cron and anacron facilities.

Cron – is a daemon used to run scheduled tasks such as system backups, updates and many
more. It is suitable for running scheduled tasks on machines that will run continuously 24X7
such as servers.

The commands/tasks are scripted into cron jobs which are scheduled in crontab files. The
default system crontab file is /etc/crontab, but each user can also create their own crontab file
that can launch commands at times that the user defines.

To create a personal crontab file, simply type the following:


$ crontab -e
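Each crontab line has five time fields (minute, hour, day of month, month, day of week) followed by the command. As a sketch, a hypothetical entry can be split into its fields in the shell (the script path below is made up):

```shell
set -f                                   # disable globbing so the '*' fields stay literal
# A hypothetical crontab entry: run a backup script at 00:30 every day.
line='30 0 * * * /home/niki/bin/backup.sh'
set -- $line                             # word-split into the five time fields + command
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5 command=$6"
```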

How to Setup Anacron in Linux


Anacron is used to run commands periodically with a frequency defined in days. It works a little differently from cron; it assumes that a machine will not be powered on all the time.

It is appropriate for running daily, weekly, and monthly scheduled jobs normally run by cron, on machines that will not run 24-7, such as laptop and desktop machines.

Suppose you have a scheduled task (such as a backup script) to be run using cron every midnight, possibly while you're asleep, and your desktop/laptop is off at that time. Your backup script will not be executed.

However, if you use anacron, you can be assured that the next time you power on the
desktop/laptop again, the backup script will be executed.

How Anacron Works in Linux


anacron jobs are listed in /etc/anacrontab and jobs can be scheduled using the format below
(comments inside anacrontab file must start with #).

period delay job-identifier command


From the above format:

 period – this is the frequency of job execution specified in days or as @daily, @weekly, or @monthly for once per day, week, or month. You can as well use numbers: 1 – daily, 7 – weekly, 30 – monthly and N – number of days.
 delay – it’s the number of minutes to wait before executing a job.
 job-id – it’s the distinctive name for the job written in log files.
 command – it’s the command or shell script to be executed.

To view example timestamp files, type:

$ ls -l /var/spool/anacron/

total 12
-rw------- 1 root root 9 Jun 1 10:25 cron.daily
-rw------- 1 root root 9 May 27 11:01 cron.monthly
-rw------- 1 root root 9 May 30 10:28 cron.weekly

This is what practically happens:

 Anacron will check if a job has been executed within the specified period in the period
field. If not, it executes the command specified in the command field after waiting the
number of minutes specified in the delay field.
 Once the job has been executed, it records the date in a timestamp file in the
/var/spool/anacron directory with the name specified in the job-id (timestamp file
name) field.

Let’s now look at an example. This will run the /home/aaronkilik/bin/backup.sh script every day:
@daily 10 example.daily /bin/bash /home/aaronkilik/bin/backup.sh

If the machine is off when the backup.sh job is expected to run, anacron will run it 10 minutes after the machine is powered on, without waiting for the next scheduled day.

There are two important variables in the anacrontab file that you should understand:

 START_HOURS_RANGE – this sets time range in which jobs will be started (i.e execute
jobs during the following hours only).
 RANDOM_DELAY – this defines the maximum random delay added to the user defined
delay of a job (by default it’s 45).
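A minimal anacrontab fragment using these variables might look like the following; the job paths are hypothetical, and the sketch just prints the fragment rather than installing it:

```shell
# Print a hypothetical /etc/anacrontab fragment (not written to the system).
cat <<'EOF'
SHELL=/bin/sh
START_HOURS_RANGE=3-22    # only start jobs between 03:00 and 22:00
RANDOM_DELAY=45           # add up to 45 random minutes to each job's delay
# period  delay  job-identifier  command
1         5      backup.daily    /bin/bash /home/aaronkilik/bin/backup.sh
7         10     logs.weekly     /usr/sbin/logrotate /etc/logrotate.conf
EOF
```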

The following is a comparison of cron and anacron to help you understand when to use either of them.

 Cron is a daemon; anacron is not.
 Cron is appropriate for server machines; anacron is appropriate for desktop/laptop machines.
 Cron enables you to run scheduled jobs as often as every minute; anacron only runs scheduled jobs on at most a daily basis.
 Cron doesn’t execute a scheduled job when the machine is off; with anacron, if the machine is off when a scheduled job is due, the job is executed when the machine is powered on the next time.
 Cron can be used by both normal users and root; anacron can only be used by root, unless enabled for normal users with specific configs.

The major difference between cron and anacron is that cron works effectively on machines that will run continuously, while anacron is intended for machines that will be powered off in a day or week.

5.apropos Command
apropos command is used to search and display a short man page description of a
command/program as follows.
$ apropos adduser

6.apt Command

apt tool is a relatively new higher-level package manager for Debian/Ubuntu systems:

$ sudo apt update


APT (Advanced Package Tool) is a command-line tool that is used for dealing with packages on Ubuntu-based Linux systems. It presents a command-line interface to package management on your system.

1. Installing a Package
You can install a package by specifying a single package name, or install many packages at once by listing all their names.
$ sudo apt install glances

2. Find Location of Installed Package


The following command will help you to list all the files that are contained in a package called glances (an advanced Linux monitoring tool). apt itself has no subcommand for this; use dpkg instead:
$ dpkg -L glances

3. Check All Dependencies of a Package


This will help you to display raw information about dependencies of a particular package that
you specify.
$ sudo apt depends glances

4. Search for a Package


The search option searches for the given package name and shows all the matching packages.
$ sudo apt search apache2

5. View Information About Package


This will help you display information about a package or packages; run the command below, specifying all the packages that you want to display information about.
$ sudo apt show firefox

6. Verify a Package for any Broken Dependencies


Sometimes during package installation, you may get errors concerning broken package dependencies. To check that you do not have these problems, run the command below; note that ‘check’ verifies the dependency state of the system as a whole rather than of a single package.
$ sudo apt check

7. List Recommended Missing Packages of Given Package

apt has no dedicated subcommand for this, but apt-cache can show the packages a given package recommends:
$ apt-cache depends apache2 | grep Recommends

8. Check Installed Package Version


The ‘policy’ subcommand will show you the installed and candidate package versions.
$ apt policy firefox

9. Update System Packages


This will download the package lists from the different repositories configured on your system and refresh the package index, so apt knows when there are new versions of packages and their dependencies.
$ sudo apt update

10. Upgrade System


This helps you to install new versions of all the packages on your system.
$ sudo apt upgrade

11. Remove Unused Packages


When you install a new package on your system, its dependencies are also installed, and they use some system libraries along with other packages. Then, after removing that particular package, its dependencies will remain on the system; therefore, to remove them, use autoremove as follows:
$ sudo apt autoremove

12. Clean Old Repository of Downloaded Packages


The ‘clean’ or ‘autoclean’ options remove the local repository of downloaded package files (‘autoclean’ removes only those that can no longer be downloaded).
$ sudo apt autoclean
$ sudo apt clean

13. Remove Packages with its Configuration Files



When you run apt with remove, it only removes the package files, but configuration files remain on the system. Therefore, to remove a package and its configuration files, you will have to use purge.
$ sudo apt purge glances

14. Install .Deb Package


To install a local .deb file, pass the path to the file to apt install (apt has no ‘deb’ subcommand; the ./ prefix tells apt it is a file rather than a package name):
$ sudo apt install ./atom-amd64.deb

15. Find Help While Using APT


The following command will list all the options, with their descriptions, on how to use APT on your system.
$ apt help

7.apt-get Command
apt-get is a powerful and free front-end package manager for Debian/Ubuntu systems. It is used to install new software packages, remove available software packages, and upgrade existing software packages, as well as upgrade the entire operating system.
$ sudo apt-get update

What is apt-get?

The apt-get utility is a powerful and free package-management command-line program that works with Ubuntu’s APT (Advanced Packaging Tool) library to perform installation of new software packages, removal of existing software packages, and upgrading of existing software packages, and is even used to upgrade the entire operating system.

What is apt-cache?

The apt-cache command line tool is used for searching the apt software package cache. In simple
words, this tool is used to search software packages, collect information about packages, and
find which packages are available for installation on Debian or Ubuntu based systems.
APT-CACHE

1. How Do I List All Available Packages?


To list all the available packages, type the following command.
$ apt-cache pkgnames

2. How Do I Find Out Package Name and Description of Software?


To find out a package’s name and description before installing it, use the ‘search‘ flag.
Using “search” with apt-cache will display a list of matched packages with short descriptions.
Let’s say you would like to find out the description of the package ‘vsftpd‘; the command would be:
$ apt-cache search vsftpd

To find and list down all the packages starting with ‘vsftpd‘, you could use the following
command.
$ apt-cache pkgnames vsftpd

3. How Do I Check Package Information?


For example, if you would like to check detailed information about a package along with its short
description (version number, checksums, size, installed size, category, etc.), use the ‘show‘ sub command as
shown below.
$ apt-cache show netcat

4. How Do I Check Dependencies for Specific Packages?


Use the ‘showpkg‘ sub command to check the dependencies of a particular software package
and whether those dependency packages are installed or not. For example, use the ‘showpkg‘
command along with the package name.
$ apt-cache showpkg vsftpd

5. How Do I Check Statistics of the Cache?


The ‘stats‘ sub command will display overall statistics about the cache. For example, in the
output of the following command, ‘Total package names’ is the number of package names found in
the cache.
$ apt-cache stats
APT-GET

6. How to Update System Packages


The ‘update‘ command is used to resynchronize the package index files from their sources, as
specified in the /etc/apt/sources.list file. The update command fetches the package lists from their
locations and updates the index to the newest available versions.
$ sudo apt-get update
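The sources that ‘update‘ reads are listed one per line in /etc/apt/sources.list. A typical entry looks like the following (the release codename, here focal, and the components vary by system):

```
# /etc/apt/sources.list -- format: type URI suite component...
deb http://archive.ubuntu.com/ubuntu focal main universe
deb-src http://archive.ubuntu.com/ubuntu focal main
```

The deb lines provide binary packages, while deb-src lines provide the corresponding source packages used by apt-get source.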

7. How to Upgrade Software Packages


The ‘upgrade‘ command is used to upgrade all the currently installed software packages on the
system. Under no circumstances are currently installed packages removed, nor are packages
that are not already installed retrieved and installed to satisfy upgrade dependencies.
$ sudo apt-get upgrade

However, if you want to upgrade regardless of whether software packages will be added or
removed to fulfill dependencies, use the ‘dist-upgrade‘ sub command.
$ sudo apt-get dist-upgrade

8. How Do I Install or Upgrade Specific Packages?


The ‘install‘ sub command is followed by one or more package names that you wish to install or
upgrade.
$ sudo apt-get install netcat

9. How Can I Install Multiple Packages?


You can add more than one package name along with the command in order to install multiple
packages at the same time. For example, the following command will install packages ‘nethogs‘
and ‘goaccess‘.
$ sudo apt-get install nethogs goaccess

10. How to Install Several Packages using Wildcard


With the help of a wildcard you can add several packages with one string. For example,
we use the * wildcard to install all packages whose names contain the string ‘name‘ (here ‘name’
stands for any part of a package name).

$ sudo apt-get install '*name*'
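The quotes around '*name*' matter: an unquoted glob is expanded by the shell against file names in the current directory before apt-get ever sees it. A quick pure-shell sketch (no packages involved; the scratch directory and file name are invented for the demo):

```shell
# Demonstrate shell glob expansion vs a quoted pattern.
# Work in a scratch directory so the glob has something to match.
tmp=$(mktemp -d)
cd "$tmp"
touch myname.txt

set -- *name*          # unquoted: the shell expands the glob
echo "unquoted: $1"    # unquoted: myname.txt

set -- '*name*'        # quoted: passed through literally
echo "quoted: $1"      # quoted: *name*
```

This is why the apt-get example quotes the pattern: the package manager, not the shell, should interpret the wildcard.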


11. How to install Packages without Upgrading
Using the ‘--no-upgrade‘ option will prevent already installed packages from being upgraded.
$ sudo apt-get install packageName --no-upgrade

12. How to Upgrade Only Specific Packages


The ‘--only-upgrade‘ option does not install new packages; it only upgrades packages that are
already installed.
$ sudo apt-get install packageName --only-upgrade

13. How Do I Install Specific Package Version?


Let’s say you wish to install only a specific version of a package; simply append ‘=‘ and the
desired version to the package name.
$ sudo apt-get install vsftpd=2.3.5-3ubuntu1
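When picking a version in a script, it can help to select the newest from a list of version strings. GNU sort -V understands version-style ordering; note that this approximates, but does not exactly replicate, dpkg's own comparison algorithm, and the version strings below are invented for illustration:

```shell
# Pick the highest of several version strings with sort -V
# (version-aware sort: 2.10 sorts after 2.3, unlike plain text sort).
versions="2.3.5-3ubuntu1
2.3.2-3
2.10.0-1"

latest=$(printf '%s\n' "$versions" | sort -V | tail -n 1)
echo "latest: $latest"    # latest: 2.10.0-1
```

A plain lexical sort would wrongly rank 2.3.5 above 2.10.0, which is why the version-aware flag matters here.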

14. How Do I Remove Packages Without Configuration


To uninstall software packages without removing their configuration files (so you can later re-use the
same configuration), use the ‘remove‘ command as shown.
$ sudo apt-get remove vsftpd

15. How Do I Completely Remove Packages


To remove software packages including their configuration files, use the ‘purge‘ sub command
as shown below.
$ sudo apt-get purge vsftpd

Alternatively, you can combine both the commands together as shown below.
$ sudo apt-get remove --purge vsftpd

16. How Can I Clean Up Disk Space?


The ‘clean‘ command is used to free up disk space by deleting retrieved (downloaded) .deb
package files from the local cache.

$ sudo apt-get clean



17. How Do I Download Only Source Code of Package


To download only the source code of a particular package, use the ‘--download-only‘ option with
the ‘source‘ sub command and the package name, as shown.
$ sudo apt-get --download-only source vsftpd

18. How Can I Download and Unpack a Package


To download and unpack source code of a package to a specific directory, type the following
command.
$ sudo apt-get source vsftpd

19. How Can I Download, Unpack and Compile a Package


You can also download, unpack and compile the source code at the same time, using the
‘--compile‘ option as shown below.
$ sudo apt-get --compile source goaccess

20. How Do I Download a Package Without Installing


Using ‘download‘ option, you can download any given package without installing it. For
example, the following command will only download ‘nethogs‘ package to current working
directory.
$ sudo apt-get download nethogs

21. How Do I Check Change Log of Package?


The ‘changelog‘ flag downloads a package’s change log and shows the package version that is
installed.
$ sudo apt-get changelog vsftpd

22. How Do I Check Broken Dependencies?


The ‘check‘ command is a diagnostic tool. It is used to update the package cache and check for
broken dependencies.
$ sudo apt-get check

23. How Do I Search and Build Dependencies?


The ‘build-dep‘ command searches the repositories configured on the system and installs the build
dependencies for a package. If the package does not exist in the repositories, it will return an
error code.
$ sudo apt-get build-dep netcat

24. How Can I Auto-clean the Apt-Get Cache?


The ‘autoclean‘ command deletes old .deb files from /var/cache/apt/archives to free up a
significant amount of disk space.
$ sudo apt-get autoclean

25. How Can I Auto-remove Installed Packages?


The ‘autoremove‘ sub command is used to automatically remove packages that were installed
to satisfy dependencies of other packages but are now no longer required. For
example, the following command will remove an installed package along with its dependencies.
$ sudo apt-get autoremove vsftpd

There are still more options available; you can check them out using ‘man apt-get‘ or ‘man apt-cache‘
from the terminal.

What is YUM?
YUM (Yellowdog Updater Modified) is an open source command-line as well as graphical
package management tool for RPM (RedHat Package Manager) based Linux systems. It allows
users and system administrators to easily install, update, remove or search software packages on
a system. It was developed and released by Seth Vidal under the GPL (General Public License) as
open source, meaning anyone is allowed to download and access the code to fix bugs and
develop customized packages. YUM uses numerous third party repositories to install packages
automatically, resolving their dependency issues.

1. Install a Package with YUM


To install a package called firefox, just run the command below; it will automatically find and
install all required dependencies for Firefox.

# yum install firefox


The above command will ask for confirmation before installing the package on your system. If you
want to install packages automatically without any confirmation prompt, use the -y option as shown
in the example below.
# yum -y install firefox

2. Removing a Package with YUM


To remove a package completely along with all its dependencies, just run the following command
as shown below.
# yum remove firefox

In the same way, the above command will ask for confirmation before removing the package. To disable
the confirmation prompt, just add the -y option as shown below.
# yum -y remove firefox

3. Updating a Package using YUM


Let’s say you have an outdated version of the MySQL package and you want to update it to the latest
stable version. Just run the following command; it will automatically resolve all dependency
issues and install the updates.
# yum update mysql

4. List a Package using YUM


Use the list function to search for a specific package by name. For example, to search for a
package called openssh, use the command.
# yum list openssh

To make your search more accurate, specify the package name with its version, in case you know it.
For example, to search for the specific version openssh-4.3p2 of the package, use the command.
# yum list openssh-4.3p2

5. Search for a Package using YUM


If you don’t remember the exact name of the package, then use the search function to search all
the available packages that match the name you specified. For example, to search
all the packages that match the word ‘vsftpd‘:



# yum search vsftpd


6. Get Information of a Package using YUM
Say you would like to know the information about a package before installing it. To get the information
about a package, just issue the command below.
# yum info firefox

7. List all Available Packages using YUM


To list all the available packages in the Yum database, use the below command.
# yum list | less

8. List all Installed Packages using YUM


To list all the installed packages on a system, just issue below command, it will display all the
installed packages.
# yum list installed | less

9. Yum Provides Function


The yum provides function is used to find which package a specific file belongs to. For example, use it if
you would like to know the name of the package that provides the file /etc/httpd/conf/httpd.conf.
# yum provides /etc/httpd/conf/httpd.conf

10. Check for Available Updates using Yum


To find out how many of the installed packages on your system have updates available, use
the following command.
# yum check-update

11. Update System using Yum


To keep your system up-to-date with all security and binary package updates, run the following
command. It will install all the latest patches and security updates on your system.
# yum update

12. List all available Group Packages


In Linux, a number of packages are bundled into a particular group. Instead of installing individual
packages with yum, you can install a particular group, which will install all the related packages that
belong to the group. For example, to list all the available groups, just issue the following command.

# yum grouplist
13. Install a Group Packages
To install a particular package group, we use the groupinstall option. For example, to install
“MySQL Database“, just execute the command below.
# yum groupinstall 'MySQL Database'

14. Update a Group Packages


To update any existing installed group packages, just run the following command as shown
below.
# yum groupupdate 'DNS Name Server'

15. Remove a Group Packages


To delete or remove any existing installed group from the system, just use the command below.
# yum groupremove 'DNS Name Server'

16. List Enabled Yum Repositories


To list all enabled Yum repositories in your system, use the following option.
# yum repolist

17. List all Enabled and Disabled Yum Repositories


The following command will display all enabled and disabled yum repositories on the system.
# yum repolist all

18. Install a Package from Specific Repository


To install a particular package from a specific enabled or disabled repository, you must use the
--enablerepo option in your yum command. For example, to install the PhpMyAdmin 3.5.2 package,
just execute the command.
# yum --enablerepo=epel install phpmyadmin
19. Interactive Yum Shell
Yum utility provides a custom shell where you can execute multiple commands.
# yum shell

20. Clean Yum Cache


By default yum keeps cached package data from each enabled repository in a sub-directory of
/var/cache/yum/. To clean all cached files from the enabled repositories, run the following
command regularly so the cache does not consume unnecessary disk space.
# yum clean all

21. View History of Yum


To view all the past transactions of yum command, just use the following command.
# yum history

8.aptitude Command
aptitude is a powerful text-based interface to the Debian GNU/Linux package management
system. Like apt-get and apt, it can be used to install, remove or upgrade software packages on
a system.
$ sudo aptitude update

Package Management
In a few words, package management is a method of installing and maintaining (which includes
updating and possibly removing) software on the system.

In the early days of Linux, programs were only distributed as source code, along with the
required man pages, the necessary configuration files, and more. Nowadays, most Linux
distributions by default use pre-built programs or sets of programs called packages, which are
presented to users ready for installation on that distribution. However, one of the wonders of
Linux is still the possibility to obtain source code of a program to be studied, improved, and
compiled.

How package management systems work


If a certain package requires a certain resource such as a shared library, or another package, it
is said to have a dependency. All modern package management systems provide some method
of dependency resolution to ensure that when a package is installed, all of its dependencies are
installed as well.
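The idea can be sketched with a toy resolver; the package names and the dependency table below are invented for illustration, not drawn from any real repository:

```shell
#!/bin/sh
# Toy dependency table, one "pkg:dep1 dep2" entry per line
# (hypothetical packages, not real ones).
DEPS="editor:libui libtext
libui:libc
libtext:libc
libc:"

# deps_of PKG -> space-separated direct dependencies of PKG
deps_of() {
    printf '%s\n' "$DEPS" | sed -n "s/^$1://p"
}

# install_pkg PKG -> "installs" PKG once, dependencies first
INSTALLED=""
install_pkg() {
    case " $INSTALLED " in *" $1 "*) return 0 ;; esac  # already done
    for d in $(deps_of "$1"); do install_pkg "$d"; done  # recurse
    INSTALLED="$INSTALLED $1"
    echo "installing $1"
}

install_pkg editor
# installing libc
# installing libui
# installing libtext
# installing editor
```

Each package is visited only after its dependencies, and never twice; real package managers do the same walk over much larger graphs, plus version and conflict checks.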

Packaging Systems
Almost all the software that is installed on a modern Linux system will be found on the Internet.
It can either be provided by the distribution vendor through central repositories (which can
contain several thousands of packages, each of which has been specifically built, tested, and
maintained for the distribution) or be available in source code that can be downloaded and
installed manually.
Because different distribution families use different packaging systems (Debian: *.deb / CentOS:
*.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution
will not be compatible with another distribution. However, most distributions are likely to fall
into one of the three distribution families covered by the LFCS certification.

High and low-level package tools

In order to perform the task of package management effectively, you need to be aware that
you will have two types of available utilities: low-level tools (which handle in the backend the
actual installation, upgrade, and removal of package files), and high-level tools (which are in
charge of ensuring that the tasks of dependency resolution and metadata searching -”data
about the data”- are performed).

DISTRIBUTION LOW-LEVEL TOOL HIGH-LEVEL TOOL

Debian and derivatives dpkg apt-get / aptitude


CentOS rpm yum

openSUSE rpm zypper
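The table above can be restated as a small shell helper; the distribution IDs used here follow the ID= field of /etc/os-release, and "ubuntu" is included as a Debian derivative (an assumption of this sketch):

```shell
# Map a distribution ID to "low-level-tool high-level-tool",
# restating the table above.
pkg_tools() {
    case "$1" in
        debian|ubuntu)  echo "dpkg apt-get" ;;
        centos)         echo "rpm yum" ;;
        opensuse*)      echo "rpm zypper" ;;
        *)              echo "unknown unknown" ;;
    esac
}

pkg_tools debian      # dpkg apt-get
pkg_tools centos      # rpm yum
```

A script could feed this from `. /etc/os-release; pkg_tools "$ID"` to choose the right tool at runtime.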

Let us see the description of the low-level and high-level tools.

dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide
information about and build *.deb packages but it can’t automatically download and install
their corresponding dependencies.

Debian GNU/Linux, the mother operating system of a number of Linux distributions including
Knoppix, Kali, Ubuntu, Mint, etc., uses various package managers like dpkg, apt, aptitude,
synaptic, tasksel, dselect, dpkg-deb and dpkg-split.

APT Command - APT stands for Advanced Package Tool. It does not deal with ‘deb‘ packages
directly, but works with the ‘deb‘ archives from the locations specified in the
“/etc/apt/sources.list” file.

Aptitude - Aptitude is a text based package manager for Debian which is a front-end to ‘apt‘,
enabling the user to manage packages easily.

Synaptic - A graphical package manager which makes it easy to install, upgrade and uninstall
packages, even for a novice.

Tasksel - Tasksel lets the user install all the relevant packages related to a specific task, e.g., a
desktop environment.

Dselect - A menu-driven package management tool, initially used during the first-time install
and now replaced by aptitude.

Dpkg-deb - Interacts with Debian archive.


Dpkg-split - Useful for splitting and merging a large file into small chunks to be stored on
media of smaller size, like a floppy disk.


Dpkg Command:

dpkg is the main package management program in Debian and Debian based System. It is used
to install, build, remove, and manage packages. Aptitude is the primary front-end to dpkg.

1. Install a Package
For installing a “.deb” package, use the command with the “-i” option. For example, to install a
“.deb” package called “flashpluginnonfree_2.8.2+squeeze1_i386.deb”, use the following
command.
dpkg -i flashpluginnonfree_2.8.2+squeeze1_i386.deb

2. List all the installed Packages


To view and list all the installed packages, use the “-l” option along with the command.
# dpkg -l

To check whether a specific package is installed or not, use the “-l” option along with the package
name. For example, check whether the apache2 package is installed.
# dpkg -l apache2

3. Remove a Package
To remove the “.deb” package, we must specify the package name “flashpluginnonfree“, not
the original name “flashplugin-nonfree_3.2_i386.deb“. The “-r” option is used to
remove/uninstall a package.
# dpkg -r flashpluginnonfree

You can also use the ‘-P‘ (purge) option in place of ‘-r‘, which will remove the package along with
its configuration files. The ‘-r‘ option will only remove the package and not the configuration files.
# dpkg -P flashpluginnonfree

4. View the Content of a Package


To view the content of a particular package, use the “-c” option as shown. The command will
display the contents of a “.deb” package in long-list format.
# dpkg -c flashplugin-nonfree_3.2_i386.deb

5. Check a Package is installed or not


Using the “-s” option with a package name will display whether the deb package is installed or not.
# dpkg -s flashplugin-nonfree

6. Check the location of Packages installed


To list the locations of the files installed on your system from a package, use the “-L” option with the package name.
# dpkg -L flashplugin-nonfree

7. Install all Packages from a Directory


Recursively install all the regular files matching the pattern “*.deb” found in the specified directory
and all of its subdirectories. This uses the “-R” option together with “--install”. For example, I
will install all the “.deb” packages from the directory called “debpackages“.
# dpkg -R --install debpackages/

8. Unpack the Package but don’t Configure


Using the “--unpack” action will unpack the package, but will not install or configure it.
# dpkg --unpack flashplugin-nonfree_3.2_i386.deb

9. Reconfigure an Unpacked Package


The “--configure” option will configure an already unpacked package.
# dpkg --configure flashplugin-nonfree

10. Replace available Package information


The “--update-avail” option replaces the old information with the available information from the
Packages file.
# dpkg --update-avail package_name

11. Erase Existing Available information of Package


The “--clear-avail” action will erase the current information about what packages are available.
# dpkg --clear-avail

12. Forget Uninstalled and Unavailable Packages


The dpkg command with the “--forget-old-unavail” option will automatically forget uninstalled and
unavailable packages.
# dpkg --forget-old-unavail

13. Display dpkg Licence


# dpkg --licence
14. Display dpkg Version
The “--version” argument will display dpkg version information.
# dpkg --version

15. Get all the Help about dpkg


The “--help” option will display a list of available options of the dpkg command.
# dpkg --help

RPM Command:

RPM (Red Hat Package Manager) is the default open source and most popular package
management utility for Red Hat based systems like RHEL, CentOS and Fedora. The tool allows
system administrators and users to install, update, uninstall, query, verify and manage system
software packages in Unix/Linux operating systems. An RPM package, stored as a .rpm file,
includes the compiled software programs and libraries needed by the package. This utility only
works with packages built in the .rpm format.

Some Facts about RPM (RedHat Package Manager)

1. RPM is free and released under GPL (General Public License).


2. RPM keeps the information of all the installed packages in the /var/lib/rpm database.
3. RPM only manages packages installed in the .rpm format; if you’ve installed packages
from source code, rpm won’t manage them.


4. RPM deals with .rpm files, which contain the actual information about a package,
such as what it is, where it comes from, dependency info, version info, etc.

There are five basic modes for RPM command

1. Install : It is used to install any RPM package.


2. Remove : It is used to erase, remove or un-install any RPM package.
3. Upgrade : It is used to update the existing RPM package.
4. Verify : It is used to verify an RPM packages.
5. Query : It is used to query any RPM package.

Where to find RPM packages


Below is the list of rpm sites, where you can find and download all RPM packages.

1. http://rpmfind.net
2. http://www.redhat.com
3. http://freshrpms.net/
4. http://rpm.pbone.net/

Please remember you must be the root user when installing packages in Linux; with root
privileges you can manage rpm commands with their appropriate options.

1. How to Check an RPM Signature Package


Always check the PGP signature of packages before installing them on your Linux systems and
make sure their integrity and origin are OK. Use the following command with the --checksig (check
signature) option to check the signature of a package called pidgin.

# rpm --checksig pidgin-2.7.9-5.el6.2.i686.rpm

2. How to Install an RPM Package


To install an rpm software package, use the following command with the -i option. For example,
to install an rpm package called pidgin-2.7.9-5.el6.2.i686.rpm:

# rpm -ivh pidgin-2.7.9-5.el6.2.i686.rpm


RPM command and options

1. -i : install a package
2. -v : verbose for a nicer display
3. -h: print hash marks as the package archive is unpacked.

3. How to check dependencies of RPM Package before Installing


Let’s say you would like to do a dependency check before installing or upgrading a package. For
example, use the following command to check the dependencies of the
BitTorrent-5.2.2-1-Python2.4.noarch.rpm package. It will display the list of dependencies of the package.
# rpm -qpR BitTorrent-5.2.2-1-Python2.4.noarch.rpm

RPM command and options

1. -q : Query a package
2. -p : List capabilities this package provides.
3. -R: List capabilities on which this package depends..

4. How to Install a RPM Package Without Dependencies


If you know that all needed packages are already installed and RPM is just being overcautious, you can
ignore those dependency errors by using the --nodeps (no dependency check) option when
installing the package.
# rpm -ivh --nodeps BitTorrent-5.2.2-1-Python2.4.noarch.rpm

The above command forcefully installs the rpm package by ignoring dependency errors, but if
those dependency files are missing, the program will not work at all until you install them.

5. How to check an Installed RPM Package


Using the -q option with a package name will show whether the rpm is installed or not.
# rpm -q BitTorrent

6. How to List all files of an installed RPM package


To view all the files of an installed rpm package, use the -ql (query list) option with the rpm command.
# rpm -ql BitTorrent

7. How to List Recently Installed RPM Packages


The following rpm command with the -qa (query all) option and the --last flag will list all the
recently installed rpm packages.
# rpm -qa --last

8. How to List All Installed RPM Packages


Type the following command to print the names of all the installed packages on your Linux
system.
# rpm -qa

9. How to Upgrade a RPM Package


If we want to upgrade any RPM package, the “-U” (upgrade) option is used. One of the major
advantages of this option is that it will not only upgrade to the latest version of the package,
but will also keep a backup of the older package, so that if the newer upgraded
package does not run, the previously installed package can be used again.
# rpm -Uvh nx-3.5.0-2.el6.centos.i686.rpm

10. How to Remove a RPM Package


To uninstall an RPM package, we use the package name, for example nx, not the original
file name nx-3.5.0-2.el6.centos.i686.rpm. The -e (erase) option is used to remove a
package.
# rpm -evv nx

11. How to Remove an RPM Package Without Dependencies


The --nodeps (do not check dependencies) option forcefully removes the rpm package from the
system. But keep in mind that removing a particular package may break other working applications.
# rpm -ev --nodeps vsftpd

12. How to Query which RPM Package a File Belongs to


Let’s say you have a list of files and would like to find out which package these files belong to.
For example, the following command with the -qf (query file) option will show you that the file
/usr/bin/htpasswd is owned by the package httpd-tools-2.2.15-15.el6.centos.1.i686.
# rpm -qf /usr/bin/htpasswd

13. How to Query a Information of Installed RPM Package


Let’s say you have installed an rpm package and want to know the information about the
package. The following -qi (query info) option will print the available information of the
installed package.
# rpm -qi vsftpd

14. Get the Information of RPM Package Before Installing


You have downloaded a package from the internet and want to know its information before
installing it. For example, the following -qip (query info package) option will print
the information of the package sqlbuddy.
# rpm -qip sqlbuddy-1.3.3-1.noarch.rpm

15. How to Query documentation of Installed RPM Package


To get the list of available documentation for an installed package, use the -qdf (query document
file) option. The following command will display the manual pages related to the vmstat
package.
# rpm -qdf /usr/bin/vmstat

16. How to Verify a RPM Package


Verifying a package compares information about the installed files of the package against the rpm
database. The -Vp (verify package) option is used to verify a package.
# rpm -Vp sqlbuddy-1.3.3-1.noarch.rpm

17. How to Verify all RPM Packages


Type the following command to verify all the installed rpm packages.
# rpm -Va

18. How to Import an RPM GPG key


To verify RHEL/CentOS/Fedora packages, you must import the GPG key. To do so, execute the
following command. It will import CentOS 6 GPG key.
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

19. How to List all Imported RPM GPG keys


To print all the imported GPG keys in your system, use the following command.
# rpm -qa gpg-pubkey*

20. How To rebuild Corrupted RPM Database


Sometimes the rpm database gets corrupted and stops all the functionality of rpm and other
applications on the system. In that case, we need to rebuild the rpm database and restore it
with the help of the following commands.
# cd /var/lib
# rm __db*
# rpm --rebuilddb
# rpmdb_verify Packages

Common Usage of Low-Level Tools


The most frequent tasks that you will do with low level tools are as follows:

1. Installing a package from a compiled (*.deb or *.rpm) file

The downside of this installation method is that no dependency resolution is provided. You will
most likely choose to install a package from a compiled file when such package is not available
in the distribution’s repositories and therefore cannot be downloaded and installed through a
high-level tool. Since low-level tools do not perform dependency resolution, they will exit with
an error if we try to install a package with unmet dependencies.

# dpkg -i file.deb [Debian and derivative]



# rpm -i file.rpm [CentOS / openSUSE]


Note: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-
versa!

2. Upgrading a package from a compiled file

Again, you will only upgrade an installed package manually when it is not available in the
central repositories.
# dpkg -i file.deb [Debian and derivative]
# rpm -U file.rpm [CentOS / openSUSE]

3. Listing installed packages

When you first get your hands on an already working system, chances are you’ll want to know
what packages are installed.
# dpkg -l [Debian and derivative]
# rpm -qa [CentOS / openSUSE]

If you want to know whether a specific package is installed, you can pipe the output of the
above commands to grep. Suppose we need to verify whether the package mysql-common is installed on
an Ubuntu system.
# dpkg -l | grep mysql-common
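In scripts you usually do not want grep's printed output, only its exit status; grep -q stays silent and lets an if statement branch on whether a match was found. Here the dpkg -l output is simulated with a canned string (two made-up rows in the usual format) so the sketch is self-contained:

```shell
# Simulated "dpkg -l" output (fake rows, real column layout).
pkg_list='ii  mysql-common    5.7.33  all    MySQL database common files
ii  openssh-server  8.2p1   amd64  secure shell (SSH) server'

# grep -q prints nothing; its exit status alone drives the if.
if printf '%s\n' "$pkg_list" | grep -q 'mysql-common'; then
    echo "mysql-common is installed"
else
    echo "mysql-common is NOT installed"
fi
```

The same pattern works with the real pipeline, e.g. `dpkg -l | grep -q mysql-common && echo installed`.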

Another way to determine if a package is installed.


# dpkg --status package_name [Debian and derivative]
# rpm -q package_name [CentOS / openSUSE]

For example, let’s find out whether package sysdig is installed on our system.
# rpm -qa | grep sysdig

4. Finding out which package installed a file


# dpkg --search file_name
# rpm -qf file_name

For example, which package installed pw_dict.hwm?



# rpm -qf /usr/share/cracklib/pw_dict.hwm


Common Usage of High-Level Tools
The most frequent tasks that you will do with high level tools are as follows.

1. Searching for a package


Aptitude update will update the list of available packages, and aptitude search will perform the
actual search for package_name.
# aptitude update && aptitude search package_name

With the ‘search all’ option, yum will search for package_name not only in package names, but also
in package descriptions.
# yum search package_name
# yum search all package_name
# yum whatprovides "*/package_name"

Let’s suppose we need a file whose name is sysdig. To find out which package we would have to
install, let’s run:
# yum whatprovides "*/sysdig"

whatprovides tells yum to search for the package that will provide a file matching the above
pattern.
# zypper refresh && zypper search package_name [On openSUSE]

2. Installing a package from a repository

While installing a package, you may be prompted to confirm the installation after the package
manager has resolved all dependencies. Note that running update or refresh (according to the
package manager being used) is not strictly necessary, but keeping installed packages up to
date is a good sysadmin practice for security and dependency reasons.
# aptitude update && aptitude install package_name [Debian and derivatives]
# yum update && yum install package_name [CentOS]
# zypper refresh && zypper install package_name [openSUSE]

3. Removing a package
The remove option will uninstall the package but leave configuration files intact, whereas
purge will erase every trace of the program from your system.
# aptitude remove / purge package_name
# yum erase package_name

---Notice the minus sign in front of the package that will be uninstalled, openSUSE ---
# zypper remove -package_name
Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding
with the uninstallation before actually performing it. So read the onscreen messages carefully
to avoid running into unnecessary trouble!

4. Displaying information about a package

The following command will display information about the birthday package.
# aptitude show birthday
# yum info birthday
# zypper info birthday

9.arch Command
arch is a simple command for displaying machine architecture or hardware name (similar to
uname -m):
$ arch

10.arp Command
ARP (Address Resolution Protocol) is a protocol that maps the IP addresses of network
neighbors to their hardware (MAC) addresses in an IPv4 network.
You can use the related arp-scan tool as below to find all alive hosts on a network:
$ sudo arp-scan --interface=enp2s0 --localnet

11.at Command
at command is used to schedule tasks to run at a future time. It’s an alternative to cron and
anacron; however, it runs a task once at a given future time without editing any config files:
For example, to shut down the system at 23:55 today, run:
$ echo "shutdown -h now" | sudo at -m 23:55

As an alternative to cron job scheduler, the at command allows you to schedule a command to
run once at a given time without editing a configuration file.

The only requirement consists of installing this utility and starting and enabling its execution:

# yum install at [on CentOS based systems]


$ sudo apt-get install at [on Debian and derivatives]
Next, start and enable the at service at the boot time.

--------- On SystemD ---------


# systemctl start atd
# systemctl enable atd

--------- On SysVinit ---------


# service atd start
# chkconfig --level 35 atd on

Once atd is running, you can schedule any command or task as follows. We want to send 4 ping
probes to www.google.com when the next minute starts (i.e. if it’s 22:20:13, the command will
be executed at 22:21:00) and report the result through an email (-m, requires Postfix or
equivalent) to the user invoking the command:
# echo "ping -c 4 www.google.com" | at -m now + 1 minute

If you choose to not use the -m option, the command will be executed but nothing will be
printed to standard output. You can, however, choose to redirect the output to a file instead.
In addition, please note that at not only allows the following fixed times: now, noon (12:00),
and midnight (00:00), but also custom 2-digit (representing hours) and 4-digit times (hours and
minutes).
For example,
To run updatedb at 11 pm today (or tomorrow, if it is already past 11 pm), do:
# echo "updatedb" | at -m 23

To shutdown the system at 23:55 today (same criteria as in the previous example applies):
# echo "shutdown -h now" | at -m 23:55

You can also delay the execution by minutes, hours, days, weeks, months, or years using the +
sign and the desired time specification as in the first example.

12.atq Command
atq command is used to view jobs in at command queue:
$ atq

13.atrm Command
atrm command is used to remove/delete jobs (identified by their job number) from the at
command queue:
$ atrm 2

14.awk Command
Awk is a powerful programming language created for text processing and generally used as a
data extraction and reporting tool.
$ awk '//{print}' /etc/hosts
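Since the example above simply prints every line, a slightly fuller sketch (using a hypothetical sample file) shows awk's field handling:

```shell
# Create a small sample file (made-up data)
printf 'alice 34\nbob 29\n' > /tmp/ages.txt

# Print only the first field (the name) of each line
awk '{print $1}' /tmp/ages.txt

# Sum the second field across all lines and print a total
awk '{total += $2} END {print "total:", total}' /tmp/ages.txt

rm /tmp/ages.txt
```

By default awk splits fields on runs of whitespace; use -F to change the separator (for example, -F: for /etc/passwd).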

15.batch Command
batch is also used to schedule tasks to run at a future time, similar to the at command; it runs them when the system load level permits.

16.basename Command
basename command helps to print the name of a file, stripping off the directory portion of the
path:

$ basename bin/findhosts.sh
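A couple of quick, verifiable examples (the paths here are made up):

```shell
# Strip the directory portion of a path
basename /usr/local/bin/findhosts.sh        # -> findhosts.sh

# An optional second argument also strips a trailing suffix
basename /usr/local/bin/findhosts.sh .sh    # -> findhosts
```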
17.bc Command
bc is a simple yet powerful, arbitrary-precision CLI calculator language which can be used
like this:
$ echo "20.05 + 15.00" | bc

18.bg Command
bg is a command used to send a process to the background.
$ tar -czf home.tar.gz .
$ bg
$ jobs
When a process is associated with a terminal, two problems might occur:
1. your controlling terminal fills up with output data and error/diagnostic messages.
2. in the event that the terminal is closed, the process together with its child processes will be
terminated.

To deal with these two issues, you need to totally detach a process from a controlling terminal.
Before we actually move to solve the problem, let us briefly cover how to run processes in the
background.

How to Start a Linux Process or Command in Background


If a process is already in execution, such as the tar command example below, simply press
Ctrl+Z to stop it then enter the command bg to continue with its execution in the background as
a job.

You can view all your background jobs by typing jobs. However, its stdin, stdout, stderr are still
joined to the terminal.
$ tar -czf home.tar.gz .
$ bg
$ jobs

You can as well start a process directly in the background using the ampersand (&) sign.
$ tar -czf home.tar.gz . &
$ jobs
Take a look at the example below: although the tar command was started as a background job,
an error message was still sent to the terminal, meaning the process is still connected to the
controlling terminal.
$ tar -czf home.tar.gz . &
$ jobs

We will use the disown command; it is used after a process has been launched and put in the
background. Its job is to remove a shell job from the shell’s active jobs list, so you will
no longer use the fg and bg commands on that particular job.

In addition, when you close the controlling terminal, the job will not hang or send a SIGHUP to
any child jobs.

Let’s take a look at the below example of using the disown bash built-in function.
$ sudo rsync Templates/* /var/www/html/files/ &
$ jobs
$ disown -h %1
$ jobs

You can also use nohup command, which also enables a process to continue running in the
background when a user exits a shell.
$ nohup tar -czf iso.tar.gz Templates/* &
$ jobs

Detach a Linux Processes From Controlling Terminal


Therefore, to completely detach a process from a controlling terminal, use the command
format below, this is more effective for graphical user interface (GUI) applications such as
firefox:
$ firefox </dev/null &>/dev/null &

In Linux, /dev/null is a special device file that discards all data written to it. In
the command above, input is read from, and output is sent to, /dev/null.
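The behavior of /dev/null is easy to verify: writes vanish, and reads return end-of-file immediately:

```shell
# Writes to /dev/null are discarded
echo "this text disappears" > /dev/null

# Reads from /dev/null see an empty stream (zero bytes)
wc -c < /dev/null
```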
SSH (Secure Shell), in simple terms, is a way to remotely access an account on another system,
but only on the command line, i.e. in non-GUI mode. In more technical terms, when we ssh to an
account on some other system and run commands on that machine, it actually creates a
pseudo-terminal and attaches it to the login shell of the logged-in user.

When we log out of the session, or the session times out after being idle for some time,
the SIGHUP signal is sent to the pseudo-terminal, and all the jobs that have been run on that
terminal, even jobs whose parent jobs were initiated on the pseudo-terminal, are
also sent the SIGHUP signal and forced to terminate.


Only the jobs that have been configured to ignore this signal are the ones that survive the
session termination. On Linux systems, we can have many ways to make these jobs running on
the remote server or any machine even after user logout and session termination.

Normal Process
Normal processes are those which have the life span of a session. They are started during the
session as foreground processes and end within a certain time span or when the session is
logged out. These processes have as their owner any valid user of the system, including
root.

Orphan Process
Orphan processes are those which initially had a parent which created them but, after
some time, the parent process unintentionally died or crashed, making init the parent of
those processes. Such processes have init as their immediate parent, which waits on these
processes until they die or end.

Daemon Process
These are some intentionally orphaned processes; processes which are intentionally left
running on the system are termed daemon or intentionally orphaned processes. They are
usually long-running processes which are initiated once and then detached from any controlling
terminal, so that they can run in the background until they complete or end up throwing
an error. The parent of such a process intentionally dies, making the child execute in the background.
Techniques to Keep SSH Session Running After Disconnection
There can be various ways to leave ssh sessions running after disconnection as described below:

1. Using screen Command to Keep SSH Sessions Running


screen is a text window manager for Linux which allows a user to manage multiple terminal
sessions at the same time, switch between sessions, log the sessions running on
screen, and even resume a session at any time we desire, without worrying about the
session being logged out or the terminal being closed.

screen sessions can be started and then detached from the controlling terminal, leaving them
running in the background, and then be resumed at any time and even at any place. You just need
to start your session in screen and, when you want, detach it from the pseudo-terminal (or the
controlling terminal) and log out. When you feel like it, you can re-login and resume the session.

Starting a screen Session

After typing ‘screen’ command, you will be in a new screen session, within this session you can
create new windows, traverse between windows, lock the screen, and do many more stuff
which you can do on a normal terminal.
$ screen

Once screen session started, you can run any command and keep the session running by
detaching the session.

Detaching a Screen

When you want to log out of the remote session but keep the session you
created on that machine alive, what you need to do is detach the screen from the
terminal so that it has no controlling terminal left. After doing this, you can safely log out.

To detach a screen from the remote terminal, just press “Ctrl+a” immediately followed by “d”
and you will be back at the terminal, seeing the message that the screen is detached. Now you
can safely log out and your session will be left alive.


Resuming Detached Screen Session

If you want to resume a detached screen session which you left before logging out, just re-login
to the remote terminal and type “screen -r” if only one screen is open; if
multiple screen sessions are open, run “screen -r <pid.tty.host>”.
$ screen -r
$ screen -r <pid.tty.host>

Screen is a full-screen software program that can be used to multiplex a physical console
between several processes (typically interactive shells). It allows a user to open several separate
terminal instances inside one single terminal window manager.

The screen application is very useful if you are dealing with multiple programs from a
command line interface and for separating programs from the terminal shell. It also allows you
to share your sessions with other users and detach/attach terminal sessions.

Follow your distribution’s installation procedure to install screen.


# apt-get install screen (On Debian based Systems)
# yum install screen (On RedHat based Systems)

Actually, screen is a very good Linux command that is hidden among hundreds of other Linux
commands. Let’s start to see the functions of screen.

Start screen for the first time


Just type screen at the command prompt. Then screen will open with an interface exactly like
the command prompt.
$ screen

Show screen parameter


When you enter screen, you can do all your work as in the normal CLI environment.
But since screen is an application, it has commands and parameters.

Type “Ctrl-A” and “?” without quotes. Then you will see all commands or parameters on screen.
To get out of the help screen, you can press the “space-bar” or “Enter“. (Please note that all
shortcuts which use “Ctrl-A” are done without quotes).

Detach the screen


One of the advantages of screen is that you can detach it. Then, you can restore it without
losing anything you have done on the screen. Here’s a sample scenario:

You are in the middle of an SSH session on your server. Let’s say that you are downloading a 400MB
patch for your system using the wget command.

The download process is estimated to take 2 hours. If you disconnect the SSH session, or
the connection is suddenly lost by accident, then the download process will stop. You have to
start from the beginning again. To avoid that, we can use screen and detach it.

Take a look at this command. First, you have to enter the screen.
~ $ screen

Then you can do the download process. For example, I am upgrading my dpkg package using the
apt-get command.
~ $ sudo apt-get install dpkg
While the download is in progress, you can press “Ctrl-A” and “d“. You will not see anything when
you press those buttons.

Re-attach the screen


After you detach the screen, let’s say you disconnect your SSH session and go home. At
home, you SSH again to your server and you want to see the progress of your
download process. To do that, you need to restore the screen. You can run this command:
~ $ screen -r

And you will see that the process you left is still running.

When you have more than one screen session, you need to type the screen session ID. Use screen
-ls to see how many screens are available.
~ $ screen -ls
Using Multiple Screen
Is it possible to run more than one screen to do your job? Yes, it is. You can run multiple
screen windows at the same time. There are two ways to do it.
First, you can detach the first screen and then run another screen on the real terminal. Second,
you can do a nested screen.

Switching between screens


When you do a nested screen, you can switch between screens using “Ctrl-A” and “n“.
It will move to the next screen. When you need to go to the previous screen, just press “Ctrl-
A” and “p“.
To create a new screen window, just press “Ctrl-A” and “c“.

Logging whatever you do


Sometimes it is important to record what you have done while you are in the console. Let’s say
you are a Linux administrator who manages a lot of Linux servers.

With screen logging, you don’t need to write down every single command that you have
run. To activate the screen logging function, just press “Ctrl-A” and “H“. (Please be careful to use
a capital ‘H’; a lowercase ‘h’ will only create a screenshot of the screen in another file
named hardcopy).

At the bottom left of the screen, there will be a notification that tells you like: Creating logfile
“screenlog.0“. You will find screenlog.0 file in your home directory.

This feature will append everything you do while you are in the screen window. To stop screen
from logging the running activity, press “Ctrl-A” and “H” again.

Another way to activate the logging feature is to add the parameter “-L” when first
running screen. The command will be like this.
~ $ screen -L

Lock screen
Screen also has a shortcut to lock the screen. You can press the “Ctrl-A” and “x” shortcut to lock the
screen. This is handy if you want to lock your screen quickly. Here’s a sample output of the lock
screen after you press the shortcut.

Add password to lock screen


For security reasons, you may want to put a password on your screen session. A password will
be asked for whenever you want to re-attach the screen. This password is different from the
lock-screen mechanism above.

To make your screen password protected, you can edit “$HOME/.screenrc” file. If the file
doesn’t exist, you can create it manually. The syntax will be like this.
password crypt_password

To create the “crypt_password” above, you can use the “mkpasswd” command on Linux.

mkpasswd will generate a hashed password. Once you get the hashed password,
you can copy it into your “.screenrc” file and save it.
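A sketch of what the resulting file might look like (the hash below is a made-up placeholder, not real mkpasswd output):

```
# $HOME/.screenrc -- hypothetical example
password aP3xQ9zD2mWk.   # placeholder hash; paste your own mkpasswd output here
```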

The next time you run screen and detach it, a password will be asked for when you try to re-attach it, as
shown below:
~$ screen -r 5741

After you implement this screen password and press “Ctrl-A” and “x”, a password will be asked of
you twice. The first password is your Linux password, and the second password is the one that
you put in your .screenrc file.

Leaving Screen
There are two ways to leave the screen. First, use “Ctrl-A” and “d” to detach the
screen. Second, use the exit command to terminate the screen. You can also use “Ctrl-A”
and “k” to kill the screen.
2. Using Tmux (Terminal Multiplexer) to Keep SSH Sessions Running
Tmux is another program, created as a replacement for screen. It has most of the
capabilities of screen, with a few additional capabilities which make it more powerful than
screen.

It allows, apart from all the options offered by screen, splitting panes horizontally or vertically
between multiple windows, resizing window panes, session activity monitoring, scripting using
command line mode, etc. Due to these features, tmux has enjoyed wide adoption by
nearly all Unix distributions, and it has even been included in the base system of OpenBSD.

Start a Tmux Session

After doing ssh on the remote host and typing tmux, you will enter into a new session with a
new window opening in front of you, wherein you can do anything you do on a normal
terminal.
$ tmux

After performing your operations on the terminal, you can detach that session from the
controlling terminal so that it goes into background and you can safely logout.

Detach Tmux Session from Terminal

Either you can run “tmux detach” in the running tmux session, or you can use the shortcut (Ctrl+b
then d). After this your current session will be detached and you will come back to your
terminal, from where you can log out safely.
$ tmux detach

Resuming the Closed Tmux Session

To re-open the session which you detached and left as-is when you logged out of the system,
just re-login to the remote machine and type “tmux attach” to reattach to the session,
and it will still be there, running.
$ tmux attach

Installing tmux Terminal Multiplexer in Linux


To install tmux, you can use your standard package management system.

For CentOS/RHEL/Fedora (included in the base repository):


# yum update && yum install tmux

Debian (from the admin packages section of the stable version) and derivatives:
# aptitude update && aptitude install tmux

Once you have installed tmux, let’s take a look at what it has to offer.

Getting Started with tmux Terminal Multiplexer


To start a new tmux session (a container for individual consoles being managed by tmux)
named dev, type:
# tmux new -s dev

At the bottom of the screen you will see an indicator of the session you’re currently in:

Next, you can:

1. divide the terminal into as many panes as you want with Ctrl+b+" to split
horizontally and Ctrl+b+% to split vertically. Each pane will represent a separate
console.
2. move from one pane to another with Ctrl+b followed by the left, up, right, or down
keyboard arrow.
3. resize a pane, by holding Ctrl+b while you press one of the keyboard arrows in
the direction where you want to move the boundaries of the active pane.
4. show the current time inside the active pane by pressing Ctrl+b+t.
5. close a pane, by placing the cursor inside the pane that you want to remove and
pressing Ctrl+b+x. You will be prompted to confirm this operation.
6. detach from the current session (thus returning to the regular terminal) by
pressing Ctrl+b+d.
7. create a new session named admin with
# tmux new -s admin
8. detach from the session named admin
9. reattach to the session named dev with
# tmux attach -t dev
10. switch to admin again with
# tmux switch -t admin

Changing tmux Terminal Key Bindings


In tmux, the combination of keys used to perform a certain action is called a key binding. By
default, key bindings consist of a combination of the Ctrl key and other key(s), as we
explained in the previous section.

If you find the default key bindings used in the preceding examples inconvenient, you can
change it and customize it on either 1) a per-user basis (by creating a file named .tmux.conf
inside each user’s home directory – do not omit the leading dot in the filename) or 2) system-
wide (through /etc/tmux.conf, not present by default).

If both methods are used, the system-wide configuration is overridden by each user’s
preferences.

For example, let’s say you want to use Alt+a instead of Ctrl+b, insert the following contents in
one of the files mentioned earlier as needed:
unbind C-b
set -g prefix M-a

After saving changes and restarting tmux, you will be able to use Alt+a+" and Alt+a+t to split
the window horizontally and to show the current time inside the active pane, respectively.

3. Using nohup command to Keep Running SSH Sessions


If you are not that familiar with screen or tmux, you can use nohup and send your long-running
command to the background, so that you can continue working while the command keeps executing in
the background. After that you can safely log out.
76
Page
With the nohup command we tell the process to ignore the SIGHUP signal which is sent by the ssh
session on termination, thus making the command persist even after session logout. On session
logout the command is detached from the controlling terminal and keeps on running in the background
as a daemon process.

Executing command using nohup in background

Here is a simple scenario wherein we run the find command to search for files in the background
of an ssh session using nohup, after which the task is sent to the background, with the prompt returning
immediately and giving the PID and job ID of the process ([JOBID] PID).
# nohup find / -type f > files_in_system.out 2>&1 &

Resuming the session to view if job is still running

When you re-login, you can check the status of the command and bring it back to the foreground
using 'fg %JOBID' to monitor its progress, and so on. Below, the output shows that the job
completed, as it doesn’t show up on re-login, and it has written the output which is displayed.
# fg %JOBID

4. Using disown Command to Keep SSH Sessions Running


Another elegant way of letting your command or a single task run in the background and remain
alive even after session logout or disconnection is by using disown.

Disown removes the job from the system’s job list, so the process is shielded from
being killed during session disconnection, as it won’t receive SIGHUP from the shell when you
log out.

The disadvantage of this method is that it should be used only for jobs that do not need any
input from stdin and do not need to write to stdout, unless you specifically redirect the job’s
input and output, because when the job tries to interact with stdin or stdout, it will halt.

Executing command using disown in background


Below, we send the ping command to the background so that it keeps on running and gets removed
from the job list. As seen, the job was first suspended, after which it was still in the job list as
Process ID: 15368.
$ ping tecmint.com > pingout &
$ jobs -l
$ disown -h %1
$ ps -ef | grep ping

After the disown signal was passed to the job, it was removed from the job list, though it was
still running in the background. The job will still be running when you re-login to the
remote server, as seen below.
$ ps -ef | grep ping

5. Using setsid Command to Keep SSH Sessions Running


Another utility to achieve the required behavior is setsid. Nohup has a disadvantage in the
sense that the process group of the process remains the same, so the process running with
nohup is vulnerable to any signal sent to the whole process group (like Ctrl+C).

setsid, on the other hand, allocates a new process group to the process being executed; hence,
the created process is in a newly allocated process group and can execute safely without
fear of being killed even after session logout.

Execute any command using setsid

Here, it shows that the process ‘sleep 10m’ has been detached from the controlling terminal
from the time it was created.
$ setsid sleep 10m
$ ps -ef | grep sleep

Now, when you would re-login the session, you will still find this process running.
$ ps -ef | grep [s]leep

19.bzip2 Command
bzip2 command is used to compress or decompress file(s).
$ bzip2 -z filename #Compress
$ bzip2 -d filename.bz2 #Decompress

To compress a file is to significantly decrease its size by encoding its data using fewer bits;
this is normally a useful practice during backup and transfer of files over a network. On the
other hand, decompressing a file means restoring its data to its original state.

There are several file compression and decompression tools available in Linux such as gzip, 7-
zip, Lrzip, PeaZip and many more.

Bzip2 is a well known compression tool and it’s available on most if not all the major Linux
distributions, you can use the appropriate command for your distribution to install it.
$ sudo apt install bzip2 [On Debian/Ubuntu]
$ sudo yum install bzip2 [On CentOS/RHEL]
$ sudo dnf install bzip2 [On Fedora 22+]

The conventional syntax of using bzip2 is:


$ bzip2 option(s) filenames

How to Use “bzip2” to Compress Files in Linux


You can compress a file as below, where the flag -z enables file compression:
$ bzip2 filename OR $ bzip2 -z filename

To compress a .tar file, use the command format:


$ bzip2 -z backup.tar

Important: By default, bzip2 deletes the input files during compression or decompression; to
keep the input files, use the -k or --keep option.

In addition, the -f or --force flag will force bzip2 to overwrite an existing output file.

------ To keep input file ------


$ bzip2 -zk filename
$ bzip2 -zk backup.tar
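A quick round trip (with a throwaway file) showing that -k keeps the original and -d restores it:

```shell
cd /tmp
echo "hello bzip2" > demo.txt
bzip2 -zk demo.txt            # creates demo.txt.bz2; -k keeps demo.txt
ls demo.txt demo.txt.bz2

rm demo.txt                   # drop the original...
bzip2 -d demo.txt.bz2         # ...and restore it from the archive
cat demo.txt                  # hello bzip2
rm demo.txt
```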

You can also set the block size from 100k up to 900k, using -1 (or --fast) to -9 (or --best), as
shown in the examples below:
$ bzip2 -k1 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
$ bzip2 -k9 Etcher-linux-x64.AppImage
$ bzip2 -kf9 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2

How to Use “bzip2” to Decompress Files in Linux


To decompress a .bz2 file, make use of the -d or --decompress option like so:
$ bzip2 -d filename.bz2

Note: The file must end with a .bz2 extension for the command above to work.
$ bzip2 -vd Etcher-linux-x64.AppImage.bz2
$ bzip2 -vfd Etcher-linux-x64.AppImage.bz2
$ ls -l Etcher-linux-x64.AppImage

To view the bzip2 help page and man page, type the command below:
$ bzip2 -h
$ man bzip2

Rsync (Remote Sync) is a commonly used command for copying and synchronizing files
and directories remotely as well as locally on Linux/Unix systems. With the help of the rsync
command you can copy and synchronize your data remotely and locally across directories,
disks and networks, perform data backups, and mirror data between two Linux machines.

You don’t need to be the root user to run the rsync command.



Some advantages and features of Rsync command


1. It efficiently copies and syncs files to or from a remote system.
2. Supports copying links, devices, owners, groups and permissions.
3. It’s faster than scp (Secure Copy) because rsync uses a remote-update protocol which
allows transferring just the differences between two sets of files. The first time, it copies the
whole content of a file or a directory from source to destination, but from the next time on, it
copies only the changed blocks and bytes to the destination.
4. Rsync consumes less bandwidth as it uses compression and decompression
while sending and receiving data at both ends.

Basic syntax of rsync command

# rsync options source destination

Some common options used with rsync commands

1. -v : verbose
2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while
transferring data)
3. -a : archive mode; archive mode allows copying files recursively and it also preserves
symbolic links, file permissions, user & group ownerships and timestamps
4. -z : compress file data
5. -h : human-readable, output numbers in a human-readable format
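Putting the options above together, here is a minimal local sketch (the /tmp paths are made up; note that the trailing slash on the source copies the directory's contents rather than the directory itself):

```shell
# Set up a throwaway source tree
mkdir -p /tmp/rsync-src /tmp/rsync-dst
echo "sample data" > /tmp/rsync-src/file.txt

# Archive mode, compression, human-readable output
rsync -avzh /tmp/rsync-src/ /tmp/rsync-dst/

cat /tmp/rsync-dst/file.txt   # sample data
rm -rf /tmp/rsync-src /tmp/rsync-dst
```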

Install rsync in your Linux machine


We can install rsync package with the help of following command.
# yum install rsync (On Red Hat based systems)
# apt-get install rsync (On Debian based systems)

1. Copy/Sync Files and Directory Locally

Copy/Sync a File on a Local Computer

The following command will sync a single file on a local machine from one location to another
location. Here in this example, a file named backup.tar needs to be copied or synced to the
/tmp/backups/ folder.
# rsync -zvh backup.tar /tmp/backups/

Copy/Sync a Directory on Local Computer

The following command will transfer or sync all the files from one directory to a different
directory on the same machine. Here in this example, /root/rpmpkgs contains some rpm
package files and you want that directory to be copied inside the /tmp/backups/ folder.
# rsync -avzh /root/rpmpkgs /tmp/backups/

2. Copy/Sync Files and Directory to or From a Server

Copy a Directory from Local Server to a Remote Server

This command will sync a directory from a local machine to a remote machine. For example:
there is a folder on your local computer, “rpmpkgs”, which contains some RPM packages, and you
want to send that local directory’s content to a remote server; you can use the following command.
$ rsync -avz rpmpkgs/ [email protected]:/home/

Copy/Sync a Remote Directory to a Local Machine

This command will help you sync a remote directory to a local directory. Here in this example, the
directory /home/niki/rpmpkgs on a remote server is being copied to /tmp/myrpms on your
local computer.
# rsync -avzh [email protected]:/home/niki/rpmpkgs /tmp/myrpms

3. Rsync Over SSH


With rsync, we can use SSH (Secure Shell) for data transfer. Using the SSH protocol while
transferring data, you can be sure that your data is being transferred over a secured,
encrypted connection, so that nobody can read it while it is transferred over
the wire on the internet.

Also, when we use rsync we need to provide the user/root password to accomplish that
particular task; using the SSH option sends your login in an encrypted manner, so that your
password will be safe.

Copy a File from a Remote Server to a Local Server with SSH

To specify a protocol with rsync you need to give the “-e” option with the protocol name you want to
use. Here in this example, we will be using “ssh” with the “-e” option to perform data transfer.
# rsync -avzhe ssh [email protected]:/root/install.log /tmp/

Copy a File from a Local Server to a Remote Server with SSH


# rsync -avzhe ssh backup.tar [email protected]:/backups/

4. Show Progress While Transferring Data with rsync


To show the progress while transferring data from one machine to another, we
can use the ‘--progress’ option. It displays the files and the time remaining to complete the
transfer.
# rsync -avzhe ssh --progress /home/rpmpkgs [email protected]:/root/rpmpkgs

5. Use of --include and --exclude Options


These two options allow us to include and exclude files by specifying parameters. They help us
specify the files or directories you want to include in your sync, and exclude the files
and folders you don’t want to be transferred.

Here in this example, the rsync command will include only those files and directories which start
with ‘R’ and exclude all other files and directories.
# rsync -avze ssh --include 'R*' --exclude '*' [email protected]:/var/lib/rpm/ /root/rpm

6. Use of --delete Option


If a file or directory does not exist at the source, but already exists at the destination, you might want
to delete that existing file/directory at the target while syncing.

We can use the ‘--delete‘ option to delete files that are not there in the source directory.
Source and target are in sync. Now we create a new file, test.txt, at the target.
# touch test.txt
# rsync -avz --delete [email protected]:/var/lib/rpm/ .

The target has the new file test.txt; when synchronizing with the source with the ‘--delete‘ option,
rsync removes the file test.txt.

7. Set the Max Size of Files to be Transferred


You can specify the maximum file size to be transferred or synced with the “--max-size”
option. Here in this example, the max file size is 200k, so this command will transfer only those files
which are equal to or smaller than 200k.
# rsync -avzhe ssh --max-size='200k' /var/lib/rpm/ [email protected]:/root/tmprpm

8. Automatically Delete source Files after successful Transfer


Now, suppose you have a main web server and a data backup server. You create a daily
backup and sync it to your backup server, but you don’t want to keep the local copy of
the backup on your web server.

So, will you wait for the transfer to complete and then delete those local backup files manually? Of
course not. This automatic deletion can be done using the '--remove-source-files' option.
# rsync --remove-source-files -zvh backup.tar /tmp/backups/

9. Do a Dry Run with rsync


If you are a newbie using rsync and don't know exactly what your command is going to do,
rsync could really mess up the things in your destination folder, and undoing the damage can be
a tedious job.

The '--dry-run' option makes no changes; it only simulates the command and shows the
output. If the output shows exactly what you want to do, you can remove the
'--dry-run' option from your command and run it for real.
# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/

10. Set Bandwidth Limit and Transfer File


You can set a bandwidth limit while transferring data from one machine to another machine
with the help of the '--bwlimit' option. This option helps us limit I/O bandwidth.
# rsync --bwlimit=100 -avzhe ssh /var/lib/rpm/ [email protected]:/root/tmprpm/

Also, by default rsync syncs only changed blocks and bytes; if you explicitly want to sync the
whole file, use the '-W' option with it.
# rsync -zvhW backup.tar /tmp/backups/backup.tar

How to Use Rsync to Sync New or Changed/Modified Files

To start with, you need to remember that the conventional and simplest form of using rsync is as
follows:
# rsync options source destination

That said, let us dive into some examples to uncover how the concept above actually works.

Syncing Files Locally Using Rsync


Using the command below, I am able to copy files from my Documents directory to the
/tmp/documents directory locally:
$ rsync -av Documents/* /tmp/documents

In the command above, the option:


1. -a – means archive mode
2. -v – means verbose, showing details of ongoing operations

By default, rsync only copies new or changed files from the source to the destination. When I add a
new file to my Documents directory, this is what happens after running the same command a
second time:

$ rsync -av Documents/* /tmp/documents


As you can observe from the output of the command, only the new file is copied to
the destination directory.

The --update or -u option makes rsync skip files that are newer in the destination directory,
and one important option, --dry-run or -n, enables us to execute a test operation without
making any changes. It shows us which files would be copied.
$ rsync -aunv Documents/* /tmp/documents

After executing a test run, we can then drop the -n and perform a real operation:
$ rsync -auv Documents/* /tmp/documents

Syncing Files From Local to Remote Linux


In the example below, I am copying files from my local machine to a remote server with the IP
address 10.42.1.5. To sync only the new files on the local machine that do not exist on the
remote machine, we include the --ignore-existing option:
$ rsync -av --ignore-existing Documents/* [email protected]:~/all/

Subsequently, to sync only updated or modified files on the remote machine that have changed
on the local machine, we can perform a dry run before copying files as below:
$ rsync -av --dry-run --update Documents/* [email protected]:~/all/
$ rsync -av --update Documents/* [email protected]:~/all/

To update existing files and prevent the creation of new files in the destination, we use the
--existing option.

How to Sync Two Apache Web Servers/Websites Using Rsync


The purpose of creating a mirror of your web server with rsync is that if your main web server fails,
your backup server can take over, reducing downtime for your website. This way of creating a
web server backup is very effective for small and medium-sized web businesses.

Advantages of Syncing Web Servers


The main advantages of creating a web server backup with rsync are as follows:

1. Rsync syncs only those bytes and blocks of data that have changed.
2. Rsync has the ability to check and delete those files and directories at backup server
that have been deleted from the main web server.
3. It takes care of permissions, ownerships and special attributes while copying data
remotely.
4. It also supports SSH protocol to transfer data in an encrypted manner so that you will be
assured that all data is safe.
5. Rsync uses compression and decompression method while transferring data which
consumes less bandwidth.

How To Sync Two Apache Web Servers


Let’s proceed with setting up rsync to create a mirror of your web server. Here, I’ll be using two
servers.

Main Server

1. IP Address: 192.168.0.100
2. Hostname: webserver.example.com

Backup Server

1. IP Address: 192.168.0.101
2. Hostname: backup.example.com

Step 1: Install Rsync Tool


In this case, the web server data of webserver.example.com will be mirrored on
backup.example.com. To do so, we first need to install rsync on both servers with the
help of the following command.
# yum install rsync [On Red Hat based systems]
# apt-get install rsync [On Debian based systems]

Step 2: Create a User to run Rsync


We can set up rsync with the root user, but for security reasons, you can create an unprivileged user
on the main web server, i.e. webserver.example.com, to run rsync.

# useradd niki
# passwd niki

Here I have created a user "niki" and assigned a password to it.

Step 3: Test Rsync Setup


It's time to test your rsync setup on your backup server (i.e. backup.example.com); to do
so, type the following command.
# rsync -avzhe ssh [email protected]:/var/www/ /var/www

You can see that rsync is now working absolutely fine and syncing data. I have used
"/var/www" for the transfer; you can change the folder location according to your needs.

Step 4: Automate Sync with SSH Passwordless Login


Now that we are done with the rsync setup, it's time to set up a cron job for rsync. As we are going
to use rsync with the SSH protocol, ssh will ask for authentication, and if we can't provide the
password to cron, it will not work. For cron to work smoothly, we need to set up
passwordless ssh logins for rsync.

Here in this example, I am doing it as root to preserve file ownerships as well; you can do it for
other users too.

First, we’ll generate a public and private key with following commands on backups server (i.e.
backup.example.com).
# ssh-keygen -t rsa -b 2048

When you enter this command, don't provide a passphrase; just press Enter for an empty
passphrase so that the rsync cron job will not need any password for syncing data.

Now our public and private keys have been generated, and we have to share the public key with the
main server so that the main web server will recognize this backup machine and allow it to log in
without asking for any password while syncing data.
# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

Now try logging into the machine with "ssh '[email protected]'", and check
.ssh/authorized_keys.

# ssh [email protected]
Now we are done with sharing keys. To learn more about SSH passwordless login,
you can read our article on it.

Step 5: Schedule Cron To Automate Sync


Let's set up a cron job for this. To do so, open the crontab file with the following
command.
# crontab -e

It will open your crontab file for editing in your default editor. In this example, I am
writing a cron job to run rsync every 5 minutes to sync the data.
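For reference, a minimal crontab entry for this schedule (the host name and paths are illustrative, not taken from a real setup) could look like:

```shell
# Every 5 minutes, sync /var/www/ from the main web server to this backup server
*/5 * * * * rsync -avzhe ssh root@webserver.example.com:/var/www/ /var/www/
```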

The above cron job with the rsync command simply syncs "/var/www/" from the main web server to
the backup server every 5 minutes. You can change the time and folder location configuration
according to your needs.

SSH Passwordless Login Using SSH Keygen

SSH (Secure SHell) is an open source and widely trusted network protocol that is used to log in
to remote servers for the execution of commands and programs. It is also used to transfer files
from one computer to another over the network using the secure copy (SCP) protocol.

In this example we will setup SSH password-less automatic login from server 192.168.0.12 as
user shimon to 192.168.0.11 with user niki.

Step 1: Create Authentication SSH-Kegen Keys on – (192.168.0.12)


First log in to server 192.168.0.12 as user shimon and generate a public/private key pair using the
following command.

$ ssh-keygen -t rsa

Step 2: Create .ssh Directory on – 192.168.0.11


Use SSH from server 192.168.0.12 to connect to server 192.168.0.11 as user niki, and create a
.ssh directory under it, using the following command.

$ ssh [email protected] mkdir -p .ssh


Step 3: Upload Generated Public Keys to – 192.168.0.11
Use SSH from server 192.168.0.12 to upload the newly generated public key (id_rsa.pub) to server
192.168.0.11 under niki's .ssh directory, with the file name authorized_keys.
$ cat .ssh/id_rsa.pub | ssh [email protected] 'cat >> .ssh/authorized_keys'

Step 4: Set Permissions on – 192.168.0.11


Due to different SSH versions on servers, we need to set permissions on .ssh directory and
authorized_keys file.
$ ssh [email protected] "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"

Step 5: Login from 192.168.0.12 to 192.168.0.11 Server without Password


From now onwards you can log into 192.168.0.11 as niki user from server 192.168.0.12 as
shimon user without password.
$ ssh [email protected]

Cron Scheduling

Cron is a daemon for running scheduled tasks. Cron wakes up every minute and checks for scheduled
tasks in the cron table. Crontab (CRON TABle) is a table where we can schedule such repeated
tasks.

Tips: Each user can have their own crontab to create, modify and delete tasks. By default cron
is enabled for all users; however, we can restrict users by adding their names to the /etc/cron.deny file.

A crontab file consists of one command per line, with six fields separated by spaces or tabs. The
first five fields give the time to run the task, and the last field is the command.

1. Minute (hold values between 0-59)


2. Hour (hold values between 0-23)
3. Day of Month (hold values between 1-31)
4. Month of the year (holds values 1-12 or Jan-Dec; you can use the first three letters
of each month's name, e.g. Jan or Jun.)
5. Day of week (holds values 0-6 or Sun-Sat; here too you can use the first three
letters of each day's name, e.g. Sun or Wed.)


6. Command
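Putting the six fields together, a sample crontab line (the script path below is illustrative) looks like this:

```shell
# minute hour day-of-month month day-of-week command
# 30     2    *            *     1            -> 2:30 am every Monday
30 2 * * 1 /usr/local/bin/backup.sh
```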

1. List Crontab Entries


List or manage tasks with the crontab command using the -l option for the current user.
# crontab -l

2. Edit Crontab Entries


To edit a crontab entry, use the -e option as shown below. The example below will open the scheduled
jobs in the VI editor. Make the necessary changes and quit by pressing the :wq keys, which saves the
settings automatically.
# crontab -e

3. List Scheduled Cron Jobs


To list the scheduled jobs of a particular user called tecmint, use the -u (user) and -l (list) options.
# crontab -u tecmint -l

Note: Only the root user has complete privileges to see other users' crontab entries. Normal users
can't view others'.

4. Remove Crontab Entry


Caution: crontab with the -r parameter will remove all scheduled jobs without confirmation.
Use the -i option to be prompted before the user's crontab is deleted.
# crontab -r

5. Prompt Before Deleting Crontab


crontab with the -i option will prompt for confirmation before deleting the user's crontab.
# crontab -i -r

6. Allowed Special Characters (*, -, /, ,)

1. Asterisk (*) – Matches all values in the field, or any possible value.


2. Hyphen (-) – To define a range.
3. Slash (/) – An increment of a range, e.g. */10 in the first field means every ten minutes.
4. Comma (,) – To separate items.
7. System Wide Cron Schedule
System administrators can use the predefined cron directories shown below.

1. /etc/cron.d
2. /etc/cron.daily
3. /etc/cron.hourly
4. /etc/cron.monthly
5. /etc/cron.weekly

8. Schedule a Job for a Specific Time


The job below deletes empty files from /tmp at 12:30 am daily. Note that a user name field
belongs only in the system-wide /etc/crontab and /etc/cron.d files; in a per-user crontab
opened with 'crontab -e', it is omitted, as below.
# crontab -e
30 0 * * * find /tmp -type f -empty -delete

9. Special Strings for Common Schedule

Strings Meanings

@reboot Command will run when the system reboots.

@daily Once per day; you may also use @midnight.

@weekly Once per week.

@yearly Once per year; you can also use the @annually keyword.

Replace the five time fields of the cron entry with one of these keywords if you want to use them.
10. Multiple Commands with Double Ampersand (&&)
# crontab -e
@daily <command1> && <command2>

11. Disable Email Notification.


By default cron sends mail to the user account executing the cron job. If you want to disable this,
add your cron job similar to the example below. Using >/dev/null 2>&1 at the end of the line
redirects all output of the cron job to /dev/null.
# crontab -e
* * * * * <command> >/dev/null 2>&1

Command Line Archive Tools

What is an Archive File?


An archive file is a file composed of one or more computer files along with metadata, and is
often compressed.

Features of Archiving

1. Data Compression
2. Encryption
3. File Concatenation
4. Automatic Extraction
5. Automatic Installation
6. Source Volume and Media Information
7. File Spanning
8. Checksum
9. Directory Structure Information
10. Other Metadata (Data About Data)
11. Error discovery

Area of Application
1. Storing computer file systems along with metadata.
2. Transferring files locally.
3. Transferring files over the web.
4. Software packaging applications.

The useful archiving applications on a standard Linux distribution are as follows:

1. tar Command
tar is the standard UNIX/Linux archiving tool. In its early days it was a tape archiving
program, and it has gradually developed into a general-purpose archiving package capable of
handling archive files of every kind. tar accepts a lot of archiving filters and
options.

tar options

1. -A : Append tar files to existing archives.


2. -c : Create a new archive file.
3. -d : Compare archive with Specified filesystem.
4. -j : bzip2 the archive
5. -r : append files to existing archives.
6. -t : list contents of existing archives.
7. -u : Update archive
8. -x : Extract file from existing archive.
9. -z : gzip the archive
10. --delete : Delete files from an existing archive.

tar Examples

Create a tar archive file.


# tar -zcvf name_of_tar.tar.gz /path/to/folder

Decompress a tar archive file.


# tar -zxvf Name_of_tar_file.tar.gz
The Linux "tar" command stands for tape archive, and is used by a large number of Linux/Unix system
administrators to deal with tape drive backups. The tar command is used to pack a collection of files
and directories into a highly compressed archive file, commonly called a tarball (or tar, gzip and bzip2
archives) in Linux. tar is the most widely used command to create compressed archive files that can
be moved easily from one disk to another disk or from machine to machine.

The main purpose of this guide is to provide various tar command examples that might be
helpful for you to understand and become an expert in tar archive manipulation.

1. Create tar Archive File


The example command below will create a tar archive file niki-14-09-12.tar for the directory
/home/niki in the current working directory. See the example command in action.
# tar -cvf niki-14-09-12.tar /home/niki/
Let's discuss each option we have used in the above command for creating a tar archive
file.
1. c – Creates a new .tar archive file.
2. v – Verbosely show the .tar file progress.
3. f – File name type of the archive file.

2. Create tar.gz Archive File


To create a compressed gzip archive file we use the z option. For example, the command below
will create a compressed MyImages-14-09-12.tar.gz file for the directory
/home/MyImages. (Note: tar.gz and tgz are the same format.)

# tar cvzf MyImages-14-09-12.tar.gz /home/MyImages


OR
# tar cvzf MyImages-14-09-12.tgz /home/MyImages
3. Create tar.bz2 Archive File
bzip2 compresses archives to a smaller size than gzip, but it takes more time to compress and
decompress files. To create a highly compressed tar file we use the j option. The following example
command will create a Phpfiles-org.tar.bz2 file for the directory /home/php. (Note: tar.bz2, tbz
and tb2 are the same format.)

# tar cvfj Phpfiles-org.tar.bz2 /home/php



OR
# tar cvfj Phpfiles-org.tar.tbz /home/php
OR
# tar cvfj Phpfiles-org.tar.tb2 /home/php

4. Untar tar Archive File


To untar or extract a tar file, just issue the following command using the x (extract) option. For example,
the command below will untar the file public_html-14-09-12.tar in the present working directory. If
you want to untar into a different directory, use the -C option (specified directory).

## Untar files in Current Directory ##


# tar -xvf public_html-14-09-12.tar
## Untar files in specified Directory ##
# tar -xvf public_html-14-09-12.tar -C /home/public_html/videos/

5. Uncompress tar.gz Archive File


To uncompress a tar.gz archive file, just run the following command. If you would like to untar into a
different directory, use the -C option and the path of the directory, as shown in the above
example.
# tar -xvf thumbnails-14-09-12.tar.gz

6. Uncompress tar.bz2 Archive File


To uncompress a highly compressed tar.bz2 file, just use the following command. The example
command below will extract all the files from the archive.
# tar -xvf videos-14-09-12.tar.bz2

7. List Content of tar Archive File


To list the contents of tar archive file, just run the following command with option t (list
content). The below command will list the content of uploadprogress.tar file.
# tar -tvf uploadprogress.tar

8. List Content tar.gz Archive File



Use the following command to list the content of tar.gz file.


# tar -tvf staging.niki.com.tar.gz
9. List Content tar.bz2 Archive File
To list the content of tar.bz2 file, issue the following command.
# tar -tvf Phpfiles-org.tar.bz2

10. Untar Single file from tar File


To extract a single file called cleanfiles.sh from cleanfiles.sh.tar use the following command.
# tar -xvf cleanfiles.sh.tar cleanfiles.sh
OR
# tar --extract --file=cleanfiles.sh.tar cleanfiles.sh

11. Untar Single file from tar.gz File


To extract a single file nikibackup.xml from nikibackup.tar.gz archive file, use the command as
follows.
# tar -zxvf nikibackup.tar.gz nikibackup.xml
OR
# tar --extract --file=nikibackup.tar.gz nikibackup.xml

12. Untar Single file from tar.bz2 File


To extract a single file called index.php from the file Phpfiles-org.tar.bz2 use the following
option.
# tar -jxvf Phpfiles-org.tar.bz2 home/php/index.php
OR
# tar --extract --file=Phpfiles-org.tar.bz2 /home/php/index.php

13. Untar Multiple files from tar, tar.gz and tar.bz2 File
To extract or untar multiple files from a tar, tar.gz or tar.bz2 archive file, list the file names after
the archive. For example, the command below will extract "file 1" and "file 2" from the archive files.
# tar -xvf niki-14-09-12.tar "file 1" "file 2"
# tar -zxvf MyImages-14-09-12.tar.gz "file 1" "file 2"
# tar -jxvf Phpfiles-org.tar.bz2 "file 1" "file 2"
14. Extract Group of Files using Wildcard
To extract a group of files we use wildcard-based extraction. For example, to extract all
files whose names end with .php from a tar, tar.gz or tar.bz2 archive file:
# tar -xvf Phpfiles-org.tar --wildcards '*.php'
# tar -zxvf Phpfiles-org.tar.gz --wildcards '*.php'
# tar -jxvf Phpfiles-org.tar.bz2 --wildcards '*.php'

15. Add Files or Directories to tar Archive File


To add files or directories to an existing tar archive file we use the r (append) option. For example,
we add the file xyz.txt and the directory php to the existing niki-14-09-12.tar archive file.
# tar -rvf niki-14-09-12.tar xyz.txt
# tar -rvf niki-14-09-12.tar php

16. Add Files or Directories to tar.gz and tar.bz2 files


The tar command doesn't have an option to add files or directories to an existing compressed
tar.gz or tar.bz2 archive file. If we try, we will get the following error.
# tar -rvf MyImages-14-09-12.tar.gz xyz.txt
# tar -rvf Phpfiles-org.tar.bz2 xyz.txt

17. How To Verify tar, tar.gz and tar.bz2 Archive File


To verify any tar archive file we use the W (verify) option. To do so, just use the
following example command. (Note: you cannot do verification on a compressed (*.tar.gz,
*.tar.bz2) archive file.)
# tar tvfW niki-14-09-12.tar

18. Check the Size of the tar, tar.gz and tar.bz2 Archive File
To check the size of any tar, tar.gz or tar.bz2 archive file, use the following command. For
example, the command below will display the compressed size of the archive file in bytes.
# tar -czf - tecmint-14-09-12.tar | wc -c
# tar -czf - MyImages-14-09-12.tar.gz | wc -c
# tar -czf - Phpfiles-org.tar.bz2 | wc -c
Tar Usage and Options

1. c – create an archive file.
2. x – extract an archive file.
3. v – show the progress of the archive file.
4. f – filename of the archive file.
5. t – view the contents of the archive file.
6. j – filter the archive through bzip2.
7. z – filter the archive through gzip.
8. r – append or update files or directories in an existing archive file.
9. W – verify an archive file.
10. --wildcards – specify patterns to match in the tar command.
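As a quick sanity check, the create (c), list (t) and extract (x) options can be exercised end to end in a short script; all file names and paths below are made up for the demonstration.

```shell
#!/bin/sh
# Round-trip test of tar: create, list, then extract a gzipped archive.
set -e
work=$(mktemp -d)
mkdir "$work/src"
echo "hello" > "$work/src/a.txt"
echo "world" > "$work/src/b.txt"

tar -czf "$work/src.tar.gz" -C "$work" src      # c = create, z = gzip, f = file
tar -tzf "$work/src.tar.gz"                     # t = list contents

mkdir "$work/restore"
tar -xzf "$work/src.tar.gz" -C "$work/restore"  # x = extract into another dir
cat "$work/restore/src/a.txt"                   # prints: hello
rm -rf "$work"
```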

2. shar Command
shar, which stands for shell archive, is a shell script whose execution recreates the archived files.
shar is a self-extracting archive format; it is a legacy utility and needs the Unix Bourne shell to
extract the files. shar has the advantage of being plain text, however it is potentially dangerous,
since it outputs an executable.

shar options

1. -o : Save output to archive files as specified, in the option.


2. -l : Limit the output size, as specified, in the option but do not split it.
3. -L : Limit the output size, as specified, in the option and split it.
4. -n : Name of Archive to be included in the header of the shar files.
5. -a : Allow automatic generation of headers.

Note: The ‘-o‘ option is required if the ‘-l‘ or ‘-L‘ option is used and the ‘-n‘ option is required if
the ‘-a‘ option is used.

shar Examples
Create a shar archive file.
# shar file_name.extension > filename.shar

Extract a shar archive file.


# unshar file_name.shar

3. ar Command
ar is the creation and manipulation utility for archives, mainly used for binary object file
libraries. ar stands for archiver; it can be used to create an archive of any kind for any purpose,
but it has largely been replaced by tar and nowadays it is used only to create and update static
library files.

ar options

1. -d : Delete modules from the archive.


2. -m : Move Members in the archive.
3. -p : Print specified members of the archive.
4. -q : Quick Append.
5. -r : Insert file member to archive.
6. -s : Add index to archive.
7. -a : Add a new file to the existing members of archive.

ar Examples

Create an archive using the 'ar' tool, e.g. a static library 'libmath.a' from the object files
'subtraction.o' and 'division.o':
# ar cr libmath.a subtraction.o division.o

To extract an ‘ar’ archive file.


# ar x libmath.a
4. cpio
cpio stands for Copy In and Out. cpio is a general-purpose file archiver for Linux. It is actively
used by the RedHat Package Manager (RPM), in the initramfs of the Linux kernel, and as an
important archiving tool in Apple Computer's Installer (pax).

cpio options

1. -0 : Read a list of filenames terminated by a null character instead of a newline.


2. -a : Reset Access time.
3. -A : Append.
4. -b : swap.
5. -d : Make Directories.

cpio Examples

Create a 'cpio' archive file.


# cd tecmint
# ls
file1.o file2.o file3.o
# ls | cpio -ov > /path/to/output_folder/obj.cpio

To extract a cpio archive file.


# cpio -idv < /path/to/folder/obj.cpio

5. Gzip
gzip is the standard and most widely used file compression and decompression utility. gzip allows file
concatenation. Compressing a tar file with gzip outputs a tarball in the format of
'*.tar.gz' or '*.tgz'.

gzip options

1. --stdout : Produce output on standard output.
2. --to-stdout : Produce output on standard output.
3. --decompress : Decompress file.
4. --uncompress : Decompress file.
5. -d : Decompress file.
6. -f : Force compression/decompression.
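gzip can also be used directly on a single file, without tar; a small round trip (the file names are illustrative) shows the compress/decompress cycle:

```shell
#!/bin/sh
# gzip replaces the original file with a .gz file; gzip -d restores it.
set -e
d=$(mktemp -d)
printf 'sample data\n' > "$d/notes.txt"
gzip "$d/notes.txt"          # produces notes.txt.gz and removes notes.txt
ls "$d"                      # prints: notes.txt.gz
gzip -d "$d/notes.txt.gz"    # decompresses back to notes.txt
cat "$d/notes.txt"           # prints: sample data
rm -rf "$d"
```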

gzip Examples

Create a 'gzip' archive file.


# tar -cvzf name_of_archive.tar.gz /path/to/folder

To extract a ‘gzip’ archive file.


# gunzip file_name.tar.gz

The above command must be followed by the command below.


# tar -xvf file_name.tar

Note: The architecture and functionality of gzip make it difficult to recover a corrupted
gzipped tar archive. It is advised to make several backups of gzipped important files in
different locations.

20. cal Command
The cal command prints a calendar on the standard output.
$ cal

21. cat Command
The cat command is used to view the contents of a file, or to concatenate files or data provided on
standard input, and display the result on the standard output.
$ cat file.txt

The cat (short for "concatenate") command is one of the most frequently used commands in
Linux/Unix-like operating systems. The cat command allows us to create single or multiple files, view
the contents of a file, concatenate files, and redirect output to the terminal or to files. In this article,
we will look at handy uses of the cat command with examples in Linux.
General Syntax

cat [OPTION] [FILE]...


1. Display Contents of File
The example below shows the contents of the /etc/passwd file.
# cat /etc/passwd

2. View Contents of Multiple Files in terminal


The example below displays the contents of the test and test1 files in the terminal.
# cat test test1

3. Create a File with Cat Command


We will create a file called test2 with the command below.
# cat >test2

cat awaits input from the user; type the desired text and press CTRL+D (hold down the Ctrl key and
type 'd') to exit. The text will be written to the test2 file. You can see the content of the file with
the following cat command.
# cat test2

4. Use Cat Command with More & Less Options


If a file has so much content that it won't fit in the terminal and the screen scrolls up
very fast, we can pipe cat through the pagers more and less, as shown below.
# cat song.txt | more
# cat song.txt | less

5. Display Line Numbers in File


With the -n option you can see the line numbers of the file song.txt in the output terminal.
# cat -n song.txt

6. Display $ at the End of File


Below, with the -e option you can see that '$' is shown at the end of every line, including blank
lines between paragraphs. This option is useful for spotting trailing spaces and line endings.

# cat -e test
7. Display TAB Characters in File
In the output below, TAB characters are shown as the '^I' character.
# cat -T test

8. Display Multiple Files at Once


In the example below we have three files (test, test1 and test2) and can view their contents as
shown. Separate each cat command with ; (semicolon).
# cat test; cat test1; cat test2

9. Use Standard Output with Redirection Operator


We can redirect the standard output of a file into a new or existing file with the '>' (greater-than)
symbol. Careful: the existing contents of test1 will be overwritten by the contents of the test file.
# cat test > test1

10. Appending Standard Output with Redirection Operator


The '>>' (double greater-than) symbol appends to an existing file. Here, the contents of the test file
will be appended at the end of the test1 file.
# cat test >> test1
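The difference between '>' and '>>' can be seen in a small self-contained run (the file names are illustrative):

```shell
#!/bin/sh
# '>' overwrites the target file, '>>' appends to it.
set -e
d=$(mktemp -d)
printf 'one\n' > "$d/test"
printf 'zzz\n' > "$d/test1"
cat "$d/test" > "$d/test1"    # test1 is overwritten and now holds "one"
cat "$d/test" >> "$d/test1"   # a second "one" is appended
cat "$d/test1"                # prints "one" twice
rm -rf "$d"
```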

11. Redirecting Standard Input with Redirection Operator


When you use redirection with standard input, '<' (the less-than symbol), the file test2 is used as
input for the command and the output is shown in the terminal.
# cat < test2

12. Redirecting Multiple Files Contain in a Single File


This will create a file called test3 and all output will be redirected in a newly created file.
# cat test test1 test2 > test3

13. Sorting Contents of Multiple Files in a Single File


This will create a file test4; the output of the cat command is piped to sort and the result is
redirected to the newly created file.
# cat test test1 test2 test3 | sort > test4
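The same pipeline can be tried with throwaway files (names are illustrative):

```shell
#!/bin/sh
# Concatenate two files, sort the combined lines, and save the result.
set -e
d=$(mktemp -d)
printf 'banana\n' > "$d/test"
printf 'apple\n'  > "$d/test1"
cat "$d/test" "$d/test1" | sort > "$d/test4"
cat "$d/test4"   # prints: apple, then banana
rm -rf "$d"
```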


The cat command can also read or combine multiple files and send the output to the monitor, as
illustrated in the examples below.
# cat file1.txt file2.txt file3.txt

The command can also be used to concatenate (join) multiple files into one single file using the
“>” Linux redirection operator.
# cat file1.txt file2.txt file3.txt > file-all.txt

By using the append redirector you can add the content of a new file to the bottom of
file-all.txt with the following syntax.
# cat file4.txt >> file-all.txt

The cat command can be used to copy the content of a file to a new file, which can be
named arbitrarily. For example, copy the file from the current location to the /tmp/ directory.
# cat file1.txt > /tmp/file1.txt

Copy the file from the current location to /tmp/ directory and change its name.
# cat file1.txt > /tmp/newfile.cfg

A less common use of the cat command is to create a new file, with the syntax below. When finished
editing, hit CTRL+D to save and exit the new file.
# cat > new_file.txt

In order to number all output lines of a file, including empty lines, use the -n switch.
# cat -n file-all.txt

To display the number of each non-empty line only, use the -b switch.
# cat -b file-all.txt

How to Use Tac Command in Linux


On the other hand, a lesser known and less used command in *nix systems is the tac command. tac
is practically the reverse of the cat command (its name spelled backwards): it prints each
line of a file starting from the bottom line and finishing at the top line, to your machine's
standard output.
# tac file-all.txt

One of the most important options of the command is the -s switch, which
separates the contents of the file based on a string or a keyword from the file.
# tac file-all.txt --separator "two"

Another important use of the tac command is for debugging log files, reversing the
chronological order of the log contents.
$ tac /var/log/auth.log
Or to display just the last lines, reversed:
$ tail /var/log/auth.log | tac

Like the cat command, tac does an excellent job manipulating text files, but it should be
avoided with other types of files, especially binary files, or files where the first line denotes the
program that will run them.
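A quick illustration of the line reversal (the sample file is made up):

```shell
#!/bin/sh
# tac prints the file's lines in reverse order.
set -e
f=$(mktemp)
printf 'first\nsecond\nthird\n' > "$f"
tac "$f"    # prints: third, second, first
rm -f "$f"
```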

22. chgrp Command
The chgrp command is used to change the group ownership of a file. Provide the new group name
as the first argument and the name of the file as the second argument, like this:
$ chgrp niki users.txt

23. chmod Command
The chmod command is used to change/update file access permissions, like this:
$ chmod +x sysinfo.sh

24. chown Command
The chown command changes/updates the user and group ownership of a file/directory, like this:
$ chown -R www-data:www-data /var/www/html
Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts
Adding User Accounts
To add a new user account, you can run either of the following two commands as root.
# adduser [new_account]
# useradd [new_account]

When a new user account is added to the system, the following operations are performed.

1. His/her home directory is created (/home/username by default).

2. The following hidden files are copied into the user's home directory, and will be used to
provide environment variables for his/her user session.
.bash_logout
.bash_profile
.bashrc

3. A mail spool is created for the user at /var/spool/mail/username.

4. A group is created and given the same name as the new user account.

Understanding /etc/passwd

The full account information is stored in the /etc/passwd file. This file contains a record per
system user account and has the following format (fields are delimited by a colon).

[username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]

1. Fields [username] and [Comment] are self explanatory.


2. The x in the second field indicates that the account is protected by a shadowed
password (in /etc/shadow), which is needed to logon as [username].
3. The [UID] and [GID] fields are integers that represent the User IDentification and the
primary Group IDentification to which [username] belongs, respectively.
4. The [Home directory] indicates the absolute path to [username]'s home directory.

5. The [Default shell] is the shell that will be made available to this user when he or she
logs into the system.
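Since each record is just colon-delimited text, the fields can be pulled apart with standard
tools; the root entry exists on virtually every system, so it is a safe record to inspect:

```shell
# Print selected fields of root's record in /etc/passwd.
awk -F: '$1 == "root" {
    print "user:",  $1
    print "uid:",   $3
    print "gid:",   $4
    print "home:",  $6
    print "shell:", $7
}' /etc/passwd
```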


Understanding /etc/group

Group information is stored in the /etc/group file. Each record has the following format.

[Group name]:[Group password]:[GID]:[Group members]

1. [Group name] is the name of group.


2. An x in [Group password] indicates group passwords are not being used.
3. [GID]: same as in /etc/passwd.
4. [Group members]: a comma separated list of users who are members of [Group name].
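The same record can be read with getent, which also resolves groups coming from sources other
than /etc/group (e.g. LDAP or NIS):

```shell
# Show the name, GID and member list of the root group.
getent group root | awk -F: '{ print "group:", $1, "| gid:", $3, "| members:", $4 }'
```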

After adding an account, you can edit the following information (to name a few fields) using
the usermod command, whose basic syntax is as follows.
# usermod [options] [username]

Setting the expiry date for an account


Use the --expiredate flag followed by a date in YYYY-MM-DD format.
# usermod --expiredate 2019-10-25 niki

Adding the user to supplementary groups


Use the combined -aG, or --append --groups options, followed by a comma-separated list of
groups.
# usermod --append --groups root,users niki

Changing the default location of the user’s home directory


Use the -d, or --home options, followed by the absolute path to the new home directory.
# usermod --home /tmp niki

Changing the shell the user will use by default


Use --shell, followed by the path to the new shell.
# usermod --shell /bin/sh niki

Displaying the groups a user is a member of


# groups niki

# id niki
Now let’s execute all the above commands in one go.
# usermod --expiredate 2019-10-25 --append --groups root,users --home /tmp --shell /bin/sh
niki

In the example above, we will set the expiry date of the niki user account to October 25th,
2019. We will also add the account to the root and users group. Finally, we will set sh as its
default shell and change the location of the home directory to /tmp:
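You can confirm such changes by re-reading the account's record. The sample record below is
what the niki entry might look like afterwards; it is shown as a shell variable so the parsing
can be tried on any system, without the account actually existing (on a real system you would
obtain it with `getent passwd niki`):

```shell
# Hypothetical /etc/passwd record for niki after the usermod call above.
record='niki:x:1001:1001::/tmp:/bin/sh'
home=$(echo "$record" | cut -d: -f6)     # field 6: home directory
shell=$(echo "$record" | cut -d: -f7)    # field 7: default shell
echo "home=$home shell=$shell"
```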

For existing accounts, we can also do the following.

Disabling account by locking password


Use the -L (uppercase L) or the --lock option to lock a user's password.
# usermod --lock niki

Unlocking user password


Use the -U (uppercase U) or the --unlock option to unlock a user's password that was
previously locked.
# usermod --unlock niki

Creating a new group for read and write access to files that need to be accessed by several
users

Run the following series of commands to achieve the goal.


# groupadd common_group # Add a new group
# chown :common_group common.txt # Change the group owner of common.txt to
common_group
# usermod -aG common_group user1 # Add user1 to common_group
# usermod -aG common_group user2 # Add user2 to common_group
# usermod -aG common_group user3 # Add user3 to common_group
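Group changes themselves need root, but the permission half of the recipe above can be
rehearsed as any user on a scratch file:

```shell
# Give owner and group read/write access, and nothing to others.
touch common.txt
chmod 660 common.txt
stat -c '%a %U %G' common.txt     # prints octal mode, owner and group
```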

Deleting a group
You can delete a group with the following command.
# groupdel [group_name]
If there are files owned by group_name, they will not be deleted, but the group owner will be
set to the GID of the group that was deleted.


Linux File Permissions
Like the basic permissions discussed earlier, they are set using an octal number or through a
letter (symbolic notation) that indicates the type of permission.

Deleting user accounts


You can delete an account (along with its home directory, if it's owned by the user, and all the
files residing therein, and also the mail spool) using the userdel command with the --remove
option.
# userdel --remove [username]

Group Management
Every time a new user account is added to the system, a group with the same name is created
with the username as its only member. Other users can be added to the group later. One of the
purposes of groups is to implement a simple access control to files and other system resources
by setting the right permissions on those resources.

For example, suppose you have the following users.

1. user1 (primary group: user1)


2. user2 (primary group: user2)
3. user3 (primary group: user3)

All of them need read and write access to a file called common.txt located somewhere on your
local system, or maybe on a network share that user1 has created. You may be tempted to do
something like,
# chmod 660 common.txt
OR
# chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file
name]

However, this will only provide read and write access to the owner of the file and to those users
who are members of the group owner of the file (user1 in this case). Again, you may be
tempted to add user2 and user3 to group user1, but that will also give them access to the rest
of the files owned by user user1 and group user1.


This is where groups come in handy: a dedicated group, created as in the groupadd/usermod
series shown earlier, is what you should use in a case like this.

Understanding Setuid

When the setuid permission is applied to an executable file, a user running the program
inherits the effective privileges of the program's owner. Since this approach can raise security
concerns, the number of files with the setuid permission must be kept to a minimum.
You will likely find programs with this permission set when a system user needs to access a file
owned by root.

Summing up, it isn’t just that the user can execute the binary file, but also that he can do so
with root’s privileges. For example, let’s check the permissions of /bin/passwd. This binary is
used to change the password of an account, and modifies the /etc/shadow file. The superuser
can change anyone’s password, but all other users should only be able to change their own.

Thus, any user should have permission to run /bin/passwd, but only root will be able to specify
an account. Other users can only change their corresponding passwords.
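The setuid bit appears as an 's' in the owner-execute position of the mode string. The path of
passwd varies by distribution (/bin/passwd or /usr/bin/passwd), and the bit itself can be
reproduced on a harmless scratch file:

```shell
# On most systems this shows something like: -rwsr-xr-x ... root root ...
ls -l /usr/bin/passwd 2>/dev/null || ls -l /bin/passwd

# Reproduce the bit on a scratch file.
touch demo_suid
chmod 4755 demo_suid            # leading 4 adds the setuid bit to rwxr-xr-x
stat -c '%A' demo_suid          # -rwsr-xr-x
```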

Understanding Setgid

When the setgid bit is set, the effective GID of the real user becomes that of the group owner.
Thus, any user can access a file under the privileges granted to the group owner of such file. In
addition, when the setgid bit is set on a directory, newly created files inherit the same group as
the directory, and newly created subdirectories will also inherit the setgid bit of the parent
directory. You will most likely use this approach whenever members of a certain group need
access to all the files in a directory, regardless of the file owner’s primary group.
# chmod g+s [filename]

To set the setgid in octal form, prepend the number 2 to the current (or desired) basic
permissions.

# chmod 2755 [directory]
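A quick sketch of the inheritance behavior on a scratch directory (the group here will simply
be your own primary group, but the mode string shows the setgid bit as an 's' in the
group-execute slot):

```shell
mkdir shared_dir
chmod 2775 shared_dir            # leading 2 adds the setgid bit to rwxrwxr-x
stat -c '%A' shared_dir          # drwxrwsr-x
touch shared_dir/new_file        # new files inherit shared_dir's group
stat -c '%G' shared_dir/new_file
```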



Understanding Sticky Bit


When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect
of preventing users from deleting or even renaming the files it contains unless the user owns
the directory, the file, or is root.
# chmod o+t [directory]

To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic
permissions.
# chmod 1755 [directory]

Without the sticky bit, anyone able to write to the directory can delete or rename files. For that
reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.
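The bit shows up as a trailing 't' in the mode string of /tmp, and can be reproduced on a
scratch directory:

```shell
ls -ld /tmp                      # typically: drwxrwxrwt ...
mkdir shared_scratch
chmod 1777 shared_scratch        # leading 1 adds the sticky bit to rwxrwxrwx
stat -c '%A' shared_scratch      # drwxrwxrwt
```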

Special Linux File Attributes


There are other attributes that enable further limits on the operations allowed on files; for
example, they can prevent a file from being renamed, moved, deleted, or even modified. They
are set with the chattr command and can be viewed using the lsattr tool, as follows.
# chattr +i file1
# chattr +a file2

After executing those two commands, file1 will be immutable (which means it cannot be
moved, renamed, modified or deleted) whereas file2 will enter append-only mode (it can only
be opened in append mode for writing).

chattr (Change Attribute) is a command-line Linux utility used to set/unset certain
attributes on a file, to protect important files and folders against accidental deletion or
modification, even when you are logged in as the root user.

Linux native filesystems (ext2, ext3, ext4, btrfs, etc.) support all of these flags, though not
all flags are supported on non-native filesystems. Once attributes are set with the chattr
command, a file or folder cannot be deleted or modified, even by a user with full permissions
on it.

Syntax of chattr
# chattr [operator] [flags] [filename]
Attributes and Flags

Following are the list of common attributes and associated flags can be set/unset using the
chattr command.

1. If a file with the ‘A‘ attribute set is accessed, its atime record is not updated.
2. If a file with the ‘S‘ attribute set is modified, the changes are written synchronously to
the disk.
3. A file with the ‘a‘ attribute can only be opened in append mode for writing.
4. A file with the ‘i‘ attribute cannot be modified (it is immutable): no renaming, no
symbolic link creation, no writing; only the superuser can unset the attribute.
5. For a file with the ‘j‘ attribute, all of its information is updated in the ext3 journal before
being written to the file itself.
6. A file with the ‘t‘ attribute has no tail-merging.
7. A file with the ‘d‘ attribute is no longer a candidate for backup when the dump process is
run.
8. When a file with the ‘u‘ attribute is deleted, its data is saved. This enables the user to ask
for its undeletion.

Operator

1. + : Adds the attribute to the existing attributes of the files.

2. – : Removes the attribute from the existing attributes of the files.
3. = : Sets the listed attributes as the only attributes the files have.

Here, we are going to demonstrate some chattr command examples to set/unset attributes
on files and folders.

1. How to add attributes on files to secure from deletion


For demonstration purposes, we've used the folder demo and the file important_file.conf
respectively. Before setting any attributes, verify whether the existing files already have
attributes set, using the ‘lsattr‘ command. As the output shows, no attributes are currently set.
# lsattr
To set an attribute, we use the + sign, and to unset it, the – sign, with the chattr command. So,
let's set the immutable bit on the files with the +i flag to prevent anyone from deleting them;
even the root user won't have permission to delete them.
# chattr +i demo/
# chattr +i important_file.conf

Note: The immutable bit (+i) can only be set by the superuser (i.e. root) or a user with sudo
privileges.

After setting immutable bit, let’s verify the attribute with command ‘lsattr‘.
# lsattr

Now, try to forcefully delete, rename or change the permissions; the operation won't be
allowed, failing with “Operation not permitted“.

# rm -rf demo/
# mv demo/ demo_alter
# chmod 755 important_file.conf

2. How to unset attribute on Files


In the above example, we saw how to set an attribute to protect files from accidental
deletion; in this example, we will see how to unset the attribute and make the files changeable
again using the -i flag.
# chattr -i demo/ important_file.conf

After unsetting the attribute, verify the immutable status of the files using the ‘lsattr‘ command.
# lsattr

You can see in the above results that the ‘i‘ flag has been removed, which means you can now
safely remove all the files and folders residing in the demo folder.
# rm -rf *

3. How to Secure /etc/passwd and /etc/shadow files


Setting the immutable attribute on the files /etc/passwd and /etc/shadow secures them
against accidental removal or tampering, and it also disables user account creation.
# chattr +i /etc/passwd
# chattr +i /etc/shadow

Now try to create a new system user; you will get an error message saying ‘cannot open
/etc/passwd‘.

This way you can set immutable permissions on your important files or system configuration
files to prevent their deletion.

4. Append data without Modifying existing data on a File


Suppose you only want to allow everyone to append data to a file, without changing or
modifying the data already entered; you can use the ‘a‘ attribute as follows.
# chattr +a example.txt
# lsattr example.txt

After setting append mode, the file can be opened for writing data in append mode only. You
can unset the append attribute as follows.
# chattr -a example.txt

Now try to replace the existing content of the file example.txt; you will get an error saying
‘Operation not permitted‘.

# echo "replace content on file." > example.txt

Now try to append new content to the existing file example.txt and verify it.
# echo "append content to file." >> example.txt
# cat example.txt

5. How to Secure Directories


To secure an entire directory and its files, we use the ‘-R‘ (recursive) switch with the ‘+i‘ flag
along with the full path of the folder.

# chattr -R +i myfolder
After setting the attribute recursively, try to delete the folder and its files.
# rm -rf myfolder/

To unset it, we use the same ‘-R’ (recursive) switch with the ‘-i’ flag along with the full path of
the folder.
# chattr -R -i myfolder

Accessing the root Account and Using sudo


One of the ways users can gain access to the root account is by typing:
$ su
and then entering root’s password.

If authentication succeeds, you will be logged in as root with the same current working
directory as before. If you want to be placed in root's home directory instead, run:
$ su -
and then enter root’s password.

The above procedure requires that a normal user knows root’s password, which poses a serious
security risk. For that reason, the sysadmin can configure the sudo command to allow an
ordinary user to execute commands as a different user (usually the superuser) in a very
controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run
one or more specific privileged commands and no others.

To authenticate using sudo, the user uses his/her own password. After entering the command,
we will be prompted for our password (not the superuser’s) and if the authentication succeeds
(and if the user has been granted privileges to run the command), the specified command is
carried out.

To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is
recommended that this file is edited using the visudo command instead of opening it directly
with a text editor.

# visudo

This opens the /etc/sudoers file using vim


These are the most relevant lines.
Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
root ALL=(ALL) ALL
niki ALL=/bin/yum update
shimon ALL=NOPASSWD:/bin/updatedb
%admin ALL=(ALL) ALL

Let’s take a closer look at them.

Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
This line lets you specify the directories that will be used for sudo, and is used to prevent using
user-specific directories, which can harm the system.

The next lines are used to specify permissions.

root ALL=(ALL) ALL

1. The first ALL keyword indicates that this rule applies to all hosts.
2. The second ALL indicates that the user in the first column can run commands with the
privileges of any user.
3. The third ALL means any command can be run.

niki ALL=/bin/yum update


If no user is specified after the = sign, sudo assumes the root user. In this case, user niki will be
able to run yum update as root.

shimon ALL=NOPASSWD:/bin/updatedb
The NOPASSWD directive allows user shimon to run /bin/updatedb without needing to enter
his password.

%admin ALL=(ALL) ALL


The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of
the line is identical to that of a regular user's entry. This means that members of the group
“admin” can run all commands as any user on all hosts.
To see what privileges are granted to you by sudo, use the “-l” option to list them.

PAM (Pluggable Authentication Modules)


Pluggable Authentication Modules (PAM) offer the flexibility of setting a specific authentication
scheme on a per-application and/or per-service basis using modules. This tool, present on all
modern Linux distributions, overcame a problem often faced by developers in the early days
of Linux, when each program that required authentication had to be compiled specially to know
how to get the necessary information.

For example, with PAM, it doesn’t matter whether your password is stored in /etc/shadow or
on a separate server inside your network.

For example, when the login program needs to authenticate a user, PAM dynamically provides
the library that contains the functions for the right authentication scheme. Thus, changing the
authentication scheme for the login application (or any other program using PAM) is easy since
it only involves editing a configuration file (most likely, a file named after the application,
located inside /etc/pam.d, and less likely in /etc/pam.conf).

Files inside /etc/pam.d indicate which applications use PAM natively. In addition, we can
tell whether a certain application uses PAM by checking whether the PAM library (libpam) has
been linked to it:
# ldd $(which login) | grep libpam # login uses PAM
# ldd $(which top) | grep libpam # top does not use PAM

Let's examine the PAM configuration file for passwd, the well-known utility used to change
user passwords. It is located at /etc/pam.d/passwd:
# cat /etc/pam.d/passwd

The following authentication types are available:

1. account: this module type checks if the user or service has supplied valid credentials to
authenticate.
2. auth: this module type verifies that the user is who he/she claims to be and grants any
needed privileges.
3. password: this module type allows the user or service to update their password.
4. session: this module type indicates what should be done before and/or after the
authentication succeeds.

Control indicates what should happen if the authentication with this module fails:

1. requisite: if the authentication via this module fails, overall authentication will be
denied immediately.
2. required: similar to requisite, although all other listed modules for this service will be
called before denying authentication.
3. sufficient: if authentication via this module succeeds, PAM grants authentication
immediately (provided no previous required module has failed); if it fails, the failure is
ignored.
4. optional: if the authentication via this module fails or succeeds, nothing happens unless
this is the only module of its type defined for this service.
5. include means that the lines of the given type should be read from another file.
6. substack: similar to include, but authentication failures or successes do not cause the
exit of the complete stack, only of the substack.
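Putting the pieces together, a service file is just lines of the form `type control module
[arguments]`. The snippet below is an illustrative sketch of what /etc/pam.d/passwd might
contain on a Red Hat-style system; the module names and options here are assumptions and
vary by distribution:

```
#%PAM-1.0
auth       include      system-auth
account    include      system-auth
password   substack     system-auth
```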

‘su’ Vs ‘sudo’
‘su‘ forces you to share your root password with other users, whereas ‘sudo‘ makes it possible
to execute system commands without the root password. ‘sudo‘ lets you use your own
password to execute system commands, i.e., it delegates system responsibility without sharing
the root password.

What is ‘sudo’?
‘sudo‘ is a setuid root binary that executes commands on behalf of authorized users; the
users enter their own password to execute the system command following ‘sudo‘.

Who can execute ‘sudo’?


We can run ‘/usr/sbin/visudo‘ to add/remove the list of users who can execute ‘sudo‘.
$ sudo /usr/sbin/visudo

By default, the sudoers entry looks like the string below:



root ALL=(ALL) ALL


Note: You must be root to run visudo and edit the /etc/sudoers file.

Granting sudo Access


In many situations, a system administrator, especially one new to the field, treats the string
“root ALL=(ALL) ALL” as a template and grants unrestricted access to others, which may be
potentially very harmful.

Granting everyone unrestricted “ALL=(ALL) ALL” entries in the sudoers file can be very
dangerous, unless you completely trust all the listed users.

Parameters of sudo
A properly configured ‘sudo‘ is very flexible, and the commands that may be run can be
precisely configured.

The syntax of a configured ‘sudo‘ line is:


User_name Machine_name=(Effective_user) command

The above syntax can be divided into four parts:

1. User_name: the name of the ‘sudo‘ user.

2. Machine_name: the host name on which the ‘sudo‘ command is valid. Useful when
you have lots of host machines.
3. (Effective_user): the effective user(s) the command may be run as. This column lets
you allow users to execute system commands as other users.
4. Command: a command or a set of commands which the user may run.

Some of the Situations, and their corresponding ‘sudo‘ line:

Q1. You have a user niki who is a database administrator. You are supposed to give him full
access on the Database Server (beta.database_server.com) only, and not on any other host.

For the above situation the ‘sudo‘ line can be written as:

niki beta.database_server.com=(ALL) ALL


Q2. You have a user ‘shimon‘ who is supposed to execute system commands as a user other
than root on the same Database Server explained above.

For the above situation the ‘sudo‘ line can be written as:
shimon beta.database_server.com=(niki) ALL

Q3. You have a sudo user ‘cat‘ who is supposed to run the command ‘dog‘ only.

To implement the above situation, we can write ‘sudo’ as:


cat beta.database_server.com=(ALL) dog

Q4. What if the user needs to be granted several commands?

If the number of commands the user is supposed to run is under 10, we can place all the
commands alongside one another, separated by commas, as shown below:
niki beta.database_server.com=(cat) /usr/bin/command1, /usr/sbin/command2,
/usr/sbin/command3 ...

If the list of commands grows to the point where it is not practical to type each command
manually, we need to use aliases: the sudoers facility whereby a long command or a list of
commands can be referred to by a short and easy keyword.

A few alias examples, which can be used in place of entries in the ‘sudo‘ configuration file
(WEBSERVERS and APACHE in the third line are assumed to be a previously defined host alias
and command alias, respectively):
User_Alias ADMINS=niki,shimon,nik
User_Alias WEBMASTERS=shimon,niki
WEBMASTERS WEBSERVERS=(www) APACHE
Cmnd_Alias PROC=/bin/kill,/bin/killall,/usr/bin/top

It is possible to specify system groups in place of users; the members of that group are
covered by just prefixing ‘%’ as below:
%apacheadmin WEBSERVERS=(www) APACHE

Q5. How about executing a ‘sudo‘ command without entering password?



We can execute a ‘sudo‘ command without entering a password by using the ‘NOPASSWD‘ flag.
niki ALL=(ALL) NOPASSWD: PROC
Here the user ‘niki‘ can execute all the commands aliased under “PROC” without entering a
password.

“sudo” provides a robust and safe environment with lots of flexibility as compared to ‘su‘.
Moreover, “sudo” configuration is easy. Some Linux distributions have “sudo” enabled by
default, while most of today's distros need you to enable it as a security measure.

To add a user (shimon) to sudo, just run the below command as root.
adduser shimon sudo

sudo allows a permitted user to execute a command as root (or another user), as specified by
the security policy:

1. It reads and parses /etc/sudoers, looks up the invoking user and its permissions,
2. then prompts the invoking user for a password (normally the user's password, but it can
as well be the target user's password; or it can be skipped with the NOPASSWD tag),
3. after that, sudo creates a child process in which it calls setuid() to switch to the target
user
4. next, it executes a shell or the command given as arguments in the child process above.

Below are several /etc/sudoers file configurations that modify the behavior of the sudo
command using Defaults entries.
$ sudo cat /etc/sudoers

1. Set a Secure PATH


This is the path used for every command run with sudo. It is important for two reasons:

1. It is used when a system administrator does not trust sudo users to have a secure PATH
environment variable.
2. It separates the “root path” and the “user path”; only users defined by exempt_group are
not affected by this setting.
To set it, add the line
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

2. Enable sudo on TTY User Login Session


To allow sudo to be invoked only from a real tty and not through methods such as cron or
cgi-bin scripts, add the line:
Defaults requiretty

3. Run Sudo Command Using a pty


Sometimes attackers can run a malicious program (such as a virus or malware) using sudo,
which would again fork a background process that remains on the user's terminal device even
when the main program has finished executing.

To avoid such a scenario, you can configure sudo to run other commands only from a pseudo-
pty using the use_pty parameter, whether I/O logging is turned on or not, as follows:
Defaults use_pty

4. Create a Sudo Log File


By default, sudo logs through syslog(3). However, to specify a custom log file, use the logfile
parameter like so:
Defaults logfile="/var/log/sudo.log"

To log hostname and the four-digit year in the custom log file, use log_host and log_year
parameters respectively as follows:
Defaults log_host, log_year, logfile="/var/log/sudo.log"

5. Log Sudo Command Input/Output


The log_input and log_output parameters enable sudo to run a command in a pseudo-tty and
log all user input and all output sent to the screen, respectively.
The default I/O log directory is /var/log/sudo-io, and if there is a session sequence number, it is
stored in this directory. You can specify a custom directory through the iolog_dir parameter.
Defaults log_input, log_output

Some escape sequences are supported, such as %{seq}, which expands to a monotonically
increasing base-36 sequence number, such as 000001, where every two digits are used to form
a new directory, e.g. 00/00/01, as in the example below:
$ cd /var/log/sudo-io/
$ ls
$ cd 00/00/01
$ ls
$ cat log
You can view the rest of the files in that directory using the cat command.

6. Lecture Sudo Users


To lecture sudo users about password usage on the system, use the lecture parameter as
below.
It has 3 possible values:

1. always – always lecture a user.


2. once – only lecture a user the first time they execute the sudo command (this is the
default when no value is specified)
3. never – never lecture the user.

Defaults lecture="always"

Additionally, you can set a custom lecture file with the lecture_file parameter, type the
appropriate message in the file:
Defaults lecture_file="/path/to/file"

7. Show Custom Message When You Enter Wrong sudo Password


When a user enters a wrong password, a certain message is displayed on the command line.

The default message is “sorry, try again”; you can modify it using the badpass_message
parameter as follows:
Defaults badpass_message="Password is wrong, please try again"

8. Increase sudo Password Tries Limit


The parameter passwd_tries is used to specify the number of times a user can try to enter a
password.

The default value is 3:


Defaults passwd_tries=5

To set a password timeout (default is 5 minutes) using passwd_timeout parameter, add the line
below:
Defaults passwd_timeout=2

9. Let Sudo Insult You When You Enter Wrong Password


In case a user types a wrong password, sudo will display insults on the terminal with the insults
parameter. This automatically turns off the badpass_message parameter.
Defaults insults

25.cksum Command
cksum command is used to display the CRC checksum and byte count of an input file.
$ cksum README.txt

26.clear Command
clear command lets you clear the terminal screen, simply type.
$ clear

27.cmp Command
cmp performs a byte-by-byte comparison of two files like this.

$ cmp file1 file2



28.comm Command
comm command is used to compare two sorted files line-by-line as shown below.
$ comm file1 file2

29.cp Command
cp command is used for copying files and directories from one location to another.
$ cp /home/tecmint/file1 /home/tecmint/Personal/

How to Copy a File to Multiple Directories

The cp command is used to copy files from one directory to another, the easiest syntax for
using it is as follows:
# cp [options….] source(s) destination

Consider the commands below, normally, you would type two different commands to copy the
same file into two separate directories as follows:
# cp -v /home/niki/bin/sys_info.sh /home/niki/test
# cp -v /home/niki/bin/sys_info.sh /home/niki/tmp

Assuming you want to copy a particular file into five or more directories, does this mean
you would have to type five or more cp commands?

To do away with this problem, you can employ the echo command, a pipe, xargs command
together with the cp command in the form below:
# echo /home/niki/test/ /home/niki/tmp | xargs -n 1 cp -v /home/niki/bin/sys_info.sh

In the form above, the paths to the directories (dir1, dir2, dir3 ... dirN) are echoed and piped as
input to the xargs command, where:

1. -n 1 – tells xargs to use at most one argument per command line and send it to the cp
command.
2. cp – used to copy the file.
3. -v – enables verbose mode to show details of the copy operation.
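For a handful of directories, a plain shell loop does the same job and is easier to read (the
paths below are scratch examples, not the ones from the text):

```shell
mkdir -p test tmp                      # two destination directories
touch sys_info.sh                      # the file to distribute
for dir in test tmp; do
    cp -v sys_info.sh "$dir"/          # one copy per destination
done
ls test/sys_info.sh tmp/sys_info.sh    # both copies now exist
```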



Advanced Copy Command – Shows Progress Bar While Copying Large Files/Folders
Advanced-Copy is a powerful command-line program, a slightly modified version of the
original cp command. This modified version adds a progress bar along with the total time
taken to complete while copying large files from one location to another. The additional
feature is very useful especially while copying large files, as it gives the user an idea of the
status of the copy process and how long it will take to complete.

Download and Install Advanced-Copy


There are two methods to install the Advanced-Copy utility on Linux systems: either compile it
from sources or use pre-compiled binaries. Installing from pre-compiled binaries should always
work correctly, requires less experience, and is very effective for Linux newbies.

But I suggest you compile from sources; for this you need the original version of GNU
coreutils and the latest patch file of Advanced-Copy. The whole installation should go like this:

Compiling from Sources

First, download the latest version of GNU coreutils and patchfile using wget command and
compile and patch it as shown below, you must be root user to perform all commands.

# wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.21.tar.xz
# tar xvJf coreutils-8.21.tar.xz
# cd coreutils-8.21/
# wget https://raw.githubusercontent.com/atdt/advcpmv/master/advcpmv-0.5-8.21.patch
# patch -p1 -i advcpmv-0.5-8.21.patch
# ./configure
# make
You might get the following error while running the “./configure” command.

checking whether mknod can create fifo without root privileges... configure: error: in
`/home/tecmint/coreutils-8.21':
configure: error: you should not run configure as root (set FORCE_UNSAFE_CONFIGURE=1 in
environment to bypass this check)
See `config.log' for more details


Run the following command on the terminal to fix that error and run the “./configure”
command again.
export FORCE_UNSAFE_CONFIGURE=1

Once compilation completes, two new commands are created at src/cp and src/mv. You
need to replace your original cp and mv commands with these two new commands to get the
progress bar while copying files.
# cp src/cp /usr/local/bin/cp
# cp src/mv /usr/local/bin/mv

Note: If you don't want to copy these commands over the standard system paths, you can still
run them from the source directory, like “./cp” and “./mv”, or create new commands as shown:
# mv ./src/cp /usr/local/bin/cpg
# mv ./src/mv /usr/local/bin/mvg

Automatic progress bar


If you want the progress bar to appear all the time while copying, you need to add the
following lines to your ~/.bashrc file, then save and close the file.
alias cp='cp -gR'
alias mv='mv -g'

You need to log out and log in again for this to work correctly.

How to Use the Advanced-Copy Command


The command is the same; the only change is adding the “-g” or “--progress-bar” option to the
cp command. The “-R” option is for copying directories recursively. Here is an example of a
copy process using the advanced copy command.
# cp -gR /niki.com/ /data/
OR
# cp -R --progress-bar /niki.com/ /data/
Here is an example with the ‘mv‘ command:

# mv --progress-bar Songs/ /data/


OR
# mv -g Songs/ /data/

Please remember, the original commands are not overwritten. If you ever need to use them, or
you're not happy with the new progress bar and want to revert to the original cp and mv
commands, you can call them via /usr/bin/cp or /usr/bin/mv.

Progress – A Tiny Tool to Monitor Progress for (cp, mv, dd, tar, etc.)

Progress, formerly known as Coreutils Viewer, is a light C command that watches for basic coreutils commands such as cp, mv, tar, dd, gzip/gunzip, cat, grep, etc. currently being executed on the system and shows the percentage of data copied. It only runs on Linux and Mac OS X operating systems.

Additionally, it also displays important aspects such as estimated time and throughput, and
offers users a “top-like” mode.

It scans the /proc filesystem for interesting commands, then searches the fd and fdinfo directories to find opened files and seek positions, and reports status for the largest files. Importantly, it is a very light tool, and compatible with practically any command.

How to Install Progress Viewer in Linux


Progress requires the ncurses library in order to work, therefore install libncurses before
proceeding to install it, by running the appropriate command below:

On RHEL, CentOS and Fedora


# yum install ncurses-devel

On Fedora 22+ Releases


# dnf install ncurses-devel

On Debian, Ubuntu and Linux Mint


$ sudo apt-get install libncurses5-dev
Next, download the Progress source code (for example, by cloning its Git repository), then move into the progress directory and build it as shown:
$ cd progress
$ make
$ sudo make install

After successfully installing it, simply run this tool from your terminal, below we shall walk
through a few examples of using Progress on a Linux system.

You can view all the coreutils commands that Progress works with by running it without any
options, provided none of the coreutils commands is being executed on the system:
$ progress

To display estimated I/O throughput and estimated remaining time for ongoing coreutils
commands, enable the -w option:
$ progress -w

Start a heavy command in the background and monitor it using the -m option and $! (which expands to the PID of the last background job) as follows:

$ tar czf images.tar.gz linuxmint-18-cinnamon-64bit.iso CentOS-7.0-1406-x86_64-DVD.iso CubLinux-1.0RC-amd64.iso & progress -m $!

In the next example, you can open two or more terminal windows, run the coreutils commands in one of them, and watch their progress from the other terminal window.

The command below will enable you to monitor all the current and imminent instances of
coreutils commands:
$ watch progress -q

Exploring /proc File System

One misconception that we have to immediately clear up is that the /proc directory is NOT a real file system, in the sense of the term. It is a virtual file system. Contained within procfs is information about processes and other system information. It is mapped to /proc and mounted at boot time.

First, let's get into the /proc directory and have a look around:
# cd /proc

The first thing that you will notice is that there are some familiar sounding files, and then a
whole bunch of numbered directories. The numbered directories represent processes, better
known as PIDs, and within them, a command that occupies them. The files contain system
information such as memory (meminfo), CPU information (cpuinfo), and available filesystems.

Let’s take a look at one of the files first:


# cat /proc/meminfo

Running the cat command on any of the files in /proc will output their contents. Information
about these files is available in the proc(5) manual page, which you can view by running:
# man 5 proc
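Since files under /proc are plain text, you can pull individual fields out of them with standard tools; a minimal sketch on a Linux system (field names as they appear in /proc/meminfo):

```shell
# Show the total and free physical memory lines (values in kB):
grep -E '^(MemTotal|MemFree):' /proc/meminfo

# The same total as a bare number, usable in scripts; awk matches the
# field name in column 1 and prints the value in column 2.
awk '/^MemTotal:/ {print $2}' /proc/meminfo
```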

Quick rundown on /proc’s files:

1. /proc/cmdline – Kernel command line information.


2. /proc/console – Information about current consoles including tty.
3. /proc/devices – Device drivers currently configured for the running kernel.
4. /proc/dma – Info about current DMA channels.
5. /proc/fb – Framebuffer devices.
6. /proc/filesystems – Current filesystems supported by the kernel.
7. /proc/iomem – Current system memory map for devices.
8. /proc/ioports – Registered port regions for input output communication with device.
9. /proc/loadavg – System load average.
10. /proc/locks – Files currently locked by kernel.
11. /proc/meminfo – Info about system memory (see above example).

12. /proc/misc – Miscellaneous drivers registered for miscellaneous major device.


13. /proc/modules – Currently loaded kernel modules.

14. /proc/mounts – List of all mounts in use by system.


15. /proc/partitions – Detailed info about partitions available to the system.
16. /proc/pci – Information about every PCI device.
17. /proc/stat – Record of various statistics kept since last reboot.
18. /proc/swaps – Information about swap space.
19. /proc/uptime – Uptime information (in seconds).
20. /proc/version – Kernel version, gcc version, and Linux distribution installed.

Within /proc’s numbered directories you will find a few files and links. Remember that these
directories’ numbers correlate to the PID of the command being run within them. Let’s use an
example. On my system, there is a folder named /proc/12:
# cd /proc/12
# ls
If I run:
# cat /proc/12/status

From the status file we can see the name and state of this process. We can also see who is running it: the UID and GID are 0, indicating that this process belongs to the root user.

In any numbered directory, you will have a similar file structure. The most important ones, and
their descriptions, are as follows:

1. cmdline – command line of the process


2. environ – environmental variables
3. fd – file descriptors
4. limits – contains information about the limits of the process
5. mounts – mount information for the process

You will also notice a number of links in the numbered directory:

1. cwd – a link to the current working directory of the process


2. exe – link to the executable of the process
3. root – link to the root directory of the process
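A quick way to explore these entries without hunting for a PID is the /proc/self symlink, which always points at the /proc directory of the process reading it (a sketch for Linux systems):

```shell
# Command line of the current shell; arguments are NUL-separated
# in the cmdline file, so translate NULs to spaces for display.
tr '\0' ' ' < /proc/self/cmdline; echo

# Name, state and UID of the current process from its status file:
grep -E '^(Name|State|Uid):' /proc/self/status

# Where the cwd link points (the process's current working directory):
ls -l /proc/self/cwd
```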

How to Monitor Progress of (Copy/Backup/Compress) Data using ‘pv’ Command


When making backups or copying/moving large files on your Linux system, you may want to monitor the progress of an ongoing operation. Many terminal tools do not have the functionality to allow you to view progress information when a command is running in a pipe.

Pv is a terminal-based tool that allows you to monitor the progress of data that is being sent
through a pipe. When using the pv command, it gives you a visual display of the following
information:

1. The time that has elapsed.


2. The percentage completed including a progress bar.
3. Shows current throughput rate.
4. The total data transferred.
5. The ETA (estimated time remaining).

How to Install pv Command in Linux?


This command is not installed by default on most Linux distributions, therefore you can install it
by following the steps below.

On Fedora, CentOS and RHEL


First you need to turn on EPEL repository and then run the following command.
# yum install pv
# dnf install pv [On Fedora 22+ versions]

On Debian, Ubuntu and Linux Mint


# apt-get install pv

On Gentoo Linux
Use emerge package manager to install pv command as shown.
# emerge --ask sys-apps/pv
On FreeBSD
You can use the port to install it as follows:
# cd /usr/ports/sysutils/pv/
# make install clean

OR add the binary package as follows:


# pkg_add -r pv

How Do I use pv Command in Linux?


pv is mostly used with other programs which lack the ability to monitor the progress of an ongoing operation. You can use it by placing it in a pipeline between two processes, with the appropriate options available.

The standard input of pv will be passed through to its standard output and progress (output)
will be printed on standard error. It has a similar behavior as the cat command in Linux.

The syntax of the pv command is as follows:


pv file
pv options file
pv file > filename.out
pv options | command > filename.out
command1 | pv | command2

The options used with pv are divided into three categories, display switches, output modifiers
and general options.

Some options under display switches:

1. To turn on the display bar, use the -p option.
2. To view the elapsed time, use the --timer option.
3. To turn on the ETA timer, which tries to guess how long it will take before completion of an operation, use the --eta option. The guess is based on previous transfer rates and the total data size.
4. To turn on a rate counter, use the --rate option.
5. To display the total amount of data transferred so far, use the --bytes option.
6. To display progress as an integer percentage instead of a visual indication, use the -n option. This can be useful when using pv with the dialog command to show progress in a dialog box.

Some options under output modifiers:

1. To wait until the first byte is transferred before displaying progress information, use the --wait option.
2. To assume the total amount of data to be transferred is SIZE bytes when computing percentage and ETA, use the --size SIZE option.
3. To specify seconds between updates, use the --interval SECONDS option.
4. Use the --force option to force an operation. This option forces pv to display visuals when standard error is not a terminal.
5. The general options are --help to display usage information and --version to display version information.

Use pv Command with Examples


1. When no option is included, the pv command runs with the default -p, -t, -e, -r and -b options.

For example, to copy the opensuse.vdi file to /tmp/opensuse.vdi, run this command and watch the progress bar.
# pv opensuse.vdi > /tmp/opensuse.vdi

2. To make a zip file from your /var/log/syslog file, run the following command.
# pv /var/log/syslog | zip > syslog.zip

3. To count the number of lines, words and bytes in the /etc/hosts file while showing only a progress bar, run this command:
# pv -p /etc/hosts | wc

4. Monitor the progress of creating a backup file using tar utility.


# tar -czf - ./Downloads/ | (pv -p --timer --rate --bytes > backup.tgz)
5. Using pv and dialog terminal-based tool together to create a dialog progress bar as follows.
# tar -czf - ./Documents/ | (pv -n > backup.tgz) 2>&1 | dialog --gauge "Progress" 10 70

30.date Command
date command displays/sets the system date and time like this.
$ date
$ date --set="8 JUN 2019 13:00:00"
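GNU date also supports output format strings (+FORMAT) and, with -d, printing arbitrary dates without changing the system clock; a couple of sketches:

```shell
# Custom output format: year-month-day hours:minutes:seconds
date +"%Y-%m-%d %H:%M:%S"

# Print a given date (-d) without touching the system clock;
# -u interprets and displays it in UTC.
date -u -d "2019-06-08 13:00:00" +"%A, %d %B %Y"
# -> Saturday, 08 June 2019
```

Note that -d and --set parsing are GNU extensions; BSD date uses different flags.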

How to Set Time, Timezone and Synchronize System Clock Using timedatectl Command
The timedatectl command allows you to query and change the configuration of the system
clock and its settings, you can use this command to set or change the current date, time and
timezone or enable automatic system clock synchronization with a remote NTP server.
1. To display the current time and date on your system, use the timedatectl command from the
commandline as follows:
# timedatectl status

2. The time on your Linux system is always managed through the timezone set on the system,
to view your current timezone, do it as follows:
# timedatectl
OR
# timedatectl | grep Time

3. To view all available timezones, run the command below:


# timedatectl list-timezones

4. To find the local timezone according to your location, run the following command:
# timedatectl list-timezones | egrep -o "Asia/B.*"
# timedatectl list-timezones | egrep -o "Europe/L.*"
# timedatectl list-timezones | egrep -o "America/N.*"

5. To set your local timezone in Linux, we will use set-timezone switch as shown below.
# timedatectl set-timezone "Asia/Kolkata"
It is always recommended to use and set the coordinated universal time, UTC.
# timedatectl set-timezone UTC

You need to type the correct timezone name, otherwise you may get errors when changing the timezone; for example, the timezone "Asia/Kalkata" is misspelled and therefore causes an error.

How to Set Time and Date in Linux

6. You can set the date and time on your system, using the timedatectl command as follows:
To set the time only, we can use the set-time switch along with the time in HH:MM:SS (hours, minutes and seconds) format.
# timedatectl set-time 15:58:30

7. To set the date only, we can use the set-time switch along with the date in YYYY-MM-DD (year, month, day) format.
# timedatectl set-time '2019-11-20'

8. To set both date and time:


# timedatectl set-time '2019-11-20 16:14:50'

9. To set your hardware clock to coordinated universal time, UTC, use the set-local-rtc boolean-
value option as follows:
First Find out if your hardware clock is set to local timezone:
# timedatectl | grep local
Set your hardware clock to local timezone:
# timedatectl set-local-rtc 1
Set your hardware clock to coordinated universal time (UTC):
# timedatectl set-local-rtc 0

Synchronizing Linux System Clock with a Remote NTP Server


NTP (Network Time Protocol) is an internet protocol used to synchronize the system clock between computers. The timedatectl utility enables you to automatically sync your Linux system clock with a remote group of servers using NTP.
Please note that you must have NTP installed on the system to enable automatic time synchronization with NTP servers.

To start automatic time synchronization with remote NTP server, type the following command
at the terminal.
# timedatectl set-ntp true

To disable NTP time synchronization, type the following command at the terminal.
# timedatectl set-ntp false

31.dd Command
dd command is used for copying files, converting and formatting according to flags provided on the command line. It can strip headers, extract parts of binary files, and so on.

The example below shows creating a bootable USB device. Note that the target should be the whole device (e.g. /dev/sdc), not a partition:

$ dd if=/home/tecmint/kali-linux-1.0.4-i386.iso of=/dev/sdc bs=512M; sync
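Writing to a raw device is destructive, so it is worth rehearsing the same flags against an ordinary file first; a harmless sketch (the /tmp file names are illustrative):

```shell
# Create a 1 MiB file of zero bytes: read /dev/zero in 4 KiB blocks.
dd if=/dev/zero of=/tmp/dd-demo.img bs=4096 count=256 2>/dev/null

# Verify the resulting size in bytes (4096 * 256 = 1048576).
wc -c < /tmp/dd-demo.img

# Extract a slice: skip the first 2 input blocks, copy the next 3.
dd if=/tmp/dd-demo.img of=/tmp/dd-slice.img bs=4096 skip=2 count=3 2>/dev/null
wc -c < /tmp/dd-slice.img   # 4096 * 3 = 12288
rm -f /tmp/dd-demo.img /tmp/dd-slice.img
```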

32.df Command
df command is used to show file system disk space usage as follows.
$ df -h

33.diff Command
diff command is used to compare two files line by line. It can also be used to find the difference
between two directories in Linux like this:
$ diff file1 file2
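A small self-contained run shows the output format; diff also signals through its exit status (0 = identical, 1 = different), which scripts often rely on. The file names here are made up for the demo:

```shell
printf 'red\ngreen\nblue\n'  > /tmp/colors1.txt
printf 'red\nyellow\nblue\n' > /tmp/colors2.txt

# Line-by-line differences ("2c2" = line 2 changed); diff exits with
# status 1 when the files differ, so we tolerate that status here.
diff /tmp/colors1.txt /tmp/colors2.txt || true

# Exit status 0 means the files are identical:
diff /tmp/colors1.txt /tmp/colors1.txt && echo "identical"
rm -f /tmp/colors1.txt /tmp/colors2.txt
```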

34.dir Command
dir command works like Linux ls command, it lists the contents of a directory.
$ dir

The general syntax of the dir command is as follows.



# dir [OPTION] [FILE]


1. To list one file per line, use the -1 option as follows.
# dir
# dir -1

2. To list all files in a directory including . (hidden) files, use the -a option. You can include the -l
option to format output as a list.
# dir -a
# dir -al

3. When you need to list only directory entries instead of directory content, you can use the -d
option.
# dir -d /etc

When you use -dl, it shows a long listing of the directory including owner, group owner,
permissions.
# dir -dl /etc

4. In case you want to view the index number of each file, use the option -i. From the output
below, you can see that the first column shows numbers. These numbers are called inodes which
are sometimes referred to as index nodes or index numbers.

An inode in Linux systems is a data storage on a filesystem that stores information about a file
except the filename and its actual data.
# dir -il

5. You can view file sizes using the -s option. If you need to sort the files according to size, then
use the -S option.

In this case you need to also use the -h option to view the files sizes in a human-readable
format.
# dir -shl

Sorted list of files according to their sizes by using the -S option.



# dir -ashlS /home/niki


6. You can also sort by modification time, with the file that has recently been modified
appearing first on the list. This can be done using the -t option.
# dir -ashlt /home/niki

7. To list files without their owners, use the -g option, which works like the -l option except that it does not print the file owner. To list files without the group owner, use the -G option as follows.
# dir -ahgG /home/niki

You can as well view the author of a file by using the --author flag as follows.
# dir -al --author /home/niki

8. You may wish to view directories before all other files, and this can be done by using the --group-directories-first flag as follows.
# dir -l --group-directories-first

9. You can also view subdirectories recursively, meaning that you can list all other subdirectories
in a directory using the -R option as follows.
# dir -R

10. To view user and group IDs, use the -n option.
# dir -l --author
# dir -nl --author

11. To list directory contents separated by commas rather than one per line, use the -m option.


# dir -am

35.dmidecode Command
dmidecode command is a tool for retrieving hardware information of any Linux system. It
dumps a computer’s DMI (a.k.a SMBIOS) table contents in a human-readable format for easy
retrieval.

To view your system hardware info, you can type:



$ sudo dmidecode --type system


36.du Command
du command is used to show disk space usage of files present in a directory as well as its sub-
directories as follows.
$ du /home/niki

37.echo Command
echo command prints the line of text provided to it.
$ echo "This is Shimon"

The syntax for echo is:


echo [option(s)] [string(s)]

1. Input a line of text and display on standard output


$ echo This is a community

2. Declare a variable and echo its value. For example, declare a variable x and assign it the value 10.
$ x=10

echo its value:


$ echo The value of variable x = $x

Note: The '-e' option enables interpretation of backslash escape sequences.

3. Using option '\b' (backspace) with the backslash interpreter option '-e', which removes the spaces in between.
$ echo -e "This \bis \ba \bcommunity"

4. Using option '\n' (new line) with '-e', which starts a new line from where it is used.
$ echo -e "This \nis \na \ncommunity "

5. Using option '\t' (horizontal tab) with '-e' to insert horizontal tab spaces.

$ echo -e "This \tis \ta \tcommunity"


6. Using the new line '\n' and horizontal tab '\t' options simultaneously.
$ echo -e "\n\tThis \n\tis \n\ta \n\tcommunity"

7. Using option '\v' (vertical tab) with '-e' to insert vertical tab spaces.
$ echo -e "\vThis \vis \va \vcommunity"

8. Using the new line '\n' and vertical tab '\v' options simultaneously.
$ echo -e "\n\vThis \n\vis \n\va \n\vcommunity"

Note: We can double the vertical tab, horizontal tab and new line spacing using the option two
times or as many times as required.

9. Using option '\r' (carriage return) with '-e' to apply a carriage return in the output.
$ echo -e "This \ris a community"

10. Using option '\c' (suppress trailing new line) with '-e' to continue without emitting a new line.
$ echo -e "This is a community \cof Linux Nerds"

11. Omit the trailing new line using option '-n'.


$ echo -n "This is a community of Linux Nerds"

12. Using option '\a' (alert/bell) with '-e' to produce a sound alert.
$ echo -e "This is a community of \aLinux Nerds"

Note: Make sure to check the volume level before running this.

13. Print all the files/folder using echo command (ls command alternative).
$ echo *

14. Print files of a specific kind. For example, to print all '.jpeg' files, use the following command.

$ echo *.jpeg
15. echo can be used with the redirect operator to send output to a file instead of standard output.
$ echo "Test Page" > testpage

echo Options

Options Description

-n do not print the trailing newline.

-e enable interpretation of backslash escapes.

\b backspace

\\ backslash

\n new line

\r carriage return

\t horizontal tab

\v vertical tab

38.eject Command
eject command is used to eject removable media such as a DVD/CD-ROM or floppy disk from the system.
$ eject /dev/cdrom
$ eject /mnt/cdrom/
$ eject /dev/sda

39.env Command
env command lists all the current environment variables and can be used to set them as well.
$ env

40.exit Command
exit command is used to exit a shell like so.
$ exit

41.expr Command
expr command is used to calculate an expression as shown below.
$ expr 20 + 30
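expr treats each operand and operator as a separate argument, and characters special to the shell (such as *) must be escaped; a few sketches:

```shell
expr 20 + 30        # integer addition  -> 50
expr 50 - 8         # subtraction       -> 42
expr 5 \* 4         # * must be escaped from the shell -> 20
expr 100 / 3        # integer division truncates -> 33
expr length "hello" # string length -> 5
```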

42.factor Command
factor command is used to show the prime factors of a number.
$ factor 10
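factor prints the number followed by its prime factors, with repeated factors listed once per occurrence:

```shell
factor 10   # -> 10: 2 5
factor 60   # -> 60: 2 2 3 5
factor 97   # a prime lists only itself -> 97: 97
```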

43.find Command
find command lets you search for files in a directory as well as its sub-directories. It searches
for files by attributes such as permissions, users, groups, file type, date, size and other possible
criteria.
$ find /home/niki/ -name niki.txt
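A few common variations on that search, sketched against a throwaway directory (the directory and file names here are made up for the demo):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/docs"
touch "$demo/docs/niki.txt" "$demo/docs/notes.md" "$demo/script.sh"
chmod 755 "$demo/script.sh"

find "$demo" -name niki.txt        # by exact name
find "$demo" -type f -name '*.txt' # regular files matching a glob
find "$demo" -type d               # directories only
find "$demo" -type f -perm -111    # files with execute permission
rm -rf "$demo"
```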

44.free Command
free command shows the system memory usage (free, used, swapped, cached, etc.), including swap space. Use the -h option to display output in human-friendly format.
$ free -h

‘free’ Commands to Check Memory Usage in Linux


The simplest way of determining the total available space of the physical memory and swap memory is by using the "free" command.

The "free" command gives information about the total, used and available space of physical memory and swap memory, along with the buffers used by the kernel, in Linux/Unix-like operating systems.

1. Display System Memory


The free command, run without options, shows the used and available space of physical memory and swap memory in KB. See the command in action below.
# free

2. Display Memory in Bytes


Free command with option -b, display the size of memory in Bytes.
# free -b

3. Display Memory in Kilo Bytes


Free command with option -k, display the size of memory in (KB) Kilobytes.
# free -k

4. Display Memory in Megabytes


To see the size of the memory in (MB) Megabytes use option as -m.
# free -m

5. Display Memory in Gigabytes


Using -g option with free command, would display the size of the memory in GB(Gigabytes).
# free -g

6. Display Total Line


Free command with -t option, will list the total line at the end.
# free -t
7. Disable Display of Buffer Adjusted Line
By default the free command displays a "buffers adjusted" line; to disable this line, use the -o option.
# free -o

8. Display Memory Status for Regular Intervals


The -s option followed by a number updates the free output at regular intervals. For example, the below command will update free every 5 seconds.
# free -s 5

9. Show Low and High Memory Statistics


The -l switch displays detailed high and low memory size statistics.
# free -l

10. Check Free Version


The -V option displays free command version information.
# free -V

Find Top Running Processes by Highest Memory and CPU Usage

The following command will show the list of top processes ordered by RAM and CPU use in descending order (remove the pipeline and head if you want to see the full list):
# ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head

Brief explanation of above options used in above command.

The -o (or --format) option of ps allows you to specify the output format. A favorite of mine is to show the processes' PIDs (pid), PPIDs (ppid), the name of the executable file associated with the process (cmd), and the RAM and CPU utilization (%mem and %cpu, respectively).

Additionally, I use --sort to sort by either %mem or %cpu. By default, the output is sorted in ascending order, but personally I prefer to reverse that order by adding a minus sign in front of the sort criteria.
To add other fields to the output, or change the sort criteria, refer to the OUTPUT FORMAT
CONTROL section in the man page of ps command.
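The same pattern adapts to other fields; for instance, sorting by CPU instead of memory, or restricting the listing to one user's processes (the field names follow ps's OUTPUT FORMAT CONTROL section):

```shell
# Top 10 processes by CPU usage instead of memory:
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head

# Restrict the listing to one user's processes (root here):
ps -u root -o pid,cmd,%mem --sort=-%mem | head -n 5
```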

Find Top 15 Processes by Memory Usage with ‘top’ in Batch Mode

To display the top 15 processes sorted by memory use in descending order, do:
# top -b -o +%MEM | head -n 22

As opposed to the previous tip, here you have to use +%MEM (note the plus sign) to sort the output in descending order.

From the command above, the option:

1. -b : runs top in batch mode


2. -o : used to specify fields for sorting processes
3. head utility displays the first few lines of a file and
4. the -n option is used to specify the number of lines to be displayed.

Note that the head utility by default displays the first ten lines of a file, that is, when you do not specify the number of lines to be displayed.

Redirect or Save ‘top’ Output to File in Linux


Additionally, using top in batch mode allows you to redirect the output to a file for later
inspection:
# top -b -o +%MEM | head -n 22 > topreport.txt

As we have seen, the top utility offers us more dynamic information while listing processes on a
Linux system; therefore, this approach has an extra advantage compared to using the ps utility.

1. Find Top Running Processes by Highest Memory and CPU Usage in Linux

2. Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux


3. How to Clear RAM Memory Cache, Buffer and Swap Space on Linux
grep Command
grep command searches for a specified pattern in a file (or files) and displays the lines containing that pattern, as follows.

$ grep 'tecmint' domain-list.txt


Learn more about grep command usage in Linux.

1. What’s Difference Between Grep, Egrep and Fgrep in Linux?


2. 12 Basic Linux ‘Grep’ Command Examples in Linux
3. 11 Advanced Linux ‘Grep’ Commands in Linux

groups Command
groups command displays all the names of groups a user is a part of like this.

$ groups
$ groups tecmint
gzip Command
gzip compresses a file, replacing it with one having a .gz extension, as shown below:

$ gzip passwds.txt
$ cat file1 file2 | gzip > foo.gz
gunzip Command
gunzip expands or restores files compressed with gzip command like this.

$ gunzip foo.gz
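A round trip shows the two commands together; note that gzip -c writes to stdout, which leaves the original file in place (the /tmp file name is illustrative):

```shell
printf 'some log data\n' > /tmp/demo.txt

gzip /tmp/demo.txt            # replaces demo.txt with demo.txt.gz
ls /tmp/demo.txt.gz

gunzip /tmp/demo.txt.gz       # restores the original file
cat /tmp/demo.txt

# Keep the original by compressing to stdout instead:
gzip -c /tmp/demo.txt > /tmp/demo.txt.gz
rm -f /tmp/demo.txt /tmp/demo.txt.gz
```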
head Command
head command is used to show the first lines (10 by default) of the specified file or stdin on the screen:

# ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head



history Command
history command is used to show previously used commands or to get info about commands executed by a user.
$ history
Learn more about Linux history command.

1. The Power of Linux “History Command” in Bash Shell


2. Set Date and Time for Each Command You Execute in Bash History
3. How to Use ‘Yum History’ to Find Out Installed/Removed Packages Info

hostname Command
hostname command is used to print or set system hostname in Linux.

$ hostname
$ hostname NEW_HOSTNAME
hostnamectl Command
hostnamectl command controls the system hostname under systemd. It is used to print or
modify the system hostname and any related settings:

$ hostnamectl
$ sudo hostnamectl set-hostname NEW_HOSTNAME
hwclock
hwclock is a tool for managing the system hardware clock; read or set the hardware clock
(RTC).

$ sudo hwclock
$ sudo hwclock --set --date 8/06/2017
hwinfo Command
hwinfo is used to probe for the hardware present in a Linux system like this.

$ hwinfo
Learn more about how to get Linux hardware info.

1. I-Nex – An Advanced Tool to Collect System/Hardware Information in Linux



2. 9 Useful Tools to Get System Information in Linux



id Command
id command shows user and group information for the current user or specified username as
shown below.

$ id tecmint
ifconfig Command
ifconfig command is used to configure, view and control a Linux system's network interfaces.

$ ifconfig
$ sudo ifconfig eth0 up
$ sudo ifconfig eth0 down
$ sudo ifconfig eth0 172.16.25.125
ionice Command
ionice command is used to set or view process I/O scheduling class and priority of the specified
process.

If invoked without any options, it will query the current I/O scheduling class and priority for that process. The example below runs rm in the idle (-c 3) scheduling class:

$ ionice -c 3 rm /var/logs/syslog
To understand how it works, read this article: How to Delete HUGE (100-200GB) Files in Linux

iostat Command
iostat is used to show CPU and input/output statistics for devices and partitions. It produces
useful reports for updating system configurations to help balance the input/output load
between physical disks.

$ iostat
ip Command
ip command is used to display or manage routing, devices, policy routing and tunnels. It also works as a replacement for the well-known ifconfig command.

This command will assign an IP address to a specific interface (eth1 in this case).
$ sudo ip addr add 192.168.56.10 dev eth1
iptables Command
iptables is a terminal based firewall for managing incoming and outgoing traffic via a set of
configurable table rules.

The command below is used to check existing rules on a system (using it may require root
privileges).

$ sudo iptables -L -n -v
iw Command
iw command is used to manage wireless devices and their configuration.

$ iw list
iwlist Command
iwlist command displays detailed wireless information from a wireless interface. The command
below enables you to get detailed information about the wlp1s0 interface.

$ iwlist wlp1s0 scanning


kill Command
kill command is used to kill a process using its PID by sending a signal to it (default signal for kill
is TERM).

$ kill 2300
$ kill -SIGTERM 2300
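A safe way to try this is against a throwaway background job; $! holds the PID of the most recent background process:

```shell
sleep 60 &               # start a disposable background job
pid=$!                   # PID of the last background process

kill "$pid"              # send the default TERM signal
wait "$pid" 2>/dev/null || true   # reap it; wait returns the job's exit status

# kill -0 sends no signal; it only checks whether the PID still exists:
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```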
killall Command
killall command is used to kill a process by its name.

$ killall firefox
kmod Command
kmod command is used to manage Linux kernel modules. To list all currently loaded modules,

type:

$ kmod list
last Command
last command display a listing of last logged in users.

$ last
ln Command
ln command is used to create a soft link between files using the -s flag like this.

$ ln -s /usr/bin/lscpu cpuinfo
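A sketch in a scratch directory; readlink shows where a symlink points, and ls -l marks links with an "l" and an arrow (the file names are made up for the demo):

```shell
demo=$(mktemp -d) && cd "$demo"

printf 'hello\n' > original.txt
ln -s original.txt shortcut.txt   # soft (symbolic) link

ls -l shortcut.txt        # shows: shortcut.txt -> original.txt
readlink shortcut.txt     # prints the link target
cat shortcut.txt          # reads through the link -> hello
cd / && rm -rf "$demo"
```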
locate Command
locate command is used to find a file by name. The locate utility works better and faster than its find counterpart.

The command below will search for a file by its exact name (not *name*):

$ locate -b '\domain-list.txt'
login Command
login command is used to create a new session with the system. You’ll be asked to provide a
username and a password to login as below.

$ sudo login
ls Command
ls command is used to list contents of a directory. It works more or less like dir command.

The -l option enables long listing format like this.

$ ls -l file1
To know more about ls command, read our guides.

1. 15 Basic ‘ls’ Command Examples in Linux


2. 7 Quirky ‘ls’ Command Tricks Every Linux User Should Know

3. How to Sort Output of ‘ls’ Command By Last Modified Date and Time
4. 15 Interview Questions on Linux “ls” Command – Part 1

5. 10 Useful ‘ls’ Command Interview Questions – Part 2


lshw Command
lshw command is a minimal tool to get detailed information on the hardware configuration of the machine; invoke it with superuser privileges to get comprehensive information.

$ sudo lshw
lscpu Command
lscpu command displays system’s CPU architecture information (such as number of CPUs,
threads, cores, sockets, and more).

$ lscpu
lsof Command
lsof command displays information related to files opened by processes. Files can be of any
type, including regular files, directories, block special files, character special files, executing text
reference, libraries, and stream/network files.

To view files opened by a specific user’s processes, type the command below.

$ lsof -u tecmint
lsusb Command
lsusb command shows information about USB buses in the system and the devices connected
to them like this.

$ lsusb
man Command
man command is used to view the on-line reference manual pages for commands/programs
like so.

$ man du
$ man df
md5sum Command

md5sum command is used to compute and print the MD5 message digest of a file. The related debsums tool, if run without arguments, checks every file installed from Debian packages against the stock md5sum lists:
$ sudo debsums
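md5sum also reads standard input, and its -c mode verifies files against a saved checksum list; a quick sketch (the /tmp file names are illustrative):

```shell
# Digest of a known input (the MD5 of "hello\n"):
printf 'hello\n' | md5sum
# -> b1946ac92492d2347c6235b4d2611184  -

# Save a file's checksum, then verify it later with -c:
printf 'hello\n' > /tmp/md5-demo.txt
md5sum /tmp/md5-demo.txt > /tmp/md5-demo.sums
md5sum -c /tmp/md5-demo.sums      # -> /tmp/md5-demo.txt: OK
rm -f /tmp/md5-demo.txt /tmp/md5-demo.sums
```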
mkdir Command
mkdir command is used to create one or more directories if they do not already exist (with the -p option, missing parent directories are created as needed and an existing directory is not treated as an error).

$ mkdir tecmint-files
OR
$ mkdir -p tecmint-files
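-p also creates missing parent directories in one step, which is where it sees most use; a sketch in a scratch directory (the path names are made up for the demo):

```shell
demo=$(mktemp -d)

# Create the whole chain at once; without -p, each parent
# would have to exist already.
mkdir -p "$demo/projects/2019/reports"
ls -R "$demo"

# Re-running with -p succeeds silently even though it exists:
mkdir -p "$demo/projects/2019/reports" && echo "ok"
rm -rf "$demo"
```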
more Command
more command enables you to page through relatively lengthy text files one screenful at a time.

$ more file.txt
Check difference between more and less command and Learn Why ‘less’ is Faster Than ‘more’
Command

mv Command
mv command is used to rename files or directories. It also moves a file or directory to another
location in the directory structure.

$ mv test.sh sysinfo.sh
nano Command
nano is a popular small, free and friendly text editor for Linux; a clone of Pico, the default editor
included in the non-free Pine package.

To open a file using nano, type:

$ nano file.txt
nc/netcat Command
nc (or netcat) is used for performing any operation relating to TCP, UDP, or UNIX-domain

sockets. It can handle both IPv4 and IPv6 for opening TCP connections, sending UDP packets, listening on arbitrary TCP and UDP ports, and performing port scanning.
The command below will help us see if port 22 is open on the host 192.168.1.5.

$ nc -zv 192.168.1.5 22
Learn more examples and usage on nc command.

1. How to Check Remote Ports are Reachable Using ‘nc’ Command


2. How to Transfer Files Between Computers Using ‘nc’ Command

netstat Command
netstat command displays useful information concerning the Linux networking subsystem
(network connections, routing tables, interface statistics, masquerade connections, and
multicast memberships).

This command will display all open ports on the local system:

$ netstat -a | more
nice Command
nice command runs a specified command with an adjusted scheduling priority (niceness).
When run without any command specified, it prints the current niceness.

The following command starts the tar process with a niceness of 12 (set with the -n option).

$ nice -n 12 tar -cjf backup.tar.bz2 /home/*


nmap Command
nmap is a popular and powerful open source tool for network scanning and security auditing. It
was intended to quickly scan large networks, but it also works fine against single hosts.

The command below will probe open ports on all live hosts on the specified network.

$ nmap -sV 192.168.56.0/24



nproc Command
nproc command shows the number of processing units available to the current process. Its
output may be less than the number of online processors on the system.


$ nproc
openssl Command
openssl is a command-line tool for using the various cryptography functions of OpenSSL’s
crypto library from the shell. The command below creates an archive of all files in the current
directory and encrypts the contents of the archive file:

$ tar -czf - * | openssl enc -e -aes256 -out backup.tar.gz
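The encryption above can be reversed with the -d flag. A self-contained round-trip sketch is below; the file names and the -k passphrase option are illustrative (in practice, omitting -k makes openssl prompt for the passphrase, which is safer):

```shell
# Create a file, archive it, and encrypt the archive with AES-256.
# -k supplies the passphrase non-interactively for illustration only.
echo "secret data" > notes.txt
tar -czf - notes.txt | openssl enc -e -aes256 -k mypass -out backup.tar.gz.enc

# Remove the original, then decrypt with -d and unpack to restore it.
rm notes.txt
openssl enc -d -aes256 -k mypass -in backup.tar.gz.enc | tar -xzf -
cat notes.txt
```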


passwd Command
passwd command is used to create/update passwords for user accounts, it can also change the
account or associated password validity period. Note that normal system users may only
change the password of their own account, while root may modify the password for any
account.

$ passwd tecmint
pidof Command
pidof displays the process ID of a running program/command.

$ pidof init
$ pidof cinnamon
ping Command
ping command is used to determine connectivity between hosts on a network (or the Internet):

$ ping google.com
ps Command
ps shows useful information about active processes running on a system. The example below
shows the top running processes by highest memory and CPU usage.

# ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head


pstree Command
pstree displays running processes as a tree which is rooted at either PID or init if PID is omitted.

$ pstree

pwd Command
pwd command displays the name of current/working directory as below.

$ pwd
rdiff-backup Command
rdiff-backup is a powerful local/remote incremental backup script written in Python. It works on
any POSIX operating system, such as Linux or macOS.

Note that for remote backups, you must install the same version of rdiff-backup on both the
local and remote machines. Below is an example of a local backup command:

$ sudo rdiff-backup /etc /media/tecmint/Backup/server_etc.backup


reboot Command
reboot command may be used to halt, power-off or reboot a system as follows.

$ reboot
rename Command
rename command is used to rename many files at once. If you have a collection of files with the
“.html” extension and you want to rename all of them to the “.php” extension, you can type the
command below.

$ rename 's/\.html$/\.php/' *.html
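If the Perl-based rename shown above is unavailable, the same batch rename can be done with a portable shell loop using parameter expansion. The file names below are illustrative:

```shell
# Create sample .html files, then batch-rename them to .php
# with a shell loop (no rename(1) needed); ${f%.html} strips
# the .html suffix from each name.
touch index.html about.html
for f in *.html; do
  mv -v "$f" "${f%.html}.php"
done
ls *.php
```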


rm command
rm command is used to remove files or directories as shown below.

$ rm file1
$ rm -rf my-files
rmdir Command
rmdir command helps to delete/remove empty directories as follows.

$ rmdir /backup/all

scp Command
scp command enables you to securely copy files between hosts on a network, for example.
$ scp ~/names.txt [email protected]:/root/names.txt
shutdown Command
shutdown command schedules a time for the system to be powered down. It may be used to
halt, power-off or reboot the machine like this.

$ shutdown --poweroff
Learn how to show a Custom Message to Users Before Linux Server Shutdown.

sleep Command
sleep command is used to delay or pause execution of a command for a specified amount of
time.

$ check.sh; sleep 5; sudo apt update


sort Command
sort command is used to sort lines of text in the specified file(s) or from stdin, as shown below.

$ sort words.txt
Learn more examples of sort command in Linux.

1. 7 Interesting Linux ‘sort’ Command Examples


2. How to Sort Output of ‘ls’ Command By Last Modified Date and Time
3. How to Find and Sort Files Based on Modification Date and Time

split Command
split, as the name suggests, is used to split a large file into smaller parts. The example below
splits a file into 10 MB pieces whose names start with the prefix “backup-part-”.

$ split -b 10M backup.tar.bz2 backup-part-
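The pieces produced by split can be reassembled with cat, since the shell glob sorts the generated suffixes (aa, ab, ac, ...) in order. A self-contained sketch with an illustrative file name:

```shell
# Make a ~3 MB test file, split it into 1 MB pieces
# (backup-part-aa, backup-part-ab, ...).
head -c 3000000 /dev/urandom > backup.bin
split -b 1M backup.bin backup-part-

# Reassemble the pieces in order and confirm the result
# matches the original byte for byte.
cat backup-part-* > restored.bin
cmp backup.bin restored.bin && echo "files match"
```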


ssh Command
ssh (SSH client) is an application for remotely accessing and running commands on a remote
machine. It is designed to offer secure, encrypted communication between two untrusted
hosts over an insecure network such as the Internet.

$ ssh [email protected]
Learn more about ssh command and how to use it on Linux.

1. 5 Best Practices to Secure and Protect SSH Server


2. Configure “No Password SSH Keys Authentication” with PuTTY on Linux
3. SSH Passwordless Login Using SSH Keygen in 5 Easy Steps
4. Restrict SSH User Access to Certain Directory Using Chrooted Jail

stat Command
stat is used to show file or file system status like this (use -f to display file system status instead
of file status).

$ stat file1
su Command
su command is used to switch to another user ID or become root during a login session. Note
that when su is invoked without a username, it defaults to becoming root.

$ su
$ su tecmint
sudo Command
sudo command allows a permitted system user to run a command as root or another user, as
defined by the security policy such as sudoers.

In this case, the real (not effective) user ID of the user running sudo is used to determine the
user name with which to query the security policy.

$ sudo apt update


$ sudo useradd tecmint
$ sudo passwd tecmint
Learn more about sudo command and how to use it on Linux.

1. 10 Useful Sudoers Configurations for Setting ‘sudo’ in Linux


2. How to Run ‘sudo’ Command Without Entering a Password in Linux

3. How to Keep ‘sudo’ Password Timeout Session Longer in Linux



sum Command
sum command is used to show the checksum and block count for each file specified on the
command line.

$ sum file.txt


tac Command
tac command concatenates and displays files in reverse. It simply prints each file to standard
output, showing last line first.

$ tac file.txt
tail Command
tail command is used to display the last lines (10 lines by default) of each file to standard
output.

If there is more than one file, each is preceded by a header giving the file name. Use it as
follows (specify the number of lines to display using the -n option).

$ tail long-file
OR
$ tail -n 15 long-file
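tail also combines with head to extract an arbitrary line range, and tail -f follows a growing file (handy for logs). The file below is generated just for illustration:

```shell
# Build a 100-line sample file.
seq 1 100 > long-file

# Print lines 20 through 25: tail -n +20 starts output at
# line 20, and head -n 6 keeps the next six lines.
tail -n +20 long-file | head -n 6

# tail -f long-file   # follow a growing file (Ctrl+C to stop)
```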
talk Command
talk command is used to talk to another system/network user. To talk to a user on the same
machine, use their login name; to talk to a user on another machine, use ‘user@host’.

$ talk person [ttyname]


OR
$ talk user@host [ttyname]
tar Command
tar command is one of the most powerful utilities for archiving files in Linux.

$ tar -czf home.tar.gz .
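The three operations you reach for most with tar are create, list, and extract. A self-contained round trip, with illustrative directory names:

```shell
# Create a directory with a file, then archive it with gzip
# compression (-c create, -z gzip, -f archive name).
mkdir -p docs && echo "hello" > docs/a.txt
tar -czf docs.tar.gz docs/

# List the archive contents without extracting (-t).
tar -tzf docs.tar.gz

# Extract (-x) into a separate directory chosen with -C.
mkdir -p restore && tar -xzf docs.tar.gz -C restore
```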



Learn more about tar command and its usage on Linux.



1. 18 Tar Command Examples in Linux


2. How to Split Large ‘tar’ Archive into Multiple Files of Certain Size
3. How to Extract Tar Files to Specific or Different Directory in Linux

tee Command
tee command reads from standard input and writes to both standard output and files at the
same time, as shown below.

$ echo "Testing how tee command works" | tee file1
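By default tee truncates its output files on each invocation; the -a option appends instead. A small sketch with an illustrative file name:

```shell
# The first write truncates log.txt; -a appends the second line.
echo "line 1" | tee log.txt > /dev/null
echo "line 2" | tee -a log.txt > /dev/null
cat log.txt
```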


time Command
time command runs programs and summarizes system resource usage.

$ time wc /etc/hosts
top Command
top program displays processes on a Linux system, ordered by CPU and memory usage, and
provides a dynamic real-time view of a running system.

$ top
touch Command
touch command changes file timestamps, it can also be used to create a file as follows.

$ touch file.txt
tr Command
tr command is a useful utility used to translate (change) or delete characters from stdin, and
write the result to stdout or send to a file as follows.

$ cat domain-list.txt | tr '[:lower:]' '[:upper:]'
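Besides translating, tr can delete (-d) or squeeze repeated (-s) characters. Two sketches with illustrative input:

```shell
# Delete all digits from the input ("user123" becomes "user").
echo "user123" | tr -d '[:digit:]'

# Squeeze runs of repeated spaces down to a single space.
echo "too   many   spaces" | tr -s ' '
```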


uname Command
uname command displays system information such as the kernel name, network node
hostname, kernel version and release, and more.

Use the -a option to show all the system information:

$ uname -a
uniq Command
uniq command displays or omits repeated lines from its input, which must be sorted first. To
show the number of occurrences of each line, use the -c option.

$ sort domain-list.txt | uniq -c
uptime Command
uptime command shows how long the system has been running, number of logged on users
and the system load averages as follows.

$ uptime
users Command
users command shows the user names of users currently logged in to the current host like this.

$ users
vim/vi Command
vim (Vi IMproved) is a popular text editor on Unix-like operating systems. It can be used to edit
all kinds of plain text and program files.

$ vim file
Learn how to use vi/vim editor in Linux along with some tips and tricks.

1. 10 Reasons Why You Should Use Vi/Vim Editor in Linux


2. How to Install and Use Vi/Vim Editor in Linux
3. How to Save a File in Vim Editor in Linux
4. How to Exit a File in Vim Editor in Linux
5. Learn Useful ‘Vi/Vim’ Editor Tips and Tricks to Enhance Your Skills
6. 8 Interesting ‘Vi/Vim’ Editor Tips and Tricks for Every Linux Administrator

w Command
w command displays system uptime, load averages, and information about the users currently
on the machine and what they are doing (their processes), like this.

$ w
wall Command
wall command is used to send/display a message to all users on the system as follows.

$ wall "This is TecMint – Linux How Tos"


watch Command
watch command runs a program repeatedly, displaying its output full-screen. It can also be
used to watch for changes to a file/directory. The example below shows how to watch the
contents of a directory change.

$ watch -d ls -l
wc Command
wc command is used to display newline, word, and byte counts for each file specified, plus a
total line when more than one file is given.

$ wc filename
wget Command
wget command is a simple utility used to download files from the Web non-interactively (it can
work in the background).

$ wget -c http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz
whatis Command
whatis command searches and shows a short or one-line manual page descriptions of the
provided command name(s) as follows.

$ whatis wget
which Command
which command displays the absolute path (pathnames) of the files (or possibly links) which
would be executed in the current environment.

$ which who

who Command

who command shows information about users who are currently logged in like this.
$ who
whereis Command
whereis command helps us locate the binary, source and manual files for commands.

$ whereis cat
xargs Command
xargs command is a useful utility for reading items from standard input, delimited by blanks
(which can be protected with double or single quotes or a backslash) or newlines, and
executing a given command with those items as arguments.

The example below show xargs being used to copy a file to multiple directories in Linux.

$ echo /home/aaronkilik/test/ /home/aaronkilik/tmp | xargs -n 1 cp -v /home/aaronkilik/bin/sys_info.sh
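When an argument must land in the middle of the command rather than be appended at the end, the -I flag defines a placeholder that is substituted for each input item. A self-contained sketch with illustrative names:

```shell
# Create a file and two target directories.
echo "notes" > notes.txt
mkdir -p dir1 dir2

# -I {} makes xargs replace {} with each input item in turn,
# so the destination can be placed explicitly in the command.
printf 'dir1\ndir2\n' | xargs -I {} cp -v notes.txt {}/
```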
yes Command
yes command is used to display a string repeatedly until terminated or killed (e.g. with
Ctrl + C), as follows.

$ yes "This is TecMint - Linux HowTos"


youtube-dl Command
youtube-dl is a lightweight command-line program to download videos and also extract MP3
tracks from YouTube.com and a few more sites.

The command below will list available formats for the video in the provided link.

$ youtube-dl --list-formats https://www.youtube.com/watch?v=iR


zcmp/zdiff Command
zcmp and zdiff are minimal utilities used to compare compressed files, as shown in the
examples below.

$ zcmp domain-list.txt.zip basic_passwords.txt.zip


$ zdiff domain-list.txt.zip basic_passwords.txt.zip

zip Command
zip is a simple and easy-to-use utility used to package and compress (archive) files.

$ tar cf - . | zip | dd of=/dev/nrst0 obs=16k


$ zip inarchive.zip foo.c bar.c --out outarchive.zip
$ tar cf - .| zip backup -
zz Command
zz command is an alias of the fasd commandline tool that offers quick access to files and
directories in Linux. It is used to quickly and interactively cd into a previously accessed directory
by selecting the directory number from the first field as follows.

$ zz
