Unix
cp – copy a file
mv – move or rename files or directories
tar – create and use archives of files
gzip – compress a file
ftp – file transfer program
lpr – print out a file
mkdir – make a directory
rm – remove files or directories
rmdir – remove a directory
mount – attach a file system to the file system hierarchy at the given mount point, which is
the pathname of a directory.
umount – unmount a currently mounted file system.
cd – change directory
pwd – display the name of your current directory
ls – list names of files in a directory
du – see the size/usage of the folder you are in. Example usage: du -sk *
df – Report file system disk space usage. Example usage: df -k
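A short session exercising several of the commands above; this is an illustrative sketch, and the file and directory names are made up:

```shell
# Illustrative session; all names are made up.
cd "$(mktemp -d)"          # work in a throwaway directory
mkdir demo                 # make a directory
cd demo
pwd                        # display the current directory
echo "hello" > notes.txt
cp notes.txt backup.txt    # copy a file
mv backup.txt old.txt      # rename (move) a file
ls                         # lists: notes.txt old.txt
gzip old.txt               # compress -> old.txt.gz
rm notes.txt old.txt.gz    # remove files
cd ..
rmdir demo                 # remove the now-empty directory
```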
File Editing
Search
find – find files of a specified name or type.
grep – searches files for a specified string or expression.
Administration
top – displays the most CPU-intensive processes on the system and periodically updates this
information. Raw CPU percentage is used to rank the processes.
chmod – change the permissions of a file or a directory.
ps – The ps command prints information about active processes.
kill – kill a process.
Information
Help Related
man – The man command displays information from the reference manuals.
help – The help utility retrieves information to further explain error messages and
warnings from SCCS commands.
who – display who is logged in on the system. Example usage: $ who
Control Commands
These commands are a two-key combination where a letter is pressed simultaneously with the
‘Ctrl’ key.
Other commands:
#1) ps: displays a snapshot of all current processes
Syntax: $ ps [options]
Example: $ ps -ef
Show every process running, formatted as a table
#2) top - displays a live status of current processes
Syntax: $ top [options]
Example: $ top
Show a live view of all current processes
#3) bg - resume a suspended job in the background
Syntax: $ bg [job_spec …]
Example: $ xterm
Ctrl-Z
$ bg
Continue running a job that was previously suspended (using Ctrl-Z) in the background
#4) fg - bring a background job to the foreground
Syntax: $ fg [job_spec]
Example: $ xterm
Ctrl-Z
$ bg
$ fg
Bring a previous background job to the foreground
#5) clear – clear a terminal screen
Syntax: $ clear
Example: $ clear
Clear all prior text from the terminal screen
#6) history – print history of commands in the current session
Syntax: $ history [options]
Example: $ history
#2) Directory Files
Directories are used to organize a group of files – the contained files may be of any type.
#3) Special Files
Special files, also known as device files, are used to represent physical devices such as a printer,
a disk drive, or a remote terminal.
The ‘ls’ command is used to list filenames and other associated data. With the option ‘ls -il’,
the command lists file details in long format along with each file's inode number.
File Manipulation
#1) chmod: Change file access permissions.
Description: This command is used to change file permissions. These permissions include
read, write, and execute permission for the owner, group, and others.
Syntax (symbolic mode): chmod [ugoa][[+-=][mode]] file
The first optional parameter indicates who – this can be (u)ser, (g)roup, (o)thers or (a)ll.
The second optional parameter indicates opcode – this can be for adding (+), removing
(-) or assigning (=) a permission.
The third optional parameter indicates the mode – this can be (r)ead, (w)rite, or
e(x)ecute.
Example: Add write permission for user, group and others for file1.
$ chmod ugo+w file1
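The symbolic mode described above can be sketched on a scratch file (the filenames here are illustrative):

```shell
# Sketch: adjust permissions on a scratch file and verify the effect.
cd "$(mktemp -d)"
touch file1
chmod ugo-x file1        # remove any execute bits
chmod u+x file1          # add execute permission for the owner only
[ -x file1 ] && echo "owner can execute"
chmod a=r file1          # assign read-only permission for everyone
ls -l file1              # permissions column shows -r--r--r--
```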
#1) cmp: This command is used to compare two files character by character.
Syntax: cmp [options] file1 file2
Example: Compare file1 and file2 character by character.
$ cmp file1 file2
#2) comm: This command is used to compare two sorted files.
Syntax: comm [options] file1 file2
One set of options allows selection of ‘columns’ to suppress.
-1: suppress lines unique to file1 (column 1)
-2: suppress lines unique to file2 (column 2)
-3: suppress lines common to file1 and file2 (column 3)
Example: Show only column 3, which contains the lines common to file1 and file2
$ comm -12 file1 file2
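A minimal worked example of comm, assuming two small sorted scratch files:

```shell
# Sketch: comm on two sorted scratch files (comm requires sorted input).
cd "$(mktemp -d)"
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n'  > file2
comm -12 file1 file2   # lines common to both: banana, cherry
comm -23 file1 file2   # lines unique to file1: apple
```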
#3) diff: This command is used to compare two files line by line.
Syntax: diff [options] file1 file2
Example: Compare file1 and file2 line by line
$ diff file1 file2
Change commands
< lines from file1
---
> lines from file2
The change commands are in the format [range][acd][range]. The range on the left may be a
line number or a comma-separated range of line numbers referring to file1, and the range on
the right similarly refers to file2. The character in the middle indicates the action i.e. add,
change or delete.
‘LaR’ – Add lines in range ‘R’ from file2 after line ‘L’ in file1.
‘FcT’ – Change lines in range ‘F’ of file1 to lines in range ‘T’ of file2.
‘RdL’ – Delete lines in range ‘R’ from file1 that would have appeared at line ‘L’ in file2
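The change-command format is easiest to see with a worked example; here line 2 differs between two scratch files, so diff reports a 2c2 change:

```shell
# Worked example of a diff change command: line 2 differs.
cd "$(mktemp -d)"
printf 'one\ntwo\nthree\n' > file1
printf 'one\n2\nthree\n'   > file2
diff file1 file2
# Expected output:
# 2c2
# < two
# ---
# > 2
```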
This wild-card selects all the files that match the expression, replacing the marked range
with one or more characters from the range.
Example 1: List all files that have digits after ‘file’, e.g. file1, file2, file33
$ ls file[0-9]*
A range prefixed with ‘!’ matches any character except the set of characters specified.
Page
ls command
The ls command is used to get a list of files and directories. Options can be used to get
additional information about the files.
ls Syntax:
ls [options] [paths]
The ls command supports the following options:
ls -a: list all files including hidden files. These are files that start with “.”.
ls -A: list all files including hidden files except for “.” and “..” – these refer to the entries for
the current directory, and for the parent directory.
ls -R: list all files recursively, descending down the directory tree from the given path.
ls -l: list the files in long format, i.e. with permissions, link count, owner name, group
name, size, and modification time.
ls -o: list the files in long format but without the group name.
ls -g: list the files in long format but without the owner name.
ls -t: sort the list by time of modification, with the newest at the top.
ls -S: sort the list by size, with the largest at the top.
ls -r: reverse the sorting order.
Grep Command:
The grep filter searches a file for a particular pattern of characters and displays all lines that
contain that pattern. The pattern searched for in the file is referred to as the regular
expression (grep stands for "globally search for a regular expression and print it").
Syntax:
grep [options] pattern [files]
Options:
-c : Print only a count of the lines that match the pattern.
-h : Display the matched lines, but do not display the filenames.
-i : Ignore case distinctions when matching.
-l : Display only the names of files with matching lines.
-n : Display the matched lines and their line numbers.
-v : Print all the lines that do not match the pattern.
-e exp : Specify an expression with this option. Can be used multiple times.
-f file : Take patterns from a file, one per line.
-E : Treat the pattern as an extended regular expression (ERE).
-w : Match whole words only.
-o : Print only the matched parts of a matching line, with each such part on a separate output
line.
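A few of these options can be demonstrated against a small scratch log file (contents invented for illustration):

```shell
# Sketch: common grep options against a scratch file.
cd "$(mktemp -d)"
printf 'Error: disk full\nok\nerror: retry\n' > app.log
grep -i  'error' app.log    # case-insensitive: matches both error lines
grep -c  'error' app.log    # count of matching lines; -c is case-sensitive, prints 1
grep -vi 'error' app.log    # invert the match: prints the line "ok"
grep -n  'ok'    app.log    # prefix matches with line numbers: 2:ok
```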
df Command:
The ‘df‘ command ("disk free") is used to get a full summary of available and
used disk space on the file systems of a Linux system.
Using the ‘-h‘ parameter (df -h) shows the file system disk space statistics in "human
readable" format, i.e. it gives the sizes in kilobytes, megabytes, and gigabytes.
-a, --all : Include pseudo file systems that have zero block sizes in the output.
-B, --block-size=SIZE : Scale sizes by SIZE; e.g. -BM prints sizes in units of 1,048,576 bytes.
--total : Display a grand total for the sizes.
-h, --human-readable : Print sizes in human readable format.
-H, --si : Same as -h, but uses powers of 1000 instead of 1024.
-i, --inodes : Display inode information instead of block usage.
-k : Like --block-size=1K.
-l, --local : Display the disk usage of local file systems only.
-P, --portability : Use the POSIX output format.
-t, --type=TYPE : Show only file systems of type TYPE.
-T, --print-type : Print each file system's type in the output.
-x, --exclude-type=TYPE : Exclude all file systems of type TYPE from the output.
-v : Ignored, included for compatibility reasons.
--no-sync : The default; do not invoke sync before getting usage info.
--sync : Invoke a sync before getting usage info.
--help : Display a help message and exit.
--version : Display version information and exit.
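As a minimal sketch, combining the portability and block-size options gives a predictable, script-friendly layout:

```shell
# Sketch: a portable df invocation for the root file system. -P forces
# the POSIX output format (one line per file system) and -k reports
# sizes in 1K blocks; the header line names the columns.
df -P -k /
```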
du Command
du (Disk Usage) is a standard Unix/Linux command, used to check the disk usage of files and
directories on a machine. The du command has many options that can be used to get the
results in different formats. The du command also displays file and directory sizes
recursively.
1. To find the disk usage summary of the /home/niki directory tree and each of its
subdirectories, enter the command:
du /home/niki
2. Using the "-h" option with "du" provides results in "human readable" format, i.e. you can
see sizes in bytes, kilobytes, megabytes, gigabytes, etc.
du -h /home/niki
3. To get the grand total disk usage summary of a directory, use the option "-s" as
follows.
du -sh /home/niki
4. Using the "-a" flag with "du" displays the disk usage of all the files and directories.
du -a /home/niki
5. Using the "-a" flag along with "-h" displays the disk usage of all files and folders in human
readable format. This output is easier to understand, as it shows the sizes in kilobytes,
megabytes, etc.
du -ah /home/niki
6. To find the disk usage of a directory tree with its subtrees in kilobyte blocks, use the "-k"
flag (displays sizes in 1024-byte units).
du -k /home/niki
7. To get the disk usage summary of a directory tree along with its subtrees in megabytes (MB)
only, use the option "-mh" as follows. The "-m" flag counts the blocks in MB units and "-h"
makes the output human readable.
du -mh /home/niki
8. The "-c" flag provides a grand total of used disk space in the last line. If your directory takes
674MB of space, the last two lines of the output show that total.
du -ch /home/niki
9. The command below calculates and displays the disk usage of all files and directories, but
excludes files that match the given pattern. Here it excludes ".txt" files while calculating the
total size of the directory. This way you can exclude any file format by using the
"--exclude" flag; in the output there are no .txt file entries.
du -ah --exclude="*.txt" /home/niki
10. To display the disk usage based on modification time, use the "--time" flag as shown below.
du -ha --time /home/niki
To display the largest entries in human readable format, i.e. to see the largest files or
directories in KB, MB, or GB, combine du with sort and head:
du -hs * | sort -rh | head -5
The above command shows the top directories which are consuming the most disk space. If
some directories are not important, you can simply delete a few subdirectories or
delete the entire folder to free up some space.
If you want to display the biggest file sizes only, then run the following command:
find . -type f -exec du -Sh {} + | sort -rh | head -n 5
To find the largest files in a particular location, just include the path after the find
command:
find /home/niki/Downloads/ -type f -exec du -Sh {} + | sort -rh | head -n 5
OR
find /home/niki/Downloads/ -type f -printf "%s %p\n" | sort -rn | head -n 5
The above command will display the largest file from /home/niki/Downloads directory.
Find command
Find Command is one of the most important and much used command in Linux sytems. Find
command used to search and locate list of files and directories based on conditions you specify
for files that match the arguments. Find can be used in variety of conditions like you can find
files by permissions, users, groups, file type, date, size and other possible criteria.
Find all .mp3 files larger than 10MB and delete them using one single command:
# find / -type f -name "*.mp3" -size +10M -exec rm {} \;
How to Use the ‘find’ Command to Search for Multiple Filenames (Extensions)
1. Assuming that you want to find all files in the current directory with .sh and .txt file
extensions, you can do this by running the command below:
# find . -type f \( -name "*.sh" -o -name "*.txt" \)
It is recommended that you enclose the -name tests in parentheses, escaping each
parenthesis with a backslash, \( ... \), as in the command above.
2. To find filenames with three extensions – .sh, .txt and .c – extend the same command with another -o test:
3. Here is another example where we search for files with .png, .jpg, .deb and .pdf extensions:
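The commands described in the three cases above can be sketched as follows; the scratch files are invented for illustration, and each \( ... \) group combines -name tests with -o (logical OR):

```shell
# Sketch: find files matching several extensions at once.
cd "$(mktemp -d)"
touch a.sh b.txt c.c d.png e.pdf     # scratch files for illustration
# Three extensions: .sh, .txt and .c
find . -type f \( -name "*.sh" -o -name "*.txt" -o -name "*.c" \)
# Four extensions: .png, .jpg, .deb and .pdf
find . -type f \( -name "*.png" -o -name "*.jpg" -o -name "*.deb" -o -name "*.pdf" \)
```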
Following are the options used with the find command in this case:
1. -type – specifies the file type to search for; in the case above, f means find all regular
files.
2. -print – an action to print the absolute path of a file.
3. -l – this option of wc prints the total number of newlines, which equals the total number
of files found.
Important: Use the sudo command to read all files in the specified directory, including those in the
subdirectories, with superuser privileges, in order to avoid "Permission denied" errors.
You can see that in the first command above, not all files in the current working directory are
read by find command.
The following are extra examples to show total number of regular files in /var/log and /etc
directories respectively:
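The counting pipeline described here can be sketched with find and wc -l on a scratch tree (substitute /var/log or /etc as in the text):

```shell
# Sketch: count regular files under a directory tree by piping find
# into wc -l (find prints one pathname per line).
cd "$(mktemp -d)"
mkdir -p logs/sub
touch logs/a.log logs/b.log logs/sub/c.log
find logs -type f | wc -l    # prints 3 (one line per file)
```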
When we add a new user in Linux with the ‘useradd‘ command, it is created in a locked state;
to unlock that user account, we need to set a password for the account with the ‘passwd‘
command.
# passwd cshimona
Changing password for user cshimona.
New UNIX password:
Retype new UNIX password:
The above entry contains a set of seven colon-separated fields; each field has its own meaning.
Let’s see what these fields are:
1. Username: User login name used to log in to the system. It should be between 1 and 32
characters long.
2. Password: User password (or an x character), stored in encrypted format in the
/etc/shadow file.
3. User ID (UID): Every user must have a User ID (UID), a User Identification Number. By
default UID 0 is reserved for the root user and UIDs ranging from 1-99 are reserved for
other predefined accounts. UIDs ranging from 100-999 are reserved for system
accounts and groups.
4. Group ID (GID): The primary Group ID (GID) Group Identification Number stored in
/etc/group file.
5. User Info: This field is optional and allows you to define extra information about the user,
for example the user's full name. This field is displayed by the ‘finger’ command.
6. Home Directory: The absolute location of user’s home directory.
7. Shell: The absolute location of a user’s shell i.e. /bin/bash.
You can see the user home directory and other user related information like user id, group id,
shell and comments.
# cat /etc/passwd | grep niki
niki:x:505:505::/data/projects:/bin/bash
Now, let’s verify that the user was created with the defined user ID (999) using the following command.
# cat /etc/passwd | grep cshimona
cshimona:x:999:999::/home/cshimona:/bin/bash
NOTE: Make sure the user ID value is unique among the users already created on the
system.
uid=1001(cshimona) gid=1001(cshimona)
groups=1001(cshimona),500(admins),501(webadmin),502(developers)
context=root:system_r:unconfined_t:SystemLow-SystemHigh
To create users without home directories, the ‘-M‘ option is used. For example, the following
command will create the user ‘niki‘ without a home directory.
# useradd -M niki
Now, let’s verify that the user was created without a home directory, using the ls command.
# ls -l /home/niki
ls: cannot access /home/niki: No such file or directory
Next, verify the age of the account and password with the ‘chage‘ command for user ‘niki‘ after
setting the password.
# chage -l niki
Last password change : Mar 28, 2019
Password expires : never
Password inactive : never
Account expires : Mar 27, 2019
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
niki:x:1006:1008:Nikila Neethi:/home/mansi:/bin/sh
We can assign a different login shell to each user with the ‘-s‘ option.
In this example, we will add a user ‘cshimona‘ without a login shell, i.e. with the
‘/sbin/nologin‘ shell.
# useradd -s /sbin/nologin cshimona
11. Add a User with Specific Home Directory, Default Shell and Custom Comment
The following command will create a user ‘niki‘ with home directory ‘/var/www/niki‘,
default shell /bin/bash, and extra information about the user.
# useradd -m -d /var/www/niki -s /bin/bash -c "CShimona Owner" -U niki
In the above command the ‘-m -d‘ options create a user with the specified home directory, and
the ‘-s‘ option sets the user’s default shell, i.e. /bin/bash. The ‘-c‘ option adds the extra
information about the user, and the ‘-U‘ argument creates a group with the same name as the user.
12. Add a User with Home Directory, Custom Shell, Custom Comment and UID/GID
The command is very similar to the one above, but here we define the shell as ‘/bin/zsh‘ and a
custom UID and GID for the user ‘niki‘, where ‘-u‘ defines the new user’s UID (i.e. 1000) and
‘-g‘ defines the GID (i.e. 1000).
# useradd -m -d /var/www/niki -s /bin/zsh -c "CShimona Technical Writer" -u 1000 -g 1000 niki
13. Add a User with Home Directory, No Shell, Custom Comment and User ID
The following command is very similar to the above two commands; the only difference is
that here we disable the login shell for the user ‘niki‘ and give a custom user ID (i.e. 1019).
Here the ‘-s‘ option would normally add the default shell /bin/bash, but in this case we set the
login shell to ‘/usr/sbin/nologin‘. That means user ‘niki‘ will not be able to log in to the system.
# useradd -m -d /var/www/niki -s /usr/sbin/nologin -u 1019 niki
14. Add a User with Home Directory, Shell, Custom Skeleton/Comment and User ID
The only change in this command is that we used the ‘-k‘ option to set a custom skeleton
directory, i.e. /etc/custom.skell instead of the default /etc/skel. We also used the ‘-s‘ option to
define a different shell, i.e. /bin/tcsh, for user ‘niki‘.
# useradd -m -d /var/www/niki -k /etc/custom.skell -s /bin/tcsh -c "No Active Member of
CShimona" -u 1027 niki
15. Add a User without Home Directory, No Shell, No Group and Custom Comment
This command is quite different from the others explained above. Here we
used the ‘-M‘ option to create the user without a home directory, and the ‘-N‘ argument
tells the system to create only the username (without a group of the same name). The ‘-r‘
argument creates a system user.
# useradd -M -N -r -s /bin/false -c "Disabled CShimona Member" clayton
For more information and options about useradd, run ‘useradd‘ command on the terminal to
see available options.
After creating user accounts, in some scenarios we need to change the attributes of an
existing user, such as the home directory, login name, login shell, password expiry
date, etc. In such cases the ‘usermod’ command is used.
When we execute the ‘usermod‘ command in a terminal, the following files are used and affected.
Options of Usermod
After adding a comment to a user, the same comment can be viewed in the /etc/passwd file.
# grep -E 'cshimona' /etc/passwd
To change a user's home directory to some other directory, we can use the -d option with the
usermod command. For example, I want to change the home directory to /var/www/, but before
changing, let's check the current home directory of the user, using the following command.
# grep -E --color '/home/cshimona' /etc/passwd
# usermod -d /var/www/ cshimona
# grep -E --color '/var/www/' /etc/passwd
The expiry date of the ‘cshimona‘ user is Dec 1 2014; let’s change it to Nov 1 2014 using the
‘usermod -e‘ option and confirm the expiry date with the ‘chage‘ command.
# id cshimona_test
uid=501(cshimona_test) gid=502(cshimona_test) groups=502(cshimona_test)
Now, set the niki group as the primary group of user cshimona_test and confirm the change.
# usermod -g niki cshimona_test
# id cshimona_test
uid=501(cshimona_test) gid=502(niki) groups=502(cshimona_test)
Note: Be careful, because adding new groups to an existing user with the ‘-G’ option alone will
remove all the existing groups the user belongs to. So, always use the ‘-a‘ (append) option
together with ‘-G‘ to append new groups.
So, user cshimona_test remains in its primary group and also in the secondary group (wheel). This
allows my normal user account to execute root-privileged commands on a Linux box.
Check the files under ‘/home/niki‘. Here we have moved the files using the -m option, so there
will be no files left there. The niki user's files will now be under /var/niki.
# ls -l /home/niki/
# ls -l /var/niki/
After setting the password, check the shadow file to see whether the password is stored
encrypted or unencrypted.
Note: Here the password is clearly visible to everyone, so using this option is not recommended,
because the password will be visible to all users.
A user's login shell can be changed with the ‘usermod‘ command using the option ‘-s‘ (shell). For
example, the user ‘niki‘ has the /bin/bash shell by default; now I want to change it to /bin/sh.
# grep -E --color 'niki' /etc/passwd
# usermod -s /bin/sh niki
After changing user shell, verify the user shell using the following command.
# grep -E --color 'niki' /etc/passwd
The user niki has the default home directory /home/niki. Now I want to change it to
/var/www/html, assign his shell as bash, set the expiry date as September 3rd 2019, add a new
comment label "This is Niki", change the UID to 555, and make him a member of the apple group.
Let us see how to modify the niki account using multiple options now.
# usermod -d /var/www/html/ -s /bin/bash -e 2019-09-03 -c "This is Niki" -u 555 -aG apple niki
Check the groups that niki is now a member of.
# grep -E --color 'niki' /etc/group
Find the actual description of each Linux command in their manual page:
$ man command-name
1.adduser/addgroup Command:
The adduser and addgroup commands are used to add a user and a group to the system,
respectively, according to the default configuration specified in the /etc/adduser.conf file.
3.alias Command
alias is a useful shell built-in command for creating aliases (shortcuts) to Linux commands on a
system. It is helpful for creating new/custom commands from existing shell/Linux commands
(including options):
$ alias home='cd /home/niki/public_html'
The above command will create an alias called home for the /home/niki/public_html
directory, so whenever you type home at the terminal prompt, it will put you in the
/home/niki/public_html directory.
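A minimal sketch of defining and inspecting an alias; the path is the one from the example above:

```shell
# Sketch: define an alias, inspect it, then remove it. Note that in
# non-interactive scripts bash does not expand aliases by default;
# interactively the alias works as soon as it is defined.
alias home='cd /home/niki/public_html'   # path from the example above
alias home                               # prints the definition
unalias home                             # remove the alias again
```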
4.anacron Command
anacron is a Linux facility used to run commands periodically with a frequency defined in days,
weeks and months.
Unlike its sibling cron, it assumes that the system will not run continuously; therefore, if a
scheduled job is due when the system is off, it runs once the machine is powered on.
To schedule a task at a given or later time, you can use the ‘at’ or ‘batch’ commands; to set
up commands to run repeatedly, you can employ the cron and anacron facilities.
Cron – is a daemon used to run scheduled tasks such as system backups, updates and many
more. It is suitable for running scheduled tasks on machines that run continuously, 24x7,
such as servers.
The commands/tasks are scripted into cron jobs which are scheduled in crontab files. The
default system crontab file is /etc/crontab, but each user can also create their own crontab file
that can launch commands at times that the user defines.
Anacron, by contrast, is appropriate for running the daily, weekly, and monthly scheduled jobs
normally run by cron, on machines that will not run 24-7 such as laptops and desktop machines.
Assume you have a scheduled task (such as a backup script) to be run using cron every
midnight, possibly when you're asleep, and your desktop/laptop is off at that time. Your backup
script will not be executed.
However, if you use anacron, you can be assured that the next time you power on the
desktop/laptop again, the backup script will be executed.
period – the frequency of job execution, specified in days or as @daily, @weekly,
or @monthly for once per day, week, or month. You can also use numbers: 1 – daily,
7 – weekly, 30 – monthly, and N – a number of days.
delay – the number of minutes to wait before executing the job.
job-id – the distinctive name for the job, written in log files.
total 12
-rw------- 1 root root 9 Jun 1 10:25 cron.daily
-rw------- 1 root root 9 May 27 11:01 cron.monthly
-rw------- 1 root root 9 May 30 10:28 cron.weekly
Anacron will check if a job has been executed within the specified period in the period
field. If not, it executes the command specified in the command field after waiting the
number of minutes specified in the delay field.
Once the job has been executed, it records the date in a timestamp file in the
/var/spool/anacron directory with the name specified in the job-id (timestamp file
name) field.
Let’s now look at an example. This will run the /home/aaronkilik/bin/backup.sh script
every day:
@daily 10 example.daily /bin/bash /home/aaronkilik/bin/backup.sh
If the machine is off when the backup.sh job is expected to run, anacron will run it 10 minutes
after the machine is powered on without having to wait for another 7 days.
There are two important variables in the anacrontab file that you should understand:
START_HOURS_RANGE – sets the time range in which jobs will be started (i.e. jobs are
executed during the following hours only).
RANDOM_DELAY – defines the maximum random delay added to the user-defined
delay of a job (by default it is 45).
The following is a comparison of cron and anacron to help you understand when to use either
of them.
Cron: enables you to run scheduled jobs as often as every minute. Anacron: only enables you
to run scheduled jobs on a daily (or longer) basis.
Cron: doesn't execute a scheduled job when the machine is off. Anacron: if the machine is off
when a scheduled job is due, it will execute the job when the machine is powered on the
next time.
Cron: can be used by both normal users and root. Anacron: can only be used by root unless
otherwise enabled for normal users with specific configs.
The major difference between cron and anacron is that cron works effectively on machines that
will run continuously while anacron is intended for machines that will be powered off in a day
or week.
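For comparison, the nightly backup from the anacron example can be written both ways; these lines are illustrative sketches of the two file formats (in anacrontab, a period of 1 means daily):

```
# /etc/crontab entry (minute hour day-of-month month day-of-week user command):
0 0 * * * root /home/aaronkilik/bin/backup.sh

# Equivalent /etc/anacrontab entry (period delay job-id command):
1 10 backup.daily /bin/bash /home/aaronkilik/bin/backup.sh
```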
5.apropos Command
apropos command is used to search and display a short man page description of a
command/program as follows.
$ apropos adduser
6.apt Command
apt tool is a relatively new higher-level package manager for Debian/Ubuntu systems:
1. Installing a Package
You can install a package as follows by specifying a single package name, or install many
packages at once by listing all their names.
$ sudo apt install glances
Sometimes during package installation you may get errors concerning broken package
dependencies. To check that you do not have these problems, run the command below with the
package name.
$ sudo apt check firefox
When you run apt with remove, it only removes the package files; configuration files remain
on the system. Therefore, to remove a package and its configuration files, you will have to use
purge.
$ sudo apt purge glances
7.apt-get Command
apt-get is a powerful and free front-end package manager for Debian/Ubuntu systems. It is
used to install new software packages, remove available software packages, upgrade existing
software packages, as well as upgrade the entire operating system.
$ sudo apt-get update
What is apt-get?
The apt-get utility is a powerful and free package management command line program that
works with Ubuntu’s APT (Advanced Packaging Tool) library to perform installation of
new software packages, removal of existing software packages, upgrading of existing software
packages, and even upgrading the entire operating system.
What is apt-cache?
The apt-cache command line tool is used for searching the apt software package cache. In simple
words, this tool is used to search software packages, collect information about packages, and
also to search for what available packages are ready for installation on Debian or Ubuntu
based systems.
APT-CACHE
To find and list all the packages starting with ‘vsftpd‘, you could use the following
command.
$ apt-cache pkgnames vsftpd
To display overall statistics about the cache:
$ apt-cache stats
APT-GET
However, if you want to upgrade regardless of whether software packages will be added or
removed to fulfill dependencies, use the ‘dist-upgrade‘ sub-command.
$ sudo apt-get dist-upgrade
You can use the * wildcard to install several packages at once, matching all packages whose
names contain a given string, e.g. ‘*name*‘.
Alternatively, you can combine both the commands together as shown below.
$ sudo apt-get remove --purge vsftpd
There are still more options available; you can check them out using ‘man apt-get‘ or ‘man apt-
cache‘ from the terminal.
What is YUM?
YUM (Yellowdog Updater, Modified) is an open source command-line as well as graphical
package management tool for RPM (RedHat Package Manager) based Linux systems. It allows
users and system administrators to easily install, update, remove or search for software packages
on a system. It was developed and released by Seth Vidal under the GPL (General Public License)
as open source, meaning anyone is allowed to download and access the code to fix bugs and
develop customized packages. YUM uses numerous third-party repositories to install packages
automatically, resolving their dependency issues.
In the same way, the above command will ask for confirmation before removing a package. To
disable the confirmation prompt, just add the -y option as shown below.
# yum -y remove firefox
To make your search more accurate, define the package name with its version, in case you know
it. For example, to search for the specific version openssh-4.3p2 of the package, use the command.
# yum list openssh-4.3p2
Packages in YUM can also be managed in groups. For example, to list all the available package
groups, just issue the following command.
# yum grouplist
13. Install Group Packages
To install a particular package group, we use the groupinstall option. For example, to install
"MySQL Database", just execute the command below.
# yum groupinstall 'MySQL Database'
8.aptitude Command
aptitude is a powerful text-based interface to the Debian GNU/Linux package management
system. Like apt-get and apt, it can be used to install, remove or upgrade software packages on
a system.
$ sudo aptitude update
Package Management
In a few words, package management is a method of installing and maintaining (which includes
updating and probably removing as well) software on the system.
In the early days of Linux, programs were only distributed as source code, along with the
required man pages, the necessary configuration files, and more. Nowadays, most Linux
distributions by default use pre-built programs or sets of programs called packages, which are
presented to users ready for installation on that distribution. However, one of the wonders of
Linux is still the possibility to obtain source code of a program to be studied, improved, and
compiled.
Packaging Systems
Almost all the software that is installed on a modern Linux system will be found on the Internet.
It can either be provided by the distribution vendor through central repositories (which can
contain several thousands of packages, each of which has been specifically built, tested, and
maintained for the distribution) or be available in source code that can be downloaded and
installed manually.
Because different distribution families use different packaging systems (Debian: *.deb / CentOS:
*.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution
will not be compatible with another distribution. However, most distributions are likely to fall
into one of the three distribution families covered by the LFCS certification.
In order to perform the task of package management effectively, you need to be aware that
you will have two types of available utilities: low-level tools (which handle in the backend the
actual installation, upgrade, and removal of package files), and high-level tools (which are in
charge of dependency resolution and metadata searching, that is, searching the “data
about the data”).
dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide
information about, and build *.deb packages, but it can’t automatically download and install
their corresponding dependencies.
Debian GNU/Linux, the mother operating system of a number of Linux distributions including
Knoppix, Kali, Ubuntu, Mint, etc., uses various package managers such as dpkg, apt, aptitude,
synaptic, tasksel, dselect, dpkg-deb and dpkg-split.
APT Command - APT stands for Advanced Package Tool. It doesn’t deal with ‘deb‘ packages
directly; instead, it works with ‘deb‘ archives from the locations specified in the
“/etc/apt/sources.list” file.
Aptitude - Aptitude is a text-based package manager for Debian, a front-end to ‘apt‘,
which enables users to manage packages easily.
Synaptic - A graphical package manager which makes it easy to install, upgrade and uninstall
packages, even for novices.
Tasksel - Tasksel lets the user install all the relevant packages related to a specific task, e.g., a
desktop environment.
Dselect - A menu-driven package management tool, initially used during the first-time install
and now replaced by aptitude.
Dpkg-split - Useful for splitting and merging a large file into chunks of smaller files, to be stored
on media of smaller size.
dpkg is the main package management program in Debian and Debian-based systems. It is used
to install, build, remove, and manage packages. Aptitude is the primary front-end to dpkg.
1. Install a Package
To install a “.deb” package, use the command with the “-i” option. For example, to install a
“.deb” package called “flashpluginnonfree_2.8.2+squeeze1_i386.deb”, use the following
command.
# dpkg -i flashpluginnonfree_2.8.2+squeeze1_i386.deb
2. Check Whether a Package is Installed
To check whether a specific package is installed or not, use the “-l” option along with the
package name. For example, check whether the apache2 package is installed.
# dpkg -l apache2
3. Remove a Package
To remove the “.deb” package, we must specify the package name “flashpluginnonfree“, not
the original name “flashplugin-nonfree_3.2_i386.deb“. The “-r” option is used to
remove/uninstall a package.
# dpkg -r flashpluginnonfree
You can also use the ‘-P‘ option in place of ‘-r‘, which will remove the package along with its
configuration files. The ‘-r‘ option will only remove the package and not the configuration files.
# dpkg -P flashpluginnonfree
RPM Command:
RPM (Red Hat Package Manager) is the default open source and most popular package
management utility for Red Hat based systems like RHEL, CentOS and Fedora. The tool allows
system administrators and users to install, update, uninstall, query, verify and manage system
software packages in Unix/Linux operating systems. An RPM package uses the .rpm file
extension and includes the compiled software programs and libraries needed by the package.
This utility only works with packages built in the .rpm format.
Note that RPM only manages packages installed as .rpm files; packages installed from source
code are not tracked by rpm. RPM packages can be downloaded from the following sites:
1. http://rpmfind.net
2. http://www.redhat.com
3. http://freshrpms.net/
4. http://rpm.pbone.net/
Please remember that you must be the root user when installing packages in Linux; with root
privileges you can manage rpm commands with their appropriate options.
1. -i : install a package
2. -v : verbose for a nicer display
3. -h: print hash marks as the package archive is unpacked.
1. -q : Query a package.
2. -p : List capabilities this package provides.
3. -R : List capabilities on which this package depends.
Installing with the --nodeps flag forcefully installs an rpm package by ignoring dependency
errors, but if those dependency files are missing, the program will not work at all until you
install them.
The downside of this installation method is that no dependency resolution is provided. You will
most likely choose to install a package from a compiled file when such a package is not available
in the distribution’s repositories and therefore cannot be downloaded and installed through a
high-level tool. Since low-level tools do not perform dependency resolution, they will exit with
an error if we try to install a package with unmet dependencies.
Again, you will only upgrade an installed package manually when it is not available in the
central repositories.
# dpkg -i file.deb [Debian and derivative]
# rpm -U file.rpm [CentOS / openSUSE]
When you first get your hands on an already working system, chances are you’ll want to know
what packages are installed.
# dpkg -l [Debian and derivative]
# rpm -qa [CentOS / openSUSE]
If you want to know whether a specific package is installed, you can pipe the output of the
above commands to grep. Suppose we need to verify whether the package mysql-common is
installed on an Ubuntu system.
# dpkg -l | grep mysql-common
For example, let’s find out whether package sysdig is installed on our system.
# rpm -qa | grep sysdig
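The pipe-to-grep pattern works with any command that prints one item per line. A minimal self-contained sketch (the package list is simulated here with printf, since dpkg/rpm may not exist on every system; in practice you would pipe `dpkg -l` or `rpm -qa` instead):

```shell
# Simulated package list; stand-in for `dpkg -l` or `rpm -qa` output.
list_packages() { printf 'mysql-common\napache2\nsysdig\n'; }

# grep -q prints nothing and only sets the exit status, handy in scripts.
if list_packages | grep -q 'mysql-common'; then
    echo "mysql-common is installed"
else
    echo "mysql-common is NOT installed"
fi
```

Run as shown, this prints "mysql-common is installed".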
With the search all option, yum will search for package_name not only in package names, but
also in package descriptions.
# yum search package_name
# yum search all package_name
# yum whatprovides “*/package_name”
Let’s suppose we need a file whose name is sysdig. To find out which package we would have to
install, let’s run:
# yum whatprovides “*/sysdig”
whatprovides tells yum to search for the package that provides a file matching the above
pattern.
# zypper refresh && zypper search package_name [On openSUSE]
While installing a package, you may be prompted to confirm the installation after the package
manager has resolved all dependencies. Note that running update or refresh (according to the
package manager being used) is not strictly necessary, but keeping installed packages up to
date is a good sysadmin practice for security and dependency reasons.
# aptitude update && aptitude install package_name [Debian and derivatives]
# yum update && yum install package_name [CentOS]
# zypper refresh && zypper install package_name [openSUSE]
3. Removing a package
The remove option will uninstall the package but leave configuration files intact, whereas
purge will erase every trace of the program from your system.
# aptitude remove / purge package_name
# yum erase package_name
---Notice the minus sign in front of the package that will be uninstalled, openSUSE ---
# zypper remove -package_name
Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding
with the uninstallation before actually performing it. So read the onscreen messages carefully
to avoid running into unnecessary trouble!
The following command will display information about the birthday package.
# aptitude show birthday
# yum info birthday
# zypper info birthday
9.arch Command
arch is a simple command for displaying the machine architecture or hardware name (similar to
uname -m):
$ arch
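Since arch is equivalent to uname -m, both print the same machine name (arch may be missing on some minimal installs):

```shell
arch        # prints the machine hardware name, e.g. x86_64
uname -m    # prints the same value
```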
10.arp Command
ARP (Address Resolution Protocol) is a protocol that maps IP network addresses of a network
neighbor with the hardware (MAC) addresses in an IPv4 network.
You can use it as below to find all alive hosts on a network:
$ sudo arp-scan --interface=enp2s0 --localnet
11.at Command
at command is used to schedule tasks to run at a future time. It’s an alternative to cron and
anacron; however, it runs a task once at a given future time without editing any config files.
For example, to shut down the system at 23:55 today, run:
$ echo "shutdown -h now" | sudo at -m 23:55
As an alternative to cron job scheduler, the at command allows you to schedule a command to
run once at a given time without editing a configuration file.
The only requirement is to install this utility and to start and enable its execution:
Once atd is running, you can schedule any command or task as follows. We want to send 4 ping
probes to www.google.com when the next minute starts (i.e. if it’s 22:20:13, the command will
be executed at 22:21:00) and report the result through an email (-m, requires Postfix or
equivalent) to the user invoking the command:
# echo "ping -c 4 www.google.com" | at -m now + 1 minute
If you choose to not use the -m option, the command will be executed but nothing will be
printed to standard output. You can, however, choose to redirect the output to a file instead.
In addition, please note that at not only accepts the fixed times now, noon (12:00),
and midnight (00:00), but also custom 2-digit (hours) and 4-digit (hours and minutes) times.
For example,
To run updatedb at 11 pm today (or tomorrow, if the current time is already past 11 pm), do:
# echo "updatedb" | at -m 23
To shutdown the system at 23:55 today (same criteria as in the previous example applies):
# echo "shutdown -h now" | at -m 23:55
You can also delay the execution by minutes, hours, days, weeks, months, or years using the +
sign and the desired time specification as in the first example.
12.atq Command
atq command is used to view jobs in the at command queue:
$ atq
13.atrm Command
atrm command is used to remove/delete jobs (identified by their job number) from the at
command queue:
$ atrm 2
14.awk Command
Awk is a powerful programming language created for text processing and generally used as a
data extraction and reporting tool.
$ awk '//{print}' /etc/hosts
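As a self-contained sketch of the same idea (feeding sample lines with printf instead of reading /etc/hosts), awk can pull out a single column:

```shell
# Print only the first field (the IP address) of each hosts-style line.
printf '127.0.0.1 localhost\n192.168.0.1 router\n' | awk '{print $1}'
# prints:
# 127.0.0.1
# 192.168.0.1
```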
15.batch Command
batch is also used to schedule tasks to run at a future time, similar to the at command.
16.basename Command
basename command helps to print the name of a file, stripping off the leading directory
components in the path:
$ basename bin/findhosts.sh
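For instance (the paths here are purely illustrative):

```shell
basename /home/user/bin/findhosts.sh       # prints: findhosts.sh
basename /home/user/bin/findhosts.sh .sh   # a second argument strips the suffix too: findhosts
```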
17.bc Command
bc is a simple yet powerful, arbitrary-precision CLI calculator language which can be used
like this:
$ echo "20.05 + 15.00" | bc
18.bg Command
bg is a command used to send a process to the background.
$ tar -czf home.tar.gz .
Ctrl+Z
$ bg
$ jobs
When a process is associated with a terminal, two problems might occur:
1. your controlling terminal gets filled with a lot of output data and error/diagnostic messages.
2. in the event that the terminal is closed, the process together with its child processes will be
terminated.
To deal with these two issues, you need to totally detach a process from the controlling
terminal. Before we actually move on to solving the problem, let us briefly cover how to run
processes in the background.
You can view all your background jobs by typing jobs. However, their stdin, stdout, and stderr
are still joined to the terminal.
$ tar -czf home.tar.gz .
Ctrl+Z
$ bg
$ jobs
You can as well run a process directly in the background using the ampersand (&) sign.
$ tar -czf home.tar.gz . &
$ jobs
Take a look at the example below: although the tar command was started as a background job,
an error message was still sent to the terminal, meaning the process is still connected to the
controlling terminal.
$ tar -czf home.tar.gz . &
$ jobs
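The & plus wait pattern can be sketched in a self-contained way (using a throwaway directory so it can be run anywhere):

```shell
mkdir -p demo_dir && echo "some data" > demo_dir/file.txt
tar -czf home.tar.gz demo_dir &   # start the archive job in the background
wait                              # block until all background jobs finish
ls -l home.tar.gz                 # the archive now exists
```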
We will use the disown command; it is used after a process has been launched and put in the
background. Its job is to remove a shell job from the shell’s active job list, so you will no longer
use the fg and bg commands on that particular job.
In addition, when you close the controlling terminal, the job will not hang or have SIGHUP sent
to any of its child jobs.
Let’s take a look at the below example of using the disown bash built-in function.
$ sudo rsync Templates/* /var/www/html/files/ &
$ jobs
$ disown -h %1
$ jobs
You can also use the nohup command, which also enables a process to continue running in the
background when a user exits a shell.
$ nohup tar -czf iso.tar.gz Templates/* &
$ jobs
In Linux, /dev/null is a special device file which writes off (gets rid of) all data written to it;
redirecting a background job’s input and output there keeps it off the terminal entirely.
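A tiny demonstration of /dev/null as a data sink; the nohup line shows the typical fully detached pattern:

```shell
echo "this text is discarded" > /dev/null   # nothing is stored anywhere
wc -c < /dev/null                           # reading it back yields 0 bytes
nohup sleep 1 > /dev/null 2>&1 &            # background job, all output discarded
wait
```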
SSH or Secure Shell, in simple terms, is a way by which a person can remotely access another
user on another system, but only via the command line, i.e. non-GUI mode. In more technical
terms, when we ssh to another user on some other system and run commands on that machine,
it actually creates a pseudo-terminal and attaches it to the login shell of the user logged in.
When we log out of the session, or the session times out after being idle for quite some time,
the SIGHUP signal is sent to the pseudo-terminal, and all the jobs that have been run on that
terminal, even the jobs whose parent jobs were initiated on the pseudo-terminal, are
also sent the SIGHUP signal and are forced to terminate.
Only the jobs that have been configured to ignore this signal are the ones that survive the
session termination. On Linux systems, there are many ways to keep these jobs running on
the remote server or any machine even after user logout and session termination.
Normal Process
Normal processes are those which have the life span of a session. They are started during the
session as foreground processes and end within a certain time span, or when the session gets
logged out. These processes have as their owner any of the valid users of the system, including
root.
Orphan Process
Orphan processes are those which initially had a parent which created the process, but after
some time the parent process unintentionally died or crashed, making init the parent of
that process. Such processes have init as their immediate parent, which waits on these
processes until they die or end.
Daemon Process
These are intentionally orphaned processes; processes which are intentionally left
running on the system are termed daemons or intentionally orphaned processes. They are
usually long-running processes which, once initiated, are detached from any controlling
terminal so that they can run in the background until they complete or end up throwing
an error. The parent of such processes intentionally dies, making the child execute in the
background.
Techniques to Keep SSH Session Running After Disconnection
There can be various ways to leave ssh sessions running after disconnection as described below:
screen sessions can be started and then detached from the controlling terminal, leaving them
running in the background, and then be resumed at any time and even from any place. You just
need to start your session inside screen, and when you want, detach it from the pseudo-terminal
(or the controlling terminal) and log out. Whenever you like, you can log back in and resume the
session.
After typing the ‘screen’ command, you will be in a new screen session; within this session you
can create new windows, traverse between windows, lock the screen, and do many more things
which you can do on a normal terminal.
$ screen
Once the screen session has started, you can run any command and keep the session running by
detaching the session.
Detaching a Screen
When you want to log out of the remote session but keep the session you created on that
machine alive, just detach the screen from the terminal so that it has no controlling terminal
left. After doing this, you can safely log out.
To detach a screen from the remote terminal, just press “Ctrl+a” immediately followed by “d”
and you will be back at the terminal, seeing a message that the screen is detached. Now you
can safely log out.
If you want to resume a detached screen session which you left before logging out, just log back
in to the remote terminal and type “screen -r” if only one screen session is open; if multiple
screen sessions are open, run “screen -r <pid.tty.host>”.
$ screen -r
$ screen -r <pid.tty.host>
Screen is a full-screen software program that can be used to multiplex a physical console
between several processes (typically interactive shells). It allows a user to open several separate
terminal instances inside one single terminal window manager.
The screen application is very useful if you are dealing with multiple programs from a
command line interface and for separating programs from the terminal shell. It also allows you
to share your sessions with other users and detach/attach terminal sessions.
Screen is a very useful command, hidden among the hundreds of Linux commands. Let’s start
to see the functions of Screen.
Type “Ctrl-A” and “?” without quotes. Then you will see all commands or parameters on screen.
To get out of the help screen, you can press the “space-bar” button or “Enter“. (Please note that
all shortcuts which use “Ctrl-A” are done without quotes).
Say you are in the middle of an SSH session on your server, and you are downloading a 400MB
patch for your system using the wget command.
The download process is estimated to take 2 hours. If you disconnect the SSH session, or
the connection is suddenly lost by accident, then the download process will stop. You have to
start from the beginning again. To avoid that, we can use screen and detach it.
Take a look at this command. First, you have to enter the screen.
~ $ screen
Then you can start the download process. For example, I am upgrading my dpkg package using
the apt-get command.
~ $ sudo apt-get install dpkg
While downloading in progress, you can press “Ctrl-A” and “d“. You will not see anything when
you press those buttons.
And you will see that the process you left is still running.
When you have more than one screen session, you need to type the screen session ID. Use
screen -ls to see how many screen sessions are running:
~ $ screen -ls
Using Multiple Screen
When you need more than one screen to do your job, is it possible? Yes, it is. You can run
multiple screen windows at the same time. There are two ways to do it.
First, you can detach the first screen and then run another screen on the real terminal. Second,
you can do a nested screen.
With screen logging, you don’t need to write down every single command that you have
done. To activate the screen logging function, just press “Ctrl-A” and “H“. (Please be careful: we
use a capital ‘H’ letter. Using a non-capital ‘h’ will only create a screenshot of the screen in
another file named hardcopy).
At the bottom left of the screen, there will be a notification that says something like: Creating
logfile “screenlog.0“. You will find the screenlog.0 file in your home directory.
This feature will append everything you do while you are in the screen window. To stop
logging the running activity, press “Ctrl-A” and “H” again.
Another way to activate the logging feature is to add the “-L” parameter the first time you run
screen. The command will be like this.
~ $ screen -L
Lock screen
Screen also has a shortcut to lock the screen. You can press the “Ctrl-A” and “x” shortcut to lock
the screen. This is handy if you want to lock your screen quickly. Here’s a sample output of the
lock screen after you press the shortcut.
To make your screen password protected, you can edit the “$HOME/.screenrc” file. If the file
doesn’t exist, you can create it manually. The syntax will be like this.
password crypt_password
mkpasswd will generate a hashed password as shown above. Once you get the hashed
password, copy it into your “.screenrc” file and save it.
The next time you run screen and detach it, a password will be asked for when you try to
re-attach it, as shown below:
~$ screen -r 5741
After you implement this screen password, press “Ctrl-A” and “x”.
A password will be asked of you twice. The first password is your Linux password, and the
second password is the password that you put in your .screenrc file.
Leaving Screen
There are two ways to leave the screen. First, we use “Ctrl-A” and “d” to detach the
screen. Second, we can use the exit command to terminate the screen. You can also use “Ctrl-A”
and “K” to kill the screen.
2. Using Tmux (Terminal Multiplexer) to Keep SSH Sessions Running
Tmux is another program created as a replacement for screen. It has most of the
capabilities of screen, with a few additional capabilities which make it more powerful than
screen.
Apart from all the options offered by screen, it allows splitting panes horizontally or vertically
between multiple windows, resizing window panes, session activity monitoring, scripting using
command-line mode, etc. Due to these features, tmux has enjoyed wide adoption by nearly all
Unix distributions, and it has even been included in the base system of OpenBSD.
After doing ssh on the remote host and typing tmux, you will enter into a new session with a
new window opening in front of you, wherein you can do anything you do on a normal
terminal.
$ tmux
After performing your operations on the terminal, you can detach that session from the
controlling terminal so that it goes into background and you can safely logout.
Either you can run “tmux detach” in a running tmux session or you can use the shortcut (Ctrl+b
then d). After this, your current session will be detached and you will come back to your
terminal, from where you can log out safely.
$ tmux detach
To re-open the session which you detached and left as is when you logged out of the system,
just log back in to the remote machine and type “tmux attach” to reattach to the closed session;
it will still be there and running.
$ tmux attach
Debian (from the admin packages section of the stable version) and derivatives:
# aptitude update && aptitude install tmux
Once you have installed tmux, let’s take a look at what it has to offer.
At the bottom of the screen you will see an indicator of the session you’re currently in:
1. divide the terminal into as many panes as you want with Ctrl+b+" to split
horizontally and Ctrl+b+% to split vertically. Each pane will represent a separate
console.
2. move from one to another with Ctrl+b+left, +up, +right, or +down keyboard
arrow, to move in the same direction.
3. resize a pane, by holding Ctrl+b while you press one of the keyboard arrows in
the direction where you want to move the boundaries of the active pane.
4. show the current time inside the active pane by pressing Ctrl+b+t.
5. close a pane, by placing the cursor inside the pane that you want to remove and
pressing Ctrl+b+x. You will be prompted to confirm this operation.
6. detach from the current session (thus returning to the regular terminal) by
pressing Ctrl+b+d.
7. create a new session named admin with tmux new -s admin.
If you find the default key bindings used in the preceding examples inconvenient, you can
change and customize them on either 1) a per-user basis (by creating a file named .tmux.conf
inside each user’s home directory – do not omit the leading dot in the filename) or 2) system-
wide (through /etc/tmux.conf, not present by default).
If both methods are used, the system-wide configuration is overridden by each user’s
preferences.
For example, let’s say you want to use Alt+a instead of Ctrl+b; insert the following contents in
one of the files mentioned earlier, as needed:
unbind C-b
set -g prefix M-a
After saving changes and restarting tmux, you will be able to use Alt+a+" and Alt+a+t to split
the window horizontally and to show the current time inside the active pane, respectively.
Here is a simple scenario wherein we have run the find command to search for files in the
background of an ssh session using nohup, after which the task was sent to the background,
with the prompt returning immediately, giving the PID and job ID of the process ([JOBID] PID).
# nohup find / -type f > files_in_system.out 2>&1 &
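The same pattern can be sketched with a short command standing in for the long-running find (the filenames here are arbitrary):

```shell
# Run a command immune to hangups, with stdout and stderr captured in a file.
nohup sh -c 'echo "scan complete"' > files_demo.out 2>&1 &
wait                                  # let the background job finish
grep "scan complete" files_demo.out   # the output landed in the file
```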
When you re-login again, you can check the status of command, bring it back to foreground
using 'fg %JOBID' to monitor its progress and so on. Below, the output shows that the job was
completed as it doesn’t show on re-login, and has given the output which is displayed.
# fg %JOBID
disown removes the job from the shell’s job list, so the process is shielded from being killed
during session disconnection, as it won’t be sent SIGHUP by the shell when you log out.
The disadvantage of this method is that it should be used only for jobs that do not need any
input from stdin and do not need to write to stdout, unless you specifically redirect the job’s
input and output, because the job will halt when it tries to interact with stdin or stdout.
Below, we sent a ping command to the background so that it keeps on running and gets
removed from the job list. As seen, the job was first put in the background, after which it was
still in the job list with Process ID: 15368.
$ ping tecmint.com > pingout &
$ jobs -l
$ disown -h %1
$ ps -ef | grep ping
After that, disown was applied to the job and it was removed from the job list, though it was
still running in the background. The job would still be running when you re-login to the remote
server, as seen below.
$ ps -ef | grep ping
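Since disown is a bash builtin, the whole flow can be sketched compactly (a short sleep stands in for the ping here):

```shell
# Requires bash: disown is a bash builtin, not available in plain sh.
sleep 30 &          # start a background job
pid=$!              # remember its PID
disown "$pid"       # remove it from the shell's job table
jobs                # prints nothing: the job is no longer listed
kill "$pid"         # the process itself kept running; stop it explicitly
```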
setsid, on the other hand, allocates a new process group to the process being executed; hence,
the created process runs in a newly allocated process group and can execute safely without
fear of being killed, even after session logout.
Here, it shows that the process ‘sleep 10m’ has been detached from the controlling terminal
since the time it was created.
$ setsid sleep 10m
$ ps -ef | grep sleep
Now, when you re-login to the session, you will still find this process running.
$ ps -ef | grep [s]leep
19.bzip2 Command
bzip2 command is used to compress or decompress file(s).
$ bzip2 -z filename #Compress
$ bzip2 -d filename.bz2 #Decompress
To compress a file is to significantly decrease its size by encoding its data using fewer bits, and
it is normally a useful practice during backup and transfer of files over a network. On the other
hand, decompressing a file means restoring its data to its original state.
There are several file compression and decompression tools available in Linux such as gzip, 7-
zip, Lrzip, PeaZip and many more.
Bzip2 is a well-known compression tool and it’s available on most, if not all, of the major Linux
distributions; you can use the appropriate command for your distribution to install it.
$ sudo apt install bzip2 [On Debian/Ubuntu]
$ sudo yum install bzip2 [On CentOS/RHEL]
$ sudo dnf install bzip2 [On Fedora 22+]
Important: By default, bzip2 deletes the input files during compression or decompression, to
keep the input files, use the -k or --keep option.
In addition, the -f or --force flag will force bzip2 to overwrite an existing output file.
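A quick roundtrip illustrating -k and -f together (assumes bzip2 is installed; the filename is arbitrary):

```shell
echo "sample data" > sample.txt
bzip2 -kf sample.txt        # compress; -k keeps sample.txt, -f overwrites any old .bz2
ls sample.txt sample.txt.bz2
bzip2 -dkf sample.txt.bz2   # decompress; -f lets it overwrite the existing sample.txt
cat sample.txt              # prints: sample data
```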
You can also set the block size from 100k up to 900k, using -1 or --fast through -9 or --best, as
shown in the examples below:
$ bzip2 -k1 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
$ bzip2 -k9 Etcher-linux-x64.AppImage
$ bzip2 -kf9 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
Note: The file must end with a .bz2 extension for the command above to work.
$ bzip2 -vd Etcher-linux-x64.AppImage.bz2
$ bzip2 -vfd Etcher-linux-x64.AppImage.bz2
$ ls -l Etcher-linux-x64.AppImage
To view the bzip2 help page and man page, type the command below:
$ bzip2 -h
$ man bzip2
Rsync (Remote Sync) is one of the most commonly used commands for copying and
synchronizing files and directories remotely as well as locally in Linux/Unix systems. With the
help of the rsync command you can copy and synchronize your data remotely and locally across
directories, across disks and networks, perform data backups, and mirror between two Linux
machines.
1. -v : verbose
2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while
transferring data)
3. -a : archive mode; allows copying files recursively and also preserves symbolic links, file
permissions, user & group ownerships and timestamps
4. -z : compress file data
5. -h : human-readable, output numbers in a human-readable format
The following command will sync a single file on a local machine from one location to another
location. In this example, a file named backup.tar needs to be copied or synced to the
/tmp/backups/ folder.
# rsync -zvh backup.tar /tmp/backups/
The following command will transfer or sync all the files from one directory to a different
directory on the same machine. Here in this example, /root/rpmpkgs contains some rpm
package files and you want that directory to be copied inside the /tmp/backups/ folder.
# rsync -avzh /root/rpmpkgs /tmp/backups/
This command will sync a directory from a local machine to a remote machine. For example,
there is a folder on your local computer, “rpmpkgs”, which contains some RPM packages, and
you want that local directory’s content sent to a remote server; you can use the following
command.
$ rsync -avz rpmpkgs/ [email protected]:/home/
This command will help you sync a remote directory to a local directory. Here in this example,
the directory /home/niki/rpmpkgs, which is on a remote server, is being copied to your local
computer in /tmp/myrpms.
# rsync -avzh [email protected]:/home/niki/rpmpkgs /tmp/myrpms
To specify a protocol with rsync you need to give the “-e” option with the protocol name you
want to use. In this example, we will use “ssh” with the “-e” option and perform data transfer.
# rsync -avzhe ssh [email protected]:/root/install.log /tmp/
Here in this example, the rsync command will include only those files and directories which
start with ‘R’ and exclude all other files and directories.
# rsync -avze ssh --include 'R*' --exclude '*' [email protected]:/var/lib/rpm/ /root/rpm
If a file or directory does not exist at the source, but already exists at the destination, you might
want to delete it at the target while syncing; the ‘–delete‘ option does exactly that.
The target has a new file called test.txt; when synchronizing with the source with the ‘–delete‘
option, it removes the file test.txt.
So, will you wait for the transfer to complete and then delete those local backup files manually?
Of course not. This automatic deletion can be done using the ‘–remove-source-files‘ option.
# rsync --remove-source-files -zvh backup.tar /tmp/backups/
Use of this option will not make any changes; it only does a dry run of the command and shows
the output of the command. If the output shows exactly what you want to do, then you can
remove the ‘–dry-run‘ option from your command and run it on the terminal.
# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/
Also, by default rsync syncs changed blocks and bytes only; if you explicitly want to sync the
whole file, then use the ‘-W‘ option with it.
# rsync -zvhW backup.tar /tmp/backups/backup.tar
To start with, you need to remember that the conventional and simplest form of using rsync is
as follows:
# rsync options source destination
That said, let us dive into some examples to uncover how the concept above actually works.
By default, rsync only copies new or changed files from a source to destination. When I add a
new file into my Documents directory, this is what happens after running the same command a
second time:
The --update or -u option allows rsync to skip files that are newer in the destination directory,
and one important option, --dry-run or -n, enables us to execute a test operation without
making any changes. It shows us what files are to be copied.
$ rsync -aunv Documents/* /tmp/documents
After executing a test run, we can then do away with the -n and perform a real operation:
$ rsync -auv Documents/* /tmp/documents
Subsequently, to sync only updated or modified files on the remote machine that have changed
on the local machine, we can perform a dry run before copying files as below:
$ rsync -av --dry-run --update Documents/* [email protected]:~/all/
$ rsync -av --update Documents/* [email protected]:~/all/
To update existing files and prevent creation of new files in the destination, we utilize the
--existing option.
The main advantages of creating a web server backup with rsync are as follows:
1. Rsync syncs only those bytes and blocks of data that have changed.
2. Rsync has the ability to check and delete those files and directories at backup server
that have been deleted from the main web server.
3. It takes care of permissions, ownerships and special attributes while copying data
remotely.
4. It also supports SSH protocol to transfer data in an encrypted manner so that you will be
assured that all data is safe.
5. Rsync compresses and decompresses data while transferring it, which consumes less
bandwidth.
Main Server
1. IP Address: 192.168.0.100
2. Hostname: webserver.example.com
Backup Server
1. IP Address: 192.168.0.101
2. Hostname: backup.example.com
# passwd niki
Here I have created a user “niki” and assigned a password to the user.
You can see that your rsync is now working absolutely fine and syncing data. I have used
“/var/www” to transfer; you can change the folder location according to your needs.
Here in this example, I am doing it as root to preserve file ownership as well; you can do it for
other users too.
First, we’ll generate a public and private key with the following command on the backup server (i.e.
backup.example.com).
# ssh-keygen -t rsa -b 2048
When you enter this command, don’t provide a passphrase: press Enter for an empty
passphrase, so that the rsync cron job will not need any password for syncing data.
Now our public and private keys have been generated, and we will share the public key with the main
server so that the main web server will recognize this backup machine and allow it to log in
without asking for any password while syncing data.
# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
Now try logging into the machine with “ssh ‘[email protected]‘”, and check
in .ssh/authorized_keys that the key was added.
# ssh [email protected]
Now, we are done with sharing keys. To know more in-depth about SSH password less login,
you can read our article on it.
It will open up the /etc/crontab file to edit with your default editor. Here, in this example, I am
writing a cron job to run every 5 minutes to sync the data.
The above cron and rsync command simply sync “/var/www/” from the main web server to
a backup server every 5 minutes. You can change the time and folder location configuration
according to your needs.
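Such a crontab entry might look like the following sketch, in /etc/crontab format (the paths and the backup-server address reuse the example values from above; adjust them to your setup):

```
# Sync /var/www/ to the backup server every 5 minutes (example hosts from above).
*/5 * * * * root rsync -avzhe ssh /var/www/ [email protected]:/var/www/
```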
SSH (Secure Shell) is an open-source and widely trusted network protocol that is used to log in
to remote servers for execution of commands and programs. It is also used to transfer files
from one computer to another over the network using the secure copy (SCP) protocol.
In this example we will setup SSH password-less automatic login from server 192.168.0.12 as
user shimon to 192.168.0.11 with user niki.
$ ssh-keygen -t rsa
Cron Scheduling
Cron is a daemon to run scheduled tasks. Cron wakes up every minute and checks for scheduled tasks
in the crontab. Crontab (CRON TABle) is a table where we can schedule such repeated
tasks.
Tips: Each user can have their own crontab to create, modify and delete tasks. By default cron
is enabled for all users; however, we can restrict users by adding entries to the /etc/cron.deny file.
A crontab file consists of one command per line and has six fields, separated by spaces or
tabs. The first five fields represent the time to run the task and the last field is the command.
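For example, a crontab entry with the five time fields annotated (the script path is a made-up example):

```
# min  hour  day-of-month  month  day-of-week  command
# 30   2     *             *      1            = at 02:30 every Monday
30 2 * * 1 /usr/local/bin/weekly-report.sh
```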
5. Day of week (holds values between 0-6 or Sun-Sat; here also you can use the first three
letters of the day’s name).
Note: Only the root user has full privileges to see other users’ crontab entries. A normal user
can’t view others’.
3. Slash (/) – */10 in the 1st field means every ten minutes, i.e. an increment of the range.
4. Comma (,) – To separate items.
7. System Wide Cron Schedule
The system administrator can use the predefined cron directories shown below.
1. /etc/cron.d
2. /etc/cron.daily
3. /etc/cron.hourly
4. /etc/cron.monthly
5. /etc/cron.weekly
Strings Meanings
Replace the five time fields of a cron entry with one of these keyword strings if you want to use them.
10. Multiple Commands with Double Ampersand (&&)
# crontab -e
@daily <command1> && <command2>
Features of Archiving
1. Data Compression
2. Encryption
3. File Concatenation
4. Automatic Extraction
5. Automatic Installation
6. Source Volume and Media Information
7. File Spanning
8. Checksum
9. Directory Structure Information
10. Other Metadata (Data About Data)
11. Error discovery
Area of Application
1. tar Command
tar is the standard UNIX/Linux archiving application tool. In its early stages it was a tape-
archiving program, which gradually developed into a general-purpose archiving package
capable of handling archive files of every kind. tar accepts a lot of archiving filters as
options.
tar options
tar Examples
The Linux “tar” stands for tape archive and is used by a large number of Linux/Unix system
administrators to deal with tape drive backups. The tar command is used to pack a collection of files
and directories into a highly compressed archive file, commonly called a tarball, or tar, gzip and bzip
in Linux. tar is the most widely used command to create compressed archive files, which can
be moved easily from one disk to another, or from machine to machine.
The main purpose of this guide is to provide various tar command examples that might be
helpful for you to understand and become expert in tar archive manipulation.
OR
# tar cvfj Phpfiles-org.tar.tbz /home/php
OR
# tar cvfj Phpfiles-org.tar.tb2 /home/php
13. Untar Multiple files from tar, tar.gz and tar.bz2 File
To extract or untar multiple files from the tar, tar.gz and tar.bz2 archive file. For example the
below command will extract “file 1” “file 2” from the archive files.
# tar -xvf niki-14-09-12.tar "file 1" "file 2"
# tar -zxvf MyImages-14-09-12.tar.gz "file 1" "file 2"
# tar -jxvf Phpfiles-org.tar.bz2 "file 1" "file 2"
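A self-contained sketch of the same idea, using throwaway files created on the spot:

```shell
# Create a throwaway archive with three members, then extract only two of them.
cd "$(mktemp -d)"
echo one > "file 1"; echo two > "file 2"; echo three > "file 3"
tar -cf demo.tar "file 1" "file 2" "file 3"
rm "file 1" "file 2" "file 3"
tar -xvf demo.tar "file 1" "file 2"   # only the named members are extracted
ls                                    # "file 3" stays inside the archive
```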
14. Extract Group of Files using Wildcard
To extract a group of files we use wildcard-based extraction. For example, to extract all
files whose names end with .php from a tar, tar.gz or tar.bz2 archive file:
# tar -xvf Phpfiles-org.tar --wildcards '*.php'
# tar -zxvf Phpfiles-org.tar.gz --wildcards '*.php'
# tar -jxvf Phpfiles-org.tar.bz2 --wildcards '*.php'
18. Check the Size of the tar, tar.gz and tar.bz2 Archive File
To check the size of any tar, tar.gz and tar.bz2 archive file, use the following command. For
example the below command will display the size of archive file in Kilobytes (KB).
# tar -czf - tecmint-14-09-12.tar | wc -c
# tar -czf - MyImages-14-09-12.tar.gz | wc -c
# tar -czf - Phpfiles-org.tar.bz2 | wc -c
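As a runnable sketch (the file is generated on the spot), piping the archive stream to wc -c reports its size in bytes:

```shell
cd "$(mktemp -d)"
head -c 100000 /dev/zero > data.bin   # 100,000 bytes of zeros: compresses very well
tar -czf - data.bin | wc -c           # byte count of the gzip-compressed archive
tar -cf  - data.bin | wc -c           # uncompressed archive size, for comparison
```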
Tar Usage and Options
2. shar Command
shar, which stands for shell archive, is a shell script whose execution will recreate the files.
shar produces a self-extracting archive file; it is a legacy utility and needs the Unix Bourne shell to
extract the files. shar has the advantage of being plain text; however, it is potentially dangerous,
since its output is executable.
shar options
Note: The ‘-o‘ option is required if the ‘-l‘ or ‘-L‘ option is used and the ‘-n‘ option is required if
the ‘-a‘ option is used.
shar Examples
Create a shar archive file.
# shar file_name.extension > filename.shar
3. ar Command
ar is the creation and manipulation utility for archives, mainly used for binary object file
libraries. ar stands for archiver; it can be used to create an archive of any kind for any purpose,
but it has largely been replaced by ‘tar’ and nowadays is used only to create and update static
library files.
ar options
ar Examples
Create an archive using the ‘ar‘ tool for a static library, say ‘libmath.a‘, with the object files
‘subtraction’ and ‘division’:
# ar cr libmath.a subtraction.o division.o
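Assuming the ar tool from binutils is available, members can then be listed or extracted; a throwaway sketch (ar archives arbitrary files, so placeholders stand in for real object files):

```shell
cd "$(mktemp -d)"
echo "stub" > subtraction.o   # placeholder files standing in for compiled objects
echo "stub" > division.o
ar cr libmath.a subtraction.o division.o
ar t libmath.a                # list archive members
ar x libmath.a subtraction.o  # extract a single member
```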
cpio options
cpio Examples
5. Gzip
gzip is the standard and widely used file compression and decompression utility. gzip allows file
concatenation. Compressing a tar archive with gzip produces a tarball in the ‘*.tar.gz‘ or
‘*.tgz‘ format.
gzip options
gzip Examples
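A minimal round-trip sketch (the file name is a throwaway example):

```shell
cd "$(mktemp -d)"
seq 1 1000 > numbers.txt
gzip numbers.txt         # replaces numbers.txt with numbers.txt.gz
ls
gzip -l numbers.txt.gz   # show compressed/uncompressed sizes and ratio
gunzip numbers.txt.gz    # restore the original file
```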
Note: The architecture and functionality of ‘gzip’ make it difficult to recover a corrupted
gzipped tar archive. It is advised to make several backups of gzipped important files, at
different locations.
20.cal Command
The cal command prints a calendar on the standard output.
$ cal
21.cat Command
cat command is used to view contents of a file or concatenate files, or data provided on
standard input, and display it on the standard output.
$ cat file.txt
The cat (short for “concatenate“) command is one of the most frequently used commands in
Linux/Unix-like operating systems. The cat command allows us to create single or multiple files, view
the contents of a file, concatenate files and redirect output to the terminal or to files. In this article, we are
going to find out handy uses of the cat command with examples in Linux.
General Syntax
It awaits input from the user; type the desired text and press CTRL+D (hold down the Ctrl key and type ‘d‘) to
exit. The text will be written to the test2 file. You can see the content of the file with the following cat
command.
# cat test2
# cat -e test
7. Display Tab separated Lines in File
In the below output, we can see that each TAB is shown as the ‘^I‘ character.
# cat -T test
The command can also be used to concatenate (join) multiple files into one single file using the
“>” Linux redirection operator.
# cat file1.txt file2.txt file3.txt > file-all.txt
By using the append redirection operator you can add the content of a new file to the bottom of
file-all.txt with the following syntax.
# cat file4.txt >> file-all.txt
The cat command can be used to copy the content of a file to a new file. The new file can be
named arbitrarily. For example, copy the file from the current location to the /tmp/ directory.
# cat file1.txt > /tmp/file1.txt
Copy the file from the current location to /tmp/ directory and change its name.
# cat file1.txt > /tmp/newfile.cfg
A less common usage of the cat command is to create a new file with the below syntax. When finished
editing the file, hit CTRL+D to save and exit the new file.
# cat > new_file.txt
In order to number all output lines of a file, including empty lines, use the -n switch.
# cat -n file-all.txt
To display only the number of each non-empty line use the -b switch.
# cat -b file-all.txt
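A quick sketch contrasting the two switches on a throwaway file:

```shell
cd "$(mktemp -d)"
printf 'first\n\nsecond\n' > demo.txt   # three lines, the middle one empty
cat -n demo.txt   # numbers all lines, including the empty one (1, 2, 3)
cat -b demo.txt   # numbers only the non-empty lines (1, 2)
```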
is practically the reverse version of the cat command (its name is cat spelled backwards), which prints each
line of a file starting from the bottom line and finishing on the top line to your machine’s
standard output.
# tac file-all.txt
One of the most important options of the command is the -s (--separator) switch, which splits the
output into records based on a string or a keyword from the file.
# tac file-all.txt --separator "two"
Another important usage of the tac command is that it can provide great help in debugging
log files, by reversing the chronological order of the log contents.
$ tac /var/log/auth.log
Or to display the last lines
$ tail /var/log/auth.log | tac
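A minimal sketch of tac on a throwaway file:

```shell
cd "$(mktemp -d)"
printf 'one\ntwo\nthree\n' > demo.txt
tac demo.txt   # prints the lines in reverse order: three, two, one
```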
Like the cat command, tac does an excellent job of manipulating text files, but it should be
avoided with other types of files, especially binary files, or files where the first line denotes the
program that will run them.
22.chgrp Command
chgrp command is used to change the group ownership of a file. Provide the new group name
as its first argument and the name of file as the second argument like this:
$ chgrp niki users.txt
23.chmod Command
chmod command is used to change/update file access permissions like this.
$ chmod +x sysinfo.sh
24.chown Command
chown command changes/updates the user and group ownership of a file/directory like this.
$ chown -R www-data:www-data /var/www/html
Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on
Accounts
Adding User Accounts
To add a new user account, you can run either of the following two commands as root.
# adduser [new_account]
# useradd [new_account]
When a new user account is added to the system, the following operations are performed.
2. The following hidden files are copied into the user’s home directory, and will be used to
provide environment variables for his/her user session.
.bash_logout
.bash_profile
.bashrc
4. A group is created and given the same name as the new user account.
Understanding /etc/passwd
The full account information is stored in the /etc/passwd file. This file contains a record per
system user account and has the following format (fields are delimited by a colon).
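As a sketch, the colon-delimited fields can be pulled apart with awk; the record below is made up for illustration:

```shell
# Fields of an /etc/passwd record: username:password:UID:GID:comment:home:shell
line='niki:x:1001:1001:Niki,,,:/home/niki:/bin/bash'   # made-up sample record
echo "$line" | awk -F: '{ printf "user=%s uid=%s home=%s shell=%s\n", $1, $3, $6, $7 }'
```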
5. The [Default shell] is the shell that will be made available to this user when he or she logs
into the system.
Group information is stored in the /etc/group file. Each record has the following format.
After adding an account, you can edit the following information (to name a few fields) using the
usermod command, whose basic syntax is as follows.
# usermod [options] [username]
# groups niki
# id niki
Now let’s execute all the above commands in one go.
# usermod --expiredate 2019-10-25 --append --groups root,users --home /tmp --shell /bin/sh
niki
In the example above, we set the expiry date of the niki user account to October 25th,
2019. We also add the account to the root and users groups. Finally, we set sh as its
default shell and change the location of the home directory to /tmp:
Creating a new group for read and write access to files that need to be accessed by several
users
Deleting a group
You can delete a group with the following command.
# groupdel [group_name]
If there are files owned by group_name, they will not be deleted, but they will remain owned by
the numeric GID of the deleted group.
Group Management
Every time a new user account is added to the system, a group with the same name is created
with the username as its only member. Other users can be added to the group later. One of the
purposes of groups is to implement a simple access control to files and other system resources
by setting the right permissions on those resources.
All of them need read and write access to a file called common.txt located somewhere on your
local system, or maybe on a network share that user1 has created. You may be tempted to do
something like,
# chmod 660 common.txt
OR
# chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file
name]
However, this will only provide read and write access to the owner of the file and to those users
who are members of the group owner of the file (user1 in this case). Again, you may be
tempted to add user2 and user3 to group user1, but that will also give them access to the rest
of the files owned by that group.
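The usual fix is a dedicated group that owns only the shared file. A sketch follows; the group and user names are assumptions, the group-management steps need root and are therefore shown as comments, and the permission change itself is runnable:

```shell
# Root-only steps (illustrative, not run here):
#   groupadd common              # dedicated group for the shared file
#   usermod -aG common user2     # add the other users to it
#   usermod -aG common user3
#   chgrp common common.txt      # make the new group own the file
cd "$(mktemp -d)"
touch common.txt
chmod 660 common.txt      # rw for owner and group, nothing for others
stat -c %a common.txt     # prints the octal mode: 660
```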
Understanding Setuid
When the setuid permission is applied to an executable file, a user running the program
inherits the effective privileges of the program’s owner. Since this approach can reasonably
raise security concerns, the number of files with setuid permission must be kept to a minimum.
You will likely find programs with this permission set when a system user needs to access a file
owned by root.
Summing up, it isn’t just that the user can execute the binary file, but also that he can do so
with root’s privileges. For example, let’s check the permissions of /bin/passwd. This binary is
used to change the password of an account, and modifies the /etc/shadow file. The superuser
can change anyone’s password, but all other users should only be able to change their own.
Thus, any user should have permission to run /bin/passwd, but only root will be able to specify
an account. Other users can only change their corresponding passwords.
Understanding Setgid
When the setgid bit is set, the effective GID of the real user becomes that of the group owner.
Thus, any user can access a file under the privileges granted to the group owner of such file. In
addition, when the setgid bit is set on a directory, newly created files inherit the same group as
the directory, and newly created subdirectories will also inherit the setgid bit of the parent
directory. You will most likely use this approach whenever members of a certain group need
access to all the files in a directory, regardless of the file owner’s primary group.
# chmod g+s [filename]
To set the setgid in octal form, prepend the number 2 to the current (or desired) basic
permissions.
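For example, on a throwaway directory (2 prepended to the basic permissions 775):

```shell
cd "$(mktemp -d)"
mkdir shared
chmod 2775 shared   # 2 = setgid bit, 775 = rwxrwxr-x
stat -c %a shared   # prints: 2775
ls -ld shared       # the group execute slot shows 's' when setgid is set
```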
To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic
permissions.
# chmod 1755 [directory]
Without the sticky bit, anyone able to write to the directory can delete or rename files. For that
reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.
After executing those two commands, file1 will be immutable (which means it cannot be
moved, renamed, modified or deleted) whereas file2 will enter append-only mode (it can only be
opened in append mode for writing).
chattr (Change Attribute) is a command-line Linux utility that is used to set/unset certain
attributes on a file, to protect important files and folders against accidental deletion or
modification, even when you are logged in as the root user.
Linux native filesystems, i.e. ext2, ext3, ext4, btrfs, etc., support all the flags, though not all
flags are supported by non-native filesystems. One cannot delete or modify a file/folder once attributes
are set with the chattr command, even with full permissions on it.
Syntax of chattr
# chattr [operator] [flags] [filename]
Attributes and Flags
Following is a list of common attributes and their associated flags that can be set/unset using the
chattr command.
1. If a file with the ‘A‘ attribute set is accessed, its atime record is not updated.
2. If a file with the ‘S‘ attribute set is modified, the changes are updated synchronously on the
disk.
3. A file set with the ‘a‘ attribute can only be opened in append mode for writing.
4. A file set with the ‘i‘ attribute cannot be modified (immutable): no renaming, no
symbolic-link creation, no execution, no writing; only the superuser can unset the attribute.
5. If a file has the ‘j‘ attribute set, all of its information is updated in the ext3 journal before
being updated in the file itself.
6. A file set with the ‘t‘ attribute has no tail-merging.
7. A file with the ‘d‘ attribute will no longer be a candidate for backup when the dump process is
run.
8. When a file with the ‘u‘ attribute is deleted, its data is saved. This enables the user to ask
for its undeletion.
Operator
Here, we are going to demonstrate some of the chattr command examples to set/unset
attributes to a file and folders.
# ls -l
To set an attribute we use the + sign, and to unset it the – sign, with the chattr command. So,
let’s set the immutable bit on the files with the +i flag to prevent anyone from deleting them; even
the root user won’t have permission to delete them.
# chattr +i demo/
# chattr +i important_file.conf
Note: The immutable bit (+i) can only be set by the superuser (i.e. root) or a user with sudo
privileges.
After setting immutable bit, let’s verify the attribute with command ‘lsattr‘.
# lsattr
Now, try to forcefully delete, rename, or change the permissions of the files; it won’t be allowed, saying
“Operation not permitted“.
# rm -rf demo/
# mv demo/ demo_alter
# chmod 755 important_file.conf
After resetting permissions, verify the immutable status of files using ‘lsattr‘ command.
# lsattr
You can see in the above results that the ‘i‘ flag is removed, which means you can safely remove all the
files and folders residing in the niki folder.
# rm -rf *
Now try to create a new system user; you will get an error message saying ‘cannot open
/etc/passwd‘.
This way you can set immutable permissions on your important files or system configuration
files to prevent their deletion.
After setting append mode, the file can be opened for writing data in append mode only. You
can unset the append attribute as follows.
# chattr -a example.txt
Now try to replace the already existing content of the file example.txt; you will get an error saying
‘Operation not permitted‘.
Now try to append new content to the existing file example.txt and verify it.
# echo "replace contain on file." >> example.txt
# cat example.txt
To secure an entire directory and its files, we use the ‘-R‘ (recursive) switch with the ‘+i‘ flag along with the
full path of the folder.
# chattr -R +i myfolder
After setting recursively attribute, try to delete the folder and its files.
# rm -rf myfolder/
To unset the permission, we use the same ‘-R’ (recursive) switch with the ‘-i’ flag along with the full path of
the folder.
# chattr -R -i myfolder
If authentication succeeds, you will be logged on as root with the current working directory as
the same as you were before. If you want to be placed in root’s home directory instead, run.
$ su -
and then enter root’s password.
The above procedure requires that a normal user knows root’s password, which poses a serious
security risk. For that reason, the sysadmin can configure the sudo command to allow an
ordinary user to execute commands as a different user (usually the superuser) in a very
controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run
one or more specific privileged commands and no others.
To authenticate using sudo, the user uses his/her own password. After entering the command,
we will be prompted for our password (not the superuser’s) and if the authentication succeeds
(and if the user has been granted privileges to run the command), the specified command is
carried out.
To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is
recommended that this file is edited using the visudo command instead of opening it directly
with a text editor.
# visudo
Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
This line lets you specify the directories that will be used for sudo, and is used to prevent using
user-specific directories, which can harm the system.
1. The first ALL keyword indicates that this rule applies to all hosts.
2. The second ALL indicates that the user in the first column can run commands with the
privileges of any user.
3. The third ALL means any command can be run.
shimon ALL=NOPASSWD:/bin/updatedb
The NOPASSWD directive allows user shimon to run /bin/updatedb without needing to enter
his password.
the line is identical to that of a regular user. This means that members of the group “admin”
can run all commands as any user on all hosts.
To see what privileges are granted to you by sudo, use the “-l” option to list them.
For example, with PAM, it doesn’t matter whether your password is stored in /etc/shadow or
on a separate server inside your network.
For example, when the login program needs to authenticate a user, PAM dynamically provides
the library that contains the functions for the right authentication scheme. Thus, changing the
authentication scheme for the login application (or any other program using PAM) is easy since
it only involves editing a configuration file (most likely, a file named after the application,
located inside /etc/pam.d, and less likely in /etc/pam.conf).
Files inside /etc/pam.d indicate which applications are using PAM natively. In addition, we can
tell whether a certain application uses PAM by checking if the PAM library (libpam) has been
linked to it:
# ldd $(which login) | grep libpam # login uses PAM
# ldd $(which top) | grep libpam # top does not use PAM
Let’s examine the PAM configuration file for passwd – yes, the well-known utility to change
user’s passwords. It is located at /etc/pam.d/passwd:
# cat /etc/pam.d/passwd
1. account: this module type checks whether the account is valid, e.g. whether it has expired
or whether the user is allowed access at this time of day.
2. auth: this module type verifies that the user is who he/she claims to be (e.g. by asking for a
password) and grants any needed privileges.
3. password: this module type allows the user or service to update their password.
4. session: this module type indicates what should be done before and/or after the
authentication succeeds.
Control indicates what should happen if the authentication with this module fails:
1. requisite: if the authentication via this module fails, overall authentication will be
denied immediately.
2. required is similar to requisite, although all other listed modules for this service will be
called before denying authentication.
3. sufficient: if authentication via this module succeeds, PAM immediately grants
authentication (provided no previous module marked as required has failed); if it fails, the
remaining modules are still tried.
4. optional: if the authentication via this module fails or succeeds, nothing happens unless
this is the only module of its type defined for this service.
5. include means that the lines of the given type should be read from another file.
6. substack is similar to include, but authentication failures or successes do not cause the
exit of the complete module stack, only of the substack.
‘su’ Vs ‘sudo’
‘su‘ forces you to share your root password with other users, whereas ‘sudo‘ makes it possible to
execute system commands without the root password. ‘sudo‘ lets you use your own password to
execute system commands, i.e., it delegates system responsibility without sharing the root password.
What is ‘sudo’?
‘sudo‘ is a setuid root binary that executes root commands on behalf of authorized users;
the users need to enter their own password to execute a system command prefixed by
‘sudo‘.
Editing the ‘/etc/sudoers’ file (via visudo) to something like the below pattern may really be very
dangerous, unless you trust all the listed users completely.
Parameters of sudo
A properly configured ‘sudo‘ is very flexible, and the set of commands that may be run can
be precisely configured.
Q1. You have a user niki who is a Database Administrator. You are supposed to provide him
all access on the Database Server (beta.database_server.com) only, and not on any other host.
For the above situation the ‘sudo‘ line can be written as:
niki beta.database_server.com=(shimon) ALL
Q3. You have a sudo user ‘cat‘ which is supposed to run command ‘dog‘ only.
If the number of commands the user is supposed to run is under 10, we can place all the
commands together, separated by commas, as shown below:
niki beta.database_server.com=(cat) /usr/bin/command1, /usr/sbin/command2,
/usr/sbin/command3 ...
If the list of commands grows to the point where it is literally not possible to type each
command manually, we need to use aliases. Aliases! Yeah, the mechanism where a long
command or a list of commands can be referred to by a small and easy keyword.
A few alias Examples, which can be used in place of entry in ‘sudo‘ configuration file.
User_Alias ADMINS=niki,shimon,nik
User_Alias WEBMASTERS=shimon,niki
WEBMASTERS WEBSERVERS=(www) APACHE
Cmnd_Alias PROC=/bin/kill,/bin/killall,/usr/bin/top
It is possible to specify a system group in place of individual users; members of that group are
matched by prefixing ‘%’ as below:
%apacheadmin WEBSERVERS=(www) APACHE
We can execute a ‘sudo‘ command without entering password by using ‘NOPASSWD‘ flag.
niki ALL=(ALL) NOPASSWD: PROC
Here the user ‘niki‘ can execute all the commands aliased under “PROC”, without entering a
password.
“sudo” provides a robust and safe environment with loads of flexibility compared to
‘su‘. Moreover, “sudo” configuration is easy. Some Linux distributions have “sudo” enabled by
default, while most of today’s distros need you to enable it as a security measure.
To add a user (shimon) to sudo, just run the below command as root.
adduser shimon sudo
sudo allows a permitted user to execute a command as root (or another user), as specified by
the security policy:
1. It reads and parses /etc/sudoers, looks up the invoking user and its permissions,
2. then prompts the invoking user for a password (normally the user’s password, but it can
as well be the target user’s password. Or it can be skipped with NOPASSWD tag),
3. after that, sudo creates a child process in which it calls setuid() to switch to the target
user
4. next, it executes a shell or the command given as arguments in the child process above.
Below are ten /etc/sudoers file configurations to modify the behavior of sudo command using
Defaults entries.
$ sudo cat /etc/sudoers
1. Used when a system administrator does not trust sudo users to have a secure PATH
environment variable
2. To separate the “root path” and the “user path”. Only users defined by exempt_group are not
affected by this setting.
To set it, add the line
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/
bin"
To avoid such a scenario, you can configure sudo to run other commands only from a pseudo-
pty using the use_pty parameter, whether I/O logging is turned on or not, as follows:
Defaults use_pty
To log hostname and the four-digit year in the custom log file, use log_host and log_year
parameters respectively as follows:
Defaults log_host, log_year, logfile="/var/log/sudo.log"
Some escape sequences are supported, such as %{seq}, which expands to a
monotonically increasing base-36 sequence number, such as 000001, where every two digits
are used to form a new directory, e.g. 00/00/01 as in the example below:
$ cd /var/log/sudo-io/
$ ls
$ cd 00/00/01
$ ls
$ cat log
You can view the rest of the files in that directory using the cat command.
Defaults lecture="always"
Additionally, you can set a custom lecture file with the lecture_file parameter, type the
appropriate message in the file:
Defaults lecture_file="/path/to/file"
The default message is “Sorry, try again.”; you can modify the message using the
badpass_message parameter as follows:
Defaults badpass_message="Password is wrong, please try again"
To set a password timeout (default is 5 minutes) using passwd_timeout parameter, add the line
below:
Defaults passwd_timeout=2
25.cksum Command
cksum command is used to display the CRC checksum and byte count of an input file.
$ cksum README.txt
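A self-contained sketch; the second field of the output is the byte count:

```shell
cd "$(mktemp -d)"
printf 'hello\n' > README.txt        # 6 bytes of content
cksum README.txt                     # prints: <CRC> <byte count> README.txt
```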
26.clear Command
clear command lets you clear the terminal screen, simply type.
$ clear
27.cmp Command
cmp performs a byte-by-byte comparison of two files like this.
$ cmp file1 file2
28.comm Command
comm command is used to compare two sorted files line-by-line as shown below.
$ comm file1 file2
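A throwaway sketch (comm requires both inputs to be sorted):

```shell
cd "$(mktemp -d)"
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n'  > file2
comm file1 file2
# Column 1: lines only in file1 (apple)
# Column 2: lines only in file2 (date)
# Column 3: lines common to both (banana, cherry)
comm -12 file1 file2   # suppress columns 1 and 2: show only the common lines
```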
29.cp Command
cp command is used for copying files and directories from one location to another.
$ cp /home/tecmint/file1 /home/tecmint/Personal/
The cp command is used to copy files from one directory to another, the easiest syntax for
using it is as follows:
# cp [options….] source(s) destination
Consider the commands below; normally, you would type two different commands to copy the
same file into two separate directories, as follows:
# cp -v /home/niki/bin/sys_info.sh /home/niki/test
# cp -v /home/niki/bin/sys_info.sh /home/niki/tmp
Assuming that you want to copy a particular file into five or more directories, does this mean
you would have to type five or more cp commands?
To do away with this problem, you can employ the echo command, a pipe, xargs command
together with the cp command in the form below:
# echo /home/niki/test/ /home/niki/tmp | xargs -n 1 cp -v /home/niki/bin/sys_info.sh
In the form above, the paths to the directories (dir1,dir2,dir3…..dirN) are echoed and piped as
input to the xargs command where:
1. -n 1 – tells xargs to use at most one argument per command line and send it to the cp
command.
2. cp – copies the file.
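The whole pipeline can be tried end-to-end with throwaway directories and a stand-in file:

```shell
cd "$(mktemp -d)"
mkdir test tmp
echo '#!/bin/sh' > sys_info.sh                        # stand-in for the script being copied
echo "$PWD/test/ $PWD/tmp/" | xargs -n 1 cp -v sys_info.sh
ls test tmp                                           # the file now exists in both directories
```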
Advanced Copy Command – Shows Progress Bar While Copying Large Files/Folders
Advanced-Copy is a powerful command-line program which is very similar to, but a slightly
modified version of, the original cp command. This modified version of the cp command adds a
progress bar along with the total time taken to complete, while copying large files from one
location to another. This additional feature is very useful especially while copying large files,
as it gives the user an idea of the status of the copy process and how long it will take to
complete.
But I suggest you compile it from sources. For this you require the original version of GNU
coreutils and the latest patch file of Advanced-Copy. The whole installation should go like this:
First, download the latest version of GNU coreutils and the patch file using the wget command, then
compile and patch it as shown below; you must be the root user to perform all commands.
# wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.21.tar.xz
# tar xvJf coreutils-8.21.tar.xz
# cd coreutils-8.21/
# wget https://raw.githubusercontent.com/atdt/advcpmv/master/advcpmv-0.5-8.21.patch
# patch -p1 -i advcpmv-0.5-8.21.patch
# ./configure
# make
You might get the following error while running the “./configure” command:
checking whether mknod can create fifo without root privileges... configure: error: in
`/home/tecmint/coreutils-8.21':
configure: error: you should not run configure as root (set FORCE_UNSAFE_CONFIGURE=1 in
environment to bypass this check)
Once compilation completes, two new binaries are created at src/cp and src/mv. You need to
replace your original cp and mv commands with these two new commands to get the
progress bar while copying files.
# cp src/cp /usr/local/bin/cp
# cp src/mv /usr/local/bin/mv
Note: If you don’t want to copy these commands over the standard system paths, you can still
run them from the source directory as “./cp” and “./mv”, or install them under new names as shown:
# mv ./src/cp /usr/local/bin/cpg
# mv ./src/mv /usr/local/bin/mvg
You need to log out and log back in for this to work correctly.
Remember that the original commands are not overwritten. If you ever need to use them, or
you’re not happy with the new progress bar and want to revert to the original cp and mv
commands, you can call them via /usr/bin/cp and /usr/bin/mv.
Progress – A Tiny Tool to Monitor Progress for (cp, mv, dd, tar, etc.)
Progress, formerly known as Coreutils Viewer, is a light C command that searches for coreutils
basic commands such as cp, mv, tar, dd, gzip/gunzip, cat, grep etc currently being executed on
the system and shows the percentage of data copied, it only runs on Linux and Mac OS X
operating systems.
Additionally, it also displays important aspects such as estimated time and throughput, and
offers users a “top-like” mode.
It scans the /proc filesystem for the commands it recognizes, then searches the fd and
fdinfo directories to find open files and seek positions, and reports the status of the largest files.
It is a very lightweight tool, compatible with practically any command.
After successfully installing it, simply run this tool from your terminal, below we shall walk
through a few examples of using Progress on a Linux system.
You can view all the coreutils commands that Progress works with by running it without any
options, provided none of the coreutils commands is being executed on the system:
$ progress
To display estimated I/O throughput and estimated remaining time for ongoing coreutils
commands, enable the -w option:
$ progress -w
In the next example, you can open two or more terminal windows, run a coreutils
command in each, and watch their progress from another terminal window.
The command below will enable you to monitor all the current and imminent instances of
coreutils commands:
$ watch progress -q
One misconception that we have to clear up immediately is that the /proc directory is NOT a
real file system, in the sense of the term. It is a virtual file system. Contained within procfs
is information about processes and other system information. It is mapped to /proc and
mounted at boot time.
First, lets get into the /proc directory and have a look around:
# cd /proc
The first thing you will notice is that there are some familiar-sounding files, and then a
whole bunch of numbered directories. The numbered directories represent processes, better
known as PIDs, and within each is information about the command that occupies it. The files contain system
information such as memory (meminfo), CPU information (cpuinfo), and available filesystems.
Running the cat command on any of the files in /proc will output its contents. Information
about these files is available in the proc man page by running:
# man 5 proc
Within /proc’s numbered directories you will find a few files and links. Remember that these
directories’ numbers correlate to the PID of the command being run within them. Let’s use an
example: on my system, there is a folder named /proc/12:
# cd /proc/12
# ls
If I run:
# cat /proc/12/status
we can see details about this process from its status file. We can also see who is running
it: the UID and GID are 0, indicating that this process belongs to the root user.
In any numbered directory, you will have a similar file structure. The most important ones, and
their descriptions, are as follows:
pv is a terminal-based tool that allows you to monitor the progress of data being sent
through a pipe. When using the pv command, it gives you a visual display of progress
information such as time elapsed, percentage completed, throughput and ETA:
On Gentoo Linux
Use emerge package manager to install pv command as shown.
# emerge --ask sys-apps/pv
On FreeBSD
You can use the port to install it as follows:
# cd /usr/ports/sysutils/pv/
# make install clean
The standard input of pv will be passed through to its standard output, and progress (output)
will be printed on standard error. It behaves much like the cat command in Linux.
The options used with pv are divided into three categories: display switches, output modifiers
and general options.
3. To turn on the ETA timer, which tries to guess how long it will take before the operation
completes, use the --eta option. The guess is based on previous transfer rates and the total data size.
1. To wait until the first byte is transferred before displaying progress information, use the
--wait option.
2. To assume the total amount of data to be transferred is SIZE bytes when computing
percentage and ETA, use the --size SIZE option.
3. To specify seconds between updates, use the --interval SECONDS option.
4. Use the --force option to force an operation. This option forces pv to display visuals when
standard error is not a terminal.
5. The general options are --help to display usage information and --version to display
version information.
1. For example, to copy the opensuse.vdi file to /tmp/opensuse.vdi, run this command and watch
the progress bar:
# pv opensuse.vdi > /tmp/opensuse.vdi
2. To make a zip file from your /var/log/syslog file, run the following command.
# pv /var/log/syslog | zip > syslog.zip
3. To count the number of lines, word and bytes in the /etc/hosts file while showing progress
bar only, run this command below.
# pv -p /etc/hosts | wc
30.date Command
date command displays/sets the system date and time like this.
$ date
$ date --set="8 JUN 2019 13:00:00"
How to Set Time, Timezone and Synchronize System Clock Using timedatectl Command
The timedatectl command allows you to query and change the configuration of the system
clock and its settings, you can use this command to set or change the current date, time and
timezone or enable automatic system clock synchronization with a remote NTP server.
1. To display the current time and date on your system, use the timedatectl command from the
commandline as follows:
# timedatectl status
2. The time on your Linux system is always managed through the timezone set on the system.
To view your current timezone, do it as follows:
# timedatectl
OR
# timedatectl | grep Time
4. To find the local timezone according to your location, run the following command:
# timedatectl list-timezones | egrep -o "Asia/B.*"
# timedatectl list-timezones | egrep -o "Europe/L.*"
# timedatectl list-timezones | egrep -o "America/N.*"
5. To set your local timezone in Linux, we will use the set-timezone switch as shown below.
# timedatectl set-timezone "Asia/Kolkata"
It is always recommended to use and set the coordinated universal time, UTC.
# timedatectl set-timezone UTC
You need to type the correct timezone name, otherwise you may get errors when changing the
timezone; for example, the timezone “Asia/Kalkata” is misspelled and therefore causes an error.
How to Set Time and Date in Linux
6. You can set the date and time on your system using the timedatectl command as follows:
To set the time only, we can use the set-time switch with the time in HH:MM:SS (Hour,
Minute, Second) format.
# timedatectl set-time 15:58:30
7. To set the date only, we can use the set-time switch with the date in YYYY-MM-DD (Year,
Month, Day) format.
# timedatectl set-time 2019-11-20
9. To set your hardware clock to coordinated universal time, UTC, use the set-local-rtc boolean-
value option as follows:
First Find out if your hardware clock is set to local timezone:
# timedatectl | grep local
Set your hardware clock to local timezone:
# timedatectl set-local-rtc 1
Set your hardware clock to coordinated universal time (UTC):
# timedatectl set-local-rtc 0
NTP (Network Time Protocol) is used to synchronize the system clock between computers. The timedatectl utility enables you to automatically sync
your Linux system clock with a remote group of servers using NTP.
Please note that you must have NTP installed on the system to enable automatic time
synchronization with NTP servers.
To start automatic time synchronization with remote NTP server, type the following command
at the terminal.
# timedatectl set-ntp true
To disable NTP time synchronization, type the following command at the terminal.
# timedatectl set-ntp false
31.dd Command
dd command is used for copying files, and for converting and formatting them according to flags provided on
the command line. It can strip headers, extract parts of binary files, and so on.
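A minimal sketch of both uses, with illustrative paths; bs sets the block size, count the number of blocks, and skip drops leading blocks (e.g. a fixed-size header):

```shell
# Create a 1 MiB file of zeros (1024 blocks of 1024 bytes).
dd if=/dev/zero of=/tmp/dd-demo.img bs=1024 count=1024 2>/dev/null

# Copy the same file while skipping its first 512-byte block,
# the way you would strip a fixed-size header.
dd if=/tmp/dd-demo.img of=/tmp/dd-stripped.img bs=512 skip=1 2>/dev/null
```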
32.df Command
df command is used to show file system disk space usage as follows.
$ df -h
33.diff Command
diff command is used to compare two files line by line. It can also be used to find the difference
between two directories in Linux like this:
$ diff file1 file2
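A quick sketch with made-up files: -u prints the unified format most tools expect, and -r compares two directories recursively:

```shell
printf 'one\ntwo\n'   > /tmp/diff-a.txt
printf 'one\nthree\n' > /tmp/diff-b.txt

# Unified diff; diff exits with status 1 when the files differ.
diff -u /tmp/diff-a.txt /tmp/diff-b.txt || true

# Recursive comparison of two directories.
mkdir -p /tmp/diff-dirA /tmp/diff-dirB
diff -r /tmp/diff-dirA /tmp/diff-dirB
```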
34.dir Command
dir command works like the Linux ls command; it lists the contents of a directory.
$ dir
2.To list all files in a directory including . (hidden) files, use the -a option. You can include the -l
option to format output as a list.
# dir -a
# dir -al
3.When you need to list only directory entries instead of directory content, you can use the -d
option.
# dir -d /etc
When you use -dl, it shows a long listing of the directory including owner, group owner,
permissions.
# dir -dl /etc
4.In case you want to view the index number of each file, use the option -i. From the output
below, you can see that the first column shows numbers. These numbers are called inodes,
sometimes referred to as index nodes or index numbers.
An inode in Linux systems is a data structure on a filesystem that stores information about a file
except its filename and actual data.
# dir -il
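One way to see this separation of name and inode: hard links are extra names for the same inode, so both entries show the same index number (file names here are illustrative):

```shell
rm -f /tmp/inode-orig /tmp/inode-link
echo "data" > /tmp/inode-orig
ln /tmp/inode-orig /tmp/inode-link   # hard link, not a symlink

# The first column printed by ls -i is the shared inode number.
ls -i /tmp/inode-orig /tmp/inode-link
```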
5.You can view file sizes using the -s option. If you need to sort the files according to size, then
use the -S option.
In this case you also need to use the -h option to view the file sizes in a human-readable
format.
# dir -shl
7.To list files without their owners, you have to use the -g option, which works like the -l option
except that it does not print the file owner. To list files without the group owner, use the -G option
as follows.
# dir -ahgG /home/niki
You can also view the author of a file by using the --author flag as follows.
# dir -al --author /home/niki
8.You may wish to view directories before all other files, and this can be done by using the
--group-directories-first flag as follows.
# dir -l --group-directories-first
9.You can also view subdirectories recursively, meaning that you can list all other subdirectories
in a directory using the -R option as follows.
# dir -R
10.To view user and group IDs, you need to use the -n option.
# dir -l --author
# dir -nl --author
35.dmidecode Command
dmidecode command is a tool for retrieving hardware information of any Linux system. It
dumps a computer’s DMI (a.k.a SMBIOS) table contents in a human-readable format for easy
retrieval.
37.echo Command
echo command prints the line of text provided to it.
$ echo "This is Shimon"
2. Declare a variable and echo its value. For example, declare a variable x and assign it the
value 10.
$ x=10
$ echo $x
Note: The ‘-e‘ option in Linux enables interpretation of backslash-escaped characters.
3. Using option ‘\b‘ – backspace with the backslash interpreter ‘-e‘, which removes all the spaces in
between.
$ echo -e "This \bis \ba \bcommunity"
4. Using option ‘\n‘ – new line with the backslash interpreter ‘-e‘ to start a new line from where it is
used.
$ echo -e "This \nis \na \ncommunity "
5. Using option ‘\t‘ – horizontal tab with the backslash interpreter ‘-e‘ to have horizontal tab
spaces.
7. Using option ‘\v‘ – vertical tab with the backslash interpreter ‘-e‘ to have vertical tab spaces.
$ echo -e "\vThis \vis \va \vcommunity"
8. How about using option new Line ‘\n‘ and vertical tab ‘\v‘ simultaneously.
$ echo -e "\n\vThis \n\vis \n\va \n\vcommunity"
Note: We can double the vertical tab, horizontal tab and new line spacing by using the option
twice, or as many times as required.
9. Using option ‘\r‘ – carriage return with the backslash interpreter ‘-e‘ to have the specified carriage
return in the output.
$ echo -e "This \ris a community"
10. Using option ‘\c‘ – suppress trailing new line with the backslash interpreter ‘-e‘ to continue
without emitting a new line.
$ echo -e "This is a community \cof Linux Nerds"
12. Using option ‘\a‘ – alert (bell) with the backslash interpreter ‘-e‘ to emit a sound alert.
$ echo -e "This is a community of \aLinux Nerds"
13. Print all the files/folders using the echo command (an ls command alternative).
$ echo *
14. Print files of a specific kind. For example, let’s assume you want to print all ‘.jpeg‘ files, use
the following command.
$ echo *.jpeg
15. echo can be used with the redirect operator to output to a file instead of standard output.
$ echo "Test Page" > testpage
echo Options
Option  Description
\b      backspace
\\      backslash
\n      new line
\r      carriage return
\t      horizontal tab
\v      vertical tab
38.eject Command
eject command is used to eject removable media such as a DVD/CD-ROM or floppy disk from the
system.
$ eject /dev/cdrom
$ eject /mnt/cdrom/
$ eject /dev/sda
39.env Command
env command lists all the current environment variables; it can be used to set them as well.
$ env
40.exit Command
exit command is used to exit a shell like so.
$ exit
41.expr Command
expr command is used to calculate an expression as shown below.
$ expr 20 + 30
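One caveat worth showing: the multiplication operator must be escaped so the shell does not expand * as a wildcard, and division is integer division:

```shell
expr 20 + 30    # prints 50
expr 4 \* 5     # prints 20; a bare * would be globbed by the shell
expr 100 / 7    # integer division, prints 14
```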
42.factor Command
factor command is used to show the prime factors of a number.
$ factor 10
43.find Command
find command lets you search for files in a directory as well as its sub-directories. It searches
for files by attributes such as permissions, users, groups, file type, date, size and other possible
criteria.
$ find /home/niki/ -name niki.txt
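Beyond name lookups, find can filter on the attributes mentioned above; the directory, file names and thresholds below are illustrative:

```shell
mkdir -p /tmp/find-demo
dd if=/dev/zero of=/tmp/find-demo/big.log bs=1024 count=2048 2>/dev/null
touch /tmp/find-demo/small.txt

# Regular files larger than 1 MiB.
find /tmp/find-demo -type f -size +1M

# Files matching a pattern that were modified within the last day.
find /tmp/find-demo -type f -name '*.txt' -mtime -1
```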
44.free Command
free command shows the system memory usage (free, used, swapped, cached, etc.) in the
system including swap space. Use the -h option to display output in human friendly format.
$ free -h
The free command gives information about the total, used and available space of physical
memory and swap memory, along with buffers used by the kernel, in Linux/Unix-like operating systems.
The following command will show the list of top processes ordered by RAM and CPU use in
descending order (remove the pipe to head if you want to see the full list):
# ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head
The -o (or --format) option of ps allows you to specify the output format. A favorite of mine is
to show the processes’ PIDs (pid), PPIDs (ppid), the name of the executable file associated with
the process (cmd), and the RAM and CPU utilization (%mem and %cpu, respectively).
Additionally, I use --sort to sort by either %mem or %cpu. By default, the output will be sorted
in ascending form, but personally I prefer to reverse that order by adding a minus sign in front
of the sort criterion (as in --sort=-%mem).
To display the top 15 processes sorted by memory use in descending order, do:
# top -b -o +%MEM | head -n 22
As opposed to the previous tip, here you have to use +%MEM (note the plus sign) to sort the
output in descending order.
Note that the head utility, by default, displays the first ten lines of a file when you do not
specify the number of lines to be displayed.
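The default is easy to verify: fed twenty numbered lines, head keeps exactly the first ten unless -n says otherwise:

```shell
seq 20 | head         # prints lines 1 through 10
seq 20 | head -n 3    # prints lines 1 through 3
```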
As we have seen, the top utility offers us more dynamic information while listing processes on a
Linux system; therefore, this approach has an extra advantage over using the ps utility.
groups Command
groups command displays all the names of groups a user is a part of like this.
$ groups
$ groups tecmint
gzip Command
gzip command compresses a file, replacing it with one having a .gz extension, as shown below:
$ gzip passwds.txt
$ cat file1 file2 | gzip > foo.gz
gunzip Command
gunzip expands or restores files compressed with gzip command like this.
$ gunzip foo.gz
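A round trip with an illustrative file: gzip replaces the original with a .gz file, and gunzip restores it byte for byte:

```shell
echo "some log data" > /tmp/gzip-demo.txt
gzip /tmp/gzip-demo.txt        # leaves only /tmp/gzip-demo.txt.gz
gunzip /tmp/gzip-demo.txt.gz   # restores /tmp/gzip-demo.txt
cat /tmp/gzip-demo.txt
```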
head Command
head command is used to show the first lines (10 lines by default) of the specified file or of stdin:
$ head file.txt
history Command
history command is used to show previously used commands or to get info about commands
executed by a user.
$ history
Learn more about Linux history command.
hostname Command
hostname command is used to print or set system hostname in Linux.
$ hostname
$ hostname NEW_HOSTNAME
hostnamectl Command
hostnamectl command controls the system hostname under systemd. It is used to print or
modify the system hostname and any related settings:
$ hostnamectl
$ sudo hostnamectl set-hostname NEW_HOSTNAME
hwclock
hwclock is a tool for managing the system hardware clock; read or set the hardware clock
(RTC).
$ sudo hwclock
$ sudo hwclock --set --date 8/06/2017
hwinfo Command
hwinfo is used to probe for the hardware present in a Linux system like this.
$ hwinfo
Learn more about how to get Linux hardware info.
id Command
id command shows user and group information for the current user or specified username as
shown below.
$ id tecmint
ifconfig Command
ifconfig command is used to configure a Linux system’s network interfaces. It is used to
configure, view and control network interfaces.
$ ifconfig
$ sudo ifconfig eth0 up
$ sudo ifconfig eth0 down
$ sudo ifconfig eth0 172.16.25.125
ionice Command
ionice command is used to set or view process I/O scheduling class and priority of the specified
process.
If invoked without any options, it reports the current I/O scheduling class and priority of the
calling process. In the example below, the rm command is run in the idle class (-c 3), so it only
gets disk time when no other process needs I/O:
$ ionice -c 3 rm /var/logs/syslog
To understand how it works, read this article: How to Delete HUGE (100-200GB) Files in Linux
iostat Command
iostat is used to show CPU and input/output statistics for devices and partitions. It produces
useful reports for updating system configurations to help balance the input/output load
between physical disks.
$ iostat
ip Command
ip command is used to display or manage routing, devices, policy routing and tunnels. It can also
assign addresses to network interfaces.
This command will assign an IP address to a specific interface (eth1 in this case).
$ sudo ip addr add 192.168.56.10 dev eth1
iptables Command
iptables is a terminal based firewall for managing incoming and outgoing traffic via a set of
configurable table rules.
The command below is used to check existing rules on a system (using it may require root
privileges).
$ sudo iptables -L -n -v
iw Command
iw command is used to manage wireless devices and their configuration.
$ iw list
iwlist Command
iwlist command displays detailed wireless information from a wireless interface. The command
below enables you to get detailed information about the wlp1s0 interface.
$ kill -p 2300
$ kill -SIGTERM -p 2300
killall Command
killall command is used to kill a process by its name.
$ killall firefox
kmod Command
kmod command is used to manage Linux kernel modules. To list all currently loaded modules,
type:
$ kmod list
last Command
last command displays a listing of the last logged in users.
$ last
ln Command
ln command is used to create a soft link between files using the -s flag like this.
$ ln -s /usr/bin/lscpu cpuinfo
locate Command
locate command is used to find a file by name. The locate utility works faster than its find
counterpart because it reads from a prebuilt database rather than scanning the filesystem.
The command below will search for a file by its exact name (not *name*):
$ locate -b '\domain-list.txt'
login Command
login command is used to create a new session with the system. You’ll be asked to provide a
username and a password to login as below.
$ sudo login
ls Command
ls command is used to list contents of a directory. It works more or less like dir command.
$ ls -l file1
To know more about ls command, read our guides.
lshw Command
lshw command is a tool that extracts detailed information on the hardware configuration of the machine.
$ sudo lshw
lscpu Command
lscpu command displays system’s CPU architecture information (such as number of CPUs,
threads, cores, sockets, and more).
$ lscpu
lsof Command
lsof command displays information related to files opened by processes. Files can be of any
type, including regular files, directories, block special files, character special files, executing text
reference, libraries, and stream/network files.
To view files opened by a specific user’s processes, type the command below.
$ lsof -u tecmint
lsusb Command
lsusb command shows information about USB buses in the system and the devices connected
to them like this.
$ lsusb
man Command
man command is used to view the on-line reference manual pages for commands/programs
like so.
$ man du
$ man df
md5sum Command
md5sum command is used to compute and print the MD5 message digest of a file. The related
debsums tool, if run without arguments, checks every installed Debian package file against the
stock md5sum files:
$ sudo debsums
mkdir Command
mkdir command is used to create one or more directories. It fails if a directory already exists or
if a parent directory is missing, unless the -p option is used.
$ mkdir tecmint-files
OR
$ mkdir -p tecmint-files
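With -p, mkdir also creates any missing parent directories and stays silent when the path already exists, which makes it safe to repeat (the path below is illustrative):

```shell
# Creates a, a/b and a/b/c in one step.
mkdir -p /tmp/mkdir-demo/a/b/c

# Running it again is harmless; without -p this second call would fail.
mkdir -p /tmp/mkdir-demo/a/b/c
```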
more Command
more command enables you to view relatively lengthy text files one screenful at a
time.
$ more file.txt
Check difference between more and less command and Learn Why ‘less’ is Faster Than ‘more’
Command
mv Command
mv command is used to rename files or directories. It also moves a file or directory to another
location in the directory structure.
$ mv test.sh sysinfo.sh
nano Command
nano is a popular small, free and friendly text editor for Linux; a clone of Pico, the default editor
included in the non-free Pine package.
$ nano file.txt
nc/netcat Command
nc (or netcat) is used for performing any operation relating to TCP, UDP, or UNIX-domain
sockets. It can handle both IPv4 and IPv6 for opening TCP connections, sending UDP packets,
listening on arbitrary TCP and UDP ports, and performing port scanning.
The command below will help us see if port 22 is open on the host 192.168.1.5.
$ nc -zv 192.168.1.5 22
Learn more examples and usage on nc command.
netstat Command
netstat command displays useful information concerning the Linux networking subsystem
(network connections, routing tables, interface statistics, masquerade connections, and
multicast memberships).
This command will display all open ports on the local system:
$ netstat -a | more
nice Command
nice command is used to show or change the nice value of a running program. It runs specified
command with an adjusted niceness. When run without any command specified, it prints the
current niceness.
The following command starts a tar process with the “nice” value set to 12.
$ nice -n 12 tar czf backup.tar.gz ./Documents
nproc Command
nproc command shows the number of processing units available to the current process.
$ nproc
passwd Command
passwd command is used to change a user’s password as follows.
$ passwd tecmint
pidof Command
pidof displays the process ID of a running program/command.
$ pidof init
$ pidof cinnamon
ping Command
ping command is used to determine connectivity between hosts on a network (or the Internet):
$ ping google.com
ps Command
ps shows useful information about active processes running on a system. The example below
shows the top running processes by highest memory and CPU usage.
$ pstree
Page
pwd Command
pwd command displays the name of current/working directory as below.
$ pwd
rdiff-backup Command
rdiff-backup is a powerful local/remote incremental backup script written in Python. It works on
any POSIX operating system such as Linux, Mac OS X.
Note that for remote backups, you must install the same version of rdiff-backup on both the
local and remote machines. Below is an example of a local backup command:
$ rdiff-backup /etc /backup/etc-backup
reboot Command
reboot command may be used to halt, power off or reboot the system as follows.
$ reboot
rename Command
rename command is used to rename many files at once. If you have a collection of files with the
“.html” extension and you want to rename all of them to the “.php” extension, you can type the
command below (Perl-style rename syntax).
$ rename 's/\.html$/\.php/' *.html
rm Command
rm command is used to remove files or directories as shown below.
$ rm file1
$ rm -rf my-files
rmdir Command
rmdir command helps to delete/remove empty directories as follows.
$ rmdir /backup/all
scp Command
scp command enables you to securely copy files between hosts on a network, for example.
$ scp ~/names.txt [email protected]:/root/names.txt
shutdown Command
shutdown command schedules a time for the system to be powered down. It may be used to
halt, power-off or reboot the machine like this.
$ shutdown --poweroff
Learn how to show a Custom Message to Users Before Linux Server Shutdown.
sleep Command
sleep command is used to delay or pause (specifically, execution of a command) for a specified
amount of time.
$ sleep 5
sort Command
sort command is used to sort the lines of a text file as follows.
$ cat words.txt
$ sort words.txt
Learn more examples of sort command in Linux.
split Command
split as the name suggests, is used to split a large file into small parts.
$ ssh [email protected]
Learn more about ssh command and how to use it on Linux.
stat Command
stat is used to show file or file system status like this (use -f to display file system status instead of file status).
$ stat file1
su Command
su command is used to switch to another user ID or become root during a login session. Note
that when su is invoked without a username, it defaults to becoming root.
$ su
$ su tecmint
sudo Command
sudo command allows a permitted system user to run a command as root or another user, as
defined by the security policy such as sudoers.
In this case, the real (not effective) user ID of the user running sudo is used to determine the
user name with which to query the security policy.
sum Command
sum command is used to show the checksum and block counts for each file specified on
the command line.
tac Command
tac command concatenates and prints files in reverse, last line first.
$ tac file.txt
tail Command
tail command is used to display the last lines (10 lines by default) of each file to standard
output.
If there is more than one file, each is preceded by a header giving the file name. Use it as follows
(specify more lines to display using the -n option).
$ tail long-file
OR
$ tail -n 15 long-file
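Mirroring head, tail defaults to the last 10 lines; -n picks a different count:

```shell
seq 20 | tail         # prints lines 11 through 20
seq 20 | tail -n 3    # prints lines 18, 19 and 20
```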
talk Command
talk command is used to talk to another system/network user. To talk to a user on the same
machine, use their login name, however, to talk to a user on another machine use ‘user@host’.
tee Command
tee command is used to read from standard input and print to standard output and to files.
time Command
time command runs programs and summarizes the system resource usage.
$ time wc /etc/hosts
top Command
top program displays all processes on a Linux system with regard to memory and CPU usage and
provides a dynamic real-time view of a running system.
$ top
touch Command
touch command changes file timestamps, it can also be used to create a file as follows.
$ touch file.txt
tr Command
tr command is a useful utility used to translate (change) or delete characters from stdin, writing
the result to stdout or to a file, as follows.
$ echo "linux" | tr 'a-z' 'A-Z'
uname Command
uname command prints system information such as the kernel name and release.
$ uname
uniq Command
uniq command displays or omits repeated lines from input (or standard input). To indicate the
number of occurrences of a line, use the -c option.
$ cat domain-list.txt
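Because uniq only collapses adjacent duplicates, the input is normally sorted first; a sketch with made-up domains:

```shell
printf 'b.com\na.com\nb.com\na.com\na.com\n' > /tmp/uniq-demo.txt

# sort groups the duplicates together, then uniq -c counts each line.
sort /tmp/uniq-demo.txt | uniq -c
```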
uptime Command
uptime command shows how long the system has been running, number of logged on users
and the system load averages as follows.
$ uptime
users Command
users command shows the user names of users currently logged in to the current host like this.
$ users
vim/vi Command
vim (Vi Improved) is a popular text editor on Unix-like operating systems. It can be used to edit all
kinds of plain text and program files.
$ vim file
Learn how to use vi/vim editor in Linux along with some tips and tricks.
w Command
w command displays system uptime, load averages and information about the users currently
on the machine, and what they are doing (their processes) like this.
$ w
wall Command
wall command is used to send/display a message to all users on the system as follows.
$ wall "The system will go down in 10 minutes."
watch Command
watch command runs a command repeatedly, displaying its output fullscreen (every 2 seconds by default); -d highlights differences between updates.
$ watch -d ls -l
wc Command
wc command is used to display the newline, word, and byte counts for each specified file, plus a
total line when more than one file is given.
$ wc filename
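A quick sketch of the three counts and the per-count options, using an illustrative two-line file:

```shell
printf 'one two\nthree\n' > /tmp/wc-demo.txt
wc /tmp/wc-demo.txt      # lines, words, bytes, then the file name
wc -l /tmp/wc-demo.txt   # lines only
wc -w /tmp/wc-demo.txt   # words only
```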
wget Command
wget command is a simple utility used to download files from the Web in a non-interactive (can
work in the background) way.
$ wget -c http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz
whatis Command
whatis command searches and shows a short or one-line manual page descriptions of the
provided command name(s) as follows.
$ whatis wget
which Command
which command displays the absolute path (pathnames) of the files (or possibly links) which
would be executed in the current environment.
$ which who
who Command
who command shows information about users who are currently logged in like this.
$ who
whereis Command
whereis command helps us locate the binary, source and manual files for commands.
$ whereis cat
xargs Command
xargs command is a useful utility for reading items from standard input, delimited by blanks
(which can be protected with double or single quotes or a backslash) or newlines, and executing
a given command for them.
The example below shows xargs being used to copy a file to multiple directories in Linux.
$ echo /home/niki/test/ /home/niki/tmp | xargs -n 1 cp -v /home/niki/bin/sys_info.sh
zip Command
zip is a simple and easy-to-use utility used to package and compress (archive) files.
$ zip archive.zip file1