Rocky Linux Admin Guide
(English version)
A book from the Documentation Team
Version : 2025/04/10
Table of contents
1. Licence
3.2.1 History
3.5 Shell
3.5.1 Generalities
3.5.2 Functionalities
3.5.3 Principle
4.1 Generalities
4.2.4 Auto-complete
4.4.2 cd command
4.4.3 ls command
4.4.7 rm command
4.4.8 mv command
4.4.9 cp command
4.5 Visualization
4.5.9 wc command
4.6 Search
4.7.5 Pipes
6. VI Text Editor
6.1 vi command
6.5.1 Characters
6.5.2 Words
6.5.3 Lines
6.6 EX commands
7. User Management
7.1 General
1. Licence
Rocky Linux offers Linux courseware for trainers or people wishing to learn how to
administer a Linux system on their own.
• BY : Attribution.
• SA : Share Alike.
• https://docs.rockylinux.org
• https://github.com/rocky-linux/documentation
Our sources are hosted on github.com. There you will find the source code repository
from which this version of the document was built.
From these sources, you can generate your own personalized training material
using mkdocs. You will find instructions for generating your document here.
You'll find all the information you need to join us on our git project home page.
We wish you all a pleasant reading and hope you enjoy the content.
We start with Introduction to Linux, which outlines Linux, distributions, and the
whole ecosystem around our operating system.
User Commands contains essential commands for getting up to speed with Linux.
More experienced users should also consult the following chapter on Advanced
Linux Commands.
The VI Text Editor deserves its own chapter. While Linux comes with many editors,
VI is one of the most powerful. Other commands sometimes use identical syntax to
VI ( sed comes to mind). So, knowing something about VI, or at least demystifying
its essential functions (how to open a file, save, quit or quit without saving), is very
important. The user will become more comfortable with the other functions of VI as
they use the editor. An alternative would be to use nano which comes installed by
default in Rocky Linux. While not as versatile, it is simple to use, straightforward,
and gets the job done.
We can then get into the deep functioning of Linux to discover how the system
addresses:
• User Management
• File Systems
• Process Management
Backup and Restoration is essential info for the System Administrator. Linux comes
with many software solutions to enhance backups (rsnapshot, lsyncd, etc.). It
is valuable to know the essential components of the backup that exist within the
operating system. We investigate two tools: tar and the less widespread cpio in
this chapter.
Abstract
An operating system is a set of programs that manages the available resources of a computer.
Linux, UNIX, BSD, Windows, and MacOS are all operating systems.
• The physical memory is made up of the RAM bars and the processor cache
memory, which are used for program execution.
• The virtual memory is a location on the hard disk (the swap partition) that
allows the unloading of the physical memory and the saving of the current state of
the system during the electrical shutdown of the computer.
3.2.1 History
UNIX
• 1969 — 1971: After the withdrawal of Bell (1969) and then General Electric from
the project, two developers, Ken Thompson and Dennis Ritchie (joined later by
Brian Kernighan), judging MULTICS to be too complex, begin development of
UNIX (UNiplexed Information and Computing Service). While it was created in
Assembly language, the creators of UNIX eventually developed the B language
and then the C language (1971) and completely rewrote UNIX. As it was
developed in 1970, the reference (epoch) date for the start of time of UNIX/Linux
systems is set at January 01, 1970.
UNIX is an open and evolving operating system that has played a major role in the
history of computing. It forms the basis for many other systems, such as Linux,
BSD, MacOS, etc.
GNU Project
• 1984: Richard Matthew Stallman launched the GNU (GNU's Not Unix) Project,
which aims to establish a free and open Unix system, in which the more
important tools are: gcc compiler, bash shell, Emacs editor and so on. GNU is a
Unix-like operating system. The development of GNU, started in January 1984, is
known as the GNU Project. Many of the programs in GNU are released under the
auspices of the GNU Project; those we call GNU packages.
• 1990: GNU's own kernel, the GNU Hurd, was started in 1990 (before Linux was
started).
MINIX
Linux
• 1991: A Finnish student, Linus Torvalds, creates an operating system that runs
on his personal computer and names it Linux. He publishes his first version,
called 0.02, on the Usenet discussion forum, and other developers help him
improve his system. The term Linux is a play on words between the founder's first
name, Linus, and UNIX.
• 1994: The commercial distribution Red Hat is created by the company Red Hat,
which is today the leading distributor of the GNU/Linux operating system. Red
Hat supports the community version Fedora and until recently the free
distribution CentOS.
• 2002: The Arch distribution is created. Its distinctive feature is that it offers rolling
releases (continuous updates).
Info
Dispute over the name: although people commonly call the whole operating system "Linux", strictly speaking Linux is only a kernel.
The development and contribution of the GNU project to the open source cause must not be forgotten, which is why many prefer to
call it the GNU/Linux operating system.
Despite its prevalence, Linux remains relatively unknown to the general public.
Linux is hidden within smartphones, televisions, internet boxes, etc. Almost
70% of the websites in the world are hosted on a Linux or UNIX server!
Linux has equipped 100% of the top 500 supercomputers since 2018. A supercomputer is
a computer designed to achieve the highest possible performance with the
techniques known during its design, especially concerning computing speed.
• The shell is a utility that interprets user commands and ensures their execution.
• Main shells: Bourne shell, C shell, Korn shell, and Bourne-Again shell (bash).
• Internet browsers
• Word processors
• Spreadsheets
Multi-task
Multi-user
MULTICS's purpose was to allow users to work from several terminals (screen and
keyboard) on a single computer (which was very expensive at the time). Inspired by
this operating system, Linux kept this ability to work with several users
simultaneously and independently, each with its user account with memory space
and access rights to files and software.
Multi-processor
Multi-platform
Open
Linux is based on recognized standards such as POSIX, TCP/IP, NFS, and Samba,
which allow it to share data and services with other application systems.
• Value portability.
• "Unix is user-friendly. It just isn't promiscuous about which users it's friendly
with." (Steven King)
Each distribution offers one or more desktop environments, and provides a set of
pre-installed software and a library of additional software. Configuration options
(kernel or services options for example) are specific to each distribution.
There are many graphic environments such as GNOME, KDE, LXDE, XFCE, etc.
There is something for everyone, and their ergonomics hold their own against
Microsoft or Apple systems.
So why is there so little enthusiasm for Linux, when this system is practically virus
free? Could it be because so many editors (Adobe) and manufacturers (Nvidia) do
not play the free game and do not provide a version of their software or drivers for
GNU/Linux? Perhaps it's fear of change, or the difficulty of finding where to buy a
Linux computer, or too few games distributed for Linux. That last excuse, at
least, shouldn't be true for long, with the advent of the Steam gaming platform for
Linux.
The GNOME 3 desktop environment no longer uses the concept of desktop but
that of GNOME Shell (not to be confused with the command line shell). It serves as
a desktop, a dashboard, a notification area and a window selector. The GNOME
desktop environment is based on the GTK+ component library.
A Microsoft or Mac operating system user must purchase a license to use the
system. This license has a cost, although it is usually transparent (the price of the
license is included in the price of the computer).
The Free Software movement provides mostly free distributions in the GNU/Linux
world.
Open Source: the source code is available, so consulting and modifying it under
certain conditions is possible.
Free software is necessarily open source, but the reverse is not true: open-source
software does not necessarily grant the freedoms guaranteed by the GPL license.
The GPL guarantees authors their intellectual property, but allows
modification, redistribution or resale of software by third parties, provided that the
source code is included with the software. The GPL is the license that came out of
the GNU (GNU is Not UNIX) project, which was instrumental in creating Linux.
It implies:
• The freedom to study how the program works and adapt it to your needs.
• The freedom to improve the program, and publish those improvements for the
benefit of the whole community.
On the other hand, even products licensed under the GPL can have a cost. This is
not for the product itself, but the guarantee that a team of developers will
continue to work on it to make it evolve and troubleshoot errors, or even
provide support to users.
3.5 Shell
3.5.1 Generalities
The shell, also known as the command-line interface, allows users to send commands to the
operating system. It is less visible today since the implementation of graphical
interfaces, but remains a privileged means on Linux systems which do not all have
graphical interfaces and whose services do not always have a setting interface.
3.5.2 Functionalities
3.5.3 Principle
The original nationality of Linus Torvalds, creator of the Linux kernel, is:
Swedish
Finnish
Norwegian
Flemish
French
In this chapter you will learn Linux commands and how to use them.
Objectives: In this chapter, future Linux administrators will learn how to:
4.1 Generalities
• The majority of system commands are common to all Linux distributions, which is
not the case for graphical tools.
• It can happen that the system does not start correctly but that a backup command
interpreter remains accessible.
The user of a Linux system will be defined in the /etc/passwd file, by:
• A command interpreter, e.g., a shell, which can be different from one user to
another.
The command prompt ends with:
• $ for standard users
• # for administrators
Depending on the security policy implemented on the system, the password will
require a certain number of characters and meet certain complexity requirements.
The user's login directory is by convention stored in the /home directory of the
workstation. It will contain the user's personal data and the configuration files of
his applications. By default, at login, the login directory is selected as the current
directory.
Once the user is connected to a console, the shell displays the command prompt.
It then behaves like an infinite loop, repeating the same pattern with each
statement entered:
• Etc.
Short options begin with a dash ( -l ), while long options begin with two dashes
( --list ). A double dash ( -- ) indicates the end of the option list.
ls -l -i -a
is equivalent to:
ls -lia
In the literature, the term "option" is equivalent to the term "parameter," which is
more commonly used in programming. The optional side of an option or argument
is symbolized by enclosing it in square brackets [ and ] . When more than one
option is possible, a vertical bar called a "pipe" separates them [a|e|i] .
It is impossible for an administrator at any level to know all the commands and
options in detail. A manual is usually available for all installed commands.
apropos command
The command apropos allows you to search by keyword within these manual pages:
Options Description
-a or --and Displays only the item matching all the provided keywords.
Example:
$ apropos clear
clear (1) - clear the terminal screen
clear_console (1) - clear the console
clearenv (3) - clear the environment
clearerr (3) - check and reset stream status
clearerr_unlocked (3) - nonlocking stdio functions
feclearexcept (3) - floating-point rounding and exception handling
To find the command that will allow changing the password of an account:
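One possible search (a sketch; the exact output depends on the manual pages installed):
$ apropos password
The passwd (1) entry will appear among the results.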
whatis command
whatis clear
Example:
$ whatis clear
clear (1) - clear the terminal screen
man command
Once found by apropos or whatis , the manual is read by man ("Man is your
friend"). This set of manuals is divided into 8 sections, grouping information by
topic, the default section being 1:
Information about each section can be accessed by typing man x intro , where x is
the section number.
The command:
man passwd
will tell the administrator about the passwd command, its options, etc., while:
man 5 passwd
will describe the format of the /etc/passwd file (section 5 of the manual covers file formats).
Navigate through the manual with the arrows ↑ Up and ↓ Down . Exit the manual
by pressing the Q key.
The shutdown command allows you to electrically shut down a Linux server,
either immediately or after a certain period of time.
Specify the shutdown time in the format hh:mm for a precise time, or +mm for a
delay in minutes.
To force an immediate stop, use the word now in place of the time. In this case, the
optional message is not sent to other users of the system.
Examples:
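A few hedged examples (the times and the message are arbitrary):
sudo shutdown -h 01:00 "Scheduled maintenance"
sudo shutdown -r +5
sudo shutdown -h now
The first command halts the server at 01:00, the second reboots it in five minutes, and the last halts it immediately.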
Options:
Options Remarks
The history command displays the history of commands that have been entered by
the user.
The commands are stored in the .bash_history file in the user's login directory.
$ history
147 man ls
148 man history
Options Comments
-c Deletes the history of the current session (but not the contents of the .bash_history file).
• Manipulating history:
To manipulate the history, the following commands entered from the command
prompt will:
Keys Function
! + string Recalls the most recent command beginning with the string.
↑ Up Navigates through your history working backward in time from the most recent command.
4.2.4 Auto-complete
• Press the Tab ⇥ key to complete the entry in the case of a single solution.
• In the case of multiple solutions, press Tab ⇥ a second time to see options.
The clear command clears the contents of the terminal screen. More accurately, it
shifts the display so that the command prompt is at the top of the screen on the
first line.
The echo command displays a text string on the screen.
Tip
This command is most commonly used in administration scripts to inform the user
during execution.
The -n option indicates no newline output string (by default, newline output
string).
For various reasons, the script developer may need to use special sequences
(starting with a \ character). In this case, the -e option will be stipulated,
allowing interpretation of the sequences.
Sequence Result
\b Backspace
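A minimal sketch of the -n and -e options:
$ echo -e "First line\nSecond line"
First line
Second line
$ echo -n "No trailing newline"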
The date command displays the date and time. The command has the following
syntax:
Examples:
$ date
Mon May 24 16:46:53 CEST 2021
$ date -d 20210517 +%j
137
In this last example, the -d option displays a given date. The +%j option formats
this date to show only the day of the year.
Warning
The format of a date can change depending on the value of the language defined in the environment variable $LANG .
Option Format
+%c Locale's date and time (e.g., Thu Mar 3 23:05:25 2005)
+%G Year
The date command also allows you to change the system date and time. In this
case, the -s option will be used.
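A hedged example of setting the system date and time (requires administrator rights; the date shown is arbitrary):
sudo date -s "2021-05-24 10:30:00"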
$ id rockstar
uid=1000(rockstar) gid=1000(rockstar) groups=1000(rockstar),10(wheel)
The -g , -G , -n and -u options display the main group GID, subgroup GIDs, names
instead of numeric identifiers, and the user's UID respectively.
$ who
rockstar tty1 2021-05-24 10:30
root pts/0 2021-05-24 10:31
Since Linux is multi-user, it is possible that multiple sessions are open on the same
station, either physically or over the network. It is interesting to know which users
are logged in, if only to communicate with them by sending messages.
In Linux, the file tree is an inverted tree, called a single hierarchical tree, whose
root is the directory / .
The connection directory is the working directory associated with the user. The
login directories are, by default, stored in the /home directory.
When the user logs in, the current directory is the login directory.
An absolute path references a file from the root by traversing the entire tree to
the file level:
• /home/groupA/alice/file
The relative path references that same file by traversing the entire tree from the
current directory:
• ../alice/file
In the above example, the " .. " refers to the parent directory of the current
directory.
• . : reference to itself.
A relative path can thus start with ./ or ../ . When the relative path refers to a
subdirectory or file in the current directory, then the ./ is often omitted.
Mentioning the first ./ in the tree will only really be required to run an executable
file.
Errors in paths can cause many problems: creating folders or files in the wrong
places, unintentional deletions, etc. It is therefore strongly recommended to use
auto-completion when entering paths.
In the above example, we are looking to give the location of the file myfile from
the directory of bob.
• By an absolute path, the current directory does not matter. We start at the root,
and work our way down to the directories home , groupA , alice and finally the file
myfile : /home/groupA/alice/myfile .
• By a relative path, our starting point being the current directory bob , we go up
one level through .. (i.e., into the groupA directory), then down into the alice
directory, and finally the myfile file: ../alice/myfile .
The pwd (Print Working Directory) command displays the absolute path of the
current directory.
$ pwd
/home/rockstar
Depending on the type of shell and the different parameters of its configuration
file, the terminal prompt (also known as the command prompt) will display the
absolute or relative path of the current directory.
4.4.2 cd command
The cd (Change Directory) command allows you to change the current directory --
in other words, to move through the tree.
$ cd /tmp
$ pwd
/tmp
$ cd ../
$ pwd
/
$ cd
$ pwd
/home/rockstar
As you can see in the last example above, the command cd with no arguments
moves the current directory to the home directory .
4.4.3 ls command
Example:
$ ls /home
. .. rockstar
Option Information
-a Displays all files, even hidden ones. Hidden files in Linux are those beginning with . .
-l Use a long listing format, that is, each line displays long format information for a file or directory.
Option Information
-h Displays file sizes in the most appropriate format (byte, kilobyte, megabyte, gigabyte, ...). h stands for
Human Readable. Needs to be used with -l option.
-s Displays the allocated size of each file, in blocks. In the ls command, the default size of a single block is
1024-Byte. In the GNU/Linux operating system, "block" is the smallest unit of storage in the file system,
and generally speaking, one block is equal to 4096-Byte. In the Windows operating system, taking the
NTFS file system as an example, its smallest storage unit is called a "Cluster". The definition of the
minimum storage unit name may vary depending on different file systems.
-F Displays the type of files. Prints a / for a directory, * for executables, @ for a symbolic link, and nothing
for a text file.
$ ls -lia /home
78489 drwx------ 4 rockstar rockstar 4096 25 oct. 08:10 rockstar
Value Information
4 Number of subdirectories ( . and .. included). For a file, it represents the number of hard links, and 1
represents itself.
4096 For files, it shows the size of the file in bytes. For directories, it shows the 4096 bytes occupied by the
directory entry itself. To calculate the total size of a directory, use du -sh rockstar/
Note
The ls command has many options. Here are some advanced examples of uses:
$ ls -ltr /etc
total 1332
-rw-r--r--. 1 root root 662 29 may 2021 logrotate.conf
-rw-r--r--. 1 root root 272 17 may. 2021 mailcap
-rw-------. 1 root root 122 12 may. 2021 securetty
...
-rw-r--r--. 2 root root 85 18 may. 17:04 resolv.conf
-rw-r--r--. 1 root root 44 18 may. 17:04 adjtime
-rw-r--r--. 1 root root 283 18 may. 17:05 mtab
• List /var files larger than 1 megabyte but less than 1 gigabyte. The example here
uses advanced grep commands with regular expressions. Novices don't have to
struggle too much, there will be a special tutorial to introduce these regular
expressions in the future.
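One possible approach (a sketch, not necessarily the exact command from the original course): with the -h option, sizes between
1 megabyte and 1 gigabyte are displayed with an M suffix, which the regular expression relies on.
ls -lhR /var/ | grep '^-' | grep -E '[0-9,.]+M '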
To find out the rights to a folder, in our example /etc , the following command
would not be appropriate:
$ ls -l /etc
total 1332
-rw-r--r--. 1 root root 44 18 nov. 17:04 adjtime
-rw-r--r--. 1 root root 1512 12 janv. 2010 aliases
-rw-r--r--. 1 root root 12288 17 nov. 17:41 aliases.db
drwxr-xr-x. 2 root root 4096 17 nov. 17:48 alternatives
...
The above command will display the contents of the folder (inside) by default. For
the folder itself, you can use the -d option.
ls -ld /etc
drwxr-xr-x. 69 root root 4096 18 nov. 17:05 /etc
ls -lhS
By default, the ls command does not display the last slash of a folder. In some
cases, like for scripts for example, it is useful to display them:
$ ls -dF /etc
/etc/
ls /etc --hide=*.conf
Example:
mkdir /home/rockstar/work
The parent directories must already exist; otherwise, the -p option should be used. The -p option creates the parent
directories if they do not exist.
The touch command changes the timestamp of a file or creates an empty file if the
file does not exist.
Example:
touch /home/rockstar/myfile
Option Information
-t date Changes the date of last modification of the file with the specified date.
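For example (the timestamp format is [[CC]YY]MMDDhhmm; the value below is arbitrary):
touch -t 202105190000 /home/rockstar/myfile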
Tip
The touch command is primarily used to create an empty file, but it can be useful for incremental or differential backups for
example. Indeed, the only effect of executing a touch on a file will be to force it to be saved during the next backup.
Example:
rmdir /home/rockstar/work
Option Information
Tip
To delete both a non-empty directory and its contents, use the rm command.
4.4.7 rm command
Options Information
Note
The rm command itself does not ask for confirmation when deleting files. However, with a Red Hat/Rocky distribution, rm does ask
for confirmation of deletion because the rm command is an alias of the rm -i command. Don't be surprised if on another
distribution, like Debian for example, you don't get a confirmation request.
Deleting a folder with the rm command, whether the folder is empty or not, will
require the -r option to be added.
In the example:
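The command referred to here takes the following form:
rm -f -- -hard-hard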
The file name -hard-hard starts with a - . Without the use of the -- , the shell would
have interpreted the -h in -hard-hard as an option.
4.4.8 mv command
Examples:
mv /home/rockstar/file1 /home/rockstar/file2
Options Information
A few concrete cases will help you understand the difficulties that can arise:
mv /home/rockstar/file1 /home/rockstar/file2
Renames file1 to file2 . If file2 already exists, its contents are replaced by those of
file1 .
mv file1 /repexist/file2
Moves file1 into the existing directory /repexist and renames it file2 .
mv file1 file2
Renames file1 to file2 in the current directory.
mv file1 /repexist
Moves file1 into the existing directory /repexist , keeping its name.
mv file1 /wrongrep
If the destination directory does not exist, file1 is renamed to wrongrep in the root
directory.
4.4.9 cp command
Example:
cp -r /home/rockstar /tmp
Options Information
cp file1 /repexist/file2
Copies file1 into the existing directory /repexist under the name file2 .
cp file1 file2
Copies file1 to file2 in the current directory.
cp file1 /repexist
Copies file1 into the existing directory /repexist , keeping its name.
cp file1 /wrongrep
If the destination directory does not exist, file1 is copied under the name
wrongrep to the root directory.
4.5 Visualization
Example:
The more command displays the contents of one or more files screen by screen.
Example:
$ more /etc/passwd
root:x:0:0:root:/root:/bin/bash
...
Using the Enter ⏎ key, the move is line by line. Using the Space key, the move is
page by page. /text allows you to search for the next occurrence of text in the file.
The less command displays the contents of one or more files. The less command
is interactive and has its own commands for use.
Command Action
h or H Help.
↑ Up ↓ Down → Right ← Left Move up, down a line, or to the right or left.
The cat command concatenates the contents of multiple files and displays the
result on the standard output.
cat /etc/passwd
Example 3 - Combining the contents of multiple files into one file using output
redirection:
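A minimal sketch with hypothetical file names:
cat file1 file2 > file3
The contents of file1 and file2 are concatenated and written to file3 .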
$ cat -n /etc/profile
     1  # /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
     2  # and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
     3
     4  if [ "`id -u`" -eq 0 ]; then
     5    PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
     6  else
…
$ cat -b /etc/profile
     1  # /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
     2  # and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
The tac command does almost the opposite of the cat command. It displays the
contents of a file starting from the end (which is particularly interesting for reading
logs!).
Option Description
By default (without the -n option), the head command will display the first 10 lines
of the file.
Option Description
Example:
tail -n 3 /etc/passwd
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
user1:x:500:500:grp1:/home/user1:/bin/bash
With the -f option, the change information of the file will always be output unless
the user exits the monitoring state with ⌃ Ctrl + C . This option is very frequently
used to track log files (the logs) in real time.
Without the -n option, the tail command displays the last 10 lines of the file.
It allows you to order the result of a command or the content of a file in a given
order, numerically, alphabetically, by size (KB, MB, GB) or in reverse order.
Example:
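A hedged example, using the -t , -k and -n options described below to sort /etc/passwd numerically on the UID field (third field,
: delimiter):
sort -t ':' -k 3 -n /etc/passwd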
Option Description
-t Specifies a delimiter. The contents of the corresponding file must be regularly delimited into columns,
otherwise they cannot be sorted properly.
-r Reverse the order of the result. Used in conjunction with the -n option to sort in order from largest to
smallest.
The sort command sorts the file only on the screen. The file is not modified by the
sorting. To save the sort, use the -o option or an output redirection > .
By default, the numbers are sorted according to their character. Thus, "110" will be
before "20", which will itself be before "3". The -n option must be specified so that
the numeric character blocks are sorted by their value.
The sort command reverses the order of the results, with the -r option:
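For example (continuing the /etc/passwd sketch above):
sort -t ':' -k 3 -n -r /etc/passwd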
In this example, the sort command will sort the contents of the /etc/passwd file
this time from largest uid (user identifier) to smallest.
• Shuffling values
The sort command also allows you to shuffle values with the -R option:
sort -R /etc/passwd
• Sorting IP addresses
192.168.1.10
192.168.1.200
5.1.150.146
208.128.150.98
208.128.150.99
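One hedged way to sort these addresses correctly, assuming they are stored in a hypothetical ipv4.txt file, is to sort each
dot-separated field numerically:
sort -t . -k1,1n -k2,2n -k3,3n -k4,4n ipv4.txt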
The sort command knows how to remove the duplicates from the file output using
-u as option. For example, with a colours.txt file containing:
Red
Green
Blue
Red
Pink
$ sort -u colours.txt
Blue
Green
Pink
Red
The sort command knows how to recognize file sizes, from commands like ls with
the -h option.
1.7G
18M
69K
2.4M
1.2M
4.2G
6M
124M
12.4M
4G
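A usage sketch, assuming the sizes above are stored in a hypothetical sizes.txt file:
sort -h sizes.txt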
4.5.9 wc command
The wc command counts the number of lines, words and/or bytes in a file.
Option Description
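A few hedged examples:
wc -l /etc/passwd
wc -w /etc/passwd
wc -c /etc/passwd
These count, respectively, the lines, words, and bytes of the file.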
4.6 Search
find directory [-name name] [-type type] [-user login] [-date date]
Since there are so many options to the find command, it is best to refer to the
man .
If the search directory is not specified, the find command will search from the
current directory.
Option Description
It is possible to use the -exec option of the find command to execute a command
on each result line:
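The command referred to here takes the following form:
find /tmp -name "*.txt" -exec rm -f {} \;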
The previous command searches for all files in the /tmp directory named *.txt
and deletes them.
In the example above, the find command will construct a string representing the command to be executed.
If the find command finds three files named log1.txt , log2.txt , and log3.txt , then the find command will construct the string by
replacing in the string rm -f {} \; the braces with one of the results of the search, and do this as many times as there are results.
The ; character is a special shell character that must be protected by a \ to prevent it from being interpreted too early by the
shell (instead of being passed to -exec ).
Example:
$ whereis -b ls
ls: /bin/ls
Option Description
Example:
Option Description
The grep command returns the complete line containing the string you are looking
for.
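A hedged example (the output shown is indicative):
$ grep rockstar /etc/passwd
rockstar:x:1000:1000:rockstar:/home/rockstar:/bin/bash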
• The ^ special character is used to search for a string at the beginning of a line.
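For example:
$ grep "^root" /etc/passwd
root:x:0:0:root:/root:/bin/bash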
Note
This command is very powerful and it is highly recommended to consult its manual. It has many derivatives.
/home/rockstar/tests
/home/rockstar/test362
The square brackets [ and ] are used to specify the values that a single character
can take.
Note
Always surround words containing meta-characters with " to prevent them from being replaced by the names of files that meet the
criteria.
Warning
Do not confuse shell meta-characters with regular expression meta-characters. The grep command uses regular expression meta-
characters.
On UNIX and Linux systems, there are three standard streams. They allow
programs, via the stdio.h library, to input or output information.
By default:
• the keyboard is the input device for channel 0, called stdin;
• the screen is the output device for channels 1 and 2, called stdout and stderr.
stderr receives the error streams returned by a command. The other streams are
directed to stdout.
These streams point to peripheral files, but since everything is a file in UNIX/Linux,
I/O streams can easily be diverted to other files. This principle is the strength of
the shell.
It is possible to redirect the input stream from another file with the character < or
<< . The command will read the file instead of the keyboard:
Note
Only commands that require keyboard input will be able to handle input redirection.
Input redirection can also be used to simulate user interactivity. The command will
read the input stream until it encounters the defined keyword after the input
redirection.
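A sketch of such a redirection, feeding an ftp session until the END keyword is reached (the host and credentials are hypothetical):
$ ftp -in serverftp << END
user rockstar PASSWORD
ls
bye
END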
The shell exits the ftp command when it receives a line containing only the
keyword.
Warning
The ending keyword, here END or STOP , must be the only word on the line and must be at the beginning of the line.
The standard input redirection is rarely used because most commands accept a
filename as an argument.
$ wc -l .bash_profile
27 .bash_profile # the number of lines is followed by the file name
$ wc -l < .bash_profile
27 # returns only the number of lines
Standard output can be redirected to other files using the > or >> characters.
The simple > redirection overwrites the contents of the output file:
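For example (the file name is hypothetical):
date +%F > date_file
The file date_file now contains only the output of the date command; any previous content is overwritten.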
When the >> character is used, it indicates that the output result of the command
is appended to the file content.
In both cases, the file is automatically created when it does not exist.
The standard error output can also be redirected to another file. This time it will be
necessary to specify the channel number (which can be omitted for channels 0 and
1):
ls -R / 2> errors_file
ls -R / 2>> errors_file
ls -R / 2>> /dev/null
4.7.5 Pipes
A pipe is a mechanism allowing you to link the standard output of a first command
to the standard input of a second command.
This communication is unidirectional and is done with the | symbol (on a US keyboard
layout, the pipe symbol is obtained with ⇧ Shift + \ ).
All data sent by the command on the left of the pipe through the standard output
channel is sent to the standard input channel of the command on the right.
• Examples:
ls -lia / | head
ls -lia / | tail
ls -lia / | sort
ls -lia / | wc
The tee command is used to redirect the standard output of a command to a file
while maintaining the screen display.
It is combined with the | pipe to receive as input the output of the command to be
redirected:
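A minimal sketch (the file name fic matches the cat example that follows):
ls -lia / | tee fic
The listing is displayed on the screen and written to fic at the same time; the file content can then be checked: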
cat fic
Using alias is a way to ask the shell to remember a particular command with its
options and give it a name.
For example, making ll stand for the ls -l command:
alias ll='ls -l'
The alias command lists the aliases for the current session. Aliases are set by
default on Linux distributions. Here, the aliases for a Rocky server:
$ alias
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
The aliases are only defined temporarily, for the time of the user session.
Warning
Special care must be taken when using aliases which can be potentially dangerous! For example, an alias set up without the
administrator's knowledge:
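For instance (a hypothetical example, consistent with the ls alias discussed just below), an alias that silently changes the
behaviour of a familiar command:
alias ls='ls -rt'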
To remove the ll alias, or all the aliases defined in the session:
unalias ll
unalias -a
type ls
ls is an alias to « ls -rt »
Now that this is known, we can see the results of using the alias or disabling it one
time with the \ by executing the following:
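For example:
ls        # runs the aliased command (here ls -rt)
\ls       # bypasses the alias for this single invocation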
• grep alias: for example alias grep='grep --color=auto' , which colorizes the matches.
• mcd function
It is common to create a folder and then move around in it: mcd() { mkdir -p "$1";
cd "$1"; }
• cls function
• backup function
• extract function
extract () {
  # extract an archive according to its extension
  if [ -f "$1" ] ; then
    case "$1" in
      *.tar.bz2) tar xjf "$1" ;;
      *.tar.gz)  tar xzf "$1" ;;
      *.bz2)     bunzip2 "$1" ;;
      *.rar)     unrar e "$1" ;;
      *.gz)      gunzip "$1" ;;
      *.tar)     tar xf "$1" ;;
      *.tbz2)    tar xjf "$1" ;;
      *.tgz)     tar xzf "$1" ;;
      *.zip)     unzip "$1" ;;
      *.Z)       uncompress "$1" ;;
      *.7z)      7z x "$1" ;;
      *)
        echo "'$1' cannot be extracted via extract()" ;;
    esac
  else
    echo "'$1' is not a valid file"
  fi
}
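• cmount function
A common way to define it (a sketch): it pipes the output of mount through column -t to align the fields.
cmount() { mount | column -t; }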
Then we can use cmount to show all of the system mounts in columns like this:
[root]# cmount
Several commands can be written on a single line by separating them with the ;
character. The commands will all run sequentially in the order of input once the user presses
Enter ⏎ .
ls /; cd /home; ls -lia; cd /
5. Advanced Linux Commands
The uniq command is a very powerful command, used with the sort command,
especially for log file analysis. It allows you to sort and display entries by removing
duplicates.
To illustrate how the uniq command works, let us use a firstnames.txt file
containing a list of first names:
antoine
xavier
steven
patrick
xavier
antoine
antoine
steven
Note
uniq requires the input file to be sorted before running because it only compares consecutive lines.
With no argument, the uniq command will not display identical lines that follow
each other in the firstnames.txt file:
To display only the rows that appear only once, use the -u option:
Conversely, to display only the lines that appear at least twice in the file, use the
-d option:
To display all occurrences of the duplicated lines (dropping lines that appear only once), use the -D option:
Finally, to count the number of occurrences of each line, use the -c option:
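A few hedged examples covering the options above (remember to sort first):
sort firstnames.txt | uniq        # duplicates removed
sort firstnames.txt | uniq -u     # lines that appear only once
sort firstnames.txt | uniq -d     # lines that appear at least twice
sort firstnames.txt | uniq -c     # count the occurrences of each line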
The xargs command allows the construction and execution of command lines from
standard input.
$ xargs
use
of
xargs
<CTRL+D>
use of xargs
The xargs command waits for an input from the standard stdin input. Three lines
are entered. The end of the user input is specified to xargs by the keystroke
sequence ⌃ Ctrl + D . xargs then executes the default command echo followed
by the three arguments corresponding to the user input, namely:
In the following example, xargs will run the command ls -ld on the set of folders
specified in the standard input:
$ xargs ls -ld
/home
/tmp
/root
<CTRL+D>
drwxr-xr-x. 9 root root 4096 5 avril 11:10 /home
dr-xr-x---. 2 root root 4096 5 avril 15:52 /root
drwxrwxrwt. 3 root root 4096 6 avril 10:25 /tmp
In practice, the xargs command executed the ls -ld /home /tmp /root command.
What happens if the command to be executed does not accept multiple arguments,
such as with the find command?
The xargs command attempted to execute the find command with multiple
arguments behind the -name option, which caused find to generate an error:
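A sketch of such a failure (the patterns are hypothetical and the exact error text may vary):
$ xargs find /var/log -name
*.old
*.log
<CTRL+D>
find: paths must precede expression: `*.log'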
In this case, the xargs command must be forced to execute the find command
several times (once per line entered as standard input). The -L option followed by
an integer allows you to specify the maximum number of entries to be processed
with the command at one time:
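Continuing the sketch above, -L 1 makes xargs call find once per input line:
$ xargs -L 1 find /var/log -name
*.old
*.log
<CTRL+D>
A typical use of this behaviour is building an archive from a list of files (the archive path is a hypothetical example):
find /var/log -name "*.log" -type f -print | xargs tar cvzf /tmp/logs.tar.gz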
The special feature of the xargs command is that it places the input argument at
the end of the called command. This works very well with the above example since
the files passed in will form the list of files to be added to the archive.
Using the example of the cp command, to copy a list of files in a directory, this list
of files will be added at the end of the command... but what the cp command
expects at the end of the command is the destination. To do this, use the -I option
to put the input arguments somewhere else than at the end of the line.
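A hedged sketch (the destination directory /tmp/backup must already exist):
find /var/log -type f -name "*.log" | xargs -I % cp % /tmp/backup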
The -I option allows you to specify a character (the % character in the above
example) where the input files to xargs will be placed.
The yum-utils package is a collection of utilities, built for yum by various authors,
which make it easier and more powerful to use.
Note
While yum has been replaced by dnf in Rocky Linux 8, the package name has remained yum-utils , although it can be installed as
dnf-utils as well. These are classic YUM utilities implemented as CLI shims on top of DNF to maintain backwards compatibility with
yum-3 .
Examples of use:
• Display the dependencies of a package (it can be a software package that has
been installed or not installed), equivalent to dnf deplist <package-name>
• Display the files provided by an installed package (does not work for packages
that are not installed), equivalent to rpm -ql <package-name>
$ repoquery -l yum-utils
/etc/bash_completion.d
/etc/bash_completion.d/yum-utils.bash
/usr/bin/debuginfo-install
/usr/bin/find-repos-of-install
/usr/bin/needs-restarting
/usr/bin/package-cleanup
/usr/bin/repo-graph
/usr/bin/repo-rss
/usr/bin/repoclosure
/usr/bin/repodiff
/usr/bin/repomanage
/usr/bin/repoquery
/usr/bin/reposync
/usr/bin/repotrack
/usr/bin/show-changed-rco
/usr/bin/show-installed
/usr/bin/verifytree
/usr/bin/yum-builddep
/usr/bin/yum-config-manager
/usr/bin/yum-debug-dump
/usr/bin/yum-debug-restore
/usr/bin/yum-groups-manager
/usr/bin/yumdownloader
…
Note
This command is very useful to quickly build a local repository of a few rpms!
Example: yumdownloader will download the samba rpm package and all its
dependencies:
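A hedged sketch of such a download (the destination directory is arbitrary):
yumdownloader --destdir /var/tmp --resolve samba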
Options Comments
• pstree : the pstree command displays the current processes on the system in a
tree-like structure.
• killall : the killall command sends a kill signal to all processes identified by
name.
• fuser : the fuser command identifies the PID of processes that use the specified
files or file systems.
Examples:
$ pstree
systemd─┬─NetworkManager───2*[{NetworkManager}]
├─agetty
├─auditd───{auditd}
├─crond
├─dbus-daemon───{dbus-daemon}
├─firewalld───{firewalld}
├─lvmetad
├─master─┬─pickup
│ └─qmgr
├─polkitd───5*[{polkitd}]
├─rsyslogd───2*[{rsyslogd}]
├─sshd───sshd───bash───pstree
├─systemd-journal
├─systemd-logind
├─systemd-udevd
└─tuned───4*[{tuned}]
# killall httpd
# fuser -k /etc/httpd/conf/httpd.conf
The watch command regularly executes a command and displays the result in the
terminal in full screen.
The -n option allows you to specify the number of seconds between each execution
of the command.
Note
To exit the watch command, you must type the keys: ⌃ Ctrl + C to kill the process.
Examples:
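• Monitor the end of a log file every five seconds (a hypothetical example):
watch -n 5 tail -n 5 /var/log/messages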
• Display a clock:
watch -t -n 1 date
Contrary to what its name might suggest, the install command is not used to
install new packages.
This command combines file copying ( cp ) and directory creation ( mkdir ), with
rights management ( chmod , chown ) and other useful functionalities (like backups).
Options:
Options Remarks
-m sets permissions
Note
There are options for managing the SELinux context (see the manual page).
Examples:
install -d ~/samples
These two operations could have been carried out with a single command:
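For example (a sketch; the source file path is hypothetical):
install -v -D -t ~/samples/ src/sample.txt
The -D option creates the missing directories of the target and -t specifies the destination directory.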
This command already saves time. Combine it with owner, owner group, and rights
management to improve the time savings:
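A sketch, with a hypothetical user, group and source file (changing the owner requires administrator rights):
sudo install -v -o rockstar -g users -m 644 -D -t /home/rockstar/samples/ src/sample.txt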
Note
You can also create a backup of existing files thanks to the -b option:
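For example (same hypothetical file as above):
install -v -b -D -t ~/samples/ src/sample.txt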
As you can see, the install command creates a backup file with a ~ tilde
appended to the original file name.
$ stat /root/anaconda-ks.cfg
File: /root/anaconda-ks.cfg
Size: 1352 Blocks: 8 IO Block: 4096 regular file
Device: 10302h/66306d Inode: 2757097 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-01-20 13:04:57.012033583 +0800
Modify: 2023-09-25 14:04:48.524760784 +0800
Change: 2024-01-24 16:37:34.315995221 +0800
Birth: 2
• Size - Displays the file size in bytes. If this is a directory, it displays the fixed
4096 bytes occupied by the directory name.
• Blocks - Displays the number of allocated blocks. Attention, please! The size of
each block in this command is 512 bytes. The default size of each block in ls -ls
is 1024 bytes.
• Inode - Inode is a unique ID number the Linux kernel assigns to a file or directory.
• Links - Number of hard links. Hard links are sometimes referred to as physical
links.
• Access - The last access time of files and directories, i.e. atime in GNU/Linux.
• Modify - The last modification time of files and directories, i.e. mtime in GNU/
Linux.
• Change - The last time the property is changed, i.e. ctime in GNU/Linux.
For files:
atime - After accessing the file content using commands such as cat , less , more ,
and head , the atime of the file can be updated. Please pay attention! The atime of
the file is not updated in real time; for performance reasons, it can take some time
before the new value is visible.
mtime - Modifying the file content can update the mtime of the file (such as
appending or overwriting the file content through redirection). Because the file size
is a property of the file, the ctime will also be updated simultaneously.
ctime - Changing the owner, group, permissions, file size, and links (soft and hard
links) of the file will update the ctime.
For directories:
atime - After using the cd command to enter a directory that has never been
accessed before, the atime of that directory is updated.
mtime - Performing operations such as creating, deleting, and renaming files in this
directory will update the mtime and ctime of the directory.
ctime - When the permissions, owner, group, etc. of a directory change, the ctime
of the directory will be updated.
Tip
• If you create a new file or directory, its atime , mtime , and ctime are exactly the same
• If the file content is modified, the mtime and ctime of the file will inevitably be updated.
• If a brand new file is created in the directory, the atime , ctime , and mtime of that directory will be updated simultaneously.
• If the mtime of a directory is updated, then the ctime of that directory must be updated.
6. VI Text Editor
In this chapter you will learn how to work with the VIsual editor.
Objectives: In this chapter, future Linux administrators will learn how to:
Knowledge:
Complexity:
Visual (VI) is a popular text editor under Linux despite its limited ergonomics. It is
indeed an editor entirely in text mode: each action is done with a key on the
keyboard or dedicated commands.
Very powerful, it is also very practical: its minimal requirements mean it is always
available, even in case of system failure. Its universality (it is present on all Linux
distributions and under Unix) makes it a crucial tool for the administrator.
6.1 vi command
Example:
vi /home/rockstar/file
Option Information
If the file exists at the location mentioned by the path, VI reads it and puts it in
command mode.
If the file does not exist, VI opens a blank file, displaying an empty page on the
screen. When the file is saved, it will take the name specified with the command.
If the command vi is executed without specifying a file name, VI opens a blank file
and displays an empty page on the screen. When the file is saved, VI will ask for a
file name.
The vim editor takes the interface and functions of VI with many improvements.
Among these improvements, the user has syntax highlighting, which is useful for
editing shell scripts or configuration files.
During a session, VI uses a buffer file to record all the user's changes.
Note
The original file is not modified as long as the user has not saved his work.
Tip
A line of text is ended by pressing Enter ⏎ , but if the screen is not wide enough, VI inserts automatic line breaks: this is the
wrap setting, enabled by default. If these line breaks are not desired, use the nowrap setting.
In command mode, press the uppercase Z key twice in a row ( ZZ ) to save and
exit.
You must add ! to the previous commands to force the exit without confirmation.
Warning
There is no periodic backup, so you must remember to save your work regularly.
• The ex mode.
The philosophy of VI is to alternate between the command mode and the insertion
mode.
The third mode, ex, is a footer command mode from an old text editor.
This is the default mode when VI starts up. To access it from any of the other
modes, simply press the ⎋ Esc key.
At this time, all keyboard typing is interpreted as commands and the corresponding
actions are executed. These are essentially commands for editing text (copy, paste,
undo, ...).
This is the text modification mode. To access it from the command mode, you have
to press special keys that will perform an action in addition to changing the mode.
The text is not entered directly into the file but into a buffer zone in the memory.
The changes are only effective when the file is saved.
This is the file modification mode. To access it, you must first switch to command
mode, then enter an ex command, which usually starts with the character : .
• Move the cursor left one or n characters:
← Left , n ← Left , h or n h
• Move the cursor right one or n characters:
→ Right , n → Right , l or n l
• Move the cursor up one or n lines:
↑ Up , n ↑ Up , k or n k
• Move the cursor down one or n lines:
↓ Down , n ↓ Down , j or n j
• Move to the end of the line:
$ or ⤓ End
• Move to the beginning of the line:
0 or ⤒ Home
If the cursor is in the middle of a word w moves to the next word, b moves to
the beginning of the word.
w or n w
b or n b
• Move to line n ( G alone moves to the last line of the file):
n G
• Move to the first line of the file:
g g
Note
VI switches to insertion mode. So you will have to press the ⎋ Esc key to return to command mode.
• Insert text before the cursor:
i (insert)
• Insert text after the cursor:
a (append)
• characters,
• words,
• lines.
• delete,
• replace,
• copy,
• cut,
• paste.
6.5.1 Characters
• Delete (cut) one or n characters:
x or n x
• Replace the character under the cursor with another:
r + character
• Replace more than one character, until ⎋ Esc is pressed:
R + characters + ⎋ Esc
6.5.2 Words
• Delete (cut) one or n words:
d + w or n + d + w
• Copy one or n words:
y + w or n + y + w
• Paste what has been copied or deleted once or n times after the cursor:
p or n + p
• Paste what has been copied or deleted once or n times before the cursor:
P or n + P
• Replace (change) a word:
c + w + word + ⎋ Esc
Tip
It is necessary to position the cursor under the first character of the word to cut (or copy) otherwise VI will cut (or copy) only the
part of the word between the cursor and the end. To delete a word is to cut it. If it is not pasted afterwards, the buffer is emptied and
the word is deleted.
6.5.3 Lines
• Delete (cut) one or n lines:
d + d or n + d + d
• Copy one or n lines:
y + y or n + y + y
• Paste what has been copied or deleted once or n times after the current line:
p or n + p
• Paste what has been copied or deleted once or n times before the current line:
P or n + P
• Delete (cut) from the beginning of the line to the cursor:
d + 0
• Delete (cut) from the cursor to the end of the line:
d + $
• Copy from the beginning of the line to the cursor:
y + 0
• Copy from the cursor to the end of the line:
y + $
• Delete (cut) the contents from the cursor line to the last line of the file:
d + G
• Delete (cut) the contents from the cursor line to the last line of the screen:
d + L
• Copy the content from the cursor line to the end of the file:
y + G
• Copy the content from the cursor line to the end of the screen
y + L
• Redo (cancel an undo):
⌃ Ctrl + R
6.6 EX commands
The Ex mode allows you to act on the file (saving, layout, options, ...). It is also in
Ex mode where search and replace commands are entered. The commands are
displayed at the bottom of the page and must be validated with the Enter ⏎ key.
• Show/hide numbering:
:set nu and :set nonu
• Search for a string after the cursor:
/string
• Search for a string before the cursor:
?string
• ^ : searches for the string at the beginning of a line.
Example:
/^Word
• $ : searches for the string at the end of a line.
Example:
/Word$
• * : The number of times the previous character matches, 0 times, or any number
of times.
Example:
/W*d
Note: To temporarily ignore case when matching strings, type :set ic .
From the 1st to the last line of the text, replace the searched string by the specified
string:
:1,$ s/search/replace
Note: You can also use :0,$s/search/replace to specify starting at the absolute
beginning of the file.
From line n to line m , replace the searched string with the specified string:
:n,m s/search/replace
By default, only the first occurrence found of each line is replaced. To force the
replacement of each occurrence, you have to add /g at the end of the command:
:n,m s/search/replace/g
Browse an entire file to replace the searched string with the specified string:
:% s/search/replace
Delete all empty lines:
:g/^$/d
Delete lines from n to m :
:n,m d
Delete the lines containing the string:
:g/string/d
Delete the lines not containing the string:
:g!/string/d
Delete the comment lines (beginning with # ):
:g/^#/d
Save the file:
:w
Save under another name:
:w file
Save lines from n to m to another file:
:n,m w file
Reload the last saved version of the file:
:e!
Paste the content of another file after the cursor:
:r file
Quit editing a file that has not been modified:
:q
Quit editing a file that has been modified during the session but not saved:
:q!
Quit and save:
:wq or :x
It is also possible to enter the Ex commands in a file named .exrc in the user's
login directory. The commands will be read and applied at each VI or VIM startup.
There is a tutorial for learning how to use VI. It is accessible with the command
vimtutor .
vimtutor
This mode is a sub-mode of the command mode. You can enter it by typing v or
V : the former operates at the character level, and the latter at the line level.
Info
You can use the arrow keys to mark the character or line content you want to operate on.
character level
• Delete (cut) - Type the v key to mark the character content you want to delete,
and then type x to delete it
• Copy - Type the v key to mark the character content to copy, and then type the
y key to copy it
line level
• Delete (cut) - Type the V key to mark the line to be deleted, and then type x
to delete it
• Copy - Type the V key to mark the line to copy, and then type the y key to
copy it
7. User Management
Objectives: In this chapter, future Linux administrators will learn how to:
users
Knowledge:
Complexity:
7.1 General
Each user must have a group called the user's primary group.
Groups other than the primary group are called the user's supplementary
groups.
Note
Each user has a primary group and can be invited into one or more supplementary groups.
Groups and users are managed by their unique numerical identifiers, GID and UID .
The kernel only knows UIDs and GIDs, which means the super administrator is not
necessarily the user named root : any user whose UID is 0 is a super administrator.
• /etc/passwd
• /etc/shadow
• /etc/group
• /etc/gshadow
• /etc/skel/
• /etc/default/useradd
• /etc/login.defs
Danger
You should always use the administration commands instead of manually editing the files.
Note
Some commands in this chapter require administrator rights. By convention, we will specify the command sudo when commands are
to be run with administrator rights. For the examples to work properly, please ensure your account has the right to use the sudo
command.
• /etc/group
• /etc/gshadow
Example:
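For example (the GID and the group name are arbitrary):
sudo groupadd -g 1012 GroupA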
Option Description
-f The system chooses a GID if the one specified by the -g option already exists.
-r Creates a system group with a GID between SYS_GID_MIN and SYS_GID_MAX . These two variables are defined
in /etc/login.defs .
Note
Under Debian, the administrator should use, except in scripts intended to be portable to all Linux distributions, the addgroup and
delgroup commands as specified in the man :
$ man addgroup
DESCRIPTION
adduser and addgroup add users and groups to the system according to command line options and configuration information
in /etc/adduser.conf. They are friendlier front ends to the low-level tools like useradd, groupadd and usermod programs,
by default, choosing Debian policy conformant UID and GID values, creating a home directory with skeletal configuration,
running a custom script, and other features.
The groupmod command allows you to modify an existing group on the system.
Example:
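For example, renaming a group and changing its GID (the values are arbitrary):
sudo groupmod -g 1016 -n GroupP GroupA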
Option Description
After modification, the files belonging to the group have an unknown GID . They
must be reassigned to the new GID .
groupdel group
Example:
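For example (a hypothetical group):
sudo groupdel GroupC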
Tip
• If a user has a unique primary group and you issue the groupdel command on that group, you will be prompted that there is a
specific user under the group and it cannot be deleted.
• If a user belongs to a supplementary group (not the primary group for the user) and that group is not the primary group for another
user on the system, then the groupdel command will delete the group without any additional prompts.
Tip
When you delete a user using the userdel -r command, the corresponding primary group is also deleted. The primary group name is
usually the same as the username.
Tip
Each group has a unique GID . Multiple users can use a group as a supplementary group. By convention, the GID of the super
administrator is 0. The GIDs reserved for some services or processes are 201-999, called system groups or pseudo-user groups. The
GIDs for users are usually greater than or equal to 1000. These values are related to /etc/login.defs , which we will talk about later.
Tip
Since a user is necessarily part of a group, it is best to create the groups before adding the users. Therefore, a group may not have
any members.
• 1: Group name.
• 2: Group password ( x indicates that the password, if any, is stored in /etc/gshadow ).
• 3: GID.
• 4: Group members (comma-separated list of login names).
Note
Each line in the /etc/group file corresponds to a group. The primary user info is stored in /etc/passwd .
This file contains the security information about the groups (separated by : ).
• 1: Group name.
• 2: Encrypted password.
• 3: Group administrators (comma-separated).
• 4: Group members (comma-separated).
Warning
The name of the group in /etc/group and /etc/gshadow must correspond one by one. That is, each line in the /etc/group file must
have a corresponding line in the /etc/gshadow file.
An ! in the password indicates it is locked. Thus, no user can use the password to
access the group (since group members do not need it).
7.3.1 Definition
A user is defined in the /etc/passwd file by the following fields (separated by : ):
• 1: Login name;
• 2: Password ( x indicates that the encrypted password is stored in /etc/shadow );
• 3: UID;
• 4: GID of the primary group;
• 5: Comments;
• 6: Home directory;
• 7: Login shell.
• /etc/passwd
• /etc/shadow
useradd [-u UID] [-g GID] [-d directory] [-s shell] login
Example:
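A hedged example (the UID, GID and paths are arbitrary):
sudo useradd -u 1000 -g 1013 -d /home/rockstar -s /bin/bash rockstar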
Option Description
-g GID GID of the primary group. The GID here can also be a group name .
-G GID1, GID of the supplementary groups. The GID here can also be a group name . It is possible to specify many
[GID2]... supplementary groups separated by commas.
-U Adds the user to a group with the same name created simultaneously. If not specified, the creation of a
group with the same name occurs when creating the user.
When invoking the useradd command without any options, the following default
settings are set for the new user:
• The user's UID and primary group GID values are automatically deduced. This is
usually a unique value between 1000 and 60,000.
Note
The default settings and values are obtained from the following configuration files:
$ tail -n 1 /etc/passwd
test1:x:1000:1000::/home/test1:/bin/bash
$ tail -n 1 /etc/shadow
test1:!!:19253:0:99999:7:::
• It is not recommended to start login names with digits or underscores, although
the system may allow it;
Warning
The administrator must create the parent directories of the home directory; only the
last directory is created by the useradd command, which takes the opportunity
to copy the files from /etc/skel into it.
Example:
Note
Under Debian, you will have to specify the -m option to force the creation of the login directory or set the CREATE_HOME variable in
the /etc/login.defs file. In all cases, the administrator should use the adduser and deluser commands as specified in the man ,
except in scripts intended to be portable to all Linux distributions:
$ man useradd
DESCRIPTION
**useradd** is a low-level utility for adding users. On Debian, administrators should usually use **adduser(8)**
instead.
Example:
Option Description
-b Defines the base directory for the user's home directory. If you do not specify this option, use the HOME
base_directory variable in the /etc/default/useradd file or /home/
-f Sets the number of days after the password expires before disabling the account.
Example:
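A hedged example, changing a user's UID (arbitrary values):
sudo usermod -u 1044 rockstar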
Option Description
-m Associated with the -d option. Moves the contents of the old login directory to the new one. If the new
home directory does not exist, it is created; if the old home directory does not exist, no new home
directory is created.
-l login Modifies the login name. After you modify the login name, you also need to modify the name of the home
directory to match it.
-L Locks the account permanently. That is, it adds an ! at the beginning of the /etc/shadow password field.
-a Appends the user's supplementary groups, which must be used together with the -G option.
-G Modifies the user's supplementary groups and overwrites previous supplementary groups.
Tip
After changing the identifier, the files belonging to the user have an unknown UID .
It must be reassigned to the new UID .
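One way to do this (a sketch using find):
sudo find / -uid 1000 -exec chown 1044 {} \;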
Where 1000 is the old UID and 1044 is the new one. Examples are as follows:
$ usermod -L test1
$ grep test1 /etc/shadow
test1:!$6$n.hxglA.X5r7X0ex$qCXeTx.kQVmqsPLeuvIQnNidnSHvFiD7bQTxU7PLUCmBOcPNd5meqX6AEKSQvCLtbkdN
$ usermod -U test1
The difference between the -aG option and the -G option can be explained by the
following example:
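A sketch with hypothetical groups:
sudo usermod -G GroupP test1     # the supplementary groups of test1 are now exactly GroupP
sudo usermod -aG GroupA test1    # GroupA is added; the previous supplementary groups are kept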
Option Description
-r Deletes the user's home directory and mail files located in the /var/spool/mail/ directory
Each line of the /etc/passwd file contains the following fields (separated by : ):
• 1: Login name;
• 2: Password ( x indicates that the encrypted password is stored in /etc/shadow );
• 3: UID;
• 4: GID of the primary group;
• 5: Comments;
• 6: Home directory;
• 7: Login shell.
• 1: Login name.
• 2: Encrypted password.
• 3: The time when the password was last changed, the timestamp format, in days.
The so-called timestamp is based on January 1, 1970 as the standard time. Every
time one day goes by, the timestamp is +1.
• 4: Minimum lifetime of the password, that is, the minimum time interval between two
password changes (related to the third field), in days. Defined by the
PASS_MIN_DAYS of /etc/login.defs ; the default is 0, meaning the password can be
changed again at any time. A value of 5 means that the password cannot be changed again
within 5 days; only after 5 days is a new change allowed.
• 5: Maximum lifetime of the password. That is, the validity period of the password
(related to the third field). Defined by the PASS_MAX_DAYS of /etc/login.defs .
• 6: The number of warning days before the password expires (related to the fifth
field). The default is 7 days, defined by the PASS_WARN_AGE of /etc/login.defs .
• 7: Number of days of grace after password expiration (related to the fifth field).
• 8: Account expiration time, in timestamp format, in days. Note that an account
expiration differs from a password expiration. When an account expires, the user is
no longer allowed to log in at all. When a password expires, the user is not allowed
to log in using that password.
Danger
For each line in the /etc/passwd file there must be a corresponding line in the /etc/shadow file.
For time stamp and date conversion, please refer to the following command format:
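For instance, with GNU date, using the value 19253 seen in the /etc/shadow example above:
$ date -u -d "1970-01-01 +19253 days" +%F
2022-09-18
$ echo $(( $(date -u -d "2022-09-18" +%s) / 86400 ))
19253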
Danger
By default, the primary group of the user creating the file is the group that owns
the file.
chown command
Examples:
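(Hypothetical user, group, and file names.)
$ sudo chown albert afile          # changes the owner only
$ sudo chown albert:GroupA afile   # changes the owner and the owner group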
Option Description
-R Recursively changes the owners of the directory and all files under the directory.
In the following example the group assigned will be the primary group of the
specified user.
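For instance (the user name is hypothetical):
$ sudo chown albert: afile   # the group becomes albert's primary group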
The chgrp command allows you to change the owner group of a file.
Example:
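A plausible call, with an illustrative group and file name:
$ sudo chgrp GroupA afile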
Option Description
-R Recursively changes the groups of the directory and all files under the directory.
Note
It is possible to apply to a file an owner and an owner group by taking as reference those of another file:
For example:
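With GNU chown this is the --reference option (file names are illustrative):
$ sudo chown --reference=ref_file target_file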
Examples:
Option Description
-a USER Adds the user to the group. For the added user, this group is a supplementary group.
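For instance (user and group names are illustrative):
# gpasswd -a alain GroupA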
# gpasswd GroupeA
New Password:
Re-enter new password:
Note
In addition to using gpasswd -a to add users to a group, you can also use the usermod -G or usermod -aG mentioned earlier.
7.5.2 id command
id USER
Example:
$ sudo id alain
uid=1000(alain) gid=1000(GroupA) groups=1000(GroupA),1016(GroupP)
The newgrp command can select a group from the user's supplementary groups as
the user's new temporary primary group. Every time newgrp switches a user's
primary group, a new child shell (child process) is created. Be careful! A child
shell and a sub shell are different.
newgrp [secondarygroups]
Example:
$ su - test1
$ touch a.txt
$ ll
-rw-rw-r-- 1 test1 test1 0 10 7 14:02 a.txt
$ echo $SHLVL ; echo $BASH_SUBSHELL
1
0
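# (Assumed continuation of the example: switch the temporary primary group to a
# hypothetical supplementary group GroupP of test1)
$ newgrp GroupP
$ touch b.txt
$ ls -l b.txt    # the owner group of b.txt is now GroupP
$ echo $SHLVL ; echo $BASH_SUBSHELL
2
0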
# You can exit the child shell using the `exit` command
$ exit
$ logout
$ whoami
root
7.6 Securing
Examples:
Option Description
-l Permanently locks the user account. For root (uid=0) use only.
-n DAYS Defines the minimum password lifetime. Permanent change. For root (uid=0) use only.
-x DAYS Defines the maximum password lifetime. Permanent change. For root (uid=0) use only.
-w DAYS Defines the warning time before expiration. Permanent change. For root (uid=0) use only.
-i DAYS Defines the delay before deactivation when the password expires. Permanent change. For root (uid=0) use
only.
Use passwd -l , that is, add "!!" at the beginning of the password field of the
corresponding user in /etc/shadow .
Example:
[alain]$ passwd
Note
Users logged in to the system can use the passwd command to change their passwords (this process requires requesting the user's
old password). The root(uid=0) user can change the password of any user.
When managing user accounts by shell script, setting a default password after
creating the user may be useful.
Example:
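A minimal sketch with a hypothetical account and password; the --stdin option of passwd is specific to
RHEL-family distributions such as Rocky Linux:
[root]# useradd testuser
[root]# echo 'MyS3cret' | passwd --stdin testuser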
Warning
chage [-d date] [-E date] [-I days] [-l] [-m days] [-M days] [-W days] [login]
Example:
Option Description
-I DAYS Defines the days to delay before deactivation, password expired. Permanent change.
-d LAST_DAY Defines the number of days since the password was last changed. You can use the days' timestamp style or
the YYYY-MM-DD style. Permanent change.
-E EXPIRE_DATE Defines the account expiration date. You can use the days' timestamp style or the YYYY-MM-DD style.
Permanent change.
-W WARN_DAYS Defines the number of days warning time before expiration. Permanent change.
Examples:
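For instance, with a hypothetical account:
$ sudo chage -l rockstar            # displays the current password aging policy
$ sudo chage -M 90 -W 14 rockstar   # password valid 90 days, warning 14 days before expiry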
Configuration files:
• /etc/default/useradd
• /etc/login.defs
• /etc/skel
Note
Tip
If the options are not specified when creating a user, the system uses the default
values defined in /etc/default/useradd .
This file is modified by the command useradd -D ( useradd -D entered without any
other option displays the contents of the /etc/default/useradd file).
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
Parameters Comment
HOME Defines the directory path of the upper level of the common user's home directory.
INACTIVE Defines the number of days of grace after password expiration. Corresponds to the 7th field of the
/etc/shadow file. -1 value means that the grace period feature is turned off.
EXPIRE Defines the account expiration date. Corresponds to the 8th field of the /etc/shadow file.
If you do not need a primary group with the same name when creating users, you
can do this:
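One plausible way is the -N (no user group) option; the user then receives the default group defined by
the system instead of a private group with the same name:
$ sudo useradd -N testuser2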
Note
2. Private group, that is, when adding users, a group with the same name is created as its primary group. This group mechanism is
commonly used by RHEL and related downstream distributions.
SYS_UID_MAX 999
GID_MIN 1000
GID_MAX 60000
SYS_GID_MIN 201
SYS_GID_MAX 999
CREATE_HOME yes
USERGROUPS_ENAB yes
ENCRYPT_METHOD SHA512
UMASK 022 : This means that the maximum permission of a newly created item would be 755 (rwxr-xr-x).
However, for security, GNU/Linux does not give the x permission to newly created
files, so they are created with 644 (rw-r--r--). This restriction applies to root(uid=0) and
ordinary users(uid>=1000). For example:
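A minimal illustration, with a hypothetical file created by root:
Shell > touch newfile
Shell > ls -l newfile    # -rw-r--r-- : 644, the execute bit is never set at creation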
HOME_MODE 0700 : The permissions of an ordinary user's home directory. Does not
work for root's home directory.
USERGROUPS_ENAB yes : This is why, when you delete a user using the userdel -r command, the
primary group with the same name is also deleted.
When a user is created, their home directory and environment files are created.
You can think of the files in the /etc/skel/ directory as the file templates you need
to create users.
• .bash_logout
• .bash_profile
• .bashrc
All files and directories placed in this directory will be copied to the user tree when
created.
7.8.1 su command
The su command allows you to change the identity of the connected user.
Examples:
$ sudo su - alain
[albert]$ su - root -c "passwd alain"
Option Description
Standard users will have to type the password for the new identity.
Tip
You can use the exit / logout command to exit users who have been switched. It should be noted that after switching users, there is
no new child shell or sub shell , for example:
$ whoami
root
$ echo $SHLVL ; echo $BASH_SUBSHELL
1
0
$ su - test1
$ echo $SHLVL ; echo $BASH_SUBSHELL
1
0
$ whoami
test1
$ su root
$ pwd
/home/test1
$ env
...
USER=test1
PWD=/home/test1
HOME=/root
MAIL=/var/spool/mail/test1
LOGNAME=test1
...
$ whoami
test1
$ su - root
$ pwd
/root
$ env
...
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
LOGNAME=root
...
So, when you want to switch users, remember not to omit the - . Without it, the
necessary environment variable files are not loaded, and some programs may not
run correctly.
8. File System
In this chapter, you will learn how to work with file systems.
Objectives: In this chapter, future Linux administrators will learn how to:
Knowledge:
Complexity:
8.1 Partitioning
The partition table, stored in the first sector of the disk (MBR: Master Boot
Record), records the division of the physical disk into partitioned volumes.
For MBR partition table types, the same physical disk can be divided into a
maximum of 4 partitions:
• Extended partition
Warning
There can be only one extended partition per physical disk. That is, a physical disk can have in the MBR partition table up to:
The extended partition cannot hold data and cannot be formatted; it can only contain logical partitions. The largest physical disk that the MBR
partition table can recognize is 2TB.
In the world of GNU/Linux, everything is a file. For disks, they are recognized in
the system as:
Mouse /dev/mouse
What we call devices are the files stored under /dev , identifying the different
hardware detected by the motherboard.
The service called udev is responsible for applying the naming conventions (rules)
to the devices it detects.
The number after the block device (storage device) indicates a partition. For MBR
partition tables, the number 5 always corresponds to the first logical partition.
Warning
Attention please! The partition number we mentioned here mainly refers to the partition number of the block device (storage device).
There are at least two commands for partitioning a disk: fdisk and cfdisk . Both
commands have an interactive menu. cfdisk is more reliable and better optimized,
so it is best to use it.
The only reason to use fdisk is when you want to list all logical devices with the
-l option. fdisk only handles MBR partition tables, so it does not support GPT
partition tables and cannot process disks larger than 2TB.
sudo fdisk -l
sudo fdisk -l /dev/sdc
sudo fdisk -l /dev/sdc2
The parted (partition editor) command can partition a disk without the drawbacks
of fdisk .
The parted command can be used on the command line or interactively. It also has
a recovery function capable of rewriting a deleted partition table.
Under the graphical interface, there is the very complete gparted tool: Gnome
PARtition EDitor.
The parted command, when run without any arguments, enters an interactive
mode with its internal options:
• print all in this mode has the same result as parted -l on the command
line.
cfdisk device
Example:
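For instance, on a hypothetical second disk:
$ sudo cfdisk /dev/sdb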
The preparation, without LVM, of the physical media goes through five steps:
• Creation of the file systems (allows the operating system to manage the files, the
tree structure, the rights, ...);
• Mounting of file systems (registration of the file system in the tree structure);
A standard partition cannot dynamically adjust its use of the hard disk's resources:
once the partition is created and mounted, its capacity is completely fixed, a
constraint that is unacceptable on a server. Although a standard partition can be
forcibly expanded or shrunk through certain technical means, this can easily cause
data loss. LVM solves this problem very well. LVM has been available under Linux
since kernel version 2.4, and its main features are:
• a logical abstraction layer is added between the physical disk (or disk partition)
and the file system
The physical media: The storage medium of the LVM can be the entire hard disk,
disk partition, or RAID array. The device must be converted, or initialized, to an
LVM Physical Volume(PV), before further operations can be performed.
PV(Physical Volume) is the basic storage logic block of LVM. You can create a
physical volume by using a disk partition or the disk itself.
PE: The smallest unit of storage that can be allocated in a Physical Volume, 4MB by
default. A different size can be specified.
LE: The smallest unit of storage that can be allocated in a Logical Volume. In the
same VG, PE, and LE are the same and correspond one to one.
The disadvantage is that if one of the physical volumes fails, then
all the logical volumes that use this physical volume are lost. You will therefore have to use
LVM on RAID disks.
Note
LVM is only managed by the operating system. Therefore the BIOS needs at least one partition without LVM to boot.
Info
In the physical disk, the smallest storage unit is the sector, in the file system, the smallest storage unit of GNU/Linux is the block,
which is called cluster in the Windows operating system. In RAID, the smallest storage unit is chunk.
There are several storage mechanisms when storing data to LV, two of which are:
• Linear volumes;
• Mirrored volumes.
Item PV VG LV
pvcreate command
The pvcreate command is used to create physical volumes. It turns Linux partitions
(or disks) into physical volumes.
Example:
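For example, turning a hypothetical partition into a physical volume:
$ sudo pvcreate /dev/sdb1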
You can also use a whole disk (which facilitates disk size increases in virtual
environments for example).
Option Description
-f Forces the creation of the volume (disk already transformed into physical volume). Use with extreme
caution.
vgcreate command
The vgcreate command creates volume groups. It groups one or more physical
volumes into a volume group.
Example:
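For example, grouping the physical volume created above into a hypothetical volume group vg01 :
$ sudo vgcreate vg01 /dev/sdb1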
lvcreate command
The lvcreate command creates logical volumes. The file system is then created on
these logical volumes.
Example:
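For example, carving a hypothetical logical volume out of vg01 :
$ sudo lvcreate -l 100 -n lv_data vg01   # 100 PEs of 4MB = 400MB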
Option Description
-n name Sets the LV name. A special file with this name is created under /dev/VG_name/ .
-l number Sets the percentage of the capacity of the hard disk to use. You can also use the number of PE. One PE
equals 4MB.
Info
After you create a logical volume with the lvcreate command, the naming rule of the operating system is - /dev/VG_name/LV_name , this
file type is a soft link (otherwise known as a symbolic link). The link file points to files like /dev/dm-0 and /dev/dm-1 .
pvdisplay command
The pvdisplay command allows you to view information about the physical
volumes.
pvdisplay /dev/PV_name
Example:
vgdisplay command
The vgdisplay command allows you to view information about volume groups.
vgdisplay VG_name
Example:
lvdisplay command
The lvdisplay command allows you to view information about the logical volumes.
lvdisplay /dev/VG_name/LV_name
Example:
The preparation of the physical media with LVM is broken down into the
following steps:
The Linux operating system is able to use different file systems (ext2, ext3, ext4,
FAT16, FAT32, NTFS, HFS, BtrFS, JFS, XFS, ...).
The mkfs (make file system) command allows you to create a Linux file system.
Example:
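For example, formatting the logical volume created above (the names are illustrative):
$ sudo mkfs -t ext4 /dev/vg01/lv_data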
Option Description
Warning
Each file system has an identical structure on each partition. When the file system
is created, the system initializes a Boot Sector, a Super block, an Inode
table and a Data block area.
Note
The boot sector is the first sector of bootable storage media, that is, 0 cylinder, 0
track, 1 sector(1 sector equals 512 bytes). It consists of three parts:
Item Description
MBR Stores the "boot loader"(or "GRUB"); loads the kernel, passes parameters; provides a menu interface at
boot time; transfers to another loader, such as when multiple operating systems are installed.
The size of the Super block table is defined at creation. It is present on each
partition and contains the elements necessary for its utilization.
After the system is initialized, a copy is loaded into the central memory. This copy is
updated as soon as modified, and the system saves it periodically (command sync ).
When the system stops, it copies this table in memory to its block.
The size of the inode table is defined at its creation and is stored on the partition.
It consists of records, called inodes, corresponding to the files created. Each record
contains the addresses of the data blocks making up the file.
Note
After the system is initialized, a copy is loaded into the central memory. This copy is
updated as soon as it is modified, and the system saves it periodically (command
sync ).
When the system stops, it copies this table in memory to its block.
Note
The size of the inode table determines the maximum number of files the FS can contain.
• Inode number;
• Table of several pointers (block table) to the logical blocks containing the file
pieces.
Its size corresponds to the rest of the partition's available space. This area contains
the catalogs corresponding to each directory and the data blocks corresponding to
the file's contents.
These tables are written to the hard disk when the system is shut down.
Attention
In the event of a sudden stop, the file system may lose its consistency and cause data loss.
It is possible to check the consistency of a file system with the fsck command.
In case of errors, solutions are proposed to repair the inconsistencies. After repair,
files that remain without entries in the inode table are attached to the logical
drive's /lost+found folder.
fsck command
The fsck command is a console-mode integrity check and repair tool for Linux file
systems.
Example:
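For example, on a hypothetical partition that is not currently mounted:
$ sudo fsck /dev/sdb1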
To check the root partition, it is possible to create a forcefsck file and reboot or
run shutdown with the -F option.
Warning
Note
Linux meets the FHS (Filesystems Hierarchy Standard) (see man hier ), which
defines the folders' names and roles.
/sbin Commands necessary for system startup and repair (system binaries)
/proc Mount point for the proc filesystem, which provides information about running processes and the
kernel (processes)
/var Contains files which may change in size, such as spool and log files (variable files)
• To mount or unmount at the tree level, you must not be under its mount point.
• Mounting on a non-empty directory does not delete the content. It is only hidden.
The /etc/fstab file is read at system startup and contains the mounts to be
performed. Each file system to be mounted is described on a single line, the fields
being separated by spaces or tabs.
Note
Column Description
5 Enable or disable backup management (0:not backed up, 1:backed up). The dump command is used for
backup here. This outdated feature was initially designed to back up old file systems on tape.
6 Check order when checking the FS with the fsck command (0:no check, 1:priority, 2:not priority)
The mount -a command allows you to mount automatically based on the contents of
the configuration file /etc/fstab . The mounted information is then written to /etc/
mtab .
Warning
Only the mount points listed in /etc/fstab will be mounted on reboot. Generally speaking, we do not recommend writing USB flash
disks and removable hard drives to the /etc/fstab file because when the external device is unplugged and rebooted, the system will
prompt that the device cannot be found, resulting in a failure to boot. So what am I supposed to do? Temporary mount, for example:
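# A minimal sketch, assuming the USB flash disk appears as /dev/sdb1
Shell > mkdir -p /mnt/usb
Shell > mount -t vfat /dev/sdb1 /mnt/usb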
# When not needed, execute the following command to pull out the USB flash disk
Shell > umount /mnt/usb
Info
It is possible to make a copy of the /etc/mtab file or to copy its contents to /etc/fstab . If you want to view the UUID of the device
partition number, type the following command: lsblk -o name,uuid . UUID is the abbreviation of Universally Unique Identifier .
mount command
The mount command allows you to mount and view the logical drives in the tree.
Example:
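For instance (illustrative device and mount point, which must already exist):
$ sudo mount -t ext4 /dev/sdc1 /mnt/data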
Option Description
Note
The mount command alone displays all mounted file systems. If the mount parameter is -o defaults , it is equivalent to -o
rw,suid,dev,exec,auto,nouser,async and these parameters are independent of the file system. If you need to browse special mount
options related to the file system, please read the "Mount options FS-TYPE" section in man 8 mount (FS-TYPE is replaced with the
corresponding file system, such as ntfs, vfat, ufs, etc.)
umount command
Example:
Option Description
Note
When unmounting, you must not remain below the mount point. Otherwise, the following error message is displayed: device is
busy .
As in any system, it is important to respect the file naming rules to navigate the
tree structure and file management.
• Most files have no notion of a file extension. In the GNU/Linux world, file
extensions are generally not required, except for a few (for example, .jpg, .mp4, .gif,
etc.).
Note
While nothing is technically wrong with creating a file or directory with a space, it is generally a "best practice" to avoid this and
replace any space with an underscore.
Note
The . at the beginning of the file name only hides it from a simple ls .
• .tgz : data file archived with the tar utility and compressed with the gzip
utility;
Part Description
1 Inode number
2 File type (1st character of the block of 10), "-" means this is an ordinary file.
4 If this is a directory, this number represents how many subdirectories there are in that directory, including
hidden ones. If this is a file, it indicates the number of hard links. A value of 1 means there is only one
hard link.
- Represents an ordinary file. Including plain text files (ASCII); binary files (binary); data format files (data);
various compressed files.
b Represents a block device file. It includes hard drives, USB drives, and so on.
c Represents a character device file. Interface device of serial port, such as mouse, keyboard, etc.
p Represents a pipe file. It is a special file type. The main purpose is to solve the errors caused by multiple
programs accessing a file simultaneously. FIFO is the abbreviation of first-in-first-out.
l Represents a soft link file, also called a symbolic link file, similar to a shortcut in Windows. A hard link
file, also known as a physical link file, is not a separate type and appears as an ordinary file.
Each directory has two hidden entries: . and .. . You need to use ls -al to view them, for
example:
# . Indicates that in the current directory, for example, you need to execute a
script in a directory, usually:
Shell > ./scripts
# .. represents the directory one level above the current directory, for
example:
Shell > cd /etc/
Shell > cd ..
Shell > pwd
/
# For an empty directory, its fourth part must be greater than or equal to 2.
Because there are "." and ".."
Shell > mkdir /tmp/t1
Shell > ls -ldi /tmp/t1
1179657 drwxr-xr-x 2 root root 4096 Nov 14 18:41 /tmp/t1
Special files
To communicate with peripherals (hard disks, printers, etc.), Linux uses interface
files called special files (device file or special file). These files allow the peripherals
to identify themselves.
These files are special because they do not contain data but specify the access
mode to communicate with the device.
• block mode;
• character mode.
Communication files
• Pipe files pass information between processes by FIFO (First In, First Out). One
process writes transient information to a pipe file, and another reads it. After
reading, the information is no longer accessible.
Link files
These files allow the possibility of giving several logical names to the same physical
file, creating a new access point to the file.
Soft link file This file is similar to a shortcut for Windows. It has permission of 0777 and points to the original file.
When the original file is deleted, you can use ls -l to view the output information of the soft link file. In
the output information, the file name of the soft link appears in red, and the pointed original file appears
in red with a flashing prompt.
Hard link file This file represents different mappings occupying the same inode number. They can be updated
synchronously (including file content, modification time, owner, group affiliation, access time, etc.). Hard-
linked files cannot span partitions and file systems and cannot be used in directories.
# When deleting the original file. "-s" represents the soft link option
Shell > touch /root/Afile
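# (Assumed continuation: create the soft link, delete the original, then observe the broken link)
Shell > ln -s /root/Afile /root/slink
Shell > rm -f /root/Afile
Shell > ls -l /root/slink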
# The ln command does not add any options, indicating a hard link
Shell > ln /home/paul/letter /home/jack/read
# The essence of hard links is the file mapping of the same inode number in
different directories.
Shell > ls -li /home/*/*
666 -rwxr--r-- 2 root root … letter
666 -rwxr--r-- 2 root root … read
r Read. Allows reading a file ( cat , less , ...) and copying a file ( cp , ...).
w Write. Allows modification of the file content ( cat , >> , vim , ...).
- No right
Directory Description
permissions
w Write. Allows you to create, and delete files/directories in this directory, such as commands mkdir ,
rmdir , rm , touch , and so on.
- No right
Info
For a directory's permissions, r and x usually appear at the same time. Moving or renaming a file depends on whether the directory
where it is located has w permission, and so does deleting a file.
u Owner
g Owner group
o Others users
Info
In some commands, you can use a (all) to represent ugo. For example: chmod a+x FileName is equivalent to chmod u+x,g+x,o+x FileName
or chmod ugo+x FileName .
The display of rights is done with the command ls -l . It is the last 9 characters of
the block of 10. More precisely 3 times 3 characters.
[root]# ls -l /tmp/myfile
-rwxrw-r-x 1 root sys ... /tmp/myfile
1 2 3 4 5
Part Description
4 File owner
By default, the owner of a file is the one who created it. The group of the file is the
group of the owner who created the file. The others are those not concerned by the
previous cases.
Only the administrator and the owner of a file can change the rights of a file.
chmod command
The chmod command allows you to change the access permissions to a file.
Option Observation
-R Recursively change the permissions of the directory and all files under the directory.
Warning
The rights of files and directories are not dissociated. For some operations, it will be necessary to know the rights of the directory
containing the file. A write-protected file can be deleted by another user as long as the rights of the directory containing it allow this
user to perform this operation.
Number Description
4 r
2 w
1 x
0 -
Add the three numbers together to get one user type permission. E.g. 755=rwxr-
xr-x.
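For instance, a couple of hypothetical invocations using the octal notation:
$ chmod 755 myscript.sh   # rwxr-xr-x
$ chmod 640 notes.txt     # rw-r-----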
Info
Sometimes you will see chmod 4755 . The number 4 here refers to the special permission set uid. Special permissions will not be
expanded here for the moment, just as a basic understanding.
[root]# ls -l /tmp/fil*
-rwxrwx--- 1 root root … /tmp/file1
-rwx--x--- 1 root root … /tmp/file2
-rwx--xr-- 1 root root … /tmp/file3
SYMBOLIC REPRESENTATION
The principle is to subtract the value defined by the mask from the maximum rights
(newly created files never receive the execution right).
For a directory:
Info
The /etc/login.defs file defines the default UMASK, with a value of 022. This means the maximum permission of a newly created item would be 755 (rwxr-xr-
x). However, for the sake of security, GNU/Linux does not give x permission to newly created files, so they are created with 644. This restriction applies to
root(uid=0) and ordinary users(uid>=1000).
# root user
Shell > touch a.txt
Shell > ll
-rw-r--r-- 1 root root 0 Oct 8 13:00 a.txt
The umask command allows you to display and modify the mask.
Example:
$ umask 033
$ umask
0033
$ umask -S
u=rwx,g=r,o=r
$ touch umask_033
$ ls -la umask_033
-rw-r--r-- 1 rockstar rockstar 0 nov. 4 16:44 umask_033
$ umask 025
$ umask -S
u=rwx,g=rx,o=w
$ touch umask_025
$ ls -la umask_025
-rw-r---w- 1 rockstar rockstar 0 nov. 4 16:44 umask_025
Option Description
Warning
umask does not affect existing files. umask -S displays the file rights (without the execute right) of the files that will be created. So, it
is not the display of the mask used to subtract the maximum value.
Note
In the above example, using commands to modify masks applies only to the currently connected session.
Info
The umask command belongs to bash's built-in commands, so when you use man umask , all built-in commands will be displayed. If you
only want to view the help of umask , you must use the help umask command.
To keep the value, you have to modify the following profile files:
• /etc/profile
• /etc/bashrc
• ~/.bashrc
When the above file is written, it actually overrides the UMASK parameter of
/etc/login.defs . If you want to improve the security of the operating system, you
can set umask to 027 or 077.
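A minimal sketch, setting a per-user value in a hypothetical account:
$ echo "umask 027" >> ~/.bashrc
$ source ~/.bashrc
$ umask
0027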
9. Process Management
Objectives: In this chapter, future Linux administrators will learn how to:
process, linux
Knowledge:
Complexity:
9.1 Generalities
When a program runs, the system will create a process by placing the program
data and code in memory and creating a runtime stack. A process is an instance
of a program with an associated processor environment (ordinal counter, registers,
etc...) and memory environment.
The PID number represents the process at the time of execution. When the process
finishes, the number is available again for another process. Running the same
command several times will produce a different PID each time.
Note
Processes are not to be confused with threads. Each process has its memory context (resources and address space), while threads
from the same process share this context.
Example:
# ps -fu root
Option Description
Option Description
Without an option specified, the ps command only displays processes running from
the current terminal.
# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Jan01 ? 00:00:03 /sbin/init
Column Description
1 598 lvmetad 0
1 643 auditd -4
1 668 rtkit-daemon 1
1 670 sssd 0
• is not associated with any terminal and is owned by a system user (often root )
System processes are therefore called daemons (Disk And Execution MONitor).
The user's credentials are passed to the created process when a command is
executed.
By default, the process's effective UID and GID are identical to its
real UID and GID (the UID and GID of the user who executed the command).
When a SUID (and/or SGID ) is set on a command, the effective UID (and/or GID )
becomes that of the owner (and/or owner group) of the command and no longer
that of the user or user group that issued the command. Effective and real UIDs
are therefore different.
Each time a file is accessed, the system checks the rights of the process according
to its effective identifiers.
Therefore, the total processing time available is divided into small ranges, and each
process (with a priority) accesses the processor sequentially. The process will take
several states during its life among the states:
When a parent process dies, their children are said to be orphans. They are then
adopted by the init process, which will destroy them.
• Nice value: a parameter used to adjust the priority of an ordinary process. The
range is -20 to 19.
• synchronous: the user loses access to the shell during command execution. The
command prompt reappears at the end of the process execution.
• the command or script must not return any result on the screen
Example:
kill -9 1664
18 SIGCONT Resumes the process. Processes that use the SIGSTOP signal can use it to continue
running
19 SIGSTOP Suspends the process (Stops process). The effect of this signal is equivalent to
⌃ Ctrl + z
Signals are the means of communication between processes. The kill command
sends a signal to a process.
Tip
The complete list of signals taken into account by the kill command is available by typing the command:
$ man 7 signal
nohup command
Example:
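A plausible invocation, assuming a hypothetical long-running script:
$ nohup ./long_task.sh 0</dev/null &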
nohup ignores the SIGHUP signal sent when a user logs out.
Note
nohup handles standard output and error but not standard input, hence the redirection of this input to /dev/null .
A synchronous process can be suspended by pressing the ⌃ Ctrl + z
keys simultaneously. Access to the prompt is restored after displaying the number
of the process that has just been suspended.
The & statement executes the command asynchronously (the command is then
called a job) and displays the job number. Access to the prompt is then returned.
Example:
[CTRL]+[Z]
^Z
[1]+ Stopped
$ bg 1
[1] 15430
$
Whether it was put in the background when it was created with the & argument or
later with the ⌃ Ctrl + z keys, a process can be brought back to the foreground
with the fg command and its job number.
The jobs command displays the list of processes running in the background and
specifies their job number.
Example:
$ jobs
[1]- Running sleep 1000
[2]+ Running find / > arbo.txt
1. job number
3. a + : The process selected by default for the fg and bg commands when no job
number is specified
6. the command
The command nice allows the execution of a command by specifying its priority.
Usage example:
Unlike root , a standard user can only reduce the priority of a process and only
values between 0 and 19 will be accepted.
As shown in the example above, the first three commands indicate setting the Nice
value to "-5", while the second command is our recommended usage. The fourth
command indicates setting the Nice value to "5". For the fifth command, not typing
any options means that the Nice value is set to "10".
Tip
Directly typing the nice command will return the Nice value of the current shell.
You can lift the Nice value limit for each user or group by modifying the /etc/security/limits.conf file.
The renice command allows you to change the priority of a running process.
Example:
renice -n 15 -p 1664
Option Description
Tip
The pidof command, coupled with the xargs command (see the Advanced Commands course), allows a new priority to be applied in
a single command:
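A plausible form, assuming several sleep processes are running:
$ pidof sleep | xargs renice -n 20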
To adapt to different distributions, you should try to use command forms such as
nice -n 5 or renice -n 6 as much as possible.
The top command displays the processes and their resource consumption.
$ top
PID USER PR NI ... %CPU %MEM TIME+ COMMAND
2514 root 20 0 15 5.5 0:01.14 top
Column Description
PR Process priority.
NI Nice value.
The top command allows control of the processes in real-time and in interactive
mode.
The pgrep command searches the running processes for a process name and
displays the PID matching the selection criteria on the standard output.
The pkill command will send each process the specified signal (by default
SIGTERM).
pgrep process
pkill [option] [-signal] process
Examples:
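For instance ( sshd is just an illustrative process name):
$ pgrep -l sshd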
pkill tomcat
Note
Before you kill a process, it's best to know exactly what it is for; otherwise, it can lead to system crashes or other unpredictable
problems.
In addition to sending signals to the relevant processes, the pkill command can
also end the user's connection session according to the terminal number, such as:
pkill -t pts/1
The killall command's function is roughly the same as that of the pkill command. The
usage is killall [option] [ -s SIGNAL | -SIGNAL ] NAME . The default signal is
SIGTERM.
Options Description
Example:
killall tomcat
The pstree command displays processes in a tree style, and its usage is pstree
[option] .
Option Description
$ pstree -pnhu
systemd(1)─┬─systemd-journal(595)
├─systemd-udevd(625)
├─auditd(671)───{auditd}(672)
├─dbus-daemon(714,dbus)
├─NetworkManager(715)─┬─{NetworkManager}(756)
│ └─{NetworkManager}(757)
├─systemd-logind(721)
├─chronyd(737,chrony)
├─sshd(758)───sshd(1398)───sshd(1410)───bash(1411)───pstree(1500)
├─tuned(759)─┬─{tuned}(1376)
│ ├─{tuned}(1381)
│ ├─{tuned}(1382)
│ └─{tuned}(1384)
├─agetty(763)
├─crond(768)
├─polkitd(1375,polkitd)─┬─{polkitd}(1387)
│ ├─{polkitd}(1388)
│ ├─{polkitd}(1389)
│ ├─{polkitd}(1390)
│ └─{polkitd}(1392)
└─systemd(1401)───(sd-pam)(1404)
orphan process: When a parent process dies, its children are said to be
orphans. The init process adopts these special state processes and collects their
exit status until they are destroyed. Conceptually speaking, an orphan process
does not cause any harm.
zombie process: After a child process completes its work and is terminated, its
parent process needs to call the signal processing function wait() or waitpid() to
obtain the termination status of the child process. If the parent process does not do
so, although the child process has already exited, it still retains some exit status
information in the system process table. Because the parent process cannot obtain
the status information of the child process, these processes will continue to occupy
resources in the process table. We refer to processes in this state as zombies.
Hazard:
How can we check for any zombie processes in the current system?
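One plausible way to list them relies on the state column (S) of the ps -le output:
$ ps -le | awk '$2=="Z" {print}'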
In this chapter, you will learn how to back up and restore your data using Linux.
Objectives: In this chapter, future Linux administrators will learn how to:
Knowledge:
Complexity:
Note
Throughout this chapter, the command structures use "device" to specify both a target location for backup and the source location
when restoring. The device can be either external media or a local file. You should get a feel for this as the chapter unfolds, but you
can always refer back to this note for clarification if you need to.
The backup will answer the need to conserve and restore data effectively.
The backup media should be kept in another room (or building) than the server so
that a disaster does not destroy the server and the backups.
In addition, the administrator must regularly check that the media are still
readable.
10.1 Generalities
Backups require a lot of discipline and rigor from the system administrator. System
administrators need to consider the following issues before performing backup
operations:
• Method?
• How often?
• Automatic or manual?
In addition to these issues, system administrators should also consider factors such
as performance, data importance, bandwidth consumption, and maintenance
complexity based on actual situations.
• Full backup: Refers to a one-time copy of all files, folders, or data in the hard
disk or database.
• Incremental backup: Refers to the backup of the data updated after the last
Full backup or Incremental backup.
• Differential backup: Refers to the backup of the changed files after the Full
backup.
• Hot backup: Refers to the backup when the system is operating normally. As the
data in the system is updated at any time, the backed-up data has a certain lag
relative to the system's real data.
• Periodic: Backup within a specific period before a major system update (usually
during off-peak hours)
Tip
Before a system change, it can be useful to make a backup. However, there is no point in backing up data every day that only
changes every month.
• Full recover: Data recovery based on Full backup or "Full backup + Incremental
backup" or "Full backup + Differential backup".
Tip
For security reasons, storing the restored directory or file in the /tmp directory before performing the recovery operation is
recommended to avoid situations where old files (old directory) overwrite new files (new directory).
• editor tools;
• graphical tools;
The commands we will use here are tar and cpio . If you want to learn about the
dump tool, please refer to this document.
• tar :
• easy to use;
• cpio :
• retains owners;
Note
Replication: A backup technology that copies a set of data from one data source to
another or multiple data sources, mainly divided into Synchronous Replication
and Asynchronous Replication. This is an advanced backup part for novice
system administrators, so this basic document will not elaborate on these contents.
Using a naming convention allows one to quickly target a backup file's contents
and thus avoid hazardous restorations.
• utility used;
• options used;
• date.
Tip
Note
In the Linux world, most files do not have the extension concept except for a few exceptions in GUI environments (such
as .jpg, .mp4, .gif). In other words, most file extensions are not required. The reason for artificially adding suffixes is to facilitate
recognition by human users. If the systems administrator sees a .tar.gz or .tgz file extension, for instance, then he knows how to
deal with the file.
• backup the atime, ctime, mtime, btime (crtime) of the file itself;
• External: Store backup files on external devices. External devices can be USB
drives, CDs, disks, servers, or NAS, and more.
tar implicitly backs up in relative mode even if the path of the information to be
backed up is mentioned in absolute mode. However, backups and restores in
absolute mode are possible. If you want to see a separate example of the usage of
tar , please refer to this document.
Warning
Before a restoration, it is important to consider and determine the most appropriate method to avoid mistakes.
Restorations are usually performed after a problem has occurred that needs to be
resolved quickly. A poor restoration can, in some cases, make the situation worse.
The default utility for creating backups on UNIX systems is the tar command.
These backups can be compressed by bzip2 , xz , lzip , lzma , lzop , gzip , compress
or zstd .
tar allows you to extract a single file or a directory from a backup, view its
contents, or validate its integrity.
The following command estimates the size in bytes of a possible tar file:
$ tar cf - /directory/to/backup/ | wc -c
20480
$ tar czf - /directory/to/backup/ | wc -c
508
$ tar cjf - /directory/to/backup/ | wc -c
428
Warning
Beware, the presence of "-" in the command line disturbs zsh . Switch to bash !
Here is an example of a naming convention for a tar backup, knowing that the
date will be added to the name.
Create a backup
Creating a non-compressed backup in relative mode is done with the cvf keys:
Example:
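A plausible example following the naming convention above (the path names are illustrative):
[root]# tar cvf /backups/home.133.tar /home/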
Key Description
c Creates a backup.
Tip
Example:
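A plausible absolute-mode example (illustrative path names):
[root]# tar cvfP /backups/etc.133.P.tar /etc/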
Key Description
Warning
With the P key, the path of the files to be backed up must be entered as absolute. If the two conditions ( P key and absolute path)
are not indicated, the backup is in relative mode.
Creating a compressed backup with gzip is done with the cvfz keys:
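For instance (illustrative names):
[root]# tar cvfz /backups/home.133.tar.gz /home/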
Key Description
Note
Note
Keeping the cvf ( tvf or xvf ) keys unchanged for all backup operations and simply adding the compression key to the end of the
keys makes the command easier to understand (such as: cvfz or cvfj , and others).
Creating a compressed backup with bzip2 is done with the keys cvfj :
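For instance, with the same illustrative tree:
[root]# tar cvfj /backups/home.133.tar.bz2 /home/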
Key Description
Note
Here is a ranking of the compression of a set of text files from least to most
efficient:
• compress ( .tar.Z )
• gzip ( .tar.gz )
• bzip2 ( .tar.bz2 )
• lzip ( .tar.lz )
• xz ( .tar.xz )
Key Description
Note
Note
If the backup was performed in relative mode, add files in relative mode. If the backup was done in absolute mode, add files in
absolute mode.
Key Description
Examples:
When the number of files in the backup increases, you can use pipe characters ( | )
and some commands ( less , more , most , and others) to achieve the effect of paging
viewing:
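For example, with one of the archives created above:
[root]# tar tvf /backups/home.133.tar | less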
Tip
To list or retrieve the contents of a backup, it is not necessary to mention the compression algorithm used when the backup was
created. That is, a tar tvf is equivalent to tar tvfj , to read the contents. The compression type or algorithm must only be selected
when creating a compressed backup.
Tip
You should always check and view the backup file's contents before performing a restore operation.
The integrity of a backup can be tested with the W key at the time of its creation:
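A plausible invocation (illustrative names):
[root]# tar cvfW /backups/home.133.tar /home/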
The integrity of a backup can be tested with the key d after its creation:
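For example (run from the directory the backup was made from, since the archive is in relative mode):
[root]# tar dvf /backups/home.133.tar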
Tip
By adding a second v to the previous key, you will get the list of archived files as well as the differences between the archived files
and those present in the file system.
The W key is also used to compare the content of an archive against the filesystem:
You cannot verify the compressed archive with the W key. Instead, you must use
the d key.
Extract the etc/exports file from the /savings/etc.133.tar backup into the etc
directory of the current directory:
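Following the tar syntax shown in this chapter, this would plausibly be:
[root]# tar xvf /savings/etc.133.tar etc/exports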
Extract all files from the compressed backup /backups/home.133.tar.bz2 into the
current directory:
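A plausible command for this:
[root]# tar xvfj /backups/home.133.tar.bz2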
Extract all files from the backup /backups/etc.133.P.tar to their original directory:
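A plausible command, using the P key to restore to the original location:
[root]# tar xvfP /backups/etc.133.P.tar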
Warning
For security reasons, you should use caution when extracting backup files saved in absolute mode.
Once again, before performing extraction operations, you should always check the contents of the backup files (particularly those
saved in absolute mode).
Key Description
Tip
To extract or list the contents of a backup, it is not necessary to mention the compression algorithm used to create the backup. That
is, a tar xvf is equivalent to tar xvfj , to extract the contents, and a tar tvf is equivalent to tar tvfj , to list.
Warning
To restore the files in their original directory (key P of a tar xvf ), you must have generated the backup with the absolute path. That
is, with the P key of a tar cvf .
To extract a specific file from a tar backup, specify the name of that file at the end
of the tar xvf command.
The previous command extracts only the /path/to/file file from the backup.tar
backup. This file will be restored to the /path/to/ directory created, or already
present, in the active directory.
To extract only one directory (including its subdirectories and files) from a backup,
specify the directory name at the end of the tar xvf command.
To extract multiple directories, specify each of the names one after the other:
Specify a wildcard to extract the files matching the specified selection pattern,
using the appropriate keys:
Expanded Knowledge
Although wildcard characters and regular expressions usually have the same symbol or style, the objects they match are completely
different, so people often confuse them.
wildcard (wildcard character): used to match file or directory names. regular expression: used to match the content of a file.
The cpio command allows saving on several successive media without specifying
any options.
1. copy-out mode - Creates a backup (archive). Enable this mode through the -o or
--create options. In this mode, you must generate a list of files with a specific
command ( find , ls , or cat ) and pass it to cpio.
Note
5. copy-in mode – extracts files from an archive. You can enable this mode through
the -i option.
6. copy-pass mode – copies files from one directory to another. You can enable this
mode through the -p or --pass-through options.
Like the tar command, users must consider how the file list is saved (absolute
path or relative path) when creating an archive.
Secondary function:
Note
Some options of cpio need to be combined with the correct operating mode to work correctly. See man 1 cpio
[files command |] cpio {-o| --create} [-options] [< file-list] [> device]
Example:
The result of the find command is sent as input to the cpio command via a pipe
(character | , ⇧ Left Shift + \ ).
Here, the find /etc command returns a list of files corresponding to the contents
of the /etc directory (recursively) to the cpio command, which performs the
backup.
Options Description
-F Backup to specific media, which can replace standard input ("<") and standard output (">") in the cpio
command
Backup to a media:
cd /
find etc | cpio -o > /backups/etc.cpio
Warning
If the path specified in the find command is absolute, the backup will be performed in absolute.
If the path indicated in the find command is relative, the backup will be done in relative.
[files command |] cpio {-o| --create} -A [-options] [< fic-list] {F| > device}
Example:
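For instance, appending one illustrative file to the archive created earlier (run from / ):
find etc/hosts | cpio -o -A -F /backups/etc.cpio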
Option Description
Compressing a backup
Unlike the tar command, there is no option to save and compress simultaneously.
So, it is done in two steps: saving and compressing.
The syntax of the first method is easier to understand and remember because it is
done in two steps.
For the first method, the backup file is automatically renamed by the gzip utility,
which adds .gz to the end of the file name. Similarly, the bzip2 utility
automatically adds .bz2 .
Example:
Options Description
-t Reads a backup.
After making a backup, you need to read its contents to ensure there are no errors.
In the same way, before performing a restore, you must read the contents of the
backup that will be used.
Example:
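A plausible invocation against the archive created earlier:
cpio -ivt < /backups/etc.cpio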
Options Description
Warning
By default, at the time of restoration, files on the disk whose last modification date is more recent or equal to the date of the backup
are not restored (to avoid overwriting recent information with older information).
On the other hand, the u option allows you to restore older versions of the files.
Examples:
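A plausible restoration, run from the target directory and using the u and d options described below:
cpio -ivud < /backups/etc.cpio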
The u option allows you to overwrite existing files at the location where the
restore takes place.
Tip
The creation of directories is perhaps necessary, hence the use of the d option
Restoring a particular file or directory requires the creation of a list file that must
then be deleted.
Note
It is, therefore, better to make a backup and compress it than to compress it during the backup.
Example:
$ gzip usr.tar
$ ls
usr.tar.gz
It keeps the same rights and the same last access and modification dates.
Example:
$ bzip2 usr.cpio
$ ls
usr.cpio.bz2
Example:
$ gunzip usr.tar.gz
$ ls
usr.tar
The file name is truncated by gunzip and the extension .gz is removed.
• .z ;
• -z ;
• _z ;
• -gz ;
Example:
$ bunzip2 usr.cpio.bz2
$ ls
usr.cpio
The file name is truncated by bunzip2 , and the extension .bz2 is removed.
• -bz ;
• .tbz2 ;
• tbz .
users .
Knowledge:
Complexity:
It is essential to understand the boot process of Linux to solve problems that might
occur.
The BIOS (Basic Input/Output System) performs the POST (power on self-test) to
detect, test, and initialize the system hardware components.
The Master Boot Record is the first 512 bytes of the boot disk. The MBR discovers
the boot device, loads the bootloader GRUB2 into memory, and transfers control to
it.
You can locate the GRUB 2 configuration file under /boot/grub2/grub.cfg , but you
should not edit this file directly.
You can find the GRUB2 menu configuration settings under /etc/default/grub . The
grub2-mkconfig command uses these to generate the grub.cfg file.
# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root
rhgb quiet net.ifnames=0"
GRUB_DISABLE_RECOVERY="true"
If you change one or more of these parameters, you must run the grub2-mkconfig
command to regenerate the /boot/grub2/grub.cfg file.
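The customary invocation on Rocky Linux writes the result directly to that file:
# grub2-mkconfig -o /boot/grub2/grub.cfg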
• GRUB2 looks for the compressed kernel image (the vmlinuz file) in the /boot
directory.
• GRUB2 loads the kernel image into memory and extracts the contents of the
initramfs image file into a temporary folder in memory using the tmpfs file
system.
11.1.5 systemd
systemd is the parent of all system processes. It reads the target of the /etc/
systemd/system/default.target link (e.g., /usr/lib/systemd/system/multi-user.target )
to determine the default target of the system. The file defines the services to start.
systemd then places the system in the target-defined state by performing the
following initialization tasks:
3. Initialize SELinux
5. Initialize the hardware based on the arguments given to the kernel at boot time
6. Mount the file systems, including virtual file systems like /proc
1. Prevent Single user mode access - If an attacker can boot into single user mode, he
becomes the root user.
2. Prevent access to GRUB console - If an attacker manages to use the GRUB console,
he can change its configuration or collect information about the system by using the
cat command.
3. Prevent access to insecure operating systems. If the system has dual boot, an
attacker can select an operating system like DOS at boot time that ignores access
controls and file permissions.
1. Log in to the operating system as root user and execute the grub2-mkpasswd-pbkdf2
command. The output of this command is as follows:
Enter password:
Reenter password:
PBKDF2 hash of your password is
grub.pbkdf2.sha512.10000.D0182EDB28164C19454FA94421D1ECD6309F076F1135A2E5BFE91A50
88BD9EC87687FE14794BE7194F67EA39A8565E868A41C639572F6156900C81C08C1E8413.40F6981C
22F1F81B32E45EC915F2AB6E2635D9A62C0BA67105A9B900D9F365860E84F1B92B2EF3AA0F83CECC6
8E13BA9F4174922877910F026DED961F6592BB7
You need to enter your password in the interaction. The ciphertext of the password
is the long string "grub.pbkdf2.sha512...".
2. Paste the password ciphertext in the last line of the /etc/grub.d/00_header file. The
pasted format is as follows:
cat <<EOF
set superusers='frank'
password_pbkdf2 frank
grub.pbkdf2.sha512.10000.D0182EDB28164C19454FA94421D1ECD6309F076F1135A2E5BFE91A50
88BD9EC87687FE14794BE7194F67EA39A8565E868A41C639572F6156900C81C08C1E8413.40F6981C
22F1F81B32E45EC915F2AB6E2635D9A62C0BA67105A9B900D9F365860E84F1B92B2EF3AA0F83CECC6
8E13BA9F4174922877910F026DED961F6592BB7
EOF
You can replace the 'frank' user with any custom user.
cat <<EOF
set superusers='frank'
password frank rockylinux8.x
EOF
4. Restart the operating system to verify GRUB2's encryption. Select the first boot
menu item, type the e key, and then enter the corresponding user and password.
Enter username:
frank
Enter password:
Sometimes, you may see in some documents that the grub2-set-password ( grub2-
setpassword ) command is used to protect the GRUB2 bootloader:
Log in to the operating system as the root user and execute the grub2-set-password
command as follows:
[root] # grub2-set-password
Enter password:
Confirm password:
[root] # reboot
Select the first boot menu item and type the e key, and then enter the
corresponding user and password:
Enter username:
root
Enter password:
11.3 Systemd
• provide many features, such as parallel start of system services at system startup,
on-demand activation of daemons, support for snapshots, or management of
dependencies between services.
Note
systemd introduces the concept of unit files, also known as systemd units.
Note
There are many types of units: Device unit, Mount unit, Path unit, Scope unit, Slice unit, Snapshot unit, Socket unit, Swap unit, and
Timer unit.
• At startup, systemd creates listening sockets for all system services that support
this type of activation and passes these sockets to these services as soon as they
start. This makes it possible to restart a service without losing a single message
sent to it by the network during its unavailability. The corresponding socket
remains accessible while all messages queue up.
• System services that use D-BUS for inter-process communications can start on-
demand the first time the client uses them.
• systemd stops or restarts only running services. Previous versions (before RHEL7)
attempted to stop services directly without checking their current status.
• System services do not inherit any context (like HOME and PATH environment
variables). Each service operates in its execution context.
All service unit operations are subject to a 5-minute default timeout to prevent a
malfunctioning service from freezing the system.
Due to space limitations, this document will not provide a detailed introduction to
systemd . If you have an interest in exploring systemd further, there is a very
detailed introduction in this document.
Service units end with the .service file extension and have a similar purpose to
init scripts. The systemctl command is used to display , start , stop , or restart
a system service:
systemctl Description
systemctl list-units --type service --all Displays the status of all services
The systemctl command is also used for the enable or disable of a system service
and displaying associated services:
systemctl Description
systemctl list-unit-files --type service Lists all services and checks if they are running
systemctl list-dependencies --after Lists the services that start before the specified unit
systemctl list-dependencies --before Lists the services that start after the specified unit
Examples:
To check the activation status of all units, you can list them with:
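One plausible command for this (listing every unit, whether active or not):
$ systemctl list-units --all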
[Unit]
Description=Postfix Mail Transport Agent
After=syslog.target network.target
Conflicts=sendmail.service exim.service
[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
EnvironmentFile=-/etc/sysconfig/network
ExecStartPre=-/usr/libexec/postfix/aliasesdb
ExecStartPre=-/usr/libexec/postfix/chroot-update
ExecStart=/usr/sbin/postfix start
ExecReload=/usr/sbin/postfix reload
ExecStop=/usr/sbin/postfix stop
[Install]
WantedBy=multi-user.target
The representation of systemd targets is by target units. Target units end with the
.target file extension, and their sole purpose is to group other systemd units into a
chain of dependencies.
For example, the graphical.target unit that starts a graphical session starts system
services such as the GNOME display manager ( gdm.service ) or the accounts
service ( accounts-daemon.service ) and also activates the multi-user.target unit.
Similarly, the multi-user.target unit starts other essential system services, such as
NetworkManager ( NetworkManager.service ) or D-Bus ( dbus.service ) and activates
another target unit named basic.target .
systemctl get-default
This command searches for the target of the symbolic link located at /etc/systemd/
system/default.target and displays the result.
$ systemctl get-default
graphical.target
Example:
The Rescue mode provides a simple environment for repairing your system in
cases where a normal boot process is impossible.
In rescue mode, the system attempts to mount all local file systems and start
several important system services but does not enable a network interface or allow
other users to connect to the system simultaneously.
On Rocky 8, the rescue mode is equivalent to the old single user mode and requires
the root password.
To change the current target and enter rescue mode in the current session:
systemctl rescue
Emergency mode provides the most minimalist environment possible and allows
the system to be repaired even in situations where it is unable to enter rescue
mode. In emergency mode, the system mounts the root file system only for reading.
It will not attempt to mount any other local file system, will not activate any
network interface, and will start some essential services.
To change the current target and enter emergency mode in the current session:
systemctl emergency
You can manage log files with the journald daemon, a component of systemd , in
addition to rsyslogd .
The journald daemon captures Syslog messages, kernel log messages, messages
from the initial RAM disk and the start of boot, and messages written to the
standard output and the standard error output of all services, then indexes them
and makes them available to the user.
The native log file's format, which is a structured and indexed binary file, improves
searches and allows for faster operation. It also stores metadata information, such
as timestamps or user IDs.
journalctl
The command lists all log files generated on the system. The structure of this
output is similar to that used in /var/log/messages/ but it offers some
improvements:
• shows the conversion of timestamps to the local time zone of your system
journalctl -f
This command returns a list of the ten most recent log lines. The journalctl utility
then continues to run and waits for new changes to occur before displaying them
immediately.
Filtering messages
journalctl -p priority
You must replace priority with one of the following keywords (or a number):
• debug (7),
• info (6),
• notice (5),
• warning (4),
• err (3),
• crit (2),
• alert (1),
• emerg (0).
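For example, to display only messages of priority err and above (a minimal illustration):
journalctl -p err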
Objectives: In this chapter, future Linux administrators will learn how to:
Knowledge:
Complexity:
12.1 Generalities
The scheduling of tasks is managed with the cron utility. It allows the periodic
execution of tasks.
It is reserved to the administrator for system tasks but can be used by normal users
for tasks or scripts that they have access to. To access the cron utility, we use:
crontab .
• Backups;
• Program execution.
crontab is short for cron table, but can be thought of as a task scheduling table.
Warning
To set up a schedule, the system must have the correct time set.
Tip
If the crond daemon is not running, scheduled tasks will not be launched. You will have to start it manually and/or enable it at
startup.
12.3 Security
In order to implement a schedule, a user must have permission to use the cron
service.
This permission varies according to the information contained in the files below:
• /etc/cron.allow
• /etc/cron.deny
Warning
File /etc/cron.allow
Warning
File /etc/cron.deny
By default, /etc/cron.deny exists and is empty and /etc/cron.allow does not exist.
[root]# vi /etc/cron.allow
user1
[root]# vi /etc/cron.deny
user2
When a user schedules a task, a file with his name is created under /var/spool/
cron/ .
This file contains all the information the crond needs to know regarding all tasks
created by this user, the commands or programs to run, and when to run them
(hour, minute, day ...).
Example:
Option Description
-e Edits the current user's crontab file
-l Lists the contents of the current user's crontab file
-r Removes the current user's crontab file
-u user Manages the crontab file of another user (root only)
Warning
crontab without option deletes the old schedule file and waits for the user to enter new lines. You have to press ctrl + d to exit
this editing mode.
Only root can use the -u user option to manage another user's schedule file.
• No need to restart.
On the other hand, the following points must be taken into account:
Note
It is important to understand that the purpose of scheduling is to perform tasks automatically, without the need for external
intervention.
• Each line has six fields: 5 for the time and 1 for the command to run;
[root]# crontab -e
10 4 1 * * /root/scripts/backup.sh
1 2 3 4 5 6
1 Minute(s) From 0 to 59
2 Hour(s) From 0 to 23
3 Day(s) of the month From 1 to 31
4 Month From 1 to 12
5 Day(s) of the week From 0 to 7 (0 and 7 both mean Sunday)
6 The command to be executed
Warning
The tasks to be executed must use absolute paths and if possible, use redirects.
In order to simplify the notation for the definition of time, it is advisable to use
special symbols.
Wildcards Description
* Every possible value of the field
- Defines a range of values
, Defines a list of values
/ Defines a step
Examples:
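A couple of hedged illustrations of these symbols (the script paths are assumptions):
# every 10 minutes (step):
*/10 * * * * /root/scripts/check.sh
# at minute 0 of every hour between 8 a.m. and 6 p.m., Monday to Friday (range):
0 8-18 * * 1-5 /root/scripts/report.sh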
For the root user, crontab also has some special time settings:
Setting Description
@reboot Runs command once at system startup
@yearly or @annually Runs command once a year
@monthly Runs command on the first day of the month just after midnight
@weekly Runs command once a week
@daily or @midnight Runs command once a day at midnight
@hourly Runs command at the beginning of every hour
In this chapter you will learn how to work with and manage the network.
network, linux, ip
Knowledge:
Complexity:
13.1 Generalities
Each machine on the network is identified by at least:
• a host name;
• an IP address;
• a subnet mask.
Example:
• pc-rocky ;
• 192.168.1.10 ;
• 255.255.255.0 .
IP addresses are used for the proper routing of messages (packets). They are
divided into two parts:
• the network address (NetID), obtained by performing a bitwise logical AND between
the IP address and the mask;
• the host address (HostID), obtained by performing a bitwise logical AND between the IP
address and the complement of the mask.
There are also specific addresses within a network, which must be identified. The
first address of a range as well as the last one have a particular role: the first one
identifies the network itself, while the last one is the broadcast address of the network.
A MAC address is a physical identifier written in the factory onto the device. This
is sometimes referred to as the hardware address. It consists of 6 bytes often given
in hexadecimal form (for example 5E:FF:56:A2:AF:15). It is composed of: 3 bytes of
the manufacturer identifier and 3 bytes of the serial number.
Warning
This last statement is nowadays a little less true with virtualization. There are also software solutions for changing the MAC address.
the data is destined). Also, to avoid confusion in a URL, an IPv6 address is written
in square brackets [ ], followed by a colon and the port number.
Client machines can be part of a DNS (Domain Name System, e.g., mydomain.lan )
domain.
A set of computers can be grouped into a logical, name-resolving, set called a DNS
domain. A DNS domain is not, of course, limited to a single physical network.
In order for a computer to be part of a DNS domain, it must be given a DNS suffix
(here mydomain.lan ) as well as servers that it can query.
Memory aid
To remember the order of the layers of the OSI model, remember the following sentence: Please Do Not Touch Steven's Pet
Alligator.
Layer Protocols
Layer 2 (Data Link) supports network topology (token-ring, star, bus, etc.), data
splitting and transmission errors. Unit: the frame.
Layer 7 (Application) represents the contact with the user. It provides the services
offered by the network: http, dns, ftp, imap, pop, smtp, etc.
The Linux kernel assigns interface names with a specific prefix depending on the
type. Traditionally, all Ethernet interfaces, for example, began with eth. The prefix
was followed by a number, the first being 0 (eth0, eth1, eth2...). The wifi interfaces
were given a wlan prefix.
On Rocky8 Linux distributions, systemd will name interfaces with the new
following policy where "X" represents a number:
• ...
Note
The historical network management command is ifconfig . This command has been replaced by the ip command, which is already
well known to network administrators.
The ip command is the only command needed to manage IP addresses, ARP, routing, etc.
The hostname command displays or sets the host name of the system
Option Description
Tip
To assign a host name, it is possible to use the hostname command, but the changes
will not be retained at the next boot. The command with no arguments displays the
host name.
NETWORKING=yes
HOSTNAME=pc-rocky.mondomaine.lan
The RedHat boot script also consults the /etc/hosts file to resolve the host name of
the system.
When the system boots, Linux evaluates the HOSTNAME value in the /etc/sysconfig/
network file.
It then uses the /etc/hosts file to evaluate the main IP address of the server and its
host name. It deduces the DNS domain name.
It is therefore essential to fill in these two files before any configuration of network
services.
Tip
To know if this configuration is well done, the commands hostname and hostname -f must answer with the expected values.
The /etc/hosts file is a static host name mapping table, which follows the following
format:
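A hedged illustration of that format, reusing the example values from earlier in this chapter:
192.168.1.10   pc-rocky.mondomaine.lan   pc-rocky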
The /etc/hosts file is still used by the system, especially at boot time when the
system FQDN is determined.
Tip
RedHat recommends that at least one line containing the system name be filled in.
If the DNS service (Domain Name Service) is not in place, you must fill in all the
names in the hosts file for each of your machines.
The /etc/hosts file contains one line per entry, with the IP address, the FQDN, then
the host name (in that order) and a series of aliases (alias1 alias2 ...). The alias is
an option.
The NSS (Name Service Switch) allows configuration files (e.g., /etc/
passwd , /etc/group , /etc/hosts ) to be substituted for one or more centralized
databases.
passwd: files
shadow: files
group: files
hosts: files dns
In this case, Linux will first look for a host name match ( hosts: line) in the /etc/
hosts file ( files value) before querying DNS ( dns value)! This behavior can
simply be changed by editing the /etc/nsswitch.conf file.
The resolution of the name service can be tested with the getent command that we
will see later in this course.
#Generated by NetworkManager
domain mondomaine.lan
search mondomaine.lan
nameserver 192.168.1.254
Tip
NetworkManager allows DNS servers to be declared in the configuration file of a network
interface. It then dynamically populates the /etc/resolv.conf file, which should
never be edited directly, otherwise the configuration changes will be lost the next
time the network service is started.
13.8 ip command
The ip command from the iproute2 package allows you to configure an interface
and its routing table.
Display the interfaces:
[root]# ip link
Display the ARP table (neighbours):
[root]# ip neigh
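The addresses assigned to the interfaces can be displayed with the same command family:
[root]# ip addr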
All historical network management commands have been grouped under the ip
command, which is well known to network administrators.
The DHCP protocol (Dynamic Host Configuration Protocol) allows you to obtain a
complete IP configuration via the network. This is the default configuration mode of
a network interface under Rocky Linux, which explains why a system connected to
the network of an Internet router can function without additional configuration.
For each Ethernet interface, an ifcfg-ethX file under /etc/sysconfig/network-scripts/
allows for the configuration of the associated interface.
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=00:0c:29:96:32:e3
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
• Specify the MAC address (optional but useful when there are several interfaces):
HWADDR=00:0c:29:96:32:e3
Tip
If NetworkManager is installed, the changes are taken into account automatically. If not, you have to restart the network service.
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
• Here we are replacing "dhcp" with "none" which equals static configuration:
BOOTPROTO=none
• IP Address:
IPADDR=192.168.1.10
• Subnet mask:
NETMASK=255.255.255.0
PREFIX=24
Warning
13.11 Routing
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:0c:29:96:32:e3
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
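A hedged sketch of the routing table that such a configuration might produce (metric values and exact formatting vary):
[root]# ip route
default via 192.168.1.254 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.10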
• In the example shown, the 192.168.1.0/24 network is reachable directly from the
eth0 device, so there is a metric at 1 (does not traverse a router).
• All other networks than the previous one will be reachable, again from the eth0
device, but this time the packets will be addressed to a 192.168.1.254 gateway.
The routing protocol is a static protocol (although it is possible to add a route to a
dynamically assigned address in Linux).
www.free.fr = 212.27.48.10
212.27.48.10 = www.free.fr
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:0c:29:96:32:e3
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=172.16.1.2
DNS2=172.16.1.3
DOMAIN=rockylinux.lan
In this case, to reach the DNS, you have to go through the gateway.
#Generated by NetworkManager
domain mondomaine.lan
search mondomaine.lan
nameserver 172.16.1.2
nameserver 172.16.1.3
13.13 Troubleshooting
The ping command sends datagrams to another machine and waits for a response.
It is the basic command for testing the network because it checks the connectivity
between your network interface and another.
The -c (count) option stops the command after the specified number of echo
requests has been sent.
Example:
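A hedged illustration, pinging the gateway used elsewhere in this chapter:
[root]# ping -c 4 192.168.1.254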
Tip
"Pinging" the inner loop does not detect a hardware failure on the network interface.
It simply determines whether the IP software configuration is correct.
To determine the functionality of the network card, we must ping its IP address. If
the network cable is not connected to the network card, it should be in a "down"
state.
If the ping does not work, first check the network cable to your network switch and
bring the interface back up (see the ifup command), then check the interface itself.
Examples:
The dig command is used to query DNS servers. It is verbose by default, but the
+short option can change this behavior.
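A hedged illustration (the queried name is arbitrary):
$ dig +short rockylinux.org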
The getent (get entry) command gets an NSSwitch entry ( hosts + dns )
Example:
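A hedged illustration, reusing the host declared in /etc/hosts earlier:
$ getent hosts pc-rocky
192.168.1.10    pc-rocky.mondomaine.lan pc-rocky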
Querying only a DNS server may return an erroneous result that does not consider
the contents of a hosts file, although this should be rare nowadays.
To take the /etc/hosts file into account, the NSSwitch name service must be
queried, which will take care of any DNS resolution.
Example:
Tip
The ipcalc command is interesting when followed by a redirection, to fill in the configuration files of your interfaces automatically:
Option Description
-b --broadcast Displays the broadcast address of the given IP address and the network mask.
-m --netmask Calculates the network mask for the given IP address. Assumes that the IP address
is part of a complete class A, B, or C network. Many networks do not use default
network masks, in which case an incorrect value will be returned.
-n --network Indicates the network address of the given IP address and mask.
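A hedged illustration of this kind of use; ipcalc prints its results in the VARIABLE=value form, which is why redirection into an interface file is convenient:
$ ipcalc --network --broadcast 192.168.1.10 255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255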
13.13.4 ss command
The ss (socket statistics) command displays the listening ports on the network.
ss [-tuna]
Example:
[root]# ss -tuna
tcp LISTEN 0 128 *:22 *:*
The commands ss and netstat (to follow) will be very important for the rest of
your Linux life.
When implementing network services, it is common to check with one of these two
commands that the service is listening on the expected ports.
Warning
The netstat command is now deprecated and is no longer installed by default on Rocky Linux. You may still find some Linux versions
that have it installed, but it is best to move on to using ss for everything that you would have used netstat for.
The netstat command (network statistics) displays the listening ports on the
network.
netstat -tapn
Example:
A misconfiguration can cause multiple interfaces to use the same IP address. This
can happen when a network has multiple DHCP servers, or the same IP address is
manually assigned numerous times.
When the network is malfunctioning, and when an IP address conflict could be the
cause, it is possible to use the arp-scan software (requires the EPEL repository):
Example:
$ arp-scan -I eth0 -l
Tip
As the above example shows, MAC address conflicts are possible! Virtualization technologies and the copying of virtual machines
cause these problems.
13.15 In summary
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:0c:29:96:32:e3
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=172.16.1.1
DNS2=172.16.1.2
DOMAIN=rockylinux.lan
14.1 Generalities
Note
Installing from source is not covered here. As a rule, you should use the package method unless the software you want is not
available via the package manager. The reason for this is that dependencies are generally managed by the package system, whereas
with source, you need to manage the dependencies manually.
The package: This is a single file containing all the data needed to install the
program. It can be executed directly on the system from a software repository.
The source files: Some software is not provided in packages ready to be installed,
but via an archive containing the source files. It is up to the administrator to
prepare these files and compile them to install the program.
RPM is the format used by all RedHat based distributions (RockyLinux, Fedora,
CentOS, SuSe, Mandriva, ...). Its equivalent in the Debian world is DPKG (Debian
Package).
Option Description
The rpm command also allows you to query the system package database by adding
the -q option.
Example:
rpm -qa
Example:
Option Description
--last The list of packages is given by installation date (the last installed packages appear first).
Warning
After the -q option, the package name must be exact. Metacharacters (wildcards) are not supported.
Tip
However, it is possible to list all installed packages and filter with the grep command.
Example: list all installed packages with a specific name using grep :
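A hedged illustration (the package name is arbitrary):
rpm -qa | grep -i postfix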
dnf is the manager used by many RedHat based distributions (RockyLinux, Fedora,
CentOS, ...). Its equivalent in the Debian world is APT (Advanced Packaging Tool).
The dnf command allows you to install a package by specifying only the short
name.
Example:
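A hedged illustration (the package name is arbitrary):
dnf install tree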
Option Description
The dnf install command allows you to install the desired package without
worrying about its dependencies, which will be resolved directly by dnf itself.
Transaction Summary
================================================================================
Install 7 Packages
In case you don't remember the exact name of the package, you can search for it
with the command dnf search name . As you can see, there is a section that contains
the exact name and another one that contains the package correspondence, all of
which are highlighted for easier searching.
The dnf remove command removes a package from the system and its
dependencies. Below is an excerpt of the dnf remove httpd command.
The dnf list command lists all the packages installed on the system and present
in the repository. It accepts several parameters:
Parameter Description
all Lists the installed packages and then those available on the repositories.
The dnf info command, as you might expect, provides detailed information about a
package:
Available Packages
Name : firewalld
Version : 0.9.3
Release : 7.el8_5.1
Architecture : noarch
Size : 501 k
Source : firewalld-0.9.3-7.el8_5.1.src.rpm
Repository : baseos
Summary : A firewall daemon with D-Bus interface providing a dynamic
firewall
URL : http://www.firewalld.org
License : GPLv2+
Description : firewalld is a firewall service daemon that provides a dynamic
customizable
: firewall with a D-Bus interface.
Sometimes you only know the executable you want to use but not the package that
contains it, in this case you can use the command dnf provides */package_name
which will search the database for you for the desired match.
The dnf autoremove command does not need any parameters. Dnf takes care of
searching for candidate packages for removal.
dnf autoremove
Last metadata expiration check: 0:24:40 ago on Wed 23 Mar 2022 06:16:47 PM CET.
Dependencies resolved.
Nothing to do.
Complete!
Option Description
The dnf repolist command lists the repositories configured on the system. By
default, it lists only the enabled repositories but can be used with these
parameters:
Parameter Description
--all Lists all repositories, enabled and disabled
--enabled Default. Lists only the enabled repositories
--disabled Lists only the disabled repositories
Example:
dnf repolist
repo id              repo name
appstream            Rocky Linux 8 - AppStream
baseos               Rocky Linux 8 - BaseOS
epel                 Extra Packages for Enterprise Linux 8 - aarch64
epel-modular         Extra Packages for Enterprise Linux Modular 8 - aarch64
extras               Rocky Linux 8 - Extras
powertools           Rocky Linux 8 - PowerTools
rockyrpi             Rocky Linux 8 - Rasperry Pi
...
dnf repolist --all
repo id              repo name                               status
appstream            Rocky Linux 8 - AppStream               enabled
appstream-debug      Rocky Linux 8 - AppStream - Debug       disabled
appstream-source     Rocky Linux 8 - AppStream - Source      disabled
baseos               Rocky Linux 8 - BaseOS                  enabled
...
Using the -v option enhances the list with a lot of additional information. Below
you can see part of the result of the command.
dnf repolist -v
...
Repo-id : powertools
Repo-name : Rocky Linux 8 - PowerTools
Repo-revision : 8.5
Repo-distro-tags : [cpe:/o:rocky:rocky:8]: , , 8, L, R, c, i, k, n, o,
u, x, y
Repo-updated : Wed 16 Mar 2022 10:07:49 PM CET
Repo-pkgs : 1,650
Repo-available-pkgs: 1,107
Repo-size : 6.4 G
Repo-mirrors : https://mirrors.rockylinux.org/mirrorlist?
arch=aarch64&repo=PowerTools-8
Repo-baseurl : https://example.com/pub/rocky/8.8/PowerTools/x86_64/os/
(30 more)
Repo-expire : 172,800 second(s) (last: Tue 22 Mar 2022 05:49:24 PM CET)
Repo-filename : /etc/yum.repos.d/Rocky-PowerTools.repo
...
Using Groups
Groups are a collection of a set of packages (you can think of them as virtual packages) that logically groups a set of applications to
accomplish a purpose (a desktop environment, a server, development tools, etc.).
dnf grouplist
Last metadata expiration check: 1:52:00 ago on Wed 23 Mar 2022 02:11:43 PM CET.
Available Environment Groups:
Server with GUI
Server
Minimal Install
KDE Plasma Workspaces
Custom Operating System
Available Groups:
Container Management
.NET Core Development
RPM Development Tools
Development Tools
Headless Management
Legacy UNIX Compatibility
Network Servers
Scientific Support
Security Tools
Smart Card Support
System Tools
Fedora Packager
Xfce
The dnf groupinstall command allows you to install one of these groups.
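A hedged illustration, using one of the groups from the listing above:
dnf groupinstall "Development Tools"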
Transaction Summary
================================================================================
Is this ok [y/N]:
Note that it is good practice to enclose the group name in double quotes, as without
them the command will only execute correctly if the group name does not contain
spaces.
The dnf clean command cleans all caches and temporary files created by dnf . It
can be used with the following parameters.
Parameters Description
The DNF manager relies on one or more configuration files to target the
repositories containing the RPM packages.
These files are located in /etc/yum.repos.d/ and must end with .repo in order to be
used by DNF.
Example:
/etc/yum.repos.d/Rocky-BaseOS.repo
Each .repo file consists of at least the following information, one directive per line.
Example:
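A minimal sketch of such a file, with illustrative values (the mirrorlist URL and GPG key path follow the usual Rocky layout but are assumptions here):
[baseos]
name=Rocky Linux $releasever - BaseOS
mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial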
By default, the enabled directive is absent which means that the repository is
enabled. To disable a repository, you must specify the enabled=0 directive.
Package Confusion
The creation of module streams in the AppStream repository caused a lot of confusion. Since modules are packaged within a
stream (see our examples below), a particular package would show up in a search of the RPMs, but if an attempt was made to install it
without enabling the module, nothing would happen. Remember to look at the modules if you attempt to install a package and dnf fails to find it.
Modules come from the AppStream repository and contain both streams and
profiles. These can be described as follows:
• module streams: A module stream can be thought of as a separate version of a
particular package (and its dependencies) made available in the repository. In our
postgresql example below, several versions are shipped as streams, one of which is
marked as the default.
• module profiles: What a module profile does is take into consideration the use
case for the module stream when installing the package. Applying a profile
adjusts the package RPMs, dependencies and documentation to account for the
module's use. Using the same postgresql stream in our example, you can apply a
profile of either "server" or "client". Obviously, you do not need the same
packages installed on your system if you are just going to use postgresql as a
client to access a server.
You can obtain a list of all modules by executing the following command:
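With dnf, that listing is obtained with the module sub-command (output omitted here, as it is long):
dnf module list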
This will give you a long list of the available modules and the profiles that can be
used for them. The thing is you probably already know what package you are
interested in, so to find out if there are modules for a particular package, add the
package name after "list". We will use our postgresql package example again here:
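For example:
dnf module list postgresql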
Notice in the listing the "[d]". This means that this is the default. It shows that the
default version is 10 and that regardless of which version you choose, if you do not
specify a profile, then the server profile will be the profile used, as it is the default
as well.
Using our example postgresql package, let's say that we want to enable version 12.
To do this, you simply use the following:
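Using dnf's module:stream syntax, that is:
dnf module enable postgresql:12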
Here the enable command requires the module name followed by a ":" and the
stream name.
To verify that you have enabled postgresql module stream version 12, use your list
command again which should show you the following output:
Here we can see the "[e]" for "enabled" next to stream 12, so we know that version
12 is enabled.
Now that our module stream is enabled, the next step is to install postgresql , the
client application for the postgresql server. This can be achieved by running the
following command:
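That is simply:
dnf install postgresql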
================================================================================
 Package        Architecture  Version                               Repository  Size
================================================================================
Installing group/module packages:
 postgresql     x86_64        12.12-1.module+el8.6.0+1049+f8fc4c36  appstream   1.5 M
Installing dependencies:
 libpq          x86_64        13.5-1.el8                            appstream   197 k

Transaction Summary
================================================================================
Install 2 Packages

Total download size: 1.7 M
Installed size: 6.1 M
Is this ok [y/N]:
It's also possible to directly install packages without even having to enable the
module stream! In this example, let's assume that we only want the client profile
applied to our installation. To do this, we simply enter this command:
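One way to do this is dnf's @module:stream/profile notation (a hedged sketch of the command behind the output below):
dnf install @postgresql:12/client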
================================================================================
 Package        Architecture  Version                               Repository  Size
================================================================================
Installing group/module packages:
 postgresql     x86_64        12.12-1.module+el8.6.0+1049+f8fc4c36  appstream   1.5 M
Installing dependencies:
 libpq          x86_64        13.5-1.el8                            appstream   197 k
Installing module profiles:
 postgresql/client
Enabling module streams:
 postgresql                   12

Transaction Summary
================================================================================
Install 2 Packages
Answering "y" to the prompt will install everything you need to use postgresql
version 12 as a client.
After you install, you may decide that for whatever reason, you need a different
version of the stream. The first step is to remove your packages. Using our example
postgresql package again, we would do this with:
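For instance:
dnf remove postgresql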
This will display similar output as the install procedure above, except it will be
removing the package and all of its dependencies. Answer "y" to the prompt and hit
enter to uninstall postgresql .
Once this step is complete, you can issue the reset command for the module using:
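That is:
dnf module reset postgresql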
Dependencies resolved.
================================================================================
 Package                 Architecture  Version  Repository  Size
================================================================================
Disabling module profiles:
 postgresql/client
Resetting modules:
 postgresql

Transaction Summary
================================================================================
Is this ok [y/N]:
Answering "y" to the prompt will then reset postgresql back to the default stream
with the stream that we had enabled (12 in our example) no longer enabled:
You can also use the switch-to sub-command to switch from one enabled stream to
another. Using this method not only switches to the new stream, but installs the
needed packages (either downgrade or upgrade) without a separate step. To use
this method to enable postgresql stream version 13 and use the "client" profile,
you would use:
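With the same module notation, something like:
dnf module switch-to postgresql:13/client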
There may be times when you wish to disable the ability to install packages from a
module stream. In the case of our postgresql example, this could be because you
want to use the repository directly from PostgreSQL so that you could use a newer
version (at the time of this writing, versions 14 and 15 are available from this
repository). Disabling a module stream makes installing any of those packages
impossible without first enabling them again.
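That is done with the disable sub-command:
dnf module disable postgresql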
And if you list out the postgresql modules again, you will see the following showing
all postgresql module versions disabled:
It provides packages that are not included in the official RHEL repositories. These
are not included because they are not considered necessary in an enterprise
environment or deemed outside the scope of RHEL. We must not forget that RHEL
is an enterprise class distribution, and desktop utilities or other specialized
software may not be a priority for an enterprise project.
14.5.2 Installation
Installation of the necessary files can be easily done with the package provided by
default from Rocky Linux.
export http_proxy=http://172.16.1.10:8080
Then:
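The package that sets up the repository is epel-release:
dnf install epel-release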
Once installed you can check that the package has been installed correctly with the
command dnf info .
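For example:
dnf info epel-release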
The package, as you can see from the package description above, does not contain
executables, libraries, etc... but only the configuration files and GPG keys for
setting up the repository.
Another way to verify the correct installation is to query the rpm database.
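For example:
rpm -q epel-release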
Now you need to run an update to let dnf recognize the repository. You will be
asked to accept the GPG keys of the repositories. Clearly, you have to answer YES
in order to use them.
dnf update
Once the update is complete you can check that the repository has been configured
correctly with the dnf repolist command which should now list the new
repositories.
dnf repolist
repo id repo name
...
epel Extra Packages for Enterprise Linux 8 - aarch64
epel-modular Extra Packages for Enterprise Linux Modular 8 - aarch64
...
[epel]
name=Extra Packages for Enterprise Linux $releasever - $basearch
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
#baseurl=https://download.example/pub/epel/$releasever/Everything/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=1
gpgcheck=1
countme=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8

[epel-debuginfo]
name=Extra Packages for Enterprise Linux $releasever - $basearch - Debug
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
#baseurl=https://download.example/pub/epel/$releasever/Everything/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux $releasever - $basearch - Source
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
#baseurl=https://download.example/pub/epel/$releasever/Everything/source/tree/
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
gpgcheck=1
At this point, once configured, we are ready to install the packages from EPEL. To
start, we can list the packages available in the repository with the command:
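A hedged reconstruction of that command, restricting dnf to the EPEL repository:
dnf --disablerepo="*" --enablerepo="epel" list available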
CCfits.aarch64                2.5-14.el8                epel
CCfits-devel.aarch64          2.5-14.el8                epel
...
From the command we can see that to install from EPEL we must force dnf to
query the requested repository with the options --disablerepo and --enablerepo .
This is because otherwise a match found in other optional repositories (RPM
Fusion, REMI, ELRepo, etc.) could be newer and therefore take priority. These
options are not necessary if EPEL is the only optional repository you have installed,
because its packages will never be available in the official repositories, at least
not in the same version!
Support consideration
One aspect to consider regarding support (updates, bug fixes, security patches) is that EPEL packages have no official support from
RHEL, and technically their life could last only the span of a Fedora development cycle (six months) before disappearing. This is a
remote possibility, but one to consider.
So, to install a package from the EPEL repositories you would use:
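For example (the package name is arbitrary):
dnf --disablerepo="*" --enablerepo="epel" install htop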
Transaction Summary
================================================================================
Install 1 Package
14.5.4 Conclusion
EPEL is not an official repository for RHEL, but it can be useful for administrators
and developers who work with RHEL or derivatives and need some utilities
prepared for RHEL from a source they can feel confident about.
The dnf-plugins-core package adds plugins to dnf that will be useful for managing
your repositories.
Note
Not all plugins will be presented here but you can refer to the package
documentation for a complete list of plugins and detailed information.
Examples:
If you just want to obtain the remote location url of the package:
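For example, with the download plugin (the package name is arbitrary):
dnf download --url httpd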
After running a dnf update , the running processes will continue to run but with the
old binaries. In order to take into account the code changes and especially the
security updates, they have to be restarted.
The needs-restarting plugin will allow you to detect processes that are in this case.
Options Description
Examples:
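A couple of hedged illustrations:
# List the processes still running with old binaries:
dnf needs-restarting
# Only report whether a full reboot is required:
dnf needs-restarting -r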
All of the examples in this document use root actions, with ordinary users' actions
commented separately. In the code blocks, the command description is
indicated with # on the previous line.
It is well known that the basic permissions of GNU/Linux can be viewed using ls -
l:
Shell > ls -l
- rwx r-x r-x 1 root root 1358 Dec 31 14:50 anaconda-ks.cfg
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
1 2 3 4 5 6 7 8 9 10
Part Description
1 File type. - indicates that this is an ordinary file. Seven file types will be introduced later.
2 Permissions of the owner user; rwx respectively means: read, write, execute.
3 Permissions of the owner group.
4 Permissions of other users.
5 Number of subdirectories ( . and .. included). For a file, it represents the number of hard links, and 1
represents itself.
6 Owner user of the file or directory.
7 Owner group of the file or directory.
8 For files, it shows the size of the file. For directories, it shows the fixed value of 4096 bytes occupied by
the file naming. To calculate the total size of a directory, use du -sh
9 Date and time of the last modification.
10 Name of the file or directory.
- Represents an ordinary file. Including plain text files (ASCII); binary files (binary); data format files (data);
various compressed files.
d Directory file.
b Block device file. Including all kinds of hard drives, USB drives and so on.
c Character device file. Interface device of serial port, such as mouse, keyboard, etc.
s Socket file. A special file used for communication between processes, typically over the network.
p Pipe file. It is a special file type, the main purpose is to solve the errors caused by multiple programs
accessing a file at the same time. FIFO is the abbreviation of first-in-first-out.
l Soft link files, also called symbolic link files, are similar to shortcuts in Windows. A hard link file, also
known as a physical link file, does not have its own type letter.
For file:
4 r(read) Indicates that you can read this file. You can use commands such as cat ,
head , more , less , tail , etc.
2 w(write) Indicates that the file can be modified. Commands such as vim can be used.
1 x(execute) Indicates that the file can be executed, such as scripts or binary programs.
For directory:
4 r(read) Indicates that the contents of the directory can be listed, such as ls -l .
2 w(write) Indicates that you can create, delete, and rename files in this directory, such
as commands mkdir , touch , rm , etc.
1 x(execute) Indicates that you can enter the directory, such as the command cd .
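These numeric values are added up for each identity; a hedged illustration (the directory name and listing details are assumptions):
# 7 = 4+2+1 (rwx) for the owner, 5 = 4+1 (r-x) for the group and for other users
Shell > chmod 755 /project
Shell > ls -ld /project
drwxr-xr-x 2 root root 4096 Jan 10 10:00 /project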
Info
In GNU/Linux, in addition to the basic permissions mentioned above, there are also
some special permissions, which we will introduce one by one.
What is ACL? ACL (Access Control List) addresses the problem that the three
identities (owner, owner group, and others) under Linux cannot meet all the needs of
resource permission allocation.
For example, the teacher gives lessons to the students, and the teacher creates a
directory under the root directory of OS. Only the students in this class are allowed
to upload and download, and others are not allowed. At this point, the permissions
for the directory are 770. One day, a student from another school came to listen to
the teacher, how should permissions be assigned? If you put this student in the
owner group, he will have the same permissions as the students in this class -
rwx. If the student is put into the other users, he will not have any permissions. At
this time, the basic permission allocation cannot meet the requirements, and you
need to use ACL.
There is a similar feature in the Windows operating system. For example, to assign
permissions to a user for a file, for a user-defined directory/file, right-click --->
Properties ---> Security ---> Edit ---> Add ---> Advanced ---> Find now, find the
corresponding user/group ---> assign specific permissions ---> apply, and
complete.
The same is true of GNU/Linux: add the specified user/group to the file/directory
and grant the appropriate permissions to complete the ACL permission assignment.
How do I enable an ACL? You need to find the file name of the device where the
mount point is located and its partition number. For example, on my machine, you
could do something like this:
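A hedged sketch of such a check, assuming an ext4 root file system on /dev/sda1:
Shell > df -hT /
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sda1      ext4   50G  4.2G   43G   9% /
Shell > dumpe2fs /dev/sda1 | grep "Default mount options"
Default mount options:    user_xattr acl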
When you see the line "Default mount options: user_xattr acl", it indicates that
ACL has been enabled. If it is not enabled, you can also enable it temporarily --
mount -o remount,acl / . It can also be enabled permanently:
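A hedged sketch of the permanent setting, adding the acl option to the file system line in /etc/fstab (device and mount point are assumptions):
Shell > vim /etc/fstab
/dev/sda1   /   ext4   defaults,acl   1 1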
To view ACLs, you need to use the getfacl command -- getfacl FILE_NAME
If you want to set ACL permissions, you need to use the setfacl command.
Option Description
-m Modifies or sets an ACL entry
-x Removes a specific ACL entry
-b Removes all ACL entries
-d Sets a default ACL (directories only)
-k Removes the default ACL
-R Applies recursively to subdirectories and files
Use the teacher's example mentioned at the beginning of the article to illustrate
the use of ACL.
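A hedged reconstruction of the commands behind the output below (user, directory, and permissions follow the example described in the text):
Shell > setfacl -m u:tom:rx -R /project
Shell > getfacl /project
# file: project
# owner: root
# group: root
user::rwx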
user:tom:r-x
group::rwx
mask::rwx
other::---
When using the getfacl command, what does the "mask:: rwx" in the output
message mean? The mask is used to specify the maximum valid permissions. The
permissions given to the user are not real permissions, the real permissions can
only be obtained by using the "logical and" of the user's permissions and mask
permissions.
Info
"Logical and" means: that if all are true, the result is true; if there is one false, the result is false.
User's ACL permission   mask   Effective permission ("logical and")
r                       r      r
r                       -      -
-                       r      -
-                       -      -
Info
Because the default mask is rwx, for any user's ACL permissions, the result is their own permissions.
What is the recursion of ACL permissions? For ACL permissions, this means that
when the parent directory sets ACL permissions, all subdirectories and sub-files
will have the same ACL permissions.
Info
Now there is a question: if I create a new file in this directory, does it have ACL
permissions? The answer is no, because the newly created file was created after the
command setfacl -m u:tom:rx -R /project was executed.
If you want the newly created directory/file to also have ACL permissions, you need
to use default ACL permissions.
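A hedged sketch of setting a default ACL on the directory (same user and permissions as before):
Shell > setfacl -m d:u:tom:rx /project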
Info
The default and recursion of using ACL permissions require that the operating object of the command be a directory! If the operation
object is a file, an error prompt will be output.
15.3.2 SetUID
• The executor of the command obtains the identity of the owner of the program file
when executing the program.
• The identity change is only valid during execution, and once the binary program
is finished, the executor's identity is restored to the original identity.
Why does GNU/Linux need such strange permissions? Take the most common
passwd command as an example:
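A hedged look at its permissions (file size and date are illustrative):
Shell > ls -l /usr/bin/passwd
-rwsr-xr-x. 1 root root 33544 Dec 13  2019 /usr/bin/passwd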
As you can see, ordinary users only have r and x, but the owner's x becomes s,
proving that the passwd command has SUID permission.
It is well known that ordinary users (uid >= 1000) can change their own
password. The real password is stored in the /etc/shadow file, but the permission
of the shadow file is 000, and ordinary users do not have any permissions on it.
Since ordinary users can change their password, they must be able to write to
the /etc/shadow file. When an ordinary user executes the passwd command, they
temporarily take on the identity of the file's owner, root. For the shadow file,
root is not restricted by permissions. This is why the passwd command needs
SUID permission.
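For reference, a hedged sketch of how SUID is set and removed on a file (4 is its numeric value):
Shell > chmod u+s FILE_NAME
# or
Shell > chmod 4755 FILE_NAME
# remove it:
Shell > chmod u-s FILE_NAME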
Warning
When the owner of an executable binary file/program does not have x, the use of capital S means that the file cannot use SUID
permissions.
Warning
Because SUID can temporarily change the Ordinary users to root, you need to be especially careful with files with this permission
when maintaining the server. You can find files with SUID permissions by using the following command:
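A hedged version of such a search:
Shell > find / -perm -4000 -type f 2> /dev/null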
15.3.3 SetGID
• The executor of the command obtains the identity of the owner group of the
program file when executing the program.
• The identity change is only valid during execution, and once the binary program
is finished, the executor's identity is restored to the original identity.
The locate command uses the mlocate.db database file to quickly search for files.
Because the locate command has SGID permission, when an ordinary user
executes it, the owner group is switched to slocate, and slocate has r permission
on the /var/lib/mlocate/mlocate.db file.
The SGID is indicated by the number 2, so the locate command has a permission
of 2711.
# Set SGID:
Shell > chmod g+s FILE_NAME
# or remove it:
Shell > chmod g-s FILE_NAME
Warning
When the owner group of an executable binary file/program does not have x, use uppercase S to indicate that the file's SGID
permissions cannot be used correctly.
SGID can be used not only for executable binary file/program, but also for
directories, but it is rarely used.
• For files created by ordinary users in this directory, the default owner group is the
owner group of the directory.
For example:
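A hedged sketch (directory and group names are assumptions):
Shell > mkdir /project
Shell > chgrp class1 /project
Shell > chmod 2770 /project
Shell > ls -ld /project
drwxrws--- 2 root class1 4096 Jan 10 10:00 /project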
Warning
Because SGID can temporarily change the owner group of ordinary users to root, you need to pay special attention to the files with
this permission when maintaining the server. You can find files with SGID permissions through the following command:
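A hedged version of such a search:
Shell > find / -perm -2000 -type f 2> /dev/null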
• If there is no Sticky Bit, ordinary users with w permission can delete all files in
this directory (including files created by other users). Once the directory is given
SBIT permission, only root user can delete all files. Even if ordinary users have w
permission, they can only delete files created by themselves (files created by
other users cannot be deleted).
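/tmp is the classic example; the trailing t in the permissions marks the SBIT (listing details are illustrative):
Shell > ls -ld /tmp
drwxrwxrwt. 18 root root 4096 Jan 10 10:00 /tmp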
Can the file or directory have 7755 permission? No, they are aimed at different
objects. SUID is for executable binary files; SGID is used for executable binaries
and directories; SBIT is only for directories. That is, you need to set these special
permissions according to different objects.
Info
root (uid=0) users are not restricted by the permissions of SUID, SGID, and SBIT.
15.3.5 chattr
The most commonly used permissions (also called attributes) are a and i.
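A hedged sketch of setting, checking, and removing these attributes (the file name is an assumption; the exact lsattr output varies by version):
Shell > chattr +i /tmp/filei
Shell > lsattr /tmp/filei
----i----------- /tmp/filei
Shell > chattr -i /tmp/filei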
Description of attribute i
• For a file: deleting, renaming, modifying, and appending content are forbidden; viewing the
content is still allowed.
• For a directory: the directory itself and the files under it cannot be deleted, and no new files
can be created in it; the files already in the directory can still be modified, appended to, and
viewed.
# Allow modification
Shell > vim /tmp/diri/f1
qwer-tom
Description of attribute a
• For a file: deleting and freely modifying the content are forbidden; appending content and
viewing are allowed.
• For a directory: the directory and the files under it cannot be deleted, and the existing files
cannot be freely modified; appending to files, viewing them, and creating new files are
allowed.
Question
What happens when I set the ai attribute on a file? You cannot do anything with the file other than to view it.
What about the directory? Allowed are: free modification, appending file contents, and viewing. Disallowed: delete and create files.
15.3.6 sudo
• Through the root user, assign the commands that can only be executed by the
root user (uid=0) to ordinary users for execution.
We know that only the administrator root has permission to use the commands
under /sbin/ and /usr/sbin/ in the GNU/Linux directory. Generally speaking, a
company has a team to maintain a set of servers. This set of servers can refer to a
single computer room in one geographic location, or it can refer to a computer
room in multiple geographical locations. The team leader uses the permissions of
the root user, and other team members may only have the permissions of the
ordinary user. As the person in charge has a lot of work, there is no time to
maintain the daily work of the server, most of the work needs to be maintained by
ordinary users. However, ordinary users have many restrictions on the use of
commands, and at this point, you need to use sudo permissions.
To grant permissions to ordinary users, you must use the root user (uid=0).
You can empower ordinary users by using the visudo command; what you are
actually changing is the /etc/sudoers file.
95 ## user MACHINE=COMMANDS
96 ##
97 ## The COMMANDS section may have other options added to it.
98 ##
99 ## Allow root to run any commands anywhere
100 root ALL=(ALL) ALL
↓ ↓ ↓ ↓
1 2 3 4
...
Part Description
1 User name or owner group name. Refers to which user/group is granted permissions. If it is an owner
group, you need to write "%", such as %root.
2 Which machines are allowed to execute commands. It can be a single IP address, a network segment, or
ALL.
3 The identity the command may be run as, written in parentheses. (ALL) means the command can be run
as any user.
4 The commands that are allowed, given with absolute paths. ALL means all commands.
For example:
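A hedged illustration of such an entry, matching the command used by tom below:
tom ALL=(root) /sbin/shutdown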
# You can use the "-c" option to check for errors in /etc/sudoers writing.
Shell > visudo -c
# To use the available sudo command, ordinary users need to add sudo before the
command.
Shell(tom) > sudo /sbin/shutdown -r now
Warning
Because sudo allows operations beyond a user's ordinary rights ("ultra vires" operations), you need to be careful when dealing with the /etc/sudoers file!
Rocky Linux Admin Guide (English version) Copyright © 2023 The Rocky Enterprise Software Foundation