Linux
https://www.golinuxhub.com/2018/06/scenario-based-interview-question-beginner-experience-linux.html
https://svrtechnologies.com/linux-interview-questions/new-54-linux-admin-interview-questions-and-answers-pdf
Basic commands in Linux:
Command – Description
uname -a – Displays the Linux version, kernel release date, processor type, etc.
cat file1 > file2 – Output is written to file2 instead of being displayed on the screen.
chmod u=rwx,g=rx,o=r myfile – Changes the permissions of myfile. In the octal notation, 4 stands for "read", 2 stands for "write", 1 stands for "execute", and 0 stands for "no permission".
chmod -R [option] [directory] – Changes the permissions of all files in the given directory recursively.
less /etc/passwd – Pages through /etc/passwd, e.g. for finding a user.
1. Hard Links
Each hard-linked file is assigned the same inode value as the original, so they reference the same physical file location. Hard links are flexible and remain linked even if the original or linked files are moved throughout the file system, although hard links cannot cross different file systems.
The ls -l command shows all the links, with the link column showing the number of links.
Links have the actual file contents.
Removing any one link just reduces the link count but doesn't affect the other links.
We cannot create a hard link to a directory, to avoid recursive loops.
If the original file is removed, the link will still show the content of the file.
Command to create a hard link is:
$ ln [original filename] [link name]
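The inode-sharing behaviour described above can be checked with a short shell session (a sketch; the file names under /tmp are illustrative):

```shell
# start clean, then create a file and a hard link to it
rm -f /tmp/hl_original.txt /tmp/hl_link.txt
echo "hello" > /tmp/hl_original.txt
ln /tmp/hl_original.txt /tmp/hl_link.txt

# both names share the same inode number and show a link count of 2
ls -li /tmp/hl_original.txt /tmp/hl_link.txt

# removing the original name still leaves the content reachable
rm /tmp/hl_original.txt
cat /tmp/hl_link.txt
```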
2. Soft Links
A soft link is similar to the file-shortcut feature used in Windows operating systems. Each soft-linked file has a separate inode value and points to the original file. As with hard links, any changes to the data in either file are reflected in the other. Soft links can be linked across different file systems, but if the original file is deleted or moved, the soft link stops working correctly (it becomes a dangling link).
The ls -l command shows all links with "l" as the first character of the first column, and shows which original file the link points to.
A soft link contains the path of the original file, not its contents.
Removing a soft link doesn't affect anything, but removing the original file turns the link into a "dangling" link that points to a nonexistent file.
A soft link can link to a directory.
Link across filesystems: If you want to link files across the filesystems,
you can only use symlinks/soft links.
Command to create a Soft link is:
$ ln -s [original filename] [link name]
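A similar sketch for soft links (again with illustrative /tmp file names) shows the separate inode and the dangling-link behaviour:

```shell
# start clean, then create a file and a symbolic link to it
rm -f /tmp/sl_original.txt /tmp/sl_link.txt
echo "hello" > /tmp/sl_original.txt
ln -s /tmp/sl_original.txt /tmp/sl_link.txt

# the link has its own inode; readlink shows its target path
readlink /tmp/sl_link.txt

# deleting the original leaves a dangling link behind
rm /tmp/sl_original.txt
[ -L /tmp/sl_link.txt ] && echo "link still exists but is dangling"
```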
grep -i [options] pattern [filenames] – Searches for a pattern without considering case sensitivity.
grep -r pattern * – Searches for a pattern recursively.
kill [options] pid – Stops a process. If the process refuses to stop, use kill -9 pid.
ln -s source [name of link] – Creates a soft link.
man [command] – Displays the help information for the specified command.
passwd [name [password]] – Changes the password, or allows the system administrator to change any password.
ps -lf – Displays priority and nice value along with the usual process info.
tar [options] filename – Stores and extracts files from a tarfile (.tar) or tarball (.tar.gz or .tgz).
free [option] – Displays used and free memory and the buffer and cache used by the kernel, as well as swap memory.
free -m – Displays in megabytes.
free -k – Displays in kilobytes.
free -t – Displays a line showing the column totals.
lsblk – Shows disk partitions and mount points (similar to fdisk -l).
Setfacl:
chown and chmod cannot grant permissions to an additional user or group beyond the file's owner, group, and others. For that we use the setfacl command.
To set permissions for an arbitrary user:
# setfacl -m u:username:rwx filename
Curl command:
curl is a tool to transfer data from or to a server using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, and others). Basically, curl is used to download content from the internet.
To save the content to a file, all you must do is specify the -o (lowercase o) switch as follows:
curl -o page1.html http://www.mysite.com/page1.html
To get the command to run in the background, you then need to use the ampersand (&) as follows:
curl -o page1.html http://www.mysite.com/page1.html &
You can download from multiple URLs using a single curl command. A capital O (-O) is used to download the content of each URL into a file named after the remote file in the current directory:
curl -O http://www.mysite.com/page1.html -O http://www.mysite.com/page2.html
By default, the curl command displays a progress meter and transfer statistics as it downloads a URL.
You can use curl to fill in an online form and submit the data as if you had filled it
in online.
Logrotate:
Following are the key files that you should be aware of for logrotate to work
properly.
/etc/logrotate.conf – Log rotation configuration for all the log files are specified
in this file.
If you want to rotate a log file (for example, /tmp/output.log) for every 1KB,
create the logrotate.conf as shown below.
$ cat logrotate.conf
/tmp/output.log {
    weekly
    size 1k
    rotate 4
    compress
    maxage 100
}
size 1k – logrotate runs only if the file size is equal to (or greater than) this size.
create – rotate the original file and create the new file with the specified permission, user, and group.
rotate – limits the number of log file rotations; this keeps only the 4 most recent rotated log files.
compress – compresses the rotated file.
weekly – performs the logrotate operation once every week.
maxage – deletes rotated files older than 100 days.
Find command:
To start searching the whole drive you would type the following:
find /
If, however, you want to start searching from the folder you are currently in, then you can use the following syntax:
find .
To search for a file called myresume.odt across the whole drive, you would use the following syntax:
find / -name myresume.odt
The first part of the find command is obviously the word find.
The second part is where to start searching from.
The next part is an expression which determines what to find.
Finally the last part is the name of the thing to find.
If you want to find all the empty files and folders in your system use the
following command:
find / -empty
If you want to find all of the executable files on your computer, use the following command:
find / -executable
To find all of the files that are readable, use the following command:
find / -readable
When you search for a file you can use a pattern. For example, maybe you
are searching for all files with the extension mp3.
You have an entire folder full of music files in a bunch of different formats. You want to find all the *.mp3 files from the artist JayZ, but you don't want any of the remixed tracks. Using a find command with a couple of grep pipes will do the trick:
find . -name "*.mp3" | grep -i jayz | grep -vi remix
In this example, we are using find to print all the files with a *.mp3 extension, piping the list to grep -i, which keeps only the filenames containing "JayZ", and then piping to grep -vi, which filters out all filenames containing the string "remix" in any case.
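The pipeline described above can be tried end to end with some sample files (the file names are illustrative):

```shell
# set up a sample music directory with mixed formats
mkdir -p /tmp/music_demo
touch "/tmp/music_demo/JayZ - Song One.mp3" \
      "/tmp/music_demo/JayZ - Song One (Remix).mp3" \
      "/tmp/music_demo/Other Artist.flac"

# find all .mp3 files, keep the JayZ ones, drop anything marked remix
find /tmp/music_demo -name "*.mp3" | grep -i jayz | grep -vi remix
```

Only "JayZ - Song One.mp3" survives all three stages of the pipeline.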
NMAP command:
nmap 172.16.0.0/24
Starting Nmap 5.21 (http://nmap.org) at 2015-06-23 09:39 EST
Nmap scan report for 172.16.0.1
Host is up (0.0043s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
-------------------------------------------------------------------------------------------------------------
Crontab Examples
A line in the crontab file like the one below removes the tmp files from /home/someuser/tmp each day at 6:30 PM:
30 18 * * * rm /home/someuser/tmp/*
DIG Command
Both dig and nslookup query DNS name servers. The sample output below is from an nslookup-style query for www.tecmint.com:
Non-authoritative answer:
www.tecmint.com canonical name = tecmint.com.
Name: tecmint.com
Address: 50.116.66.136
-------------------------------------------------------------------------------------------------------------
What is Kernel:
A kernel is the lowest level of software that interfaces with the hardware in a computer. It is responsible for interfacing all applications to the physical hardware and for allowing processes to get information from each other.
The kernel is highly involved in resource management. It must make sure that
there is enough memory available for an application to run, as well as to place
an application in the right location in memory. It tries to optimize the usage of the
processor so that it can complete tasks as quickly as possible. It also aims to avoid
deadlocks, which are problems that completely halt the system when one
application needs a resource that another application is using.
Linux has a monolithic kernel. The kernel file is stored in the /boot directory.
Kernel modules, also known as a loadable kernel module (LKM), are essential to
keeping the kernel functioning with all your hardware without consuming all your
available memory.
A module typically adds functionality to the base kernel for things like devices, file systems, and system calls. LKMs have the file extension .ko and are typically stored in the /lib/modules directory. Because of their modular nature, you can easily customize your kernel by choosing which modules are loaded, or not loaded, at startup by editing the module configuration files (for example, under /etc/modules-load.d/).
-------------------------------------------------------------------------------------------------------------
Swap space in Linux is used when the physical memory (RAM) is full. If the system
needs more memory resources and the RAM is full, inactive pages in memory are
moved to the swap space. Swap space is located on hard drives, which have a
slower access time than physical memory.
Swap space can be a dedicated swap partition or a swap file. There is no need to be
alarmed if you find the swap partition filled to 50%. The fact that swap space is
being used does not mean a memory bottleneck but rather proves how efficiently
Linux handles system resources. Also, a swapped-out page stays in swap space
until there is a need for it, that is when it gets moved in (swap-in).
Swap should equal 2x physical RAM for up to 2 GB of physical RAM, and then an
additional 1x physical RAM for any amount above 2 GB, but never less than 32
MB.
Directory structure
Directories are also known as folders because they can be thought of as folders in
which files are kept in a sort of physical desktop analogy.
The following table provides a very brief list of the standard, well-known, and
defined top-level Linux directories and their purposes.
Directory – Description
/dev – Contains the device files for every hardware device attached to the system. These are not device drivers; rather, they are files that represent each device on the computer and facilitate access to those devices.
/etc – Contains the local system configuration files for the host computer.
/home – Home directory storage for user files. Each user has a subdirectory in /home.
/lib – Contains shared library files that are required to boot the system.
/root – This is not the root (/) filesystem. It is the home directory for the root user.
/sbin – System binary files. These are executables used for system administration.
/var – Variable data files are stored here. This can include things like log files, MySQL and other database files, web server data files, email inboxes, and much more.
In the runlevel directories, scripts whose names start with S are run during startup, and scripts whose names start with K are run during shutdown.
-------------------------------------------------------------------------------------------------------------
Unmounting is done with the umount command. This command also accepts multiple paths to unmount several at once:
sudo umount /mnt/music /mnt/photos /dev/sda5
If the targeted device is busy, we won't be able to unmount it, and umount will show an error message.
#mount
Information about mounted file systems is stored in the /etc/mtab file. To view its content, use the following command:
# cat /etc/mtab
Creating filesystems
After creating a partition using the fdisk or gdisk command, we need to create a filesystem on it:
$ sudo mkfs.ext4 /dev/sdb1
-------------------------------------------------------------------------------------------------------------
TAR (Method of compressing file in linux):
The standard archival program for Unix-like operating systems is Tar.
Compression programs work on one file or stream of data and produce one
compressed file or stream, so this splits the job into two parts: archival and
compression.
Use the following command to compress an entire directory or a single file on
Linux. It’ll also compress every other directory inside a directory you specify–in
other words, it works recursively.
-c: Create an archive.
-z: Compress the archive with gzip.
-v: Display progress in the terminal while creating the archive, also known as
“verbose” mode. The v is always optional in these commands, but it’s helpful.
-f: Allows you to specify the filename of the archive.
Let's say you have a directory named "stuff" in the current directory and you want to save it to a file named archive.tar.gz. You'd run the following command:
tar -czvf archive.tar.gz stuff
Or, let's say there's a directory at /usr/local/something on the current system and you want to compress it to a file named archive.tar.gz. You'd run:
tar -czvf archive.tar.gz /usr/local/something
For example, let’s say you want to compress /home/ubuntu, but you don’t want
to compress the /home/ubuntu/Downloads and /home/ubuntu/.cache
directories. Here’s how you’d do it:
tar -czvf archive.tar.gz /home/ubuntu --exclude=/home/ubuntu/Downloads --
exclude=/home/ubuntu/.cache
You could archive an entire directory and exclude all .mp4 files with the following command:
tar -czvf archive.tar.gz /home/ubuntu --exclude=*.mp4
tar also supports bzip2 compression. This allows you to create bzip2-compressed
files, often named .tar.bz2, .tar.bz, or .tbz files. To do so, just replace the -z for
gzip in the commands here with a -j for bzip2.
Extract an Archive
Once you have an archive, you can extract it with the tar command. The following command will extract the contents of archive.tar.gz to the current directory:
tar -xzvf archive.tar.gz
It's the same as the archive creation command we used above, except the -x switch replaces the -c switch.
You may want to extract the contents of the archive to a specific directory. You
can do so by appending the -C switch to the end of the command. For example,
the following command will extract the contents of the archive.tar.gz file to the
/tmp directory.
tar -xzvf archive.tar.gz -C /tmp
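The create and extract steps above can be combined into a small round trip (a sketch; the paths under /tmp are illustrative):

```shell
# start clean, create a "stuff" directory with one file
rm -rf /tmp/tar_demo
mkdir -p /tmp/tar_demo/stuff /tmp/tar_demo/out
echo "hello" > /tmp/tar_demo/stuff/file.txt

# create the archive (-C changes directory first, keeping paths relative)
tar -czf /tmp/tar_demo/archive.tar.gz -C /tmp/tar_demo stuff

# extract into a different directory with -C
tar -xzf /tmp/tar_demo/archive.tar.gz -C /tmp/tar_demo/out
cat /tmp/tar_demo/out/stuff/file.txt
```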
ls -l /proc
total 0
dr-xr-xr-x 9 root root 0 Mar 31 21:34 1
dr-xr-xr-x 9 root root 0 Mar 31 21:34 10
dr-xr-xr-x 9 avahi avahi 0 Mar 31 21:34 1034
dr-xr-xr-x 9 root root 0 Mar 31 21:34 1036
dr-xr-xr-x 9 root root 0 Mar 31 21:34 1039
dr-xr-xr-x 9 root root 0 Mar 31 21:34 1041
dr-xr-xr-x 9 root root 0 Mar 31 21:34 1043
dr-xr-xr-x 9 root root 0 Mar 31 21:34 1044
You can check that there is an entry for every running process in the /proc file system.
ls -ltr /proc/7494
Output:
total 0
-rw-r--r-- 1 mandeep mandeep 0 Apr 1 01:14 oom_score_adj
dr-xr-xr-x 13 mandeep mandeep 0 Apr 1 01:14 task
-r--r--r-- 1 mandeep mandeep 0 Apr 1 01:16 status
-r--r--r-- 1 mandeep mandeep 0 Apr 1 01:16 stat
-r--r--r-- 1 mandeep mandeep 0 Apr 1 01:16 cmdline
-r--r--r-- 1 mandeep mandeep 0 Apr 1 01:17 wchan
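Since every process has an entry under /proc, a process can inspect its own entry through /proc/self without knowing its PID:

```shell
# /proc/self always refers to the process reading it
ls /proc/self/ | head -n 5

# the status file holds the process name, state, memory usage, etc.
grep '^Name:' /proc/self/status

# cmdline holds the command line, NUL-separated
tr '\0' ' ' < /proc/self/cmdline; echo
```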
-------------------------------------------------------------------------------------------------------------
System calls used for Process management:
fork() – used to create a new process
exec() – execute a new program
wait() – wait until the process finishes execution
exit() – exit from the process
getpid() – get the unique process ID of the process
getppid() – get the parent process's unique ID
nice() – bias the scheduling priority of an existing process
-------------------------------------------------------------------------------------------------------------
What is Process?
A process refers to a program in execution; it’s a running instance of a program.
Types of Processes
To find the process ID and parent process ID of the current shell, run:
$ echo $$
$ echo $PPID
Run a program, e.g. cloudcmd, and it will start a process in the system. You can start a foreground process as follows; it will be connected to the terminal, and a user can send input to it:
# cloudcmd
To start a process in the background (non-interactive), use the & symbol, here,
the process doesn’t read input from a user until it’s moved to the foreground.
# cloudcmd &
# jobs
# bg
# jobs
# fg %1
Linux Background Process Jobs
During execution, a process changes from one state to another depending on its
environment/circumstances. In Linux, a process has the following possible states:
Running – here it’s either running (it is the current process in the system) or
it’s ready to run (it’s waiting to be assigned to one of the CPUs).
Waiting – in this state, a process is waiting for an event to occur or for a
system resource.
Stopped – in this state, a process has been stopped, usually by receiving a
signal. For instance, a process that is being debugged.
Zombie – here, a process is dead; it has been halted, but it still has an entry in the process table.
Linux also has some commands for controlling processes such as kill, pkill, pgrep
and killall.
$ kill 2308
$ pgrep -u tecmint top
$ pkill glances
$ kill -l
List All Linux Signals
To send a signal to a process, use the kill or pkill commands we mentioned earlier (pgrep only looks up PIDs). But programs can only respond to signals if they are programmed to recognize those signals.
The following are kill commands examples to kill the Firefox application using its
PID once it freezes:
$ pidof firefox
$ kill -9 2687
OR
$ killall firefox
To kill all mysql instances along with their child processes, use the following command:
# killall mysqld
The kernel scheduler is a unit of the kernel that determines the most suitable
process out of all runnable processes to execute next.
By default, all the processes are considered equally urgent and are allotted the
same amount of CPU time. The nice parameter is used to change priority. The
Linux kernel then reserves CPU time for each process based on its relative priority
value.
It ranges from -20 to 19 and can take only integer values. A value of -20 represents the highest priority level, whereas 19 represents the lowest.
Ex. the following command line starts the process "large-job," setting the nice
value to 12:
nice -12 large-job
You can change the priority of a job that is already running using renice. For
example:
renice 17 -p 1134
This changes the nice value of the job with process id 1134 to 17. In this case, no
dash is used for the command option when specifying the nice value.
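A quick way to see the nice value take effect: with no arguments, nice prints the current niceness, so running it under nice -n shows the adjusted value (assuming the shell's default niceness is 0):

```shell
# run `nice` itself under an adjusted niceness; with no arguments,
# nice prints the niceness it inherited, so this should print 10
nice -n 10 nice
```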
SUID : If setuid bit is set, when the file is executed by a user, the process will have
the same rights as the owner of the file being executed.
SGID : Same as above, but the process inherits the group privileges of the file on execution, not the user privileges. Similarly, when you create a file within a directory that has SGID set, the file inherits the group ownership of the directory.
Sticky bit : The sticky bit was historically used on executables in Linux so that they would remain in memory longer after the initial execution, in the hope they would be needed again soon. Nowadays it is mainly used on folders, so that a file or folder created inside a sticky-bit-enabled folder can only be deleted by its owner.
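The special bits can be set with chmod using an extra leading octal digit (a sketch on a throwaway file):

```shell
# start clean, then set the setuid and setgid bits on a file
rm -f /tmp/suid_demo
touch /tmp/suid_demo
chmod 6755 /tmp/suid_demo   # leading 6 = setuid (4) + setgid (2)

# the mode string shows "s" in the owner and group execute positions
ls -l /tmp/suid_demo
stat -c '%a' /tmp/suid_demo
```

For the sticky bit on a directory, the leading digit would be 1 (e.g. chmod 1777, as used on /tmp itself).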
These init scripts are run when changing the runlevel and always call the master
script /etc/init.d/rc, which guarantees the correct order of the relevant
scripts.
What is initrd image and what is its function in the linux booting process?
The initial RAM disk (initrd) is an initial root file system that is mounted prior to
when the real root file system is available. The initrd is bound to the kernel and
loaded as part of the kernel boot procedure. The kernel then mounts this initrd as
part of the two-stage boot process to load the modules to make the real file
systems available and get at the real root file system.
What is Runlevel?
There isn't just one single set of startup scripts that init executes. There are
multiple run levels with their own startup scripts – for example, one runlevel may
bring up networking and launch the graphical desktop, while another runlevel
may leave networking disabled and skip the graphical desktop.
The global file table entry contains information such as the inode of the file,
byte offset, and the access restrictions.
stdin, stdout, and stderr
Standard input (fd 0) – The default data stream for input, for example in a command pipeline. In the terminal, this defaults to keyboard input from the user.
Standard output (fd 1) – The default data stream for output, for example when a command prints text.
Standard error (fd 2) – The default data stream for output that relates to an error occurring. In the terminal, this defaults to the user's screen.
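The three streams can be demonstrated by redirecting file descriptors 1 and 2 to different files (a sketch):

```shell
# write one line to stdout and one to stderr, redirecting each
# stream separately: ">" redirects fd 1, "2>" redirects fd 2
sh -c 'echo to-stdout; echo to-stderr >&2' > /tmp/out.txt 2> /tmp/err.txt
cat /tmp/out.txt
cat /tmp/err.txt
```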
-------------------------------------------------------------------------------------------------------------
What is inode?
An inode stores basic information about a regular file, directory, or other file
system object.
Each object in the filesystem is represented by an inode. The inode stores each file's attributes, such as its type, permissions, owner, group, size, and timestamps. You can find a file's inode number with ls -i:
$ ls -i /etc/passwd
32820 /etc/passwd
You can also use the stat command to find out the inode number and its attributes:
$ stat /etc/passwd
Output:
File: `/etc/passwd'
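ls -i and stat report the same inode number for a file, which is easy to verify (a sketch on a throwaway file):

```shell
# create a file and read its inode number two ways
touch /tmp/inode_demo.txt
ls -i /tmp/inode_demo.txt            # "<inode> <name>"
stat -c '%i %n' /tmp/inode_demo.txt  # same inode via stat
```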
-------------------------------------------------------------------------------------------------------------
Locking and unlocking a user: A superuser can lock and unlock a user account. To lock an account, invoke passwd with the -l option; to unlock it, use the -u option:
# passwd -l Shantanu
# passwd -u Shantanu
Linux group
Linux group is a mechanism to organize a collection of users. Like the user ID,
each group is also associated with a unique ID called the GID (group ID). There are
two types of groups – a primary group and a supplementary group. Each user is a
member of a primary group and of zero or ‘more than zero’ supplementary
groups. The group information is stored in /etc/group and the respective passwords are stored in the /etc/gshadow file.
Creating a group with default settings: To add a new group with default settings,
run the groupadd command as a root user, as shown below:
# groupadd employee
If you wish to add a password, then type gpasswd with the group name, as follow:
# gpasswd employee
Ex. We add a new user Ironman to the group Superhero by using the following command:
# useradd -G Superhero Ironman
Ex. If the user Ironman already exists in the system, use usermod instead (add -a to append to the existing supplementary groups rather than replace them):
# usermod -aG Superhero Ironman
To check the content of the group file:
# cat /etc/group
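The format of /etc/group can be inspected directly; each line holds four colon-separated fields:

```shell
# each line is group_name:password_placeholder:GID:member_list
head -n 3 /etc/group

# count the colon-separated fields on the first line
awk -F: 'NR==1 {print NF}' /etc/group
```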
-------------------------------------------------------------------------------------------------------------
TOP command:
The top program provides a dynamic real-time view of a running system. It can display system memory usage, CPU usage, a summary of tasks, system uptime, and user sessions.
If you want to quit, simply press “q”.
System time, uptime and user sessions
At the very top left of the screen, top displays the current time. This is followed by
the system uptime, which tells us the time for which the system has been
running. For instance, in our example, the current time is “15:39:37”, and the
system has been running for 90 days, 15 hours and 26 minutes.
Next comes the number of active user sessions. In this example, there are two
active user sessions.
Memory usage
The “memory” section shows information regarding the memory usage of the
system. The lines marked “Mem” and “Swap” show information about RAM and
swap space respectively.
The Linux kernel also tries to reduce disk access times in various ways. It
maintains a “disk cache” in RAM, where frequently used regions of the disk are
stored. In addition, disk writes are stored to a “disk buffer”, and the kernel
eventually writes them out to the disk. The total memory consumed by them is
the “buff/cache” value.
Tasks
The “Tasks” section shows statistics regarding the processes running on your
system. The “total” value is simply the total number of processes. For example, in
the above screenshot, there are 27 processes running.
CPU usage
The CPU usage section shows the percentage of CPU time spent on various tasks.
The us value is the time the CPU spends executing processes in userspace.
Similarly, the sy value is the time spent on running kernelspace processes.
PID
This is the process ID, a unique positive integer that identifies a process.
USER
This is the "effective" username (which maps to a user ID) of the user who started the process.
PR and NI
The “NI” field shows the “nice” value of a process. The “PR” field shows
the scheduling priority of the process from the perspective of the kernel.
VIRT, RES and SHR
These three fields are related to the memory consumption of the processes. "VIRT" is the total amount of virtual memory used by a process. "RES" is the memory the process currently occupies in RAM, and "SHR" is the amount of memory shared with other processes.
S
As we have seen before, a process may be in various states. This field shows the
process state in the single-letter form.
TIME+
This is the total CPU time used by the process since it started, precise to the
hundredths of a second.
COMMAND
The COMMAND column shows the name of the processes.
-------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------
Iptables:
There are 3 types of built in chains in Iptables
INPUT: Packet coming into the PC.
FORWARD: Packets passing through PC.
OUTPUT: Packet leaving out of PC.
There are commonly used switches in Iptables.
-s: Source address
-d: Destination address
-p: Protocol
-j: Action
-P: Specify default policy for chain
-L: List chain rules
-A: Append rule to end of chain
-I: Insert rule at start of chain
-i: Interface
Ex. Allow HTTP traffic for the Apache web server over port 80 so it may service web requests:
iptables -A INPUT -j ACCEPT -p tcp --destination-port 80 -i eth0
Ex. Allow FTP traffic for VSFTPD over port 21 to service FTP requests:
iptables -A INPUT -j ACCEPT -p tcp --destination-port 21 -i eth0
Once we apply all the rules to allow appropriate traffic, we can apply a catch-all rule to block the traffic we don't wish to allow. This rule must be applied last:
iptables -A INPUT -j DROP -p tcp -i eth0
-------------------------------------------------------------------------------------------------------------
Hard Links and Soft Links
Symbolic links (symlinks/soft links) are links between files; a symlink is essentially a shortcut to a file (in Windows terms).
You can delete a soft link without affecting the actual file or directory it points to, because the inode of the linked file is different from the inode of the symbolic link. But if you delete the source file of a soft link, the soft link no longer works; it becomes a "dangling" link that points to a nonexistent file. The inode numbers of the soft link and the original file are different.
Soft link can span across filesystem.
-------------------------------------------------------------------------------------------------------------
What is Daemon?
Some daemons are launched via System V init scripts, which are scripts that are
run automatically when the system is booting up.
init – The Unix program which spawns all other processes. As of 2016, for major Linux distributions, it has been replaced by systemd.
systemd – Replacement of init; the program which spawns all other processes.
-------------------------------------------------------------------------------------------------------------
When a process dies on Linux, the process's status becomes EXIT_ZOMBIE and the process's parent is notified that its child process has died with the SIGCHLD signal. The parent process is then supposed to execute the wait() system call to read the dead process's exit status and other information. This allows the parent process to get information from the dead process. After wait() is called, the zombie process is completely removed from memory.
If a parent process isn't programmed properly and never calls wait(), its zombie children will stick around in memory until they're cleaned up.
Linux provides a utility called ps for viewing information related to the processes on a system; the name is an abbreviation of "Process Status". The ps command is used to list the currently running processes and their PIDs, along with other information depending on the options used.
Zombie processes still have PIDs assigned to them. So, if many zombie processes accumulate, they may exhaust the finite pool of PIDs in the system and prevent other processes from launching.
One way is by sending the SIGCHLD signal to the parent process. This signal tells
the parent process to execute the wait() system call and clean up its zombie
children. Send the signal with the kill command, replacing pid in the command
below with the parent process’s PID:
If the parent is a service, restarting it (for example, # service httpd restart) also clears its zombie children:
httpd is stopped
Starting httpd:
-------------------------------------------------------------------------------------------------------------
RAID 0 (Striping)
In RAID 0 (striping), data is split across the disks: half of the content is written to one disk and the other half to the other disk.
Assume we have 2 disk drives. If we write the data "TECMINT" to the logical volume, 'T' will be saved on the first disk, 'E' on the second disk, 'C' on the first disk again, 'M' on the second disk, and so on in a round-robin process.
In this setup, if any one of the drives fails, we lose our data. But in terms of write speed and performance, RAID 0 is excellent. We need a minimum of 2 disks to create a RAID 0 (striping) array.
1. High Performance.
2. Zero Fault Tolerance.
RAID 1 (Mirroring)
When we save any data, it is written to both drives. A minimum of two drives is needed to create a RAID 1 (mirror) array. If a disk failure occurs, we can rebuild the RAID set by replacing the failed disk with a new one.
1. Good Performance.
2. Full Fault Tolerance.
RAID 5 (Distributed Parity)
RAID 5 is mostly used at the enterprise level. Parity information is used to rebuild the data: the array rebuilds from the information left on the remaining good drives. This protects our data from drive failure.
Assume we have 4 drives. If one drive fails, we can rebuild the replacement drive from the parity information, which is stored across all 4 drives. With 4 drives of 1TB each, 256GB on each drive holds parity information and the other 768GB on each drive is available for users. RAID 5 can survive a single drive failure; if more than one drive fails, data will be lost.
1. Excellent Performance
2. Reading will be extremely very good in speed.
3. Fault tolerance
RAID 6 Two Parity Distributed Disk
RAID 6 is the same as RAID 5 but with two sets of distributed parity. It is used in many large arrays. We need a minimum of 4 drives; even if 2 drives fail, we can rebuild the data while replacing them with new drives.
It is slower than RAID 5, because it writes two sets of parity at the same time; speed will be average when using a hardware RAID controller. If we have 6 drives of 1TB each, 4 drives' worth of capacity will be used for data and 2 drives' worth for parity.
1. Poor Performance.
2. Read Performance will be good.
RAID 10 can be called 1+0 or 0+1. It does the work of both mirroring and striping.
Assume we have 4 drives. When I write some data to my logical volume, it is saved across all 4 drives using the mirror and stripe methods.
If I write the data "TECMINT" in RAID 10, it is saved as follows. Using the RAID 1 method, the first "T" is written to both disks of a mirrored pair, the second "E" to both disks of a pair, and so on for all the data, making a copy of every piece of data on another disk.
At the same time it uses the RAID 0 method: "T" is written to the first mirrored pair and "E" to the second pair; again "C" goes to the first pair and "M" to the second.
1. Good read and write performance.
2. Here Half of the Space will be lost in total capacity.
3. Fault Tolerance.
What is Fstab?
The configuration file /etc/fstab contains the necessary information to automate
the process of mounting partitions. The fstab file can be used to define how disk
partitions, various other block devices, or remote filesystems should be mounted
into the filesystem.
In general, fstab is used for internal devices, CD/DVD devices, and network shares
(samba/nfs/sshfs). Removable devices such as flash drives *can* be added to
fstab.
The fields are as follows:
<device> - The device/partition (by /dev location or UUID) that contains a file
system.
<mount point> - The directory on your root file system from which it will be
possible to access the content of the device/partition. You may use any name you
wish for the mount point, but you must create the mount point before you mount
the partition.
<file system type> - Type of file system, e.g. vfat, ntfs, ext4, ext3, swap.
<options> - Mount options of access to the device/partition, e.g. defaults, sync,
ro, rw.
<dump> - Sets whether the backup utility dump will back up the file system: if set
to "0" the file system is ignored, if "1" it is backed up. Dump is seldom used and
if in doubt use 0.
<pass num> - Tells fsck in what order to check the file systems; if set to "0" the
file system is ignored.
1. 0 == do not check.
2. 1 == check this partition first.
3. 2 == check this partition(s) next.
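Putting the fields together, a hypothetical /etc/fstab might contain entries such as the following (the device names and UUID are examples only):

```
# <device>                                 <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a1b2c3d-0000-0000-0000-0a1b2c3d4e5f  /              ext4    defaults   0       1
/dev/sdb1                                  /data          ext4    defaults   0       2
```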
Ext4
Backward compatible with ext3 and ext2.
Maximum file size from 16GB up to 16TB (depending on block size).
The ext4 file system has an option to turn off the journaling feature.
Other features include checksums, subdirectory scalability, and multiblock
allocation.
JFS
The Journaled File System (JFS) was developed by IBM for AIX UNIX and was used
as an alternative to the ext family. JFS is currently an alternative to ext4 and is
used where stability is required with very few resources. When CPU power is
limited, JFS comes in handy.
Btrfs
What is journaling?
When a system crashes, data loss can occur. Using a journal allows recovery of
that data.
When a user submits a change to a file, the file system first records the change in
a journal file. The journal file has a fixed size; when it is full, older entries are
overwritten.
If a crash occurs, the journal entries are compared with the files on disk. Data
that is in the journal but not yet on disk is written out, recovering the files to
their intended state.
There are three types of journaling: writeback, ordered, and data.
1. writeback
Here, only the metadata is journaled; data is written directly to the file on disk.
After a crash the file system is recoverable, but the file contents themselves can
be corrupted.
2. ordered (default)
The file data is written to disk before the metadata is journaled. Ordered mode
keeps both the data and the file system uncorrupted if the system crashes before
the journal is written.
3. data
In data mode, both the metadata and the file contents are journaled. System
performance can be poorer than in the other two modes, but fault tolerance is
much better.
https://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html
When a server that's supporting a database or an application server goes down,
the root cause of the issue can be traced to the system running low on memory
and killing an important process to remain operational.
The Linux kernel allocates memory upon the demand of the applications running
on the system. Because many applications allocate their memory up front and
often don't utilize the memory allocated, the kernel was designed with the ability
to over-commit memory to make memory usage more efficient. This over-commit
model allows the kernel to allocate more memory than it has physically available.
If a process utilizes the memory it was allocated, the kernel then provides these
resources to the application. When too many applications start utilizing the
memory they were allocated, the over-commit model sometimes becomes
problematic and the kernel must start killing processes to stay operational. The
mechanism the kernel uses to recover memory on the system is referred to as
the out-of-memory killer or OOM killer for short.
We can make the OOM killer more likely to kill our oracle process by doing the
following.
echo 10 > /proc/2592/oom_adj
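Note that oom_adj is deprecated on recent kernels in favor of oom_score_adj, which ranges from -1000 (never kill) to 1000 (kill first). A safe way to experiment is to adjust the current shell's own score, which requires no special privileges (the PID 2592 above is just an example from the article):

```shell
# Make the current shell a preferred OOM-killer victim by raising its score.
# Raising your own oom_score_adj never requires root; lowering it does.
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj    # prints: 500
```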
What is quota? How do you control the limits on disk blocks and files for users?
Quotas are used to limit the amount of disk space a user or group can use on the
VPS.
Installing Quota
The mount file /etc/fstab needs to be opened for editing so that the usrquota
(and/or grpquota) mount options can be added to the file system:
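For example, enabling user and group quotas on the root file system might look like this (the device name is an assumption):

```
/dev/sda1  /  ext4  defaults,usrquota,grpquota  0  1
```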
Save the file and enable the new mount options by remounting the file system as
follows:
#mount -o remount /
The following command will create a new quotas file in the root directory of the
file system.
#quotacheck -cum /
#edquota ftpuser
This opens the quota file for editing; the numbered fields mean:
1. Indicates the name of the file system that has a quota enabled
2. Indicates the amount of blocks currently used by the user
3. Indicates the soft block limit for the user on the file system
4. Indicates the hard block limit for the user on the file system
5. Indicates the amount of inodes currently used by the user
6. Indicates the soft inode limit for the user on the file system
7. Indicates the hard inode limit for the user on the file system
The blocks refer to the amount of disk space, while the inodes refer to the
number of files/folders that can be used. Most of the time the block amount will
be used in the quota.
The hard block limit is the absolute maximum amount of disk space that a user or
group can use. Once this limit is reached, no further disk space can be used. The
soft block limit defines the maximum amount of disk space that can be used.
However, unlike the hard limit, the soft limit can be exceeded for a certain
amount of time. This time is known as the grace period.
In the example above, a soft limit of 9,785 blocks (roughly 9.5 MB) and a hard
limit of 10 MB are used.
To see the quota in action, an FTP/SFTP transfer can be started in which multiple
files with a total size of, say, 12 MB are uploaded. The FTP/SFTP client will report
a transfer error, meaning that the user is unable to upload further files. Of
course, 10 MB isn't a meaningful quota. In this guide every user will get a soft
limit of 976 MB and a hard limit of 1 GB. The configuration looks as follows:
Generating Reports
It is possible to generate a report from the different quotas. The following
command is used:
#repquota -a
edquota -t
The command gives the following output and specifies the different time units
that can be used. For this guide, a grace period of 7 days is used.
1. CPU:
If any process is consuming too many resources, we may need to reduce its
priority with renice, or kill unnecessary processes with kill.
A. The first command to use is uptime. This gives an idea of the load average: the
three numbers show the average number of processes waiting for CPU resources
over the last 1, 5, and 15 minutes. If the number stays above the number of CPU
cores, processes are competing for the CPU and some action is needed.
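The same three figures that uptime reports can also be read directly from /proc, which is convenient in scripts:

```shell
# Print the 1-, 5-, and 15-minute load averages from /proc/loadavg
cut -d' ' -f1-3 /proc/loadavg
```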
B. Use the top command. It shows the list of processes with their CPU utilization
and priority values. Another command that can be used is vmstat. A spike in the
"%usr" value indicates that the system has spent time running user-space
processes, while a high "%sys" percentage means the system was busy running
kernel threads.
Let's spin up the "sha1sum" command to create load on the processor and see
how it behaves. I've started two of these processes in the background. Now you
can see that the processor is busy executing user-space tasks, as shown in the
snap below:
Use "vmstat" to print stats at a 1-second interval for 10 samples, shown in
kilobytes with timestamps:
At this point we know the processor is busy and that 2 tasks are consuming high
CPU cycles, so we check what those processes are using the "top" command:
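The load-generation step described above can be reproduced with a short snippet; sha1sum reading /dev/zero never finishes, so it pins a core until killed:

```shell
# Start a CPU-bound task in the background, confirm it is running, then stop it
sha1sum /dev/zero &
pid=$!
sleep 1
ps -o comm= -p "$pid" | grep sha1sum   # the process should be listed
kill "$pid"
```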
2. Memory (RAM):
We will use the free command to analyse memory usage. If a system shows high
memory usage (almost no free memory left) and swap space is being consumed,
it certainly indicates that the system is under memory pressure.
Use the ps command to find the top 10 processes consuming memory.
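One way to list the top memory consumers, sorted by resident set size (RSS, in kilobytes):

```shell
# Top 10 processes by resident memory, highest first (plus a header line)
ps -eo pid,comm,rss --sort=-rss | head -n 11
```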
Check page faults (major faults) using the "sar -B" command; major faults cause
delays, since each one requires a page to be moved from disk into memory.
https://www.thomas-krenn.com/en/wiki/
Linux_Performance_Measurements_using_vmstat
Example: To check the page faults happened between certain time stamp using
sar command: -
4. Network issue:
Make sure the network drivers/firmware of the system are up to date and that
the network interface speed matches the router/gateway speed.
The netstat command can provide details about open network connections and
stack statistics.
The ethtool command is the ideal one when you want to see packet drops/loss at
the hardware level. It can also be used to identify the network card
driver/firmware version.
To check network bandwidth and throughput between server and client, use the
netcat command.
If there is a need to analyze issues at the network packet level, tcpdump can be
used to capture traffic for further analysis.
At times, an incorrect DNS configuration may also lead to a glitch in network
performance.
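A couple of quick, unprivileged sanity checks along these lines (ss is the modern replacement for netstat):

```shell
# Socket statistics summary and a basic name-resolution check
ss -s
getent hosts localhost
```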
You have plenty of space but still cannot write to the drive. What could be the
issue?
You are out of inodes. It's likely that there is a directory somewhere containing
many very small files.
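This can be confirmed with df -i, which reports inode usage instead of block usage:

```shell
# IUse% at 100 means the filesystem is out of inodes even if blocks are free
df -i /
```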
How will you restrict IP so that the restricted IP’s may not use the FTP Server?
We can block suspicious IPs by integrating tcp_wrappers. We need to enable the
parameter "tcp_wrappers=YES" in the configuration file /etc/vsftpd.conf, and
then add the suspicious IP to the /etc/hosts.deny file.
Block IP Address
# vi /etc/hosts.deny
Add the IP address that you want to block at the bottom of the file.
vsftpd:172.16.16.1
You need to search for the string “Amazon” in all the “.txt” files in the current
directory. How will you do it?
Answer: We can use the grep command to search for the text "Amazon" in all the
.txt files in the current directory.
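A small self-contained demo (using a scratch directory so it can be run anywhere):

```shell
# Only files whose contents contain "Amazon" are listed by grep -l
mkdir -p /tmp/grepdemo && cd /tmp/grepdemo
echo "Amazon order data" > a.txt
echo "unrelated text" > b.txt
grep -l "Amazon" *.txt    # prints: a.txt
```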
You want to send a message to all connected users as “Server is going down for
maintenance”, what will you do?
Answer: This can be achieved using the wall command, which sends a message to
all users connected to the server:
# echo "Please save your work. The server is going down for maintenance at 12:30 PM" | wall
As the disk space utilization was very high on the server, the Administrator
removed a few files, but the disk utilization still shows as high. What would be
the reason? (df shows the disk is full, but du shows there is still free space.)
In Linux, even if we remove a file from a mounted file system, it may still be in
use by some application, and for that application it remains available because its
file descriptor (visible under the /proc filesystem) is held open. Also check for
files located under mount points: frequently, if you mount a directory (say a
sambafs) onto a filesystem that already had files or directories under it, you lose
the ability to see those files, but they are still consuming space on the underlying
disk.
So, if there are such open descriptors to files already removed, the space they
occupy is still counted as used. You can see this difference with the "df" and "du"
commands: du works from the files it can see, while df works at the filesystem
level, reporting what the kernel says it has available.
You can find all unlinked but held open files with:
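Deleted-but-open files can also be observed directly through /proc; this small demo uses the shell's own file descriptor 3 to hold a deleted file open:

```shell
# Open a file on fd 3, delete it, and show that the kernel still tracks it
exec 3> /tmp/held_open.txt
rm /tmp/held_open.txt
ls -l /proc/$$/fd/3    # the symlink target ends in "(deleted)"
exec 3>&-              # closing the descriptor finally frees the space
```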
The command (typically lsof +L1, or lsof | grep '(deleted)') lists each open file
along with the PID holding it. Killing those PIDs stops the processes and recovers
the disk space held by the files.
2. Check process.
6. Check bugs with software version. So, check problem with certification and
authentication.
NFS Services
After installing packages and starting services on both the machines, we need to
configure both the machines for file sharing.
Setting Up the NFS Server
/nfsshare 192.168.0.101(rw,sync,no_root_squash)
Some other options we can use in “/etc/exports” file for file sharing is as follows.
ro: provides read-only access to the shared files, i.e. the client will only be
able to read.
rw: allows the client server both read and write access within the shared
directory.
sync: confirms requests to the shared directory only once the changes have
been committed.
no_subtree_check: prevents subtree checking. When a shared directory is a
subdirectory of a larger file system, NFS performs scans of every directory
above it to verify its permissions and details. Disabling the subtree check may
increase the reliability of NFS but reduces security.
no_root_squash: allows root to connect to the designated directory.
Setting Up the NFS Client
First we need to find out the shares available on the remote NFS server.
/nfsshare 192.168.0.101
The above command will mount the shared directory at "/mnt/nfsshare" on the
client server. You can verify it with the following command.
We can test our NFS server setup by creating a test file on the server end and
checking its availability on the NFS client side, or vice versa.
At the NFS server end
I have created a new text file named "nfstest.txt" in the shared directory.
Go to the shared directory on the client server and you'll find the shared file
there without any manual refresh or service restart.
[root@nfsclient]# ll /mnt/nfsshare
total 4
If you want to unmount the shared directory from your server after you are done
with the file sharing, simply unmount that particular directory with the
"umount" command. See the example below.
You can verify that the mounts were removed by looking at the filesystem again:
those shared directories are no longer available.
Table 29.1. NIS Terminology
Term: Description
NIS domain name: NIS servers and clients share an NIS domain name. Typically,
this name does not have anything to do with DNS.
rpcbind(8): This service enables RPC and must be running in order to run an NIS
server or act as an NIS client.
ypbind(8)
NIS master server
This server acts as a central repository for host configuration information and
maintains the authoritative copy of the files used by all NIS clients. The passwd,
group, and various other files used by NIS clients are stored on the master
server.
NIS slave servers
NIS clients
The copies of all NIS files are stored on the master server. The databases used to
store the information are called NIS maps. In FreeBSD, these maps are stored
in /var/yp/[domainname] where [domainname] is the name of the NIS domain.
nisdomainname="test-domain"
nis_server_enable="YES"
nis_yppasswdd_enable="YES"
Initializing the NIS Maps
NIS maps (Database for NIS file) are generated from the configuration files
in /etc on the NIS master, with one exception: /etc/master.passwd. This is to
prevent the propagation of passwords to all the servers in the NIS domain.
Therefore, before the NIS maps are initialized, configure the primary password
files:
# cp /etc/master.passwd /var/yp/master.passwd
# cd /var/yp
# vi master.passwd
Every time a new user is created, the user account must be added to the
master NIS server and the NIS maps rebuilt. Until this occurs, the new user will
not be able to login anywhere except on the NIS master. For example, to add the
new user jsmith to the test-domain domain, run these commands on the master
server:
# pw useradd jsmith
# cd /var/yp
# make test-domain
To set up an NIS slave server, log on to the slave server and edit /etc/rc.conf as for
the master server. Do not generate any NIS maps, as these already exist on the
master server. When running ypinit on the slave server, use -s (for slave) instead
of -m (for master). This option requires the name of the NIS master in addition to
the domain name, as seen in this example:
C. Setting Up an NIS Client
An NIS client binds to an NIS server using ypbind(8). This daemon broadcasts RPC
requests on the local network. These requests specify the domain name
configured on the client. If an NIS server in the same domain receives one of the
broadcasts, it will respond to ypbind, which will record the server's address. If
there are several servers available, the client will use the address of the first
server to respond and will direct all its NIS requests to that server.
2. nisdomainname="test-domain"
nis_client_enable="YES"
First search for App Store in the GNOME 3 Application Menu. You should see the
following icon as marked in the screenshot below.
You can generate a kickstart file and configure some parameters like:
Basic configuration: Default language, Time zone, Keyboard, Root password
Installation method: Installation method, Installation source (CD-ROM, FTP,
HTTP)
Boot loader option: Install type, GRUB option
Partition Information: You can add new partitions and assign appropriate
resources
Network configuration: You can add network device
Authentication: We can specify how user will authenticate once installation
completes
Firewall configuration:
Display configuration:
1) mode=0 (balance-rr)
This mode is based on Round-robin policy and it is the default mode. This mode
offers fault tolerance and load balancing features. It transmits the packets in
Round robin fashion that is from the first available slave through the last.
2) mode=1 (active-backup)
This mode is based on the Active-backup policy. Only one slave is active in this
bond, and another will act only when the active one fails. The MAC address of the
bond is exposed on only one network adapter port to avoid confusing the switch.
This mode also provides fault tolerance.
6) mode=5 (balance-tlb)
1) Create the bond file (ifcfg-bond0) and specify the IP address, netmask &
gateway.
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.x.x.x
NETMASK=255.255.255.0
GATEWAY=192.x.x.1
TYPE=Bond
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
2) Edit the files of eth0 & eth1 and make sure you enter the master and slave
entry.
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=08:00:27:5C:A8:8F
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
# vi /etc/modprobe.d/bonding.conf
# ifconfig bond0
# cat /proc/net/bonding/bond0
What is SELinux?
SELinux (Security-Enhanced Linux) is an access control implementation and
security feature for the Linux kernel. It is designed to protect the server against
misconfigurations and/or compromised daemons by restricting how services and
files can be accessed.
SETTING OF SELINUX
SELinux can be set to one of three modes:
Enforcing – the SELinux security policy is enforced. SELinux is enabled and
enforces its policies strictly.
Permissive – SELinux prints warnings instead of enforcing; policy violations are
only logged.
Disabled – no SELinux policy is loaded; SELinux is turned off entirely.
And SELinux policies come in two types:
Targeted – only targeted processes are protected.
Mls – Multi Level Security protection.
GET SELINUX STATUS
Example1: Is SELinux enabled or not on your box? Use the command below to get
the status.
#getenforce
The output will be "Enforcing", "Permissive", or "Disabled".
Example2: To see SELinux status in simplified way you can use sestatus
#sestatus
Sample output:
SElinux status : enabled
SELinux mount : /selinux
Current mode : enforcing
Mode from config file : enforcing
Policy version : 21
Policy from config file : targeted
From the above output we can see that SELinux is enabled and is in enforcing
mode.
To see a more detailed status you can use the -b option, which lists the SELinux
booleans and shows which protections are currently on or off.
DISABLING SELINUX
Example4:How to disable SElinux
We can do it in two ways
1)Permanent way : edit /etc/selinux/config
change the status of SELINUX from enforcing to disabled
SELINUX=enforcing
to
SELINUX=disabled
Save the file and exit.
2)Temporary way : Execute below command
echo 0 > /selinux/enforce
or
setenforce 0
ENABLING SELINUX
Example5: How about enabling SELinux
1)Permanent way: edit /etc/selinux/config
change the status of SELINUX from disabled to enforcing
SELINUX=disabled
to
SELINUX=enforcing
Save the file and exit.
2)Temporary way : Execute below command
echo 1 > /selinux/enforce
As we know, /etc/fstab is used for permanent mounting of file systems, but it is
practical only when you have a limited number of mount points. Autofs, by
contrast, mounts file systems on demand.
By default, the mount points configured in autofs stay unmounted until a user
accesses them; once a user tries to access a mount point it is mounted
automatically, and if the mount point is unused for some time it is automatically
unmounted again.
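A minimal autofs configuration sketch (the map name, mount point, server, and export path below are all assumptions):

```
# /etc/auto.master: manage /misc with the map below; unmount after 60s idle
/misc  /etc/auto.misc  --timeout=60

# /etc/auto.misc: accessing /misc/data mounts the NFS export on demand
data  -rw,soft  nfsserver:/export/data
```

With the autofs service running, simply running `ls /misc/data` triggers the mount automatically.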
I am familiar with the Linux file system and boot process. We currently have
RHEL 7.4 installed. I have installed and upgraded various Linux machines,
diagnosed issues, and monitored memory status.
https://www.digitalocean.com/community/tutorials/how-to-use-traceroute-and-mtr-to-diagnose-
network-issues