EX200 Fast Track Exam Prep
grep: short for 'global regular expression print'. The grep filter
searches a file for a particular pattern of characters and writes every line that
contains that pattern to standard output.
find: The find command searches for files in real time by parsing the file-system
hierarchy.
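A couple of minimal examples (file names and patterns are placeholders):
grep 'root' /etc/passwd          # print every line of /etc/passwd containing "root"
find /etc -name '*.conf'         # find files under /etc whose names end in .conf
find / -user student -type f     # find regular files owned by the user student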
A running program, or process, needs to read input from somewhere and write output
to somewhere. A command run from the shell prompt normally reads its input from the
keyboard and sends its output to its terminal window.
A process uses numbered channels known as file descriptors to get input and send
output.
All processes start with at least three file descriptors. Standard input (channel
0) reads input from the keyboard. Standard output (channel 1) sends normal output
to the terminal. Standard error (channel 2) sends error messages to the terminal.
I/O redirection changes how the process gets its input or prints its output.
Instead of getting input from the keyboard, or sending output and errors to the
terminal, the process reads from or writes to files. Redirection lets you save
messages that would normally be sent to the terminal window to a file instead.
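For example (file names are illustrative):
ls /etc > list.txt               # redirect standard output (channel 1) to a file
ls /nonexistent 2> errors.txt    # redirect standard error (channel 2) to a file
ls /etc /nonexistent &> all.txt  # redirect both output and errors to one file
sort < list.txt                  # read standard input (channel 0) from a file
ls /etc >> list.txt              # append to the file instead of overwriting it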
===============================================
9. Locate all files which are owned by user student and place them into /root/found
directory.
12. Search all lines from /usr/share/dict/words which contain the word
"command". Copy the lines into the file named /root/command.txt without disturbing
the original order of display.
12a. Search all lines from /usr/share/dict/words which start with the word
"command". Copy the lines into the file named /root/command1.txt without disturbing
the original order of display.
12b. Search all lines from /usr/share/dict/words which end with the word
"command". Copy the lines into the file named /root/command2.txt without disturbing
the original order of display.
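One possible approach to tasks 9 and 12 (2>/dev/null only suppresses permission errors; exact find options may vary):
mkdir -p /root/found
find / -user student -exec cp -a {} /root/found/ \; 2>/dev/null
grep 'command' /usr/share/dict/words > /root/command.txt
grep '^command' /usr/share/dict/words > /root/command1.txt
grep 'command$' /usr/share/dict/words > /root/command2.txt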
=================================================
### Create, manage, and delete local users and groups, and administer local
password policies
In a Linux system, user account information is stored in the /etc/passwd file.
Each line of the file describes a single user, and contains seven colon-separated
fields:
name:password:UID:GID:GECOS:directory:shell
apratim:x:1000:1000:Apratim Guha:/home/apratim:/bin/bash
- Primary group
- Supplementary / Secondary group
- Private (Linux)
- Shared (Windows)
The /etc/group file is a text file that defines the groups on the system. There
is one entry per line, with the following format:
group_name:password:GID:user_list
apratim:x:1000:
Sudoers allows particular users to run various commands as the root user, without
needing the root password.
Rule set ::
user HOST=(RUNAS_USER) COMMAND
## Allow members of wheel group to run any commands anywhere on behalf of any user
%wheel ALL=(ALL) ALL
To supply default values for useradd, passwd, and other commands, the system
additionally reads the following files:
- /etc/default/useradd
- /etc/login.defs
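A quick way to inspect those defaults (the output varies per system):
useradd -D                    # show defaults taken from /etc/default/useradd
grep ^PASS /etc/login.defs    # PASS_MAX_DAYS, PASS_MIN_DAYS, PASS_WARN_AGE, etc.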
============================================================
5. Ensure all current users as well as future users need to change their password
after every 45 days with a warning period of 5 days. The password inactivity period
will remain as 2 days.
5a. Create a group named sysadmin with group id as 10001. Create a user alex who
belongs to sysadmin as a secondary group. Also create another user alice who also
belongs to sysadmin as a secondary group. Create another user robin who does not
have access to an interactive shell on the system, and who is not a member of
sysadmin. All users should have the password set as redhat@123.
6. Create a group called backupadmin. Ensure members of backupadmin can run any
command as superuser without the requirement of any password.
7. Create a user called sarah with UID 6001 and password as redhat@123.
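One possible approach to tasks 5 through 7 (passwords are set non-interactively here with passwd --stdin; group and user names come from the tasks themselves):
# 5: for future users, set PASS_MAX_DAYS 45 and PASS_WARN_AGE 5 in /etc/login.defs;
#    for each existing user, apply the same policy individually, for example:
chage -M 45 -W 5 -I 2 alex
# 5a:
groupadd -g 10001 sysadmin
useradd -G sysadmin alex
useradd -G sysadmin alice
useradd -s /sbin/nologin robin
echo 'redhat@123' | passwd --stdin alex        # repeat for alice and robin
# 6:
groupadd backupadmin
echo '%backupadmin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/backupadmin
# 7:
useradd -u 6001 sarah
echo 'redhat@123' | passwd --stdin sarah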
===========================================================
## Permission type
- read
- write
- execute
chmod modes:
- numeric mode / octal mode
- symbolic mode
numeric mode
------------
read - 4
write - 2
execute - 1
no perm - 0
syntax:
chmod n1n2n3 file
chmod -R n1n2n3 directory
symbolic mode
-------------
syntax:
chmod whowhatwhich file
chmod -R whowhatwhich dir
owner : u
group : g
others : o
all users : a
add/append : +
delete/remove : -
exactly : =
read : r
write : w
execute : x(files), X(directories)
no perm : -, " "
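A few illustrative examples (file names are placeholders):
chmod 750 script.sh        # numeric: owner rwx, group r-x, others none
chmod 644 notes.txt        # owner rw-, group r--, others r--
chmod u+x,g-w script.sh    # symbolic: add execute for owner, remove write for group
chmod -R o= /data          # remove all permissions for others, recursively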
## Changing access rights by changing the owner and group of a file / directory
syntax:
--------------------------------------------
chown user file
chown user:group file
chown -R user:group directory
chgrp group file
There are three special permissions (setuid, setgid, and the sticky bit) through
which we can go beyond the basic read/write/execute model.
Running the umask command shows which permission bits are withheld from newly
created files and directories for each category of user:
$ umask
0002
To change the umask value for a particular user, modify the following file:
- ~/.bashrc
========================================================
7a. Ensure files created by user sarah will have permission as 750 for directories
and 640 for files.
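A possible solution for task 7a: new directories start from 777 and new files from 666, so a umask of 027 yields 750 for directories and 640 for files. The setting goes into sarah's shell startup file:
echo 'umask 027' >> /home/sarah/.bashrc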
========================================================
Keeping the time across multiple systems over the network in sync is critical for
log file analysis in case we are using a centralised log server in the network. The
Network Time Protocol (NTP) is a standard way for machines to provide and obtain
correct time information on the Internet.
The chronyd service keeps the usually-inaccurate local hardware clock (RTC) on
track by synchronising it to the configured NTP servers. By default, the chronyd
service uses servers from the NTP Pool Project for the time synchronisation and
does not need additional configuration.
We can use the /etc/chrony.conf configuration file to configure NTP clients.
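For example, a server entry in /etc/chrony.conf and a couple of commands to verify synchronisation (the server name is illustrative):
server classroom.example.com iburst    # add to /etc/chrony.conf
systemctl restart chronyd
chronyc sources -v                     # list the time sources in use
timedatectl                            # shows "System clock synchronized: yes"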
Daemons are processes that either wait or run in the background, performing various
tasks. Generally, daemons start automatically at boot time and continue to run
until shutdown or until they are manually stopped. It is a convention for names of
many daemon programs to end in the letter d.
A service is the component of the OS through which we manage a daemon. Starting or
stopping a service changes the daemon's state only for the current session; it does
not mean that after a reboot the daemon will retain that state. The post-boot state
always depends on the default (enabled or disabled) state of that daemon as defined
by the administrator.
In Red Hat Enterprise Linux, the first process that starts (PID 1) is systemd.
Systemd uses units to manage different types of objects.
systemctl enable : configures the unit to start automatically at every boot
systemctl disable : removes the unit from automatic startup at boot
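For example, with the sshd unit:
systemctl status sshd          # current state of the unit
systemctl enable --now sshd    # start it now and at every boot
systemctl disable sshd         # remove it from automatic startup (does not stop it)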
===========================================================
============================================================
note : a device can have a single connection profile or multiple connection
profiles, but at any point in time only one connection can function as the active
connection on that device.
In RHEL 8 and earlier, each connection has a configuration file under
/etc/sysconfig/network-scripts/ whose name begins with ifcfg-*.
In RHEL 9, each connection has a keyfile under
/etc/NetworkManager/system-connections/ and the naming pattern is name.nmconnection.
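Connections are usually inspected and edited with nmcli rather than by editing these files directly, for example:
nmcli device status        # devices and the connection active on each
nmcli connection show      # all connection profiles on the system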
===========================================================
2. TCP/IP Configuration for your system will be as follows (do it in console mode):
IP Address :: 192.168.X.101
Subnet Mask :: 255.255.255.0
Gateway :: 192.168.X.1
DNS Server :: 192.168.X.1
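One possible approach using nmcli in console mode; the connection name ("System eth0" here) is an assumption and should be taken from nmcli connection show:
nmcli connection modify "System eth0" ipv4.method manual \
    ipv4.addresses 192.168.X.101/24 ipv4.gateway 192.168.X.1 ipv4.dns 192.168.X.1
nmcli connection up "System eth0"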
===========================================================
What is archiving?
For years, Red Hat has used the tar utility for archiving: bundling many files into
a single archive file.
Compression is a mechanism through which files are stored on the storage devices
with a much smaller footprint without disturbing the actual contents. Common
compression utilities:
- Gzip
|_ gzip gunzip --> .gz
- Bzip ver 2
|_ bzip2 bunzip2 --> .bz2
- XZ
|_ xz unxz --> .xz
All three utilities replace the files with their compressed version in the
filesystem. They can be used on a single file only.
All three compression utilities can also be used in integrated form with the tar
utility by passing the corresponding option (z for gzip, j for bzip2, J for xz).
===========================================================
10. Create a compressed archive file called /root/usrlocal.tgz for the directory
/usr/local. Ensure file is being compressed through gzip compression utility.
11. Create an archive file called /root/etc.tar for the directory /etc. Compress
the file using bzip2 compression utility.
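One possible approach (option order may vary):
tar -czf /root/usrlocal.tgz /usr/local      # c=create, z=gzip, f=archive file name
tar -cf /root/etc.tar /etc                  # plain archive first
bzip2 /root/etc.tar                         # produces /root/etc.tar.bz2
Alternatively, tar -cjf creates the bzip2-compressed archive in a single step.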
===========================================================
DNF is the primary package management tool for installing, updating, removing, and
managing software packages in Red Hat Enterprise Linux 9. DNF performs dependency
resolution when installing, updating, and removing software packages. DNF can
manage packages from installed repositories in the system or from .rpm packages.
The main configuration file for DNF is at /etc/dnf/dnf.conf, and all the repos are
at /etc/yum.repos.d.
Command         Purpose
dnf list        Lists all the available packages in the dnf database
dnf install     Installs the specified packages
dnf remove      Removes the specified packages
dnf search      Searches package metadata for keywords
dnf info        Lists a package's description and summary information
dnf update      Updates each package to the latest version
dnf repolist    Lists configured repositories
dnf history     Displays what has happened in past transactions
dnf provides    Finds which package a specific file belongs to
To enable support for a new third-party repository, we need to create a file in the
/etc/yum.repos.d/ directory. Repository configuration files must end with a .repo
extension. The repository definition contains the URL of the repository, a name,
whether to use GPG to check the package signatures, and if so, the URL pointing to
the trusted GPG key.
Sample .repo file
[repo_id]
name=Description of the repository
baseurl=URI of the repository
enabled=1
gpgcheck=0/1
gpgkey=URI of the gpg key (if gpgcheck is set as 1)
===========================================================
3. YUM repositories are available under the following URLs (start doing the tasks
over ssh from here):
https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/
https://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/
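One possible approach: create a single file with two repository definitions, following the sample .repo format above (the file name and repo ids are arbitrary):
# /etc/yum.repos.d/centos.repo
[baseos]
name=CentOS Stream 9 BaseOS
baseurl=https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/
enabled=1
gpgcheck=0

[appstream]
name=CentOS Stream 9 AppStream
baseurl=https://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/
enabled=1
gpgcheck=0
Running dnf repolist afterwards should list both repositories.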
===========================================================
What is scheduling ?
Daemon : crond
Package : cronie
Command : crontab
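crontab -e opens the current user's table; each job line has five time fields followed by the command (the script path below is a placeholder):
# minute  hour  day-of-month  month  day-of-week  command
  0       6     *             *      *            /usr/local/bin/backup.sh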
===============================================================
15. The user sarah must configure a cron job that runs daily at 2:30 PM and
executes the command "echo Hello World".
15a. The user sarah must configure a cron job that runs every 3 minutes, every day,
and executes the command logger "The exam is going on".
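One possible approach: edit sarah's crontab as root with crontab -e -u sarah (or as sarah with crontab -e) and add:
30 14 * * * echo Hello World               # daily at 2:30 PM
*/3 * * * * logger "The exam is going on"  # every 3 minutes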
================================================================
Bottleneck
A bottleneck is a hold-up situation in which a system process is impacted in a way
that degrades overall system performance and may even lead to a deadlock.
Tuned is a Linux feature that monitors a system and optimises its performance under
certain workloads. Tuned uses profiles to do this. A profile is a set of rules that
defines certain system parameters such as disk settings, kernel parameters, network
optimization settings, and many other aspects of the system.
We can use the 'tuned-adm' command-line tool to set the tuned profiles.
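For example:
tuned-adm active                    # show the currently active profile
tuned-adm list                      # list available profiles
tuned-adm recommend                 # print the profile recommended for this system
tuned-adm profile virtual-guest     # switch to a specific profile
Task 4 below amounts to comparing tuned-adm active with tuned-adm recommend and switching if they differ.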
====================================================
4. Check your system's current tuning profile and set the profile recommended for
this system.
====================================================
SELinux is a multi-layer security model that controls which application can access
which file or directory.
SELinux does more than just file and process labelling. Network traffic is also
tightly enforced by the SELinux policy. One of the methods that SELinux uses for
controlling network traffic is labelling network ports; for example, in the
targeted policy, port 22/TCP has the label ssh_port_t associated with it. The
default HTTP ports, 80/TCP and 443/TCP, have the label http_port_t associated with
them.
Whenever a process wants to listen on a port, SELinux checks to see whether the
label associated with that process (the domain) is allowed to bind that port label.
This can stop a rogue service from taking over ports otherwise used by other
(legitimate) network services.
If you decide to run a service on a nonstandard port, SELinux almost certainly will
block the traffic. In this case, you must update SELinux port labels.
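For example, to see the ports tied to a label and to add a nonstandard port to it (semanage comes from the policycoreutils-python-utils package):
semanage port -l | grep http_port_t         # ports currently labelled http_port_t
semanage port -a -t http_port_t -p tcp 82   # label 82/TCP as http_port_t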
===============================================================
14. Your organization is deploying a new custom web application. The web
application is running on a nonstandard port; in this case, 82/TCP. One of your
junior administrators has already configured the application on your system. You
need to deploy the httpd package on the local system, ensure it keeps running even
after a reboot, and ensure that the application is accessible from the network by
other hosts.
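One possible approach for task 14 (assuming the default targeted policy and firewalld):
dnf install -y httpd
# make httpd listen on 82 instead of 80, if not already done by the junior admin
sed -i 's/^Listen 80/Listen 82/' /etc/httpd/conf/httpd.conf
semanage port -a -t http_port_t -p tcp 82
firewall-cmd --permanent --add-port=82/tcp
firewall-cmd --reload
systemctl enable --now httpd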
================================================================
The first partition on the first SATA disk will be identified as /dev/sda1
-----------------------------------------
A swap space is an area of a disk under the control of the Linux kernel memory
management subsystem. The kernel uses swap space to supplement the system RAM by
holding inactive pages of memory. The combined system RAM plus swap space is called
virtual memory.
============================================================
20. Create a swap partition of 700MiB without disturbing the current swap volume.
It should be persistent across reboot.
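One possible approach; the disk (/dev/vdb) and the partition offsets are assumptions, so pick a disk with free space using lsblk first:
lsblk                                                  # identify a disk with free space
parted /dev/vdb mkpart primary linux-swap 1GiB 1724MiB # a 700MiB partition
udevadm settle
mkswap /dev/vdb1                                       # partition number depends on the disk
echo 'UUID=<uuid reported by mkswap>  swap  swap  defaults  0 0' >> /etc/fstab
swapon -a
swapon --show                                          # verify both swap areas are active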
=============================================================
Using LVM we can create a storage pool, called a volume group, by combining storage
space from multiple physical disks. We can then create one or more logical volumes
from that volume group. Logical volumes can be treated in the same way as
partitions are treated.
Components of LVM
-----------------
Physical devices : Physical devices are the storage devices used to save data
stored in a logical volume. These are block devices and could be disk partitions,
whole disks, RAID arrays, or SAN disks. A device must be initialised as an LVM
physical volume in order to be used with LVM. The entire device will be used as a
physical volume.
Physical volumes (PVs) : You must initialise a device as a physical volume before
using it in an LVM system. LVM tools segment physical volumes into physical extents
(PEs), which are small chunks of data that act as the smallest storage block on a
physical volume.
Volume groups (VGs) : Volume groups are storage pools made up of one or more
physical volumes. This is the functional equivalent of a whole disk in basic
storage. A PV can only be allocated to a single VG. A VG can consist of unused
space and any number of logical volumes.
Logical volumes (LVs) : Logical volumes are created from free physical extents in a
volume group and provide the "storage" device used by applications, users, and the
operating system. LVs are a collection of logical extents (LEs), which map to
physical extents, the smallest storage chunk of a PV. By default, each LE maps to
one PE. Setting specific LV options changes this mapping; for example, mirroring
causes each LE to map to two PEs.
== Physical Volume
- pvscan
- pvcreate
- pvdisplay
- pvs
- pvremove
== Volume Group
- vgcreate
- vgcreate -s <PE_Size>
- vgdisplay
- vgs
- vgremove
- vgchange
== Logical Volume
- lvcreate -L <Volume_Size>
- lvcreate -l <PE_Count>
- lvdisplay
- lvs
- lvremove
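A typical end-to-end workflow (device, volume names, and mount point are illustrative):
pvcreate /dev/vdb1                       # initialise the partition as a PV
vgcreate -s 16M vg_data /dev/vdb1        # VG with 16 MiB physical extents
lvcreate -L 500M -n lv_app vg_data       # LV created by size
mkfs.xfs /dev/vg_data/lv_app
mkdir /app
echo '/dev/vg_data/lv_app /app xfs defaults 0 0' >> /etc/fstab
mount -a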
=============================================================
18. Create a logical volume called database under the volume group called datastore
and should have 15 extents. Logical extent size should be 32 MiB in the logical
volume. Format the logical volume using ext3 file system and mount it under
/database.
19. You have a logical volume whose filesystem is mounted at /image. Extend the
/image filesystem by 1 GiB.
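One possible approach (the PV for datastore, /dev/vdb2 here, is an assumption; for task 19 the VG/LV behind /image can be found with df -h /image and lvs):
# 18
vgcreate -s 32M datastore /dev/vdb2        # 32 MiB extent size
lvcreate -l 15 -n database datastore       # 15 extents = 480 MiB
mkfs.ext3 /dev/datastore/database
mkdir /database
echo '/dev/datastore/database /database ext3 defaults 0 0' >> /etc/fstab
mount -a
# 19
lvextend -r -L +1G /dev/<vgname>/<lvname>  # -r also grows the filesystem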
==============================================================
Benefits of Automounter
- Users do not need to have root privileges to run the mount and umount commands.
- NFS shares configured in the automounter are available to all users on the
machine
- NFS shares are not permanently connected like entries in /etc/fstab, freeing
network and system resources.
- The automounter is configured on the client side; no server-side configuration is
required.
- The automounter uses the same options as the mount command, including security
options.
- The automounter supports both direct and indirect mount-point mapping, for
flexibility in mount-point locations.
- autofs creates and removes indirect mount points, eliminating manual management.
- autofs is a service that is managed like other system services.
- Install the autofs package along with nfs-utils through dnf install autofs nfs-
utils
- Add a master map file to /etc/auto.master.d. This file identifies the base
directory used for mount points and identifies the mapping file used for creating
the automounts. The name of the master map file is arbitrary (although typically
meaningful), but it must have an extension of .autofs for the subsystem to
recognize it.
vim /etc/auto.master.d/demo.autofs
- Create the mapping files. Each mapping file identifies the mount point, mount
options, and source location to mount for a set of automounts.
vim /etc/auto.demo
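A minimal indirect map, assuming an NFS server named serverb exporting /shares/work:
# /etc/auto.master.d/demo.autofs
/shares  /etc/auto.demo

# /etc/auto.demo
work  -rw,sync  serverb:/shares/work
Finally, start the service with systemctl enable --now autofs; accessing /shares/work then triggers the mount.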
=================================================================
=================================================================
Create a script under /usr/local/bin. It needs to search for files in /usr and
store them under /root/filefound with the below condition:
1. File size should be more than 30k and less than 50k
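One possible approach: a short script, here called /usr/local/bin/filesearch.sh (the name is arbitrary):
#!/bin/bash
# copy regular files in /usr sized between 30k and 50k into /root/filefound
mkdir -p /root/filefound
find /usr -type f -size +30k -size -50k -exec cp -a {} /root/filefound/ \;
Make it executable with chmod +x /usr/local/bin/filesearch.sh and run it to populate /root/filefound.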
=========================================================
One task that every system administrator should be able to accomplish is resetting
a lost root password. If the administrator is still logged in, either as an
unprivileged user but with full sudo access, or as root, this task is trivial. When
the administrator is not logged in, this task becomes slightly more involved.
Several methods exist to set a new root password. One common method is to boot the
system using a Live CD, mount the root file system from there, and edit
/etc/shadow.
On Red Hat Enterprise Linux 9, it is possible to have the scripts that run from the
initramfs pause at certain points, provide a root shell, and then continue when
that shell exits. Adding the rd.break option to the kernel command line interrupts
the boot just before control is handed over from the initramfs to the actual
system. This is mostly meant for debugging, but you can also use this method to
reset a lost root password.
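The usual sequence, once rd.break has been added from the GRUB menu (press e, append rd.break to the line that starts with linux, then boot with Ctrl-x):
mount -o remount,rw /sysroot    # the real root is mounted read-only at /sysroot
chroot /sysroot
passwd root
touch /.autorelabel             # force SELinux relabelling of /etc/shadow on next boot
exit                            # leave the chroot
exit                            # resume the boot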
======================================================
======================================================