RHCSA Notes
Vi stuff:
/keyword: forward search
?keyword: backward search
dd: delete current line
d + motion (e.g. $, w, arrow keys): delete in the direction of the motion
yy: copy current line
y: copy selection
p: paste
v: visual mode, used to select text
u: undo
Ctrl+r: redo
o: insert in new line
:%s/old_word/new_word/g: substitute/replace globally
Globbing:
#touch script{0..100}: brace expansion, creates 101 files: script0, script1…script100
#ls ?cript: search for any character followed by “cript”
#ls ?cript[0-9][0-9]: e.g script01
#ls [hm]ost: e.g most, host
#ls [!hm]ost: doesn’t start with h/m
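- A minimal sketch to try the patterns above in a throwaway directory (file names are just for the demo):

```shell
# Scratch directory so the glob patterns have known files to match
cd "$(mktemp -d)"
touch script0 script01 host most post
echo [hm]ost             # matches host and most
echo [!hm]ost            # matches post only
echo ?cript[0-9][0-9]    # matches script01 (one char + "cript" + two digits)
```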
NOTE: bzip2 and xz both have the same syntax as gzip; xz has better compression overall.
#sed -i 's/oldword/newword/g' /home/myfile : replace oldword with newword; -i (in-place editing) modifies
the file, otherwise the result is only printed to stdout. Use /gi instead of /g to ignore case.
#sed 's/[hH]ello/BONJOUR/g' sedfile : regex, replace hello or Hello with BONJOUR
#sed -n '5,10p' myfile : print (p) lines 5 to 10.
#sed '5,10d' myfile : delete (d) lines 5 to 10 and print the rest.
#sed -n -e '5,7p' -e '10,13p' myfile.txt : execute multiple sed commands
-n : only print what is explicitly asked to be printed, e.g '4,10p'.
-e : pass an editor command to sed.
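- A quick sketch exercising the sed forms above on a throwaway file (file contents hypothetical):

```shell
# Build a small test file, then substitute, edit in place, and select lines
f=$(mktemp)
printf 'hello\nHello there\nthree\nfour\nfive\n' > "$f"
sed 's/[hH]ello/BONJOUR/g' "$f" | head -2   # both greetings replaced, file untouched
sed -i 's/three/THREE/' "$f"                # -i: the change lands in the file itself
sed -n '4,5p' "$f"                          # -n + p: print only lines 4 and 5
```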
#visudo : special vi editor for sudoers file, visudo -c checks the validity of sudoers.
- To give a user full sudo rights, add them to the group ‘wheel’, for example #usermod -aG wheel myuser (use -a
to keep existing groups); changes only take effect after the user logs out and back in (#exit)
- You can also grant users execution of only some specific commands; to do so, run visudo and add something like:
linda ALL=/usr/sbin/useradd, /usr/bin/passwd, /usr/sbin/usermod
explanation : linda (user) ALL (all machines, or ‘localhost’ for the local machine)=*commands separated by ‘,’*
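- As a hedged sketch, the same grammar can carry an explicit run-as field and group rules (entries below are illustrative, not from a real sudoers file; edit only via visudo):

```
# user/group   hosts=(run-as)   commands
linda          ALL=(root)       /usr/sbin/useradd, /usr/sbin/usermod
%wheel         ALL=(ALL)        ALL
```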
Users & Groups management:
#useradd -c “this is bill” bill: create user bill with comment “this is bill”
#useradd -D: list default settings, these settings are stored in /etc/default/useradd
#usermod -aG wheel bill: (-a) append (-G) group wheel, without -a, existing groups will be removed from
account.
#userdel -rf bill: delete user bill, forcing the removal (-f) and deleting home directory and mail spool (-r)
#passwd bill: change password for bill
-l : lock
-u: unlock
#echo password | passwd --stdin bill: change password without prompts.
- The /etc/login.defs file provides default configuration information for several user account parameters. The
useradd, usermod, userdel, and groupadd commands, and other user and group utilities take default values
from this file. Each line consists of a directive name and associated value.
- Files in /etc/skel are copied automatically to home directories when users are created.
- Understanding /etc/passwd: one line per account, seven colon-separated fields:
name:password:UID:GID:GECOS:home:shell (the password field normally shows ‘x’; the real hash lives in /etc/shadow)
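- A small sketch to pull those fields apart (assumes a standard Linux box where the root account exists):

```shell
# Split a passwd entry on ':' and label the interesting fields
# name:password:UID:GID:GECOS:home:shell
getent passwd root | awk -F: '{print "name="$1, "uid="$3, "gid="$4, "home="$6, "shell="$7}'
```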
Permissions:
#chown anna /tmp/file: change user ownership to anna
#chown anna:students /tmp/file: change user & group ownership
#chgrp students /tmp/file: change group membership to students (=chown :students /tmp/file)
#chmod 755 /tmp/file: change permission to rwxr-xr-x
#chmod +x: same as chmod a+x (add execution permissions to all users/groups/others)
#chmod -x,o+rw: remove execute for everyone and add read/write for others
- umask is a shell setting that subtracts permissions when files/directories are created.
#umask 022 subtracts write permission from group and others
- the default umask value can be changed in /etc/profile (not recommended) or per user in
/home/myuser/.bash_profile (recommended)
- new files start from 666, since in Linux you never get execute (x) by default (directories start from 777);
with umask 022, a new file therefore ends up with permissions 644.
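- A sketch to watch umask shape new files and directories (scratch directory, no system files touched):

```shell
# 666/777 minus the umask gives the effective mode of new files/dirs
cd "$(mktemp -d)"
umask 022
touch newfile            # 666 - 022 -> 644 (rw-r--r--)
mkdir newdir             # 777 - 022 -> 755 (rwxr-xr-x)
stat -c '%a %A %n' newfile newdir
```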
- setuid allows you to execute a file in user owner context, for example if myprogram has owner root and setuid
is set, then any user can execute myprogram with root privileges, passwd command for example has setuid
enabled which allows it to write on /etc/shadow
#chmod u+s /data/myprogram: setuid on myprogram, chmod 4XXX to use numeric mode.
Result is an ‘s’: -rwsr-xr-x (a capital S means x isn’t set)
NOTE: setuid only works on compiled binaries and NOT shell scripts (#!/bin/bash)
- setgid on files allows you to execute files as group owner
- setgid on directory means that all files under this directory will inherit the group owner
#chmod g+s /data/myfile: adds setgid to myfile
#chmod g+s /data/: adds setgid to /data/, all files created under /data/ will have the same group ownership
as /data/ (very useful for shared directories).
Result is an ‘s’ in group section: drwxrws--- (capital S means x isn’t set)
- The sticky bit is a permission bit that protects the files within a directory. If the directory has the sticky bit set,
a file can be deleted only by the owner of the file, the owner of the directory, or by root. This special
permission prevents a user from deleting other users' files from public directories such as /tmp :
drwxrwxrwt 7 root sys 400 Sep 3 13:37 tmp (capital T means x isn’t set)
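- You can see setgid and the sticky bit without root by setting them on a directory you own (a sketch):

```shell
# setgid (2) then sticky (+t) on a scratch directory
d=$(mktemp -d)
chmod 2770 "$d"        # drwxrws--- : new files will inherit the dir's group
stat -c '%A' "$d"
chmod +t "$d"          # sticky bit on top
stat -c '%A' "$d"      # drwxrws--T : capital T because 'others' lack x
```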
- nmcli is a powerful command-line utility to perform network-related activities, nmtui is a text-based utility less
complex than nmcli.
- nmcli uses ‘connections’, a connection is basically the link between the network interface and the configuration
file containing addresses...
#nmcli connection show
#nmcli connection add con-name myTestConnection ifname ens33 type ethernet ipv4.method manual ipv4.addresses
192.168.192.20/24 ipv4.dns 8.8.8.8 ipv4.gateway 192.168.192.2 : add a new connection to ens33
NOTE: if ipv4.method manual isn't set, the interface will rely on DHCP and NOT the manual configuration provided
#nmcli connection mod myTestConnection +ipv4.addresses 10.0.0.10 : modify the connection and add a
secondary IP address
#nmcli connection up myTestConnection : will load the newly created connection
#nmcli general hostname server.example.local : change persistent hostname
#man nmcli-examples : plenty of useful examples
NOTE: nmcli & nmtui + other tools write network parameters in the following config file
/etc/sysconfig/network-scripts/ifcfg-myTestConnection, you can make changes directly within the file, after
that, restart NetworkManager service, or use #nmcli connection up myConnectionName.
#systemctl status NetworkManager : Check NetworkManager status
#ping -c 1 google.com : 1 ping
#ping -f google.com : flood ping (root only); shows the % of packet loss in its summary
#dig myserver.com : get DNS information about server
Managing processes:
- process is a program under execution
- job is a concept used by shell – commands started by shell and can be managed by “jobs” command
#command & : run in background, this will display job number and process ID, eg: [1] 133468
#fg 1 : bring job with jobID=1 to the foreground
- If a job is running in the foreground, pause it with Ctrl+Z and then use bg to move it to the background; Ctrl+C kills it.
#ps aux or ps -ef: print all processes along with the command used to start them
#ps -fax : print a hierarchical view of all processes
#ps -fU student : prints processes owned by student
#ps -f --forest -C sshd : shows hierarchical view of process sshd
#ps L : show format specifier (such as ppid, pid, cmd, user …)
#ps -eo pid,ppid,user,cmd,%cpu,%mem : shows more details using specifiers
#pgrep sshd: displays all PIDs having a name containing ‘sshd’
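- A minimal sketch applying the format specifiers to the current shell (works as any user; $$ is the shell’s own PID):

```shell
# Custom output with -o; a trailing '=' on a field suppresses its header
ps -o pid,ppid,user,comm -p $$   # just the current shell process
ps -o comm= -p $$                # single field, no header line
```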
#free -h : displays memory usage in human-readable format (-m for MiB, -g for GiB …)
#uptime : displays the load average over the last 1, 5 and 15 minutes
- As a rule of thumb, the load average should not be higher than the number of CPU cores in your system. You
can find out the number of CPU cores in your system by using the #lscpu command. If the load average over a
longer period is higher than the number of CPUs in your system, you may have a performance problem.
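- The rule of thumb in script form (Linux-specific: reads the /proc/loadavg interface):

```shell
# Compare the 1-minute load average against the number of CPU cores
cores=$(nproc)
load1=$(cut -d ' ' -f1 /proc/loadavg)
echo "cores=$cores 1-minute-load=$load1"
```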
#top: show a real time overview on processes
f : to add/remove columns
W : to write current settings to file
k : to kill a process by PID | r : renice a task by PID
M : sort by memory usage | P : sort by CPU usage
#kill -15 12345 : nicely (SIGTERM) kill process ID 12345 (recommended), you can use -9 (SIGKILL) to brutally kill
(not recommended).
#pkill myprogram : kill process myprogram
#killall dd : kill all dd processes (useful, it uses name instead of PID)
NOTE: by default, all processes start with the same priority value (20). When using nice or renice to adjust process
priority, you can select values ranging from -20 to 19. The default niceness of a process is 0 (which
results in the priority value 20). A negative niceness increases the priority; a positive
niceness decreases it. It is a good idea not to jump to the extreme values immediately; instead, use
increments of 5 and see how it affects the application.
NOTE: Some processes have ‘rt’ (real-time) as priority; these are kernel processes, and there is no way to change
that or set a priority to ‘rt’.
#nice -n -10 /opt/myprogram : start myprogram with nice value=-10
#renice -n 0 12345 : set nice value of process 12345 to 0
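- Positive niceness needs no root, so you can try nice/renice on your own processes (a sketch; the sleep is just a dummy workload):

```shell
# Start a dummy process with niceness 10, then raise it to 15
nice -n 10 sleep 60 &
pid=$!
ps -o ni= -p "$pid"        # shows 10
renice -n 15 "$pid"        # unprivileged users may only raise niceness
ps -o ni= -p "$pid"        # shows 15
kill "$pid"
```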
- To offer the best possible performance right from the start, RHEL 8 comes with tuned. It offers a daemon that
monitors system activity and provides some profiles. In the profiles, an administrator can automatically tune a
system for best possible latency, throughput, or power consumption.
#systemctl status tuned
#tuned-adm list : list available profiles + active profile (also, tuned-adm active to only show active profile)
#tuned-adm profile desktop : switch to desktop profile, use tuned-adm recommend for best suggestion
Managing Software:
- RHEL8 comes with 2 repos, appStream repo for apps and BaseOS for kernel packages.
- In order to create a local repository based on RH ISO image, you need to:
- Get the ISO image locally, e.g #dd if=/dev/sr0 of=/rh.iso, then mount it, e.g #mount -o loop /rh.iso /repo
- create 2 files in /etc/yum.repos.d
- File1 for appStream apps, e.g appstream.repo and file2 for baseOS packages
[appstream]
name=appstream
baseurl=file:///repo/AppStream
gpgcheck=0

[baseOS]
name=baseOS
baseurl=file:///repo/BaseOS
gpgcheck=0
- run #yum repolist to verify the availability of the newly created repository
#yum search nmap: search for nmap in all repos
#yum install nmap : install nmap
#yum remove nmap : remove nmap
#yum update nmap : install latest version if any
#yum provides */nmap : deep search, search for nmap in packages
#yum info nmap : gives more info about nmap package
#yum list all : lists all packages in all repos
#yum list installed: lists all installed packages
#yum clean all : clean all repos metadata and cache entries, yum will download and cache everything next time
- yum also provides group management; for example, if you want to install all packages suitable for end-user
laptops and PCs, there is a ‘Workstation’ group containing them all; similarly, the ‘Virtualization Host’ group
contains all packages needed for virtualization
#yum group list [hidden]: list available groups / ‘hidden’ list also groups that are rarely used
#yum group info “Workstation” : shows packages + optional packages
#yum group install Workstation [--with-optional]: install Workstation group / Also install optional packages
#yum group info ‘Virtualization Host’ : environment groups like this one contain other groups; info lists them
#yum history : show all yum actions history (install, remove …)
#yum history undo 4 : will undo action 4 (e.g if action 4 removed nmap, the undo reinstalls it)
- Yum also supports ‘modules’, A delivery mechanism to install RPM packages. In a module different versions
(streams) and profiles can be provided.
- Stream : specific version of module
- Profile : A collection of packages that are installed together for a particular use case
- Package : RPM packages
#yum module list : List all available modules from all repos
#yum module list postgresql : list postgresql modules
- if you need to execute rpm command on *.rpm file, use yumdownloader to download package from repo.
#yum install dnf-utils : will install useful utilities including yumdownloader
#yumdownloader nmap : download nmap package locally for further analysis
- the repoquery command can be used to query packages (like rpm -qp) directly in their repos, see repoquery --help
#subscription-manager register : to register server on RH (will ask for username + password)
#subscription-manager attach --auto : to attach server to subscription and start using RH repos
Working with systemd:
- the Systemd System and Service Manager is used to start stuff. The stuff is referred to as units. Units can be
many things. One of the most important unit types is the service.
#systemctl -t help : displays various units types (-t) that can be managed using systemd
#systemctl list-unit-files : list all unit files + status (enabled, disabled …)
#systemctl list-units : Lists active units in the system (--type=[target|service|socket…])
#systemctl status vsftpd : Check status of vsftpd service
Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; disabled; vendor
preset: disabled)
- The above means that the service won’t automatically start after next boot.
#systemctl enable vsftpd : enable vsftpd, it will start automatically next bootup, status will be enabled.
#systemctl start vsftpd : Start service vsftpd
- systemd unit file locations are :
- /usr/lib/systemd/system : never touch it, unit files are copied here during RPM installation
- /run/systemd/system : contains unit files that have been generated automatically, rarely used
- /etc/systemd/system : contains custom unit files, use #systemctl edit myservice.service to auto-create
the file and its configuration
#systemctl cat vsftpd.service : cat the unit file of vsftpd service
- Service unit file example:
[Unit]
Description=Vsftpd ftp daemon
After=network.target
[Service]
Type=forking
ExecStart=/usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
[Install]
WantedBy=multi-user.target
#systemctl show vsftpd.service : shows all parameters that can be used for vsftpd, use #man 5 systemd-
system.conf to see explanations of parameters
IMPORTANT : When you change a configuration of a service, use #systemctl daemon-reload and #systemctl
restart myService.service for the configuration to take effect
- Other types of units also exist, such as sockets (e.g cockpit) and mounts (e.g tmp), each with its own
parameters; check the man (5) pages of systemd.mount and systemd.socket for more info, also check the study guide.
- Target units are like groups of units: units logically grouped together, which makes it easier to manage them
at once instead of one by one
Tasks scheduling:
#systemctl status crond : Check the status of crond daemon which takes care of executing cron tasks
IMPORTANT #man 5 crontab : Contains all Cron time specifications
#crontab -e : edit crontab for current user | #crontab -l : list cron jobs for the current user
#crontab -e -u anna : edit crontab of user anna (needs to be root)
#crontab -r -u anna : remove crontab entries of user anna
- Check /etc/crontab to see the format used by crontab
- When crontab entries are created, a file with the username (e.g anna) is created under /var/spool/cron, DO
NOT edit that file directly, it is managed by cron.
- Alternatively (second way), you can also directly create files (file name doesn’t matter) under /etc/cron.d with
the following format:
* * * * * <username> <command>
30 0 1 * * anna logger hello
NOTE: when using #crontab -e, you don’t have to specify the username since it will automatically be executed
using current user who issued the crontab -e command, if modifying /etc/crontab or creating files under
/etc/cron.d, username must be specified.
- Third way to schedule Cron jobs, is by creating files under /etc/cron.hourly, /etc/cron.daily,
/etc/cron.weekly, /etc/cron.monthly. In these directories, you typically find scripts (not files that meet the
crontab syntax requirements) that are put in there from RPM package files. When opening these scripts, notice
that no information is included about the time when the command should be executed. That is because the
exact time of execution does not really matter. The only thing that does matter is that the job is launched once
an hour, day, week, or month.
- cron is strict: if the laptop/server is off when a job is scheduled, the job is simply skipped. That’s why
anacron is useful: it schedules jobs hourly, daily, weekly or monthly, where the exact time of execution
doesn’t matter; even if the server/laptop was off, anacron makes sure your script/job still gets executed. Use
/etc/cron.hourly and the other directories to configure such jobs; anacron takes care of running them.
- to deny users the use of cron, put their names in /etc/cron.deny; users not listed there are allowed
(blacklisting). Alternatively, use /etc/cron.allow to allow only specific users; users not listed there are
denied (whitelisting). If both files exist, cron.allow takes precedence and cron.deny is ignored; if neither
exists, only root is allowed
- To create your own custom tmp file config, create a conf file under /etc/tmpfiles.d/mytmp.conf
- put some configuration such as (check man tmpfiles.d for all possible options):
#Type Path Mode UID GID Age Argument
d /etc/mytmpdirectory/ 0755 root root 30s -
r /etc/mytmpdirectory/myfile
#systemd-tmpfiles --create /etc/tmpfiles.d/mytmp.conf : apply the new entries immediately
- Then either wait for systemd-tmpfiles-clean.timer to be triggered or use #systemd-tmpfiles --clean
to trigger cleanup manually
Configuring Logging:
- Services can write log messages using:
- direct write | rsyslogd | journald
#journalctl : check journald log messages (-f for live monitoring, -n 20 for last 20 lines)
NOTE : Because the journal that is written by journald is not persistent between reboots, messages are also
forwarded to the rsyslogd service, which writes the messages to different files in the /var/log directory. rsyslogd
also offers features that do not exist in journald, such as centralized logging and filtering messages by using
modules. Numerous modules are available to enhance rsyslog logging, such as output modules that allow
administrators to store messages in a database.
To get more information about what has been happening on the system, administrators have to take three
approaches:
- Monitor the files in /var/log that are written by rsyslogd.
- Use the journalctl command to get more detailed information from the journal.
- Use the #systemctl status <unit> command to get a short overview of the last significant events
If there are services that do not have their own rsyslogd facility that need to write log messages to a specific log
file anyway, these services can be configured to use any of the local0 through local7 facilities.
To determine which types of messages should be logged, different severities can be used in rsyslog.conf lines.
These severities are the syslog priorities.
Priorities:
- When a specific priority is used, all messages with that priority and higher are logged according to the
specifications used in that specific rule.
- If you want to only send a specific priority, use ‘=’ : e.g cron.=debug -/var/log/cron.debug
IMPORTANT: use #man 5 rsyslog.conf to get list of facilities & priorities to use.
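- As a sketch, a few rsyslog.conf selector lines combining facility and priority (file paths illustrative; a leading ‘-’ before a path means asynchronous writes):

```
# facility.priority            destination
*.info;mail.none;cron.none     /var/log/messages      # info and up, minus mail/cron
authpriv.*                     /var/log/secure        # all authpriv messages
cron.=debug                    -/var/log/cron.debug   # '=' : only this exact priority
local5.*                       /var/log/myapp.log     # custom facility for your service
```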
- journalctl is another way to check logs; by default the journal is not persistent. Create a directory #mkdir
/var/log/journal and set Storage=auto in /etc/systemd/journald.conf for persistence. #systemctl status <unit>
also reveals recent logs from journald.
- journalctl has many filters that can be used such as #journalctl _UID=1000 (show all logs created by uid 1000),
#journalctl _SYSTEMD_UNIT=sshd.service shows all sshd related logs
Use #man systemd.journal-fields for all possible fields and filters.
#journalctl -p [err|notice|warning..] : filter using priorities
#journalctl -o verbose : This shows different options that are used when writing to the journal.
Managing Storage:
- Hard disks can be partitioned into multiple partitions, we distinguish 2 types of partition schemes:
MBR (Master Boot Record) : Invented in the 1980s, has a limitation of 2 TiB and can only support 4 primary
partitions; if more are needed, the 4th partition can be made extended to hold up to 15 logical
partitions. It uses a BIOS partition table.
GPT (GUID Partition Table) : More modern scheme that supports disks of up to 8 ZiB in disk space; up
to 128 partitions can be created. It goes with UEFI (Unified Extensible Firmware Interface) instead of BIOS,
identifies partitions by GUID, and by default keeps a backup copy of the GUID partition table
at the end of the disk, which is not possible with MBR.
NOTE: If you get an error message indicating that the partition table is in use, type #partprobe to update the
kernel partition table.
- Common file systems include XFS (the RHEL 8 default) and Ext4
- Once partition is created, you need to make a file system (like formatting), use mkfs.XXX command..
#mkfs.xfs /dev/nvme0n1p1: format partition using xfs FS
#mkfs.ext4 /dev/nvme0n1p2: format partition using the ext4 FS
#mount /dev/nvme0n1p1 /mnt : mount the partition on /mnt/, THIS MOUNT IS NOT PERSISTENT!!
#umount /dev/nvme0n1p1 : unmount the partition, you can also use #umount /mnt/
NOTE: if you get “target is busy.” While unmounting, use lsof (list open files) to see which files/programs are
being used on this directory, e.g #lsof /mnt/ will list open files on /mnt
- It is always better to give a label to partitions, Labels are used to set a unique name for a file system, which
allows the file system to be mounted in a consistent way, even if the underlying device name changes, this can
be done while making fs, e.g #mkfs.xfs -L blabla, or after, use tune2fs -L for ext4 and xfs_admin -L for xfs
#tune2fs -L books /dev/nvme0n1p1: change label for an ext4 partition
#xfs_admin -L articles /dev/nvme0n1p2 : change label for an xfs partition
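- You can rehearse mkfs and labels without a real disk by using a file-backed image (a sketch; assumes e2fsprogs is installed, and extends PATH because mkfs/tune2fs often live in /usr/sbin):

```shell
export PATH="$PATH:/usr/sbin:/sbin"
# A sparse file stands in for a partition; -F lets mkfs accept a regular file
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F -L books "$img"
tune2fs -L articles "$img"          # relabel after the fact
blkid -s LABEL -o value "$img"      # prints: articles
```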
- Alternatively to /etc/fstab, a mount can be defined in a systemd mount unit, e.g
/etc/systemd/system/opt-employees.mount (the file name must match the mount point):
[Mount]
What=LABEL=employees
Where=/opt/employees
Type=xfs
Options=defaults
#mkdir /opt/employees
#systemctl daemon-reload
#systemctl enable --now opt-employees.mount
#lsblk or #mount : Make sure partitions are properly mounted
NOTE: when you create an entry on /etc/fstab, a systemd unit file is generated that can be used to control the
mount point, when the entry is removed, the unit is also removed.
- Virtual Data Optimizer (VDO) is another advanced storage solution offered in RHEL 8. VDO was developed to
reduce disk space usage on block devices by applying deduplication and compression features. VDO creates
volumes, implementing deduplication on top of any type of existing block device. On top of the VDO device, you
would either create a file system, or you would use it as a physical volume in an LVM setup
#yum install -y vdo kmod-kvdo: to install the required packages.
#vdo create --name=vdo1 --device=/dev/sde --vdoLogicalSize=1T : create the VDO device with a logical size of
1TiB.
#mkfs.xfs -K /dev/mapper/vdo1 : put an XFS file system on top of the device, -K is important to format faster.
#mkdir /vdo1 : create a mount point where the VDO device can be automatically mounted.
#vi /etc/fstab
UUID=xxx /vdo1 xfs x-systemd.requires=vdo.service,discard 0 0
Boot Procedure:
- To modify grub runtime configuration, press arrow key when grub menu is displayed, ‘e’ to edit, ‘c’ for cmds
- To modify grub persistent config, edit the file /etc/default/grub, then:
#grub2-mkconfig -o /boot/grub2/grub.cfg : If using BIOS
#grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg : If using UEFI
Note : To check if your system is BIOS/UEFI : #mount | grep -i efi : if nothing you’re using BIOS
- You can also use systemd target units to boot the system in a specific state, these target units have a common
property “AllowIsolate=yes”, they are as follow:
poweroff.target runlevel 0
rescue.target runlevel 1
multi-user.target runlevel 3
graphical.target runlevel 5
reboot.target runlevel 6
- “wants” in systemd defines which units a target wants when started; for this, WantedBy=XXX.target is placed
in the [Install] section of a unit to express the dependency. E.g if vsftpd has WantedBy=multi-user.target, then
vsftpd, if enabled, will automatically start when multi-user.target (or another target that includes it) is started;
on enabling, a symbolic link is created under the /etc/systemd/system/multi-user.target.wants/ directory
- in grub menu, press ‘e’, add rd.break to the end and ctrl-x to continue booting, you will fall in a shell
#mount -o remount,rw /sysroot
#chroot /sysroot
#passwd root
#touch /.autorelabel IMPORTANT
- Ctrl-D, then Ctrl-D again to continue booting
NOTE: you can also use #load_policy -i to load SELinux policy and use #restorecon -v /etc/shadow
Shell Scripting:
- By default, a shell script is executed in a subshell, if you want to execute script in the current shell context, use
#source myscript or #. myscript
- it is better to check the cert guide, it contains complete explanation of all basic stuff to know
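- The subshell-vs-current-shell difference is easy to demo (a sketch; the script path and variable are made up):

```shell
# A trivial script that only sets a variable
cat > /tmp/setvar.sh <<'EOF'
myvar=hello
EOF
sh /tmp/setvar.sh            # runs in a subshell: the variable dies with it
echo "${myvar:-unset}"       # prints: unset
. /tmp/setvar.sh             # '.' (or 'source') runs it in the current shell
echo "$myvar"                # prints: hello
```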
Managed Network Services:
SSH:
#ssh-keygen : Generate private/public key pair for current user, pub key distributed to target servers and private
key is stored securely locally allowing for key-based (password-less) authentication, a passphrase protecting the
private key is prompted, it is recommended to set it for extra security.
#ssh-copy-id 192.168.192.50: copy identity files to target server
#ssh-agent /bin/bash: start a subshell with an agent that can cache the passphrase
#ssh-add: adds the private key’s passphrase to the agent so the next SSH sessions won’t prompt for it
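- ssh-keygen can also run non-interactively, handy for practice (a sketch; key path and comment are made up, assumes the OpenSSH client is installed):

```shell
# Generate an ed25519 key pair with an empty passphrase into a scratch dir
d=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$d/id_ed25519" -C demo-key
ls "$d"                               # id_ed25519 and id_ed25519.pub
ssh-keygen -l -f "$d/id_ed25519.pub"  # show the public key's fingerprint
```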
- To change SSH server settings, change configurations on /etc/ssh/sshd_config, some common settings include:
AllowUsers student linda lora
PasswordAuthentication yes
PermitRootLogin yes
AllowGroups XXX YYY
#man sshd_config for more options
- After making changes #systemctl restart sshd
#scp /tmp/myfile [email protected]:/tmp/ : Copy file to server
#sftp 192.168.192.50 --> #get /tmp/myfile : get file over secure ftp
HTTP:
#yum install -y httpd : Install apache HTTP server
#systemctl enable --now httpd
- Main config file is /etc/httpd/conf/httpd.conf
- HTML content is by default on /var/www/html
SELINUX:
Firewall:
Date/Time:
- In recent RHEL versions, timedatectl replaces date, tzselect, hwclock and other time/date commands
#timedatectl status : show current date/time information
#timedatectl set-time 15:50:38 : set time, format is HH:MM:SS
#timedatectl set-time 2021-01-30 : set date only, format is YYYY-MM-DD
#timedatectl set-ntp [true|false] : enable/disable NTP
#timedatectl set-timezone Africa/Casablanca : Set time zone
#timedatectl list-timezones : List known time zones
Mounting NFS:
- NFS server setup isn’t part of RHCSA, all needs to be done is to mount a remote NFS share.
#showmount -e 192.168.192.50 : show NFS shares on a remote server (it reads the /etc/exports file on the remote server)
#mount 192.168.192.50:/data /mnt : mount remote NFS “/data” in local /mnt directory (NOT PERSISTENT)
- To mount NFS share persistently, use nfs FS and _netdev as mount option:
192.168.192.193:/data /nfs nfs _netdev 0 0
Mounting SAMBA:
- SAMBA is the open source implementation of the SMB protocol, allowing Windows clients (besides Linux ones) to
access Linux file shares.
- SAMBA server setup isn’t part of RHCSA.
- To mount a remote SAMBA share locally, first install required packages:
#yum group install “Network File System Client”
#smbclient -L 192.168.192.50 : discovers samba shares on remote server if any
#mount -o username=sambauser //192.168.192.50/sambafolder /myshare : mount remote samba share
to /myshare (NOT PERSISTENT), -o username is to specify remote samba user
- To mount samba share persistently in /etc/fstab, use cifs as file system and _netdev as option with username
and password
//192.168.192.50/sambafolder /myshare cifs _netdev,username=sambauser,password=P@ssw0rd 0 0
AutoFS:
- autoFS has the ability to mount network shares only when they are needed instead of at boot
#yum install autofs
- The config file is /etc/auto.master which only contains directory to use and config file related to it, e.g:
/data /etc/auto.data
- The /etc/auto.data will then contain information about /data subdirectories to mount, for example the
following will create a mount point /data/mynfs mounting a remote NFS share, you can get inspirations from
existing files such as /etc/auto.misc
mynfs -rw 192.168.192.50:/data
#systemctl restart autofs
Containers:
- For rootless containers, autostart can be achieved using systemd user units:
#mkdir -p ~/.config/systemd/user
#podman generate systemd --name mynewcontainer > ~/.config/systemd/user/container-mynewcontainer.service
- You can now manage this systemd user unit, always use --user to manage it:
#systemctl --user daemon-reload
#systemctl --user status container-mynewcontainer.service
#systemctl --user enable --now container-mynewcontainer.service
NOTE: systemd user units only start when the user logs in and stop when the user logs out (you can confirm using
#exit); to keep services (containers in our example) running even after the user logs out, use the following
command:
#loginctl enable-linger student : enable linger for user student