Linux Basic Course Notes by Altnix
b.sadhiq
www.altnix.com
Linux Practicals
Index
1. Switching Terminals
2. Fdisk
3. Format
4. Runlevels
5. Symlinks & Hardlinks
6. Archiving & Compression
7. Daemons & Processes
8. File Permissions
9. Umask
10. Administrative commands and Low-level commands
11. Understanding UNIX / Linux file system
12. FSTAB
13. Bash
14. Shell Scripting
15. RPM
16. User Administration
17. PAM
18. LVM
19. The Linux Schedulers
20. QUOTA
21. Kernel Compilation
22. Kernel Tuning
23. Networking
24. FTP
25. NFS
26. Network Info Service (NIS)
27. Installation of autofs
28. DHCP: Dynamic Host Configuration Protocol
29. TcpWrappers
30. Xinetd
31. SAMBA
32. FIREWALL / IPTABLES
33. DNS
34. APACHE
35. SendMail
36. SQUID
37. Vi/Vim Examples
Linux Practicals
Switching Terminals
Linux provides 6 ttys by default, also known as virtual consoles (vcs); the driver assigned to them is tty (/dev/tty*).
To switch away from the GUI use Ctrl+Alt+F1, and to switch between the text terminals use Alt+F2,
Alt+F3, and so on.
To check the current terminal use the $ps (or $tty) command.
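For example (a quick check; chvt assumes the console tools are installed and that you are root):
$tty -> shows the current terminal, e.g. /dev/tty2
$ps -> the TTY column shows the same terminal
$chvt 3 -> switch to virtual console 3 from the command line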
Basic Commands
$df -h -> report filesystem disk usage (similar to My Computer in Windows)
$fdisk -l -> list partitions
$man <cmd> -> manual
$clear -> clear the screen
$^l -> clear the screen
$ls -> list content
$ls -l -> list content in long listing format
$ls -al -> list all subcontent in long listing format
$ll -> an alias for the above
$ls -R -> list content recursively
$l. -> list hidden files
$ls -F -> list content and classify them
$alias -> display all aliases for current user
$alias <statement> -> make alias eg alias c='clear'
$unalias <alias> -> remove alias eg unalias c
$exit -> log out from the system
$logout -> log out from the system
$^d -> log out from the system
$tree -> list content in a tree (hierarchical) diagram
$tree -d -> list subdirectories only - no files
$tree -p -> list content with their permissions
$cd <directory> -> change directory to...
$cd .. -> change to parent directory
$cd - -> change to previous directory
$cd -> change to home directory
$cd ~ -> change to home directory
$pushd <dir> -> change directory, pushing it onto the directory stack (and print the stack)
$cat -> display a content of a file
$pwd -> print work (current) directory
$pwd -P -> print the physical working directory, resolving symlinks
$mkdir <directory> -> make directory
$mkdir -p <directory> -> make parent directories also if it does not exist
$touch -> make a 0 byte file if it does not exist
$cp -> copy (for files)
$cp -a -> copy (for directories)
$cp -p -> copy and preserve date and time
$mv -> move OR rename
$rmdir -> remove empty directory
$rm -> remove (for files)
$rm -f -> remove forcefully ( " " )
$rm -r -> remove recursively (for directories)
$rm -rf -> remove recursively and forcefully ( " " )
$cat -> display content of the file
$cat -n -> display content of the file and number the
lines
$cal -> display calendar for current month
$date -> display system date and time
$date -s '<value>' -> change system date and time in mm/dd/yy
$hwclock -> display the hardware clock
$hwclock --hctosys -> set the system time from the hardware clock
$ln -s -> make a soft/sym/symbolic link
$ln -> make a hard link
$history -> display the list of the last 1000 commands
$!100 -> run command number 100 from the history
$vi -> text editor
$vimtutor -> vim tutorial with exercises
$pico -> pico text editor
$mcedit -> mcedit text editor (Midnight Commander's editor)
$joe -> joe text editor
$aspell -c <filename> -> check the spelling in the file
$elinks -> check the web links
$file -> display the type of file
$which -> display the path of the binary
$whereis -> display all paths
$hostname -> display system name with domain
$id -> display id info of current user
$id -u -> display user id of current user
$id -un -> display username of current user
$id -g -> display group id of current user
$id -gn -> display groupname of current user
$uptime -> display for how long the system has been
running
$tty -> display current terminal number
$users -> display the usernames of users currently logged in
$whoami -> display username of current user
$who -> display users logged in to the system with their
respective terminals and login times
$who am i -> display the current user, terminal and login time
$w -> display who is logged in and what they are doing
http://www.oraclehome.co.uk/linux-commands.htm
$mkdir -p /opt/funny/test
$cd /opt/funny/test -- absolute path
$cd /opt/funny
$pwd
/opt/funny
$cd test - relative path
Fdisk
Partitioning with fdisk
This section shows you how to actually partition your hard drive with the fdisk utility. Linux
allows only 4 primary partitions. You can have a much larger number of logical partitions by
sub-dividing one of the primary partitions. Only one of the primary partitions can be sub-divided.
Examples:
1. Four primary partitions
2. Mixed primary and logical partitions
fdisk usage
fdisk is started by typing (as root) fdisk <device> at the command prompt. The device might be
something like /dev/hda or /dev/sda (see Section 2.1.1). The basic fdisk commands you
need are:
p print the partition table
n create a new partition
d delete a partition
q quit without saving changes
w write the new partition table and exit
Changes you make to the partition table do not take effect until you issue the write (w)
command. Here is a sample partition table:
Disk /dev/hdb: 64 heads, 63 sectors, 621 cylinders
Units = cylinders of 4032 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hdb1 * 1 184 370912+ 83 Linux
/dev/hdb2 185 368 370944 83 Linux
/dev/hdb3 369 552 370944 83 Linux
/dev/hdb4 553 621 139104 82 Linux swap
The first line shows the geometry of your hard drive. It may not be physically accurate, but
you can accept it as though it were. The hard drive in this example is made of 32 double-
sided platters with one head on each side (probably not true). Each platter has 621
concentric tracks. A 3-dimensional track (the same track on all disks) is called a cylinder.
Each track is divided into 63 sectors. Each sector contains 512 bytes of data. Therefore the
block size in the partition table is 64 heads * 63 sectors * 512 bytes er...divided by 1024.
(See 4 for discussion on problems with this calculation.) The start and end values are
cylinders.
$fdisk /dev/hdxx
n create a new partition
press Enter at the first-cylinder prompt
define the size as +100M at the last-cylinder prompt
w write and quit
$sync
$partprobe -s /dev/hdxx
- re-reads the partition table and updates the kernel's copy of it
-s - show the output
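A full session might look like this (the device name and size below are only placeholders):
$fdisk /dev/hdb
n -> create a new partition (choose primary, press Enter at the first cylinder, type +100M at the last cylinder)
p -> print the table to verify the new entry
w -> write the changes and exit
$sync
$partprobe -s /dev/hdb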
Format
For an ext2 filesystem use $mke2fs; for ext3 use $mke2fs -j
$ mke2fs -j /dev/hdxx
-j stands for journaling as ext3 is a journaling filesystem.
$mkdir /newdir
$mount /dev/hdxx /newdir
for permanent mount use fstab
$vi /etc/fstab
Append - /dev/hdxx /mnt ext3 defaults 0 0
fstab is 9th out of the 10 most critical and important configuration files which is stored in /etc
directory, where all the configuration files are stored.
fstab stands for "File System TABle" and this file contains information about hard disk partitions
and removable devices in the system. It contains information about where the partitions and
removable devices are mounted, which device drivers are used for mounting them,
which filesystem they use and what options (permissions) are assigned to them.
1st field - device
2nd field - mount point
3rd field - filesystem type
4th field - mount options (permissions)
5th field - dump/backup flag
6th field - fsck sequence (same as chkdsk in Windows)
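For example, the entry appended above maps onto the six fields like this (the device name is a placeholder):
/dev/hdxx   /newdir   ext3   defaults   0   0
device      mountpt   fstype options    dump fsck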
Task
create 100mb partition for Linux.
Follow steps same as above
ext3 is a journaling filesystem: it maintains a record of changes in its journal, so
recovery after a crash is fast and reliable.
ext2 does not maintain a journal, so recovery is slow and not guaranteed.
Task
create 100000kb partition for ext2.
Follow steps same as above.
Task
create 96mb partition for windows.
Follow steps same as above.
Mount all the created partitions under fstab.
Ext2 vs Ext3
At some point in your install, you'll probably want to switch filesystem types. In the base install,
you're only given a choice of ext2 (short for ext2fs, or ``second extended filesystem''), which is the
``standard'' UNIX filesystem. Ext3fs is the same as ext2, but provides journaling. For those as
sketchy on filesystem types as I am, it seems to be pretty basic. In the README on the original ext3
download page, the author answers the journaling question:
Q: What is journaling?
A: It means you don't have to fsck after a crash. Basically.
This is useful, because it means that every time your screen whites out
and crashes while choosing the right video card (Section 1.2.1), you
don't have to sit through an entire filesystem check of every inode. The
filesystem still fscks itself every X mounts or Y days, but doesn't put
you through the entire wait every time you crash it. To convert
partitions to the ext3 filesystem, you need to cleanly unmount them,
boot something else (like the Debian CD you installed from -- see
Section 6.2 on how to do this), and then, on a console, do:
tune2fs -j /dev/hdaX
wherein /dev/hdaX is the partition you want to add journaling to (hence
the `-j' flag).Don't forget to modify the lines in your /etc/fstab to
reflect that the partitions in question are to be mounted as ext3, not
ext2. When cleanly unmounted, they can still be mounted as ext2, but the
whole point of changing them was so they wouldn't be.
That's it. When you reboot, your partitions should come up as ext3.
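A minimal sketch of the conversion on a Red Hat style system (the partition name is hypothetical; it must be cleanly unmounted first):
$umount /dev/hdb1
$tune2fs -j /dev/hdb1
$dumpe2fs -h /dev/hdb1 | grep -i features
has_journal should now appear in the feature list; then change ext2 to ext3 for that line in /etc/fstab and remount.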
Runlevels
Red Hat Linux/Fedora runlevels:

ID  Description
0   Halt
1   Single-user mode
2   Multi-user mode with networking enabled, but most network services disabled
3   Multi-user mode, console logins only
4   Not used / user-definable
5   Multi-user mode, with display manager as well as console logins
6   Reboot
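A few related commands (assuming a SysV-init based Red Hat/Fedora system):
$runlevel -> shows the previous and current runlevel
$who -r -> same information
$init 3 -> switch to runlevel 3 (as root)
$grep initdefault /etc/inittab -> default runlevel used at boot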
Symlinks & Hardlinks
Files are arranged in directories (or folders if you prefer that term), and each file can be
reached through a series of directories and sub-directories from the root - correct? Yes ...
BUT ... there are some times that the same file can be reached through several names, and
on Unix and Linux systems this is known as a "link".
There are two ways a link can be set up.
Hard Link
A Hard Link is where a file has two names which are
both on an equal weighting, and both of the file names in
the "inode table" point directly to the blocks on the disc
that contain the data. See diagram to the left.
You set up a hard link with an ln command without
options - if the file ab.txt already exists and you want to
give an additional name (hard link) to it, you'll write ln
ab.txt cd.txt and then both names will have equal ranking.
The only way you'll know that there's a link there is by
doing a long listing and you'll see a link count of 2 rather
than 1, and if you need to find out what's linked to what,
use the -i option to ls.
SymboIic Link
A SymboIic Link is where a file has one main name, but there's an extra entry in the file
name table that refers any accesses back to the main name. This is slightly slower at
runtime than a hard link, but it's more flexible and much more often used in day-to-day
admin work.
Symbolic links are set up using the ln command with the -s option - so for example
ln -s ab.txt cd.txt
will set up a new name cd.txt that points to the (existing) file ab.txt. If you do a long listing (ls
-l) of a directory that contains a symbolic link, you'll be told that it's a symbolic link with an
"l" in the first column, and you'll be told where the file links to in the file name column. Very
easy to spot!
Soft Links (Symbolic Links):
1. Links have different inode numbers.
2. The ls -l command shows all links with a link count of 1, and the link points to the original
file.
3. The link holds the path of the original file, not its contents.
4. Removing a soft link doesn't affect anything, but if the original file is removed the link becomes a
dangling link which points to a nonexistent file.
In a softlink the inode is different, and the linked file is essentially a shortcut to the first file.
Hard Links:
1. All links have the same inode number.
2. The ls -l command shows all the links, with the link count (second column) showing the number of links.
3. Links share the actual file contents.
4. Removing any link just reduces the link count but doesn't affect the other links.
In a hardlink the inode is the same and each name is independent of the others.
A soft link can point to a directory but a hard link can't. Hard links must be created within the same
filesystem, while soft links can cross filesystems.
Hard links cannot cross partitions.
A single inode number is used to represent a file within each filesystem, and all hard links
are based upon that inode number.
So linking across filesystems would lead to confusing references for UNIX or
Linux. For example, consider the following scenario:
* File system: /home
* Directory: /home/sadhiq
* Hard link: /home/sadhiq/file2
* Original file: /home/sadhiq/file1
Now you create a hard link as follows:
$ touch file1
$ ln file1 file2
$ ls -l
Output:
-rw-r--r-- 2 sadhiq sadhiq 0 2006-01-30 13:28 file1
-rw-r--r-- 2 sadhiq sadhiq 0 2006-01-30 13:28 file2
Now just see inode of both file1 and file2:
$ ls -i file1
782263
$ ls -i file2
782263
As you can see, the inode number is the same for the hard link file2 in the inode
table under the /home filesystem. Now if you tried to create a hard link on the /tmp
filesystem it would lead to confusing references for the UNIX or Linux filesystem:
is that link number 782263 in the /home or the /tmp filesystem? To avoid this
problem UNIX and Linux do not allow creating hard links across filesystem
boundaries.
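You can see the restriction yourself (assuming /tmp lives on a different filesystem than /home):
$cd /home/sadhiq
$ln file1 /tmp/file1-hard -> fails with "Invalid cross-device link" (EXDEV)
$ln -s /home/sadhiq/file1 /tmp/file1-soft -> a symlink works fine across filesystems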
Practical
$mkdir /opt/new-file
$mkdir /usr/local/link-file
$vi /opt/new-file/abc
Append some content & save the above file
Now create a softlink for abc as xyz under /usr/local/link-file
$pushd /usr/local/link-file
$pwd
$ln -s /opt/new-file/abc xyz
Or
If you want to create the symlink from /home instead, then
$pushd /home
$ln -s /opt/new-file/abc /usr/local/link-file/xyz
Now check with the following; also note that symlink files always show 777 permissions
$ll | grep ^l
Also check the size of both files; it is self-explanatory
Now append some data to the xyz file and you will see the same data under abc
Now try removing the parent file, in our case "abc"
$rm -rf /opt/new-file/abc
Now verify the symlink
$ll /usr/local/link-file/
Your file now has a broken symlink, so it is called orphaned (dangling)
So whenever you delete a parent file it affects the symlink, but if the softlink is deleted there is no
effect on the parent
Softlink files have inodes different from the parent's
Softlinks can also cross partitions.
Now, what if you want to run a binary from a different path and with a different name?
$which mount
$ln -s /sbin/mount /opt/mapping
$pushd /opt/
$./mapping
$ln -s /bin/pwd /usr/bin/prnt-work-dir
Now you can run the following instead of "pwd"
$prnt-work-dir
$mkdir /opt/hard-link
$pushd /opt/link-file
Create a new file named file1 and append data
$echo "This is a new file" > file1
$cat file1
Now create a hard link from current path file1 to file2
$ln file1 /opt/hard-link/file2
Now try deleting and appending, as you did above for the soft link
Hardlinks act as a kind of backup: if either the parent or the child is deleted, the other is not affected
Hardlinks have the same inode numbers
Hardlinks cannot cross partitions; try crossing partitions and see
Also try creating 2 to 3 links for a single parent file, both as softlinks and as hardlinks.
More on Hard Links and Symbolic Links
Today we're going to test your virtual imagination ability! You're probably familiar with
shortcuts in Microsoft Windows or aliases on the Mac. Linux has something, or actually
some things similar, called hard links and symbolic links.
Symbolic links (also called symlinks or softlinks) most resemble Windows shortcuts. They
contain a pathname to a target file. Hard links are a bit different. They are listings that
contain information about the file. Linux files don't actually live in directories. They are
assigned an inode number, which Linux uses to locate files. So a file can have multiple
hardlinks, appearing in multiple directories, but isn't deleted until there are no remaining
hardlinks to it. Here are some other differences between hardlinks and symlinks:
1. You cannot create a hardlink for a directory.
2. If you remove the original file of a hardlink, the link will still show you the content of the
file.
3. A symlink can link to a directory.
4. A symlink, like a Windows shortcut, becomes useless when you remove the original file.
Hardlinks
Let's do a little experiment to demonstrate the case. Make a new directory called Test and
then move into it. to do that, type:
$ mkdir Test
$ cd Test
Then make a file called FileA:
$ vi FileA
Press the following key to enter Insert mode:
i
Then type in some funny lines of text (like "Why did the chicken cross the road?") and save
the file by typing:
Esc
ZZ
So, you made a file called FileA in a new directory called "Test" in your /home. It contains
an old and maybe not so funny joke. Now, let's make a hardlink to FileA. We'll call the
hardlink FileB.
$ ln FileA FileB
Then use the "i" argument to list the inodes for both FileA and its hardlink. Type:
$ ls -il FileA FileB
This is what you get:
1482256 -rw-r--r-- 2 sadhiq sadhiq 21 May 5 15:55 FileA
1482256 -rw-r--r-- 2 sadhiq sadhiq 21 May 5 15:55 FileB
You can see that both FileA and FileB have the same inode number (1482256). Also both
files have the same file permissions and the same size. Because that size is reported for
the same inode, it does not consume any extra space on your HD!
Next, remove the original FileA:
$ rm FileA
And have a look at the content of the "link" FileB:
$ cat FileB
You will still be able to read the funny line of text you typed. Hardlinks are cool.
Symlinks
Staying in the same test directory as above, let's make a symlink to FileB. Call the symlink
FileC:
$ ln -s FileB FileC
Then use the i argument again to list the inodes.
$ ls -il FileB FileC
This is what you'll get:
1482256 -rw-r--r-- 1 sadhiq sadhiq 21 May 5 15:55 FileB
1482226 lrwxrwxrwx 1 sadhiq sadhiq 5 May 5 16:22 FileC -> FileB
You'll notice the inodes are different and the symlink got an "l" before the rwxrwxrwx. The link has
different permissions than the original file because it is just a symbolic link. Its real content is just a
string pointing to the original file. The size of the symlink (5) is the size of its string. (The "-> FileB"
at the end shows you where the link points to.)
Now list the contents:
$ cat FileB
$ cat FileC
They will show the same funny text.
Now if we remove the original file:
$ rm FileB
and check the Test directory:
$ ls
You'll see the symlink FileC is still there, but if you try to list the contents:
$ cat FileC
It will tell you that there is no such file or directory. You can still list the inode. Typing:
$ ls -il FileC
will still give you:
1482226 lrwxrwxrwx 1 sadhiq sadhiq 5 May 5 16:22 FileC -> FileB
But the symlink is obsolete because the original file was removed, as were all the hard
links. So the file was deleted even though the symlink remains. (Hope you're still following.)
OK. The test is over, so you can delete the Test directory:
$ cd ..
$ rm -rf Test (r stands for recursive and f is for force)
Note: Be cautious using "rm -rf"; it's very powerful. If someone tells you to do "rm -rf /" as
root, you might lose all your files and directories on your / partition! Not good advice.
Now you know how to create (and remove) hardlinks and symlinks to make it easier to
access files and run programs. See you on the links!
Archiving & Compression
Archiving means that you take 10 files and combine them into one file, with no difference in
size. If you start with 10 100KB files and archive them, the resulting single file is 1000KB.
On the other hand, if you compress those 10 files, you might find that the resulting files
range from only a few kilobytes to close to the original size of 100KB, depending upon the
original file type.
All of the archive and compression formats in this chapter (zip, gzip, bzip2, and tar) are
popular.
Zip
zip is probably the world's most widely used format. That's because of its almost universal
use on Windows, but zip and unzip are well supported among all major (and most minor)
operating systems.
Gzip
gzip was designed as an open-source replacement for an older Unix program, compress.
It's found on virtually every Unix-based system in the world, including Linux and Mac OS X,
but it is much less common on Windows. If you're sending files back and forth to users of
Unix-based machines, gzip is a safe choice.
Bzip2
The bzip2 command is the new kid on the block. Designed to supersede gzip, bzip2 creates
smaller files, but at the cost of speed. That said, computers are so fast nowadays that most
users won't notice much of a difference between the times it takes gzip or bzip2 to
compress a group of files.
Practical
zip both archives and compresses files, thus making it great for sending multiple files as
email attachments, backing up items, or for saving disk space.
Create
$mkdir -p /opt/test/zip_dir;cd /opt/test/zip_dir
Append man pages to a file
$man ls > file-ls;cat /etc/fstab > file-fstab;cat /root/anaconda.cfg > file-anaconda
$ls -lh
$ls -al
Zip the files to man-file.zip
$zip man-file.zip *
$ls -lF
$ man ls > file-ls.txt;cat /etc/fstab > file.txt;cat /root/anaconda.cfg > file-anaconda.txt;man
fdisk > file1.cfg;man fstab > fstab.cfg;man man > man.cfg
Try compressing the files created using zip and verify the size of moby.zip files
$ zip -0 moby.zip1 *.txt
$ ls -l
$ zip -1 moby.zip2 *.cfg
$ ls -l
$ zip -9 moby.zip3 *.cfg
$ ls -l
You can also try
$alias zip='zip -9'
Create a backup dir under /mnt
$mkdir /mnt/backup
Copy the /opt/test contents with rsync
$rsync -parv /opt/test/* /mnt/backup/
Exclude moby.zip1 under /mnt/backup and create backup.zip under /usr/local/
$zip -r /usr/local/backup.zip /mnt/backup -x "/mnt/backup/zip_dir/moby.zip1"
Change dir to /usr/local by pushd cmd (man pushd)
$pushd /usr/local/
Try Password protected zip
$ zip -P 12345678 backup.zip *.txt
$ zip -e backup.zip *.txt
List the archive contents
$ unzip -l backup.zip
$ unzip -ql backup.zip
Verbose listing
$ unzip -v moby.zip2
List zipped files
$ unzip -l moby.zip3
Test the archive
$ unzip -t moby.zip2
Now try any of the following (similar to zip) under any directory
$ gzip paradise_lost.txt
$ ls -l
Not good. Instead, output to a file.
$ ls -l
$ gzip -c paradise_lost.txt > paradise_lost.txt.gz
$ gzip -c -1 moby-dick.txt > moby-dick.txt.gz
$ ls -l
$ gzip -c -9 moby-dick.txt > moby-dick.txt.gz
$ ls -l
$ gzip -t paradise_lost.txt.gz
$ gunzip -c paradise_lost.txt.gz > paradise_lost.txt
$ bzip2 moby-dick.txt
$ ls -l
$ bzip2 -c moby-dick.txt > moby-dick.txt.bz2
$ ls -l
$ bzip2 -c -1 moby-dick.txt > moby-dick.txt.bz2
$ bzip2 -c -9 moby-dick.txt > moby-dick.txt.bz2
$ ls -l
$ bunzip2 moby-dick.txt.bz2
$ bunzip2 -c moby-dick.txt.bz2 > moby-dick.txt
$ bunzip2 -t paradise_lost.txt.bz2
Get the Best Compression Possible with zip
-[0-9]
It's possible to adjust the level of compression that zip uses when it does its job. The zip
command uses a scale from 0 to 9, in which 0 means "no compression at all" (which is like
tar, as you'll see later), 1 means "do the job quickly, but don't bother compressing very
much," and 9 means "compress the heck out of the files, and don't mind waiting a bit
longer to get the job done." The default is 6, but modern computers are fast enough that it's
probably just fine to use 9 all the time.
In tabular format, the results look like this:

Book              zip -0    zip -1    zip -9
Moby-Dick         0%        54%       61%
Paradise Lost     0%        50%       56%
Job               0%        58%       65%
Total (in bytes)  1848444   869946    747730
Password-Protect Compressed Zip Archives
-P
-e
The Zip program allows you to password-protect your Zip archives using the -P option. You
shouldn't use this option. It's completely insecure, as you can see in the following example
(the actual password is 12345678):
Unzip
Expanding a Zip archive isn't hard at all. To create a zipped archive, use the zip command;
to expand that archive, use the unzip command.
Archive with Tar
Archive and Compress Files with tar and gzip
-zcvf
If you look back at "Archive and Compress Files Using gzip" and "Archive and Compress
Files Using bzip2" and think about what was discussed there, you'll probably start to figure
out a problem. What if you want to compress a directory that contains 100 files, contained
in various subdirectories? If you use gzip or bzip2 with the -r (for recursive) option, you'll
end up with 100 individually compressed files, each stored neatly in its original
subdirectory. This is undoubtedly not what you want. How would you like to attach 100 .gz
or .bz2 files to an email? Yikes!
That's where tar comes in. First you'd use tar to archive the directory and its contents
(those 100 files inside various subdirectories) and then you'd use gzip or bzip2 to compress
the resulting tarball. Because gzip is the most common compression program used in
concert with tar, we'll focus on that.
You could do it this way:
$mkdir -p /mnt/common/moby-dick
$cd /mnt/common/moby-dick
$ man ls > file-ls.txt;cat /etc/fstab > file.txt;cat /root/anaconda.cfg > file-anaconda.txt;man
fdisk > file1.cfg;man fstab > fstab.cfg;man man > man.cfg
$cd ..
$pwd
/mnt/common/
$ ls -l moby-dick/*
$ tar -cf - moby-dick/ | gzip -c > moby1.tar.gz
$ ls -l
That method works, but it's just too much typing! There's a much easier way that should be
your default. It involves two new options for tar: -z (or --gzip), which invokes gzip from
within tar so you don't have to do so manually, and -v (or --verbose), which isn't required
here but is always useful, as it keeps you notified as to what tar is doing as it runs.
$ ls -l moby-dick/*
$ tar -zcvf moby.tar.gz moby-dick/
$ ls -l
The usual extension for a file that has had the tar and then the gzip commands used on it is
.tar.gz; however, you could use .tgz or .tar.gzip if you like.
Note - It's entirely possible to use bzip2 with tar instead of gzip. Your command would look
like this (note the -j option, which is where bzip2 comes in):
$ tar -jcvf moby.tar.bz2 moby-dick/
In that case, the extension should be .tar.bz2, although you may also use .tar.bzip2, .tbz2,
or .tbz. Yes, it's very confusing that using gzip or bzip2 might both result in a file ending with
.tbz. This is a strong argument for using anything but that particular extension to keep
confusion to a minimum.
Test Files That Will Be Untarred and Uncompressed
$ tar jvtf moby.tar.bz2
Before you take apart a tarball (whether or not it was also compressed using gzip), it's a
really good idea to test it. First, you'll know if the tarball is corrupted, saving yourself hair
pulling when files don't seem to work. Second, you'll know if the person who created the
tarball thoughtfully tarred up a directory containing 100 files, or instead thoughtlessly tarred
up 100 individual files, which you're just about to spew all over your desktop.
To test your tarball (once again assuming it was also zipped using gzip), use the -t (or --list)
option.
$ tar -zvtf moby.tar.gz
This tells you the permissions, ownership, file size, and time for each file. In addition,
because every line begins with moby-dick/, you can see that you're going to end up with a
directory that contains within it all the files and subdirectories that accompany the tarball,
which is a relief.
Be sure that the -f is the last option because after that you're going to specify the name of
the .tar.gz file. If you don't, tar complains:
$ tar -zvft moby.tar.gz
tar: You must specify one of the '-Acdtrux' options
Try 'tar --help' or 'tar --usage' for more information.
Now that you've ensured that your .tar.gz file isn't corrupted, it's time to actually open it up,
as you'll see in the following section.
Note - If you're testing a tarball that was compressed using bzip2, just use this command
instead:
$ tar -jvtf moby.tar.bz2
Untar and Uncompress Files
-zxvf
To create a .tar.gz file, you used a set of options: -zcvf. To untar and uncompress the
resulting file, you only make one substitution: -x (or --extract) for -c (or --create).
$ ls -l
$ tar -zxvf moby.tar.gz
$ ls -l
Make sure you always test the file before you open it, as covered in the previous section,
"Test Files That Will Be Untarred and Uncompressed." That means the order of commands
you should run will look like this:
$ tar -zvtf moby.tar.gz
$ tar -zxvf moby.tar.gz
Note - If you're opening a tarball that was compressed using bzip2, just use this command
instead:
$ tar -jxvf moby.tar.bz2
Repeat with different path
$ tar cvf /mnt/backup/sam.tar /opt/test/zip_dir/*
Archive & compress with gzip
$ tar zcvf /mnt/backup/ramu.tar.gz /opt/test/zip_dir/*
$ pushd /mnt/backup
List before extracting
$ tar tvf ramu.tar.gz
Understand the following
$ mkdir ramu; tar zxvf ramu.tar.gz -C ramu/
$ ls ramu/
$ rm -rf ramu/*
Also try and understand
$ cat ramu.tar.gz | gunzip -d | tar -xvf - -C /mnt/backup/ramu
$ ls /mnt/backup/ramu/
$ rm -rf /mnt/backup/ramu/*
$ zcat ramu.tar.gz | tar -xvf - -C /mnt/backup/ramu
Finding files and archiving them
You can make a tarball of only certain types of files from a directory with the following one-
liner:
$mkdir /mnt/common/test
$ find /mnt/common/moby-dick/ -name "*.txt" | xargs tar -zcpf reports.tar.gz
$ find /mnt/common/moby-dick/ -name "*.txt" | xargs tar -jcpf reports.tar.bz2
Now check
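A quick check of what went into the archives created above (assuming they are in the current directory):
$ tar -ztvf reports.tar.gz
$ tar -jtvf reports.tar.bz2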
Untar in a different directory
If you've got a gzipped tarball and you want to untar it in a directory other than the one
you're in, do the following:
$ cd /mnt/backup
$ zcat reports.tar.gz | ( cd ./otherdir; tar zxvf - )
$ ls
Understand the above command; note: "-" tells tar to read the archive from standard input.
Extract individual files from a tarball
If you need a file that you've put into a tarball and you don't want to extract the whole archive,
you can do the following.
First, get a list of the files and find the one you want
$ cd /mnt/common/moby-dick
$ tar -ztvf moby1.tar.gz
Then extract the one you want
$ tar zxvf moby1.tar.gz file-anaconda.txt
Backup everything with tar
To make a backup of everything in a particular directory, first do this
$ cd /mnt/common/moby-dick/
$ ls -a > backup.all
f you don't really want *everything*, you can also edit backup.all and get rid of things you
don't want
To make the tarball, just do this:
$ tar -cvf newtarfile.tar `cat backup.all`
(remember, those are backtics)
Extracting Specific FiIes
Extract a file called etc/default/sysstat from config.tar.gz tarball:
$ tar cvzf /opt/test/config.tar.gz /mnt/backup/ramu
$ tar -ztvf config.tar.gz
$ tar -zxvf config.tar.gz <any file>
$ tar -xvf {tarball.tar} {path/to/file}
Some people prefers following syntax:
$ tar --extract --file={tarball.tar} {file}
Extract a directory called css from cbz.tar:
$ tar --extract --file=cbz.tar css
Wildcard based extracting
You can also extract those files that match a specific globbing pattern (wildcards). For
example, to extract from cbz.tar all files that begin with pic, no matter their directory prefix,
you could type:
Note: before attempting the following, you have to create a tar file named cbz.tar containing the
files you are going to extract.
$ tar -xf cbz.tar --wildcards --no-anchored 'pic*'
To extract all php files, enter:
$ tar -xf cbz.tar --wildcards --no-anchored '*.php'
-x: instructs tar to extract files.
-f: specifies filename / tarball name.
-v: Verbose (show progress while extracting files).
-j : filter archive through bzip2, use to decompress .bz2 files.
-z: filter archive through gzip, use to decompress .gz files.
--wildcards: instructs tar to treat command line arguments as globbing patterns.
--no-anchored: informs it that the patterns apply to member names after any /
delimiter.
Have you ever seen this error when using tar?
$ tar -czf etc.tgz /etc
Removing leading `/' from member names
Tar is removing the leading / from the archive file, and warning you about it. Although you
can redirect STDERR to /dev/null, doing so can result in missed errors. Instead, use tar
with the -P or --absolute-names switch. They do the same thing: leave the leading / in the
archived files.
$ tar -czPf etc.tgz /etc
When you untar the archive without -P, the leading / will still equate to your current working
directory. Use the -P when untarring to restore from archive to the absolute path name. For
example:
The following creates ./etc (dot, slash, etc)
$ tar -xzf etc.tgz
This overwrites /etc (slash, etc)!
$ tar -xzPf etc.tgz
PATH is an environmental variable in Linux and other Unix-like operating systems that tells
the shell which directories to search for executable files (i.e., ready-to-run programs) in
response to commands issued by a user. It increases both the convenience and the safety
of such operating systems and is widely considered to be the single most important
environmental variable.
Environmental variables are a class of variables (i.e., items whose values can be changed)
that tell the shell how to behave as the user works at the command line (i.e., in a text-only
mode) or with shell scripts (i.e., short programs written in a shell programming language). A
shell is a program that provides the traditional, text-only user interface for Unix-like
operating systems; its primary function is to read commands that are typed in at the
command line and then execute (i.e., run) them.
Practical - Setting Path
Login as root
$id
$echo $PATH
$useradd john
$passwd john
$su - john
$id
Verify john's PATH
$echo $PATH
you won't find :/sbin:/usr/sbin in it, so you can't run commands such as fdisk or shred from this account.
$fdisk -l
will give "command not found".
So you can set the path, but it is temporary for the current shell:
$PATH=$PATH:/sbin:/usr/sbin
To set under environment run
$export PATH
For a permanent change,
you can place the above two commands in the /etc/profile file, which always runs after
login.
Now check; you will see the added directories in john's PATH.
$echo $PATH
Now try
$ fdisk -l
Note: the command is now found and executed, but the fdisk binary will only do its work for uid 0
(root), because it is programmed that way.
So look through /sbin and /usr/sbin for commands that can be run by other uids.
Now create a testscript under /opt and execute the script
$vi /opt/testscript
#Append the following
#!/bin/bash
echo "THIS IS MY SCRIPT"
#Save
$cd /opt
Set execute permission
$chmod +x /opt/testscript
$./testscript # (./ means current path execution)
But what if you want to run the script from any other directory in your filesystem
hierarchy?
Then add the /opt directory to the user's PATH as mentioned above, or copy the script into a
directory that is already in the PATH. For example:
$PATH=$PATH:/opt
$cd /
$testscript
or
$cp /opt/testscript /bin or /usr/local/bin etc...
Now try running the script
$cd /
$testscript
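To make the change permanent for one user only (a sketch using the john account created earlier and the /opt script above):
$su - john
$echo 'PATH=$PATH:/opt' >> ~/.bash_profile
$echo 'export PATH' >> ~/.bash_profile
$source ~/.bash_profile
$testscript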
Daemons&Process
Application daemons are those which can be killed with no
effect on the system:
$ kill -15 <appd-pid>
For example: firefox, openoffice, the X server, etc.
System daemons are those which, if killed, will affect the
system:
$ kill -9 <sysd-pid>
For example: init, kerneld, ksoftirqd, khelper, kthread, kblockd
OBJECTIVES
Defining a process
Process states
Process management
Job control
System Information
Performance Related Information
What is a Process?
A process has many components and properties:
- exec thread
- PID
- priority
- memory context
- environment
- file descriptors
- security credentials
How Processes Are Created
One process forks a child, pointing to the same pages of memory
and marking the area as read-only. Then, the child execs the new
command, causing a copy-on-write fault, thus copying to a new area
of memory. A process can exec without forking. The child maintains
the process ID of the parent.
Process Ancestry
init is the first process started at boot time - always has PID 1.
Except init, every process has a parent.
Processes can be both a parent and a child at the same time.
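You can see the ancestry from the shell (a small sketch; pstree comes from the psmisc package):
$echo $$ -> PID of the current shell
$ps -o pid,ppid,comm -p $$ -> its PID, parent PID and command
$pstree -p | head -> the process tree starting from init (PID 1)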
Understand the Multiuser Environment.
One of the goals of UNIX was to enable a number of users to use the
system simultaneously (multiuser capability). Because several users
might also want to use several different programs simultaneously,
mechanisms must be available to allow these programs to run
simultaneously (multitasking capability).
The implementation of a multiuser and multitasking system appears
to be simultaneous in a single-processor system; true simultaneity is only
possible in a multiprocessor system.
Even in a single-processor system, advantages can be gained through
multitasking because waiting times for input or output from
processes can be used for other processes.
UNIX implements preemptive multitasking: each process is allowed a
maximum time with which it can work. When this time has expired,
the operating system takes processor time away from the process and
assigns it to another process waiting to run. Other operating
systems (such as Mac OS versions older than OS X) do not
intervene in this process cycle. Instead, control over the
processor must be released by the running process before another
process can run.
This can lead to one process hijacking the processor, leaving other
processes without processing time and blocking the system. The
operating system coordinates access to the resources available in
the system (hard drives, tapes, interfaces). If there is
competition among processes, e.g., for access to a tape device,
only one process can be granted access. The others must be
rejected. This coordination task is very complex and no operating
system is able to implement an ideal solution. The classic problem
involves a situation in which two or more processes exclusively
need the same resources, as illustrated in the following resource
conflict:
The following describes the resource conflict:
Process A needs resources Res.1 and Res.2.
Process B needs resources Res.2 and Res.1.
Process A has received access to Res.1 and would now also like
access to Res.2. In the meantime, however, B has already gained
access to Res.2 and, in turn, would like access to Res.1 as well.
If these two processes wait until what they need is available,
nothing more will happen-they are deadlocked.
Multithreading is an extension of multitasking, and helps solve
this problem. In multithreading, a number of parts independent from
one another (threads) can be produced within a process.
Multithreading increases the level of parallel processes with each
thread needing to be administered, which makes the use of a
multiprocessor system more valuable.
A clear distinction should be made here between programs and
processes: as a rule, a program exists only once in the system, but
there can be several processes that perform the same program.If a
number of users are active, both programs and processes can be used
independently of one another (such as a program used to display
directories).
Processes and Multitasking
Terminology can be confusing
Multiuser: system can simultaneously service more than one
online terminal
Multiprogramming: the system can execute more than one program at
the same time
Multitasking: system can execute two or more tasks at the same
time
In common usage, these all refer to the same thing
Multitasking Operating Systems
Multitasking OSs are designed to perform a complex juggling trick.
They must:
- Allocate resources, such as CPU cycles and memory, and assign
priorities so each process receives adequate attention
- Give higher priority jobs more or larger CPU time-slices without
neglecting lower priority jobs
- Handle jobs that are waiting for some resource (such as user input,
input from disk, or a shared output such as a printer) without
wasting CPU time
Multitasking on a Single CPU
Obviously, a single CPU cannot run multiple processes simultaneously.
The OS simulates simultaneity by switching between tasks at a high
rate. Each switch is a time-slice. Since thousands or hundreds of
thousands of CPU cycles can go by between user keystrokes, this
gives the appearance of simultaneous operation.
This resource allocation, priority processing, and time-slicing is
all done by the scheduler.
Unix Scheduling Algorithm
Unix schedules tasks in this order:
- Highest priority task that is Ready-to-Run and loaded in memory
(preempting the currently running task if necessary)
- Ties for priority are broken by time spent waiting (also known as
Round-Robin scheduling)
- If no one is ready to run, the kernel idles until the next time-slice
Unix Images and Processes
Each process receives a unique numerical process identifier (pid)
when it is started. Even if the same program is run multiple times,
each instance will have a unique PID. A process has an image in
RAM.
Forks and Spawns:
- When a process A is running, it can spawn another process B
- It does this using the fork system call
- B is said to be the child of A and A is known as the parent of B
- Initially, the child and parent are virtually identical
They each start with identical but independent copies of the RAM
image, but being separate processes, they have unique PIDs.
The child then calls the system call exec using the command name
and arguments inherited from the parent. From this point on, the
child and parent can go their separate ways. However, since they
both have access to the same open files and pipes, there is a
potential for communication between them (interprocess
communication). The shell is the parent of most of your processes.
The Shell is a Process
The principal process you interact with is the shell. The shell can
run some commands (builtins) itself, but
for most commands, it forks a separate process. It usually waits
for the command process to finish and then gives you a new shell
prompt.
What if you could tell the shell not to wait? You could then
instruct the shell to do something else while the first command was
running in the background. Voila! Multiprocessing in action!
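For example (a minimal sketch of running a command in the background):
$sleep 60 & -> the shell does not wait; the prompt returns immediately
$jobs -> the background job is listed
$wait -> block until all background jobs finish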
Compared with other Unices/Linuxes, Red Hat Linux ships with a
plethora of options for monitoring system utilization with regard
to CPU, memory, disk, etc.
$ uptime
1:1:16 up 3 days, :3, 5 users, load average: 0.00, 0.00, 0.00
This tells you exactly how long your system has been running; the three
load average figures cover the last 1, 5 and 15 minutes.
$ cat /proc/meminfo
/proc
A virtual directory created in RAM. It exists whenever the system is
running, it represents real-time information, and the values stored in
it are accurate. It doesn't occupy space on the disk.
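A few quick reads from /proc (a sketch; the exact fields vary by kernel version):
$grep -c processor /proc/cpuinfo -> number of CPUs/cores the kernel sees
$grep MemTotal /proc/meminfo -> total RAM
$cat /proc/loadavg -> the same load figures uptime reports
$ls /proc/$$/ -> the per-process directory of the current shell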
$ cat /proc/cpuinfo - CPU information
Display and update information about the top CPU processes:
$ top
top displays the top processes on the system and periodically
updates this information. The top command combines various
displays of CPU stats, memory and the real-time processes running
in the system, and refreshes every 5 seconds by default.
Process States
Unix uses several process states to determine the current condition of a
process:
- Runnable
- Stopped
- Page Wait
- Non-Interruptible Wait (typically for disk I/O or NFS requests)
- Sleeping
- Idle
- Terminated
OPTIONS
-q Renice top to -20 so that it will run faster. This can be used
when the system is being very sluggish to improve the possibility
of discovering the problem.
-dcount Show only count displays, then exit. A display is
considered to be one update of the screen. This option allows the
user to select the number of displays he wants to see before top
automatically exits. For intelligent terminals, no upper limit is
set. The default is 1 for dumb terminals.
-stime Set the delay between screen updates to time seconds. The
default delay between updates is 5 seconds.
INTERACTIVE MODE
h or ? Display a summary of the commands (help screen). Version
information is included in this display.
Q Quit top.
K Send a signal ("kill" by default) to a list of processes
R Change the priority (the "nice") of a list of processes.
S Change the number of seconds to delay between displays (prompt
for new number).
O Change the order in which the display is sorted. This command
is not available on all systems. The sort key names vary from
system to system but usually include: "cpu", "res", "size", "time".
The default is cpu.
THE DISPLAY
PID The process ID of each running process.
USER Owner of the process.
PRI Current priority of the process.
NICE Nice amount in the range -20 to 20, as established by the use
of the command nice.
RES Resident memory: current amount of process memory that resides
in physical memory, given in kilobytes.
STATE Current state (typically one of "sleep", "run", "idl",
"zomb", or "stop").
TIME Number of system and user cpu seconds that the process has
used.
SIZE Amount of memory the process needs
CPU Percentage of available cpu time used by this process.
COMMAND Name of the command that the process is currently running
PROCESS STATE CODES
Here are the different values that the s, stat and state output
specifiers (header "STAT" or "S") will display to describe the
state of a process.
D Uninterruptible sleep (usually IO)
R Running or runnable (on run queue)
S Interruptible sleep (waiting for an event to complete)
T Stopped, either by a job control signal or because it is being
traced.
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z Defunct ("zombie") process, terminated but not reaped by its
parent. zombie -- dead process
For BSD formats and when the stat keyword is used, additional
characters may
be displayed:
< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+ is in the foreground process group
Each process has a unique identification number (PID) which
characterises the process. The command top allows you to kill
processes using the k interactive command and entering the PID of
the relevant process. To leave top to just press the q key.
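Some related one-liners (a sketch using the procps tools that ship with Red Hat):
$top -b -n 1 | head -20 -> one batch-mode snapshot of top
$ps aux --sort=-%mem | head -5 -> the five biggest memory consumers
$ps -eo pid,ppid,stat,cmd | awk '$3 ~ /Z/' -> list zombie processes, if any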
free
$ free -m
$ free -c 5 -s 3
$ free -m
total used free shared buffers cached
Mem: 1003 91 22 0 91 6
-/+ buffers/cache: 201 02
Swap: 105 0 105
As you can see, my system has 1 GB of ram and 91 MB are in use
leaving 22MB free. If you look at the cached column, it shows 6
MB free. This is a good thing as cached memory is basically free
memory. This is where programs a user may have used earlier and
then quit are stored, just on the off chance that the user might
start up the program again. On the other hand, if the user starts
up a new program, this cache can be replaced for the new program
that is running. It should be mentioned that the caching works not
just for recently loaded programs but also for data, i.e. recently
used files and directories. Program loading is just a special case
of loading a file.
The -/+ buffers/cache section will show you what is really going
on. In my example, it shows that only 201 MB are in use and that
02 MB are free. The rest is just cached. What a user really needs
to worry about is that last line. If you start seeing the swap file
go into use that means that you are out of free ram and you are now
using space on your hard disk to help out. If this starts
happening, the best thing to do is run the top command and see what
is taking up all the memory. Then, if it is an unneeded program,
shut it down.
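For example (a small sketch; the numbers will of course differ per system):
$free -m -> memory usage in megabytes
$free -m -s 5 -> refresh every 5 seconds
$watch -n 3 free -m -> the same idea using watch
$grep -i memfree /proc/meminfo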
Signals
Signals are a software mechanism that are similar to a message of
some sort. They can be trapped and handled or ignored
Signals operate through two different system calls
1) The kill system call
2) The signal system call
The kill System Call
The kill system call sends a signal to a process kill is generally
used to terminate a process. It requires the PID of the process to
be terminated and the signal number to send as arguments.
The Signal System Call
The signal system call is much more diverse. When a signal occurs,
the kernel checks to see if the user had executed a signal system
call and was therefore expecting a signal. If the call was to ignore
the signal, the kernel returns.
Otherwise, it checks to see if it was a trap or kill signal. If not,
it processes the signal. If it was a trap or kill signal, the kernel
checks to see if core should be dumped and then calls the exit
routine to terminate the user process.
Common Unix Signals
$kill -l
SIGHUP Hang-up
SIGINT Interrupt
SIGQUIT Quit
SIGILL Illegal Instruction
SIGTRAP Trace Trap
SIGKILL Kill
SIGSYS Bad argument to system call
SIGPIPE Write on pipe with no one to read it
SIGTERM Software termination signal from kill
SIGSTOP Stop signal
See /usr/include/sys/signal.h
Signal Acceptance
There are a couple of possible actions to take when a signal occurs
Ignore it
Process it
Terminate
The superuser can send signals to any process.
Normal users can only send signals to their own processes
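Signals can also be caught or ignored from the shell with bash's trap builtin (a minimal sketch):
$trap 'echo caught SIGINT' INT -> handle Ctrl+C in the current shell
$trap '' TERM -> ignore SIGTERM
$trap - INT TERM -> restore the default actions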
Process Termination
A process is terminated by executing an exit system call or as a
result of a kill signal. When a process executes an exit system
call, it is first placed in a zombie state. In this state, it
doesn't exist anymore but may leave timing information and exit
status for its parent process. A zombie process is removed by
executing a wait system call by the parent process.
Process Cleanup
The termination of a process requires a number of cleanup actions
These actions include:
Releasing all memory used by the process
Reducing reference counts for all files used by the process
Closing any files that have reference counts of zero
Releasing shared text areas, if any
Releasing the associated process table entry, the proc structure
This happens when the parent issues the wait system call, which
returns the
terminated child's PID
kill - signal a process
kill is somewhat strangely named
Sends the specified signal to a process
Syntax: kill -sig_no pid
kill -l (display list of signals)
-sig_no - signal number to send
pid - process id of process to receive signal
The default signal is TERM (sig_no 15), or request-process-termination.
kill -9 pid terminates the process with extreme prejudice. As
usual, you can only kill your own processes unless you are the
superuser.
$ kill -9 <PID>
$ kill -l - lists all available signals
$ killall <process-name>
$ pidof <process-name>
$ pgrep <process-name>
$ pkill <process-name>
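For example (a small sketch using a throwaway sleep process):
$sleep 500 &
$pidof sleep -> PID of the sleep process
$pgrep -l sleep -> the same, with the name shown
$kill -15 $(pidof sleep) -> polite termination (SIGTERM)
$pkill -9 sleep -> forceful kill by name (SIGKILL)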
Job Control
Job control refers to the ability to selectively stop (suspend) the
execution of processes and continue (resume) their execution at a
later point. A job is one or more processes started from a single
command line. By default, only one job can be run in the
foreground. This means that when a job is being executed in the
foreground the command line is unavailable. When the job
has finished executing the command prompt is reissued.
It is also possible to suspend jobs and/or run multiple jobs in the
background, in which case the command line is still available in
the foreground, although any output from running background jobs
will still be displayed at the terminal. You can see the jobs
currently running or stopped in the background using the jobs
command.
The syntax for the jobs command is shown below:
jobs option(s)
Common jobs options are:
Option Explanation:
-l Shows the job number, its PID, its status, and its name
-p Shows just the PIDs of running jobs
Issuing the jobs command without any options will show a list of
all running, stopped and
suspended background jobs.
An example of using the job command is illustrated below:
$ jobs -l
[1]- 1229 Running tail -n5 -f /var/log/secure
[2]+ 1230 Stopped joe fred
In the above example there are two jobs in the background, one
running and one stopped.
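For example (a minimal sketch of suspending and resuming jobs):
$sleep 300 & -> job 1, started in the background
$sleep 400 -> job 2, running in the foreground
Ctrl+Z -> suspend the foreground job
$jobs -l -> list both jobs with their PIDs
$bg %2 -> resume job 2 in the background
$fg %1 -> bring job 1 back to the foreground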
File Permissions
File permissions are assigned to:
1. the owner of a file
2. the members of the group the file is assigned to
3. all other users
Permissions under Linux are configured for each file and
directory.
There are three levels of permissions:
1. The permissions that apply to the owner of the file. The owner of
a file is by default the user that created the file.
2. The permissions that apply to all members of the group that is
associated with the file.
3. The permissions that apply to all other users on the system.
Permissions can only be changed by the owner, and root of course.
For a file, these permissions mean the following:
read allow the user to read the contents of the file, for instance
with cat or less.
write allow the user to modify the contents of the file,for
instance with vi.
execute allow the user to execute the file as a program, provided
that the file is indeed an executable program (such as a shell
script).
For a directory, these permissions have a slightly different
meaning:
read allow the user to view the contents of the directory, for
instance with ls.
write allow the user to modify the contents of the directory.
In other words: allow the user to create and delete files, and to
modify the names of the files. Note: Having write permissions on a
directory thus allows you to delete files, even if you have no
write permissions on that file!
execute allow the user to use this directory as its current
working directory. In other words: allow the user to cd into it.
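A quick demonstration of the directory bits (a sketch; it assumes the non-root john account created earlier in these notes):
$mkdir /tmp/permdemo; touch /tmp/permdemo/a
$chmod 644 /tmp/permdemo -> read and write for the owner, read only (no execute) for others
$su - john -c 'ls /tmp/permdemo' -> the file names are listed (read works)
$su - john -c 'cd /tmp/permdemo' -> fails: the execute (search) bit is missing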
r - read
w - write
x - execute
The <who> part can be:
- u for the owner (user) of the file
- g for the group assigned to the file
- o for all other users
- a for all (owner+group+others)
The <operator> can be:
- + to add permissions
- - to delete permissions
- = to clear all permissions and set to the permissions specified
Symbolic way
$ useradd sachin
$ passwd sachin
$ useradd dhoni
$ passwd dhoni
$ groupadd market;usermod -G market dhoni
$ useradd shewag
$ passwd shewag
$ groupadd market;usermod -G market shewag
$ mkdir /opt/perm/; touch /opt/perm/file{1..6}
$ mkdir /opt/perm/{data1,data2}
$ cd /opt/perm
$ ll -d data1
drwxr-xr-x 2 root root 4096 Jul 29 20:15 data1
$ chown sachin data1
$ ll -d data1
$ chgrp market data1
$ ll -d data1
$ chmod u-w data1
$ ll -d data1
$ chmod g+w data1
$ ll -d data1
$ chmod o+w,o-rx data1
$ ll -d data1
$ ll -d data2
drwxr-xr-x 2 root root 4096 Jul 29 20:15 data2
$ chown -Rv sachin.market data2
$ ll -d data2
$ chmod u-rwx data2
$ ll -d data2
$ chmod g+w,g-x data2
$ ll -d data2
$ chmod -Rv o+w,o-r data2
$ ll -d data2
Octal way
$ ll file1
-rw-r--r-- 1 root root 0 Jul 29 20:15 file1
$ chmod 777 file1
$ ll file1
$ chmod 666 file2
$ ll file2
$ chmod 46 file3
$ ll file3
$ chmod 541 file4
$ ll file4
$ chmod 24 file5
$ ll file5
$ chmod 000 file6
$ chmod 0 file6
This table shows what the numeric values mean:

Octal digit  Text equivalent  Binary value  Meaning
0            ---              000           All types of access are denied
1            --x              001           Execute access is allowed only
2            -w-              010           Write access is allowed only
3            -wx              011           Write and execute access are allowed
4            r--              100           Read access is allowed only
5            r-x              101           Read and execute access are allowed
6            rw-              110           Read and write access are allowed
7            rwx              111           Everything is allowed
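To see the octal and symbolic forms side by side, stat can help (a sketch using the files created above):
$ stat -c '%a %A %n' /opt/perm/file1
$ stat -c '%a %A %n' /opt/perm/data1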
Umask
User Mask
New files should not be created with 666! To avoid this problem a
permission mask exists. It is obviously important to know with what
permissions new files and directories are created. Under Linux,
it's not really easy to tell, since the default permissions can be
modified by setting a umask (with the umask command).
If no umask were set (which never happens, by the way), a file
would always be created with permissions 666 (rw-rw-rw-) and a
directory would get 777 (rwxrwxrwx). In actual practice however, a
umask is set, and this number is subtracted from these permissions.
So, with a umask of 022, the default permissions for a file will
become 644 (rw-r--r--, 666-022) and the default permissions for a
directory will become 755 (rwxr-xr-x, 777-022).
The default umask depends on your distribution, and whether your
distribution uses something called User Private Groups.
- Red Hat assigns a umask of 002 to regular users, and 022 to root.
- SUSE assigns a umask of 022 to all users, including root.
- What is your current default permission (umask)?
- How do you set your default permission?
- Umask defines what permissions, in octal, cannot be set
- Umask stands for user file creation mode mask
- In essence, the system sets the default permission on files and
directories
- If there were no umask, the default permission on a file
would be 666
- Usually set in a login script
- It is the inverse of the normal octal permissions
- "umask -S" shows your umask in symbolic form
- For files, Linux removes the "x" permissions (the 1s), so 777 becomes
the same as 666
- Here are the common umask values (for files):
-- 000 = full access (r+w) to everyone, or 666
-- 006 = no access to other, or 660
-- 022 = full access (r+w) to user and r to g and o, or 644
-- 066 = full access (r+w) to user and no access to g + o, or 600
Normally you can think of it as subtracting the umask from 666 for files, but be very careful,
as the umask is really a bit mask rather than true subtraction. In Fedora Linux the base for
files is 666, but let's test it out...
-- View the current umask setting
$umask
-- shows your umask in symbolic form
$ umask -S
- For a directory, the umask is subtracted from 777:
  777
- 022
-----
  755
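To watch the umask at work, here is a small illustrative test (the /tmp/umasktest directory is
just an example name, not part of the course setup):
$ umask 027 -> new files get 640 (rw-r-----), new dirs get 750 (rwxr-x---)
$ mkdir /tmp/umasktest ; cd /tmp/umasktest
$ touch f1 ; mkdir d1
$ ls -ld f1 d1 -> verify the permissions match the umask
$ umask 022 -> restore the usual default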
System-wide umask for all users in /etc/profile
Individual umask in $HOME/.bash_profile or $HOME/.profile
Default value of umask is:
For root 022
For user 002 (if user private groups are used) or 022 (otherwise)
The umask determines what permission bits will be set on a new file
when it is created. The umask is an octal number that specifies
which of the permission bits will not be set.
Task
I
change Symbolic way
1.Give 04 to abc file
2.Give 41 to abc file
3.Give 006 to abc file
4.give 0 to abc file
II
change Octal way
1.change to octal mode r-xrw-r-x to abc chmod 565
2.change to octal mode --xr-xr-- to abc chmod 154
3.change to octal mode rw----rwx to abc chmod 607
4.change to octal mode ---r-x--- to abc chmod 050
III
symbolic way
1.change r-xrw-r-x to rw--wxrwx to abc chmod u+w,u-x,g-r,g+x,o+w
2.change --xr-xr-- to rwxrwxrw- to abc chmod u+rw,g+w,o+w
3.change rw----rwx to --x----wx to abc chmod u-rw,u+x,o-r
4.change ---r-x--- to rwx-w-rwx to abc chmod u+rwx,g-rx,g+w,o+rwx
Administrative commands and Low-level commands
Low-level
/bin This directory contains executable programs which are needed in single user mode
and to bring the system up or repair it.
Administrative
/sbin Like /bin, this directory holds commands needed to boot the system,
but which are usually not executed by normal users.
Low-level
/usr/bin This is the primary directory for executable programs. Most
programs executed by normal users which are not needed for booting
or for repairing the system and which are not installed
locally should be placed in this directory.
Administrative
/usr/sbin This directory contains program binaries for system administration which are not
essential for the boot process, for mounting /usr, or for system repair.
Understanding UNIX / Linux file system
A conceptual understanding of the file system, especially its data structures and related terms, will help you
become a successful system administrator. I have seen many new Linux system administrators without
any clue about the file system. This conceptual knowledge can be applied to restore a file system in an
emergency situation.
What is a File?
A file is a collection of data items stored on disk; in other words, it is the object that stores information, data,
music (mp3), pictures, movies, sounds, books etc. In fact, whatever you store in the computer is stored
in the form of a file. Files are always associated with devices like hard disks, floppy disks etc. A file is the last
object in your file system tree. See the Linux/UNIX rules for naming file and directory names.
What is a directory?
A directory is a group of files. Directories are divided into two types:
* Root directory - Strictly speaking, there is only one root directory in your system, which is
denoted by / (forward slash). It is the root of your entire file system and can not be renamed or deleted.
* Sub directory - A directory under the root (/) directory is a subdirectory; it can be created and
renamed by the user.
Directories are used to organize your data files and programs more efficiently.
Linux supports numerous file system types
3. Ext2: This is like the UNIX file system. It has the concepts of blocks, inodes and directories.
4. Ext3: It is the ext2 filesystem enhanced with journalling capabilities. Journalling allows fast
file system recovery. Supports POSIX ACL (Access Control Lists).
5. Isofs (iso9660): Used by the CDROM file system.
6. Sysfs: It is a ram-based filesystem initially based on ramfs. It is used to export kernel
objects so that end users can use them easily.
7. Procfs: The proc file system acts as an interface to internal data structures in the kernel.
It can be used to obtain information about the system and to change certain kernel
parameters at runtime using the sysctl command. For example, you can find out CPU
information with the following command:
$ cat /proc/cpuinfo
What is a UNIX/Linux File system?
A UNIX file system is a collection of files and directories. Each file system is stored
in a separate whole disk partition. The following are a few of the file systems:
* / - Special file system that incorporates the files under several directories including /dev,
/sbin, /tmp etc
* /usr - Stores application programs
* /var - Stores log files, mails and other data
* /tmp - Stores temporary files
Exploring Linux File System Hierarchy
A typical Linux system has the following directories:
=> / : This is the root directory.
=> /bin : This directory contains executable programs which are needed in single user
mode and to bring the system up or repair it.
=> /boot : Contains static files for the boot loader. This directory only holds the files which
are needed during the boot process.
=> /dev : Special or device files, which refer to physical devices such as hard disk,
keyboard, monitor, mouse and modem etc
=> /etc : Contains configuration files which are local to the machine. Some larger software
packages, like Apache, can have their own subdirectories below /etc i.e. /etc/httpd. Some
important subdirectories in /etc:
=> /home : Your sweet home to store data and other files. However, in a
large installation the structure of the /home directory depends on local
administration decisions.
=> /lib : This directory should hold those shared libraries that are
necessary to boot the system and to run the commands in the root
filesystem.
=> /lib64 : 64 bit shared libraries that are necessary to boot the
system and to run the commands in the root filesystem.
=> /mnt : This directory contains mount points for temporarily mounted
filesystems.
=> /opt : This directory should contain add-on packages, such as a
downloaded Firefox installation or other static application files.
=> /proc : This is a mount point for the proc filesystem, which
provides information about running processes and the kernel.
=> /root : This directory is usually the home directory for the root user.
=> /sbin : Like /bin, this directory holds commands needed to boot the
system, but which are usually not executed by normal users; root /
admin user specific commands go here.
=> /tmp : This directory contains temporary files which may be deleted
with no notice, such as by a regular job or at system boot up.
=> /usr : This directory is usually mounted from a separate partition.
It should hold only sharable, read-only data, so that it can be
mounted by various machines running Linux (useful for diskless clients
or a multiuser Linux network such as a university network). Programs,
libraries, documentation etc. for all user-related programs.
=> /var : This directory contains files which may change in size, such
as spool and log files.
=> /lost+found : Every partition has a lost+found directory at its top
level. Files that were recovered during filesystem failures end up here,
e.g. after ext2/ext3 fsck recovery.
* /etc/skel : When a new user account is created, files from this directory are usually copied into
the user's home directory.
* /etc/X11 : Configuration files for the X11 window system.
* /etc/sysconfig : Important configuration files used by the SysV scripts stored in /etc/init.d and
the /etc/rcX.d directories
* /etc/cron.* : cron daemon configuration files which are used to execute scheduled commands
Common Linux log files: name and usage
* /var/log/messages: General messages and system related stuff
* /var/log/auth.log: Authentication logs
* /var/log/kern.log: Kernel logs
* /var/log/cron.log: Crond logs (cron job)
* /var/log/maillog: Mail server logs
* /var/log/qmail/ : Qmail log directory (more files inside this directory)
* /var/log/httpd/: Apache access and error logs directory
* /var/log/lighttpd: Lighttpd access and error logs directory
* /var/log/boot.log : System boot log
* /var/log/mysqld.log: MySQL database server log file
* /var/log/secure: Authentication log
* /var/log/utmp or /var/log/wtmp : Login records file
* /var/log/yum.log: Yum log files
Go to the /var/log directory:
$ cd /var/log
View the common log file /var/log/messages using any one of the
following commands:
$ tail -f /var/log/messages
$ less /var/log/messages
$ more -f /var/log/messages
$ vi /var/log/messages
Device driver files: character, block, socket
Type field: The first character in an ls -l listing indicates a file type of one of the following
(an example follows this list):
* d = directory.
* l = symbolic link.
* s = socket; sockets are special files offering a type of network interface.
* p = named pipe; used for communication between programs, handled outside of a kernel device driver.
* - = regular file.
* c = character (unbuffered) device special file.
* b = block (buffered) device special file.
* D = door; a door is a special file for inter-process communication between a client and a server.
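To see several of these type characters at once, look at the first column of an ls -ld listing
over a mix of objects (the device names below are only examples and assume a typical
system with a SCSI/SATA disk):
$ ls -ld /etc/passwd /bin /dev/sda /dev/tty1
The listing should start with - (regular file), d (directory), b (block device) and c (character
device) respectively.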
Ref -
http://www.securityfocus.com/infocus/1872
http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/index.html
http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
http://www.comptechdoc.org/os/linux/howlinuxworks/linux_hlfilesystems.html
FSTAB
fstab is one of the 10 most critical and important configuration files. It
is stored in the /etc directory, where all the configuration files are stored.
fstab stands for "File System TABle" and this file contains information about
the hard disk partitions and removable devices in the system. It contains
information about where the partitions and removable devices are mounted,
which devices are used for mounting them, which filesystem they are
using and what options are assigned to them.
The file fstab contains descriptive information about the various file
systems. fstab is only read by programs, and not written; it is the duty of the
system administrator to properly create and maintain this file. Each filesystem
is described on a separate line; fields on each line are separated by tabs or
spaces. Lines starting with '#' are comments. The order of records in fstab is
important because fsck, mount, and umount sequentially iterate through fstab
doing their thing.
Example of a fstab file content :
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
LABEL=/home /home ext3 defaults 1 2
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
LABEL=/tmp /tmp ext3 defaults 1 2
LABEL=/u01 /u01 ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/var /var ext3 defaults 1 2
/dev/hda6 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom udf,iso9660 noauto,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/sda1 /mnt/usb_hdd vfat noauto 0 0
\________/ \___________/ \_________/ \____________/ \_/ \_/
| | | | | |
1st 2nd 3rd 4th 5th 6th
There are total six columns in the fstab file separated by spaces or tabs. Each
column holds different information about the device. For adding any new device add a fresh
row. Each row stands for a partition or removeable device in the system.
1st Column :
~~~~~~~~~~
The first column contains the partition's label, eg. "LABEL=/boot", or the device's path, eg.
"/dev/cdrom". The device path tells the system which device file (and hence which
driver) to use when mounting it.
2nd Column :
~~~~~~~~~~
The second field (fs_file) describes the mount point for the filesystem.
For swap partitions, this field should be specified as `none'. If the name of
the mount point contains spaces these can be escaped as `\040'.
The second column shows the mount point specified for a device in the fstab file. The
mount point is actually the directory where that particular device (mentioned in the first
column) will be mounted and through which we can view and modify the content of that
partition. You can change the default mount point listed in the column if you are not
satisfied with the one your system has
given you.
3rd Column :
~~~~~~~~~~
The third column in the file specifies the file system type of the device or partition. Many
different file systems are supported by Linux; the most common ones are:
1) autofs
2) devpts
3) ext2
4) ext3
5) iso9660
6) nfs
7) ntfs
8) proc
9) swap
10) tmpfs
11) udf
12) ufs
13) vfat
14) xfs
If you are not sure of the file system type of the device then set the value to "auto"
and the system will itself determine the file system type and will mount the device with that
file system.
4th Column :
~~~~~~~~~~
The fourth column lists the mount options applied to the partition at the time of booting.
There are many options which constitute the fourth column. They are as follows:
1) ro - Read Only
2) rw - Read Write
3) auto - Mount on startup
4) noauto - Do not mount on startup
5) user - Any user can mount, but only unmount device mounted by him
6) nouser - Only root can mount & unmount the device
7) users - Every user can mount and also unmount the device mounted by
others
8) owner - Same as user (above no. 5)
9) dev - User can use device driver to mount the device
10) nodev - User cannot use device driver to mount the device
11) exec - Users can execute binaries on the partition
12) noexec - Users cannot execute binaries on the partition
13) async - Asynchronous, whenever a file is saved it will be first saved in the
RAM and after 30 seconds all the queued files will be written on the
hard disk
14) sync - Synchronous, whenever a file is saved it will be directly written to the
hard disk
15) suid - Allow set-user-identifier for the device, where users are allowed to run
binaries even though they do not have execute permissions. These
binaries are temporarily made available to them to perform certain
tasks
16) nosuid - Do not allow set-user-identifier
17) defaults - auto, rw, dev, async, suid, exec & nouser (a sample entry using some of these
options follows)
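For instance, a hypothetical entry for a second disk, combining some of these options
(/dev/sdb1 and the /data mount point are assumptions, not taken from the example above),
could look like this:
/dev/sdb1    /data    ext3    defaults,noexec,nosuid    1 2
After creating the mount point with mkdir -p /data, running mount -a mounts every fstab
entry that is not already mounted, so the new line can be tested without a reboot.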
5th Column :
~~~~~~~~~~
The 5th column is for the backup (dump) option. This column contains either 0 or 1, where "0"
stands for "NO" and "1" stands for "YES". The dump utility checks this field: if it's
"0", dump will ignore that filesystem, but if it's "1" then the backup option is enabled. In the
example above it is enabled only for the ext3 file systems and disabled
for the rest of the file systems.
6th Column :
~~~~~~~~~~
The 6th column is for "fsck" option. fsck stands for file system check. This column
defines the order in which the system should scan the partitions on start up. The / partition
is assigned top priority i.e. 1 and the rest of the partitions are assigned second priority i.e.
2. If the value is set to 0, no scanning will be done at the time of startup. If the same number
is given to different partitions then the partitions are scanned together with equal priority.
This minimizes error because if a link is present on one partition with higher priority and the
source file in another partition with a priority lower than the link, it will give an error.
The dmesg command is used to print the kernel messages in Linux and other Unix-like
operating systems to standard output (which by default is the display screen).
A kernel is the core of an operating system. It is the first part of the operating system that is
loaded into memory when a computer boots up (i.e., starts up), and it controls virtually
everything on a system. The numerous messages generated by the kernel that appear on
the display screen as a computer boots up show the hardware devices that the kernel
detects and indicate whether it is able to configure them.
dmesg obtains its data by reading the kernel ring buffer. A buffer is a portion of a
computer's memory that is set aside as a temporary holding place for data that is being
sent to or received from an external device, such as a hard disk drive (HDD), printer or
keyboard. A ring buffer is a buffer of fixed size for which any new data added to it
overwrites the oldest data in it.
dmesg can be very useIul when troubleshooting or just trying to obtain inIormation about the
hardware on a system. Its basic syntax is dmesg [options]
Invoking dmesg without any of its options (which are rarely used) causes it to write all the
kernel messages to standard output. This usually produces far too many lines to fit into the
display screen all at once, and thus only the final messages are visible. However, the
output can be redirected to the less command through the use of a pipe (designated by the
vertical bar character), thereby allowing the startup messages to be viewed one screenful
at a time:
dmesg | less
less allows the output to be moved forward one screenful at a time by pressing the SPACE
bar, backward by pressing the b key and removed by pressing the q key. (The more
command could have been used here instead of the less command; however, less is newer
than more and has additional functions, including the ability to return to previous pages of
the output.)
When a user encounters a problem with the system, it can be convenient to write the output
of dmesg to a file and then send that file by e-mail to a system administrator or other
knowledgeable person for assistance. For example, the output could be redirected to a file
named boot_messages using the output redirection operator (designated by a rightward
facing angle bracket) as follows:
dmesg > boot_messages
Because of the length of the output of dmesg, it can be convenient to pipe its output to
grep, a filter which searches for any lines that contain the string (i.e., sequence of
characters) following it. The -i option can be used to tell grep to ignore the case (i.e., lower
case or upper case) of the letters in the string. For example, the following command lists all
references to USB (universal serial bus) devices in the kernel messages:
dmesg | grep -i usb
And the following tells dmesg to show all serial ports (which are represented by the string
tty):
dmesg | grep -i tty
The dmesg and grep combination can also be used to show how much physical memory
(i.e., RAM) is available on the system:
dmesg | grep -i memory
The following command checks to confirm that the HDD(s) is running in DMA (direct
memory access) mode:
dmesg | grep -i dma
The output of dmesg is maintained in the log file /var/log/dmesg, and it can thus also be
easily viewed by reading that file with a text editor, such as vi or gedit, or with a command
such as cat, e.g.,
cat /var/log/dmesg | less
Ref - http://linuxgazette.net/issue59/nazario.html
lspci is a command on Unix-like operating systems that prints detailed information about all
PCI buses and devices in the system. It is based on a common portable library, libpci, which
offers access to the PCI configuration space on a variety of operating systems.
Example output on a Linux system:
# lspci
00:00.0 Host bridge: Intel Corporation 82815 815 Chipset Host Bridge and Memory
Controller Hub (rev 11)
00:02.0 VGA compatible controller: Intel Corporation 82815 CGC [Chipset Graphics
Controller] (rev 11)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 03)
00:1f.0 ISA bridge: Intel Corporation 82801BAM ISA Bridge (LPC) (rev 03)
00:1f.1 IDE interface: Intel Corporation 82801BAM IDE U100 (rev 03)
00:1f.2 USB Controller: Intel Corporation 82801BA/BAM USB (Hub #1) (rev 03)
00:1f.3 SMBus: Intel Corporation 82801BA/BAM SMBus (rev 03)
00:1f.4 USB Controller: Intel Corporation 82801BA/BAM USB (Hub #2) (rev 03)
00:1f.5 Multimedia audio controller: Intel Corporation 82801BA/BAM AC'97 Audio (rev 03)
01:03.0 CardBus bridge: O2 Micro, Inc. OZ6933/711E1 CardBus/SmartCardBus Controller
(rev 01)
01:03.1 CardBus bridge: O2 Micro, Inc. OZ6933/711E1 CardBus/SmartCardBus Controller
(rev 01)
01:0b.0 PCI bridge: Actiontec Electronics Inc Mini-PCI bridge (rev 11)
02:04.0 Ethernet controller: Intel Corporation 82557/8/9 [Ethernet Pro 100] (rev 08)
02:08.0 Communication controller: Agere Systems WinModem 56k (rev 01)
If many devices are shown as unknown (e.g. "Unknown device 2830 (rev 02)"), issuing the
command 'update-pciids' will usually do the trick.
Detailed information:
$ lspci -vv
To update the pci-ids information in /usr/share/hwdata/pci.ids, run update-pciids.
Bash
Descended from the Bourne Shell, Bash is a GNU product, the "Bourne
Again SHell." It's the standard command line interface on most Linux
machines. It excels at interactivity, supporting command line editing,
completion, and recall. It also supports configurable prompts - most
people realize this, but don't know how much can be done.
Bash reads the text of a script and interprets it, executing the commands it contains.
This chapter is based on Chapters 6 through 8 of the Siever book, Linux in a Nutshell
[Siever 2003] .
Figure 1 illustrates some of the shells found on UNIX/Linux systems.
Shell Description
bash Bourne-again shell (GNU)
csh C shell (BSD)
jsh Job control shell (SVR4)
ksh Korn shell (Bell Labs)
rc Plan 9 shell (Bell Labs)
rsh Remote shell (TCP/IP)
sh Bourne shell (UNIX 7th Edition)
tcsh Popular extension of the C shell
zsh Popular extension of the Korn shell
Figure 1: Some UNIX/Linux Shells
Standard GNU/Linux systems use bash as the default shell. Some distributions, e.g. Red
Hat Linux, have /bin/sh as a symbolic link to /bin/bash and /bin/csh as a symbolic link to
/bin/tcsh.
Common Features
Figure 2 illustrates some features that are common to both bash and tcsh.
Symbol Description
> Redirect output
>> Append output to a file
< Redirect input
<< Redirect input ("Here" document)
| Pipe output
& Run process in background
; Separate commands on one line
* Match character(s) in filename
? Match single character in filename
!n Repeat command number n
[...] Match any characters enclosed
(...) Execute commands in a subshell
"..."
Quote allowing variable and command
expansion
'...' Literal string
`...` Command substitution
\ Quote following character
$var Variable expansion
$$ Process D
$0 Command name
$n nth argument (0...9)
$* All arguments
$? Exit status
Begin comment
Figure 2: Common symbols
In addition to these symbols, both shells have some common commands, as illustrated in
Figure 3.
Command Description
bg Background execution
break Break out of a loop
cd Change directory
continue Resume a loop
echo Display output
eval Evaluate arguments
exec Execute a new program
fg Foreground execution
jobs Show active jobs
kill Terminate running job(s)
shift Shift positional parameters
stop Suspend a background job
suspend Suspend a foreground job
umask Set or list file permissions
unset Erase variable or function definition
wait Wait for a background job to finish
Reference
http://en.wikipedia.org/wiki/Bash
Practical
BASH
Login root
passwd *****
When you log in you get a vc (virtual console) with the help of the tty driver (/dev/tty*) and a
shell (/bin/bash).
In Linux the default shell is bash (/bin/bash).
To check the shells supported by your OS
$cat /etc/shells
To swap in other shell
$sh
$ksh etc....
check your bash
$ps
To start another bash (a child inherited from the current one), just run:
$bash
Now check with:
$ ps
You will have two bash processes (parent and child). If you kill the child bash it won't affect
the parent, but if you do it vice versa, check what happens.
List the bash shell PIDs:
$ps -el | grep bash
Now try loading another bash and kill it with the command:
$ kill -9 <pid of bash>
Bash
Here's a neat Bash-prompt trick. At a basic Bash prompt, press the up-arrow key and you'll
see the last command you typed in. Press again and again to rotate through all the
commands you typed previously, stored for you in Bash history.
You will only see the commands you typed in for your login, whether that's for a specific
user or for root.
Here are some additional Bash tips, all of which are commands that you type at the Bash prompt:
To display a full numbered list of all stored commands, type:
history
To retrieve the eighth command previously entered, type:
!8
To get the last command that started with the letter V, type:
!v
Bash history isn't lost when you reboot or shutdown either. Clever isn't it?
Bash Shortcuts
To go along with Bash basics above, here are some basic shorthand commands:
To go back one step in the directory tree, type:
cd ..
To change to the /home/{logged in username} directory, type:
cd ~
To change to the directory of a specific user when you have more than one, type the
previous command followed by the name of the user:
cd ~bruno
cd ~anna
To change the directory /home/{logged in username}/Downloads/Backgrounds, type:
cd ~/Downloads/Backgrounds
For really fast typing don't forget to use the Tab-key for auto-completion.
Typing the following does the same as the previous example, a lot faster:
cd ~/D {press Tab Key} /B {press Tab key}
Bash Script
You probably know that the "rm" command removes (or deletes) a file permanently.
Wouldn't it be nice if we could move it to the recycle bin with a simple command instead?
You can. To do that, you can make your own command called Del with a brief script.
To build the script, open a terminal and type the following lines:
su
{type your root password} (Note: you should see the # prompt)
kedit /usr/bin/del
This opens a new window in the keditor into which you should type the following script:
#!/bin/bash
mv "$1" ~/Desktop/Trash
#End script
The next step is to save the file using kedit's File, Save As menu command. Then, back at
the Bash prompt logged in as root, type this line to make the new script executable:
$chmod 0775 /usr/bin/del
Now whenever you type the del command, it will run your script. For example, if you came
across the "tessst" file and you wanted to move it to the trash, you could just type this at the
Bash prompt:
$del tessst
That will perform the same action as:
$mv tessst /home/{logged in username}/Desktop/Trash
Sure, this was a very short example: a three-line script that only holds one command. But you
could add as many lines to the script as you wanted to and execute it with a simple three-
letter word. If there are more commands in the script it will execute them in the order that
they appear. Because /usr/bin is in your path you only have to type "del" to execute it from
anywhere in the file system.
Tab CompIetion Tip
Did you know you can use the Tab key to auto-complete commands on the command line?
Just type a few characters that start a command and press the Tab key. The command or
name of an existing directory or file will be completed.
Try this. Type the following and then press the Tab key:
$ cd /u
Now add an "s" and press Tab, type "h" and press Tab. The result should be:
$ cd /usr/share/
Now type "f" "o" "n" and press Tab, "t" press Tab, "d" Tab, and press the Enter key. That
should put you in:
/usr/share/fonts/ttf/decoratives
Type the following and press Enter:
ls
That'll bring up a list of all the fancy ttf fonts on your system.
So next time you have to type a long command like this:
# cp synthesis.hdlist.update_source.cz /var/lib/urpmi/synthesis.hdlist.update_source.cz
... try it this way instead:
# cp sy (Tab key), /v (Tab key), li (Tab key), u (Tab key), sy (Tab key)
And because the full command is on your screen, the light will go on if it hasn't already!
(Note: This command works only if the file "synthesis.hdlist.update_source.cz" is in your
/home directory)
How about a little more on the Tab key and commands. If you don't remember exactly how
a command was written, type in the first character or two and hit the Tab key. You'll get a
list of all the commands that start with the same character(s).
If you wish to know what a certain command does -- say, mkmanifest -- use the whatis
command, like this:
$ whatis mkmanifest
mkmanifest (1) - Makes a list of file names and their DOS 8+3 equivalents.
Introduction to BASH
* Developed by the GNU project.
* The default Linux shell.
* Backward-compatible with the original sh UNIX shell.
* Bash is largely compatible with sh and incorporates useful features from the Korn shell
ksh and the C shell csh.
* Bash is the default shell for Linux. However, it also runs on every version of Unix and a
few other operating systems such as MS-DOS, OS/2, and Windows platforms.
Quoting from the official Bash home page:
Bash is the shell, or command language interpreter, that will appear in the GNU operating
system. It is intended to conform to the IEEE POSIX P1003.2/ISO 9945.2 Shell and Tools
standard. It offers functional improvements over sh for both programming and interactive
use. In addition, most sh scripts can be run by Bash without modification.
The improvements offered by BASH include:
The Bash syntax is an improved version of the Bourne shell syntax. In most cases Bourne
shell scripts can be executed by Bash without any problems.
* Command line editing.
* Command line completion.
* Unlimited size command history.
* Prompt control.
* Indexed arrays of unlimited size (Arrays).
* Integer arithmetic in any base from two to sixty-four.
* Bash startup files - You can run bash as an interactive login shell, or interactive non-
login shell. See Bash startup files for more information.
* Bash conditional expressions: Used in composing various expressions for the test
builtin or [[ or [ commands.
* The Directory Stack - History of visited directories.
* The Restricted Shell: A more controlled mode of shell execution.
* Bash POSIX Mode: Making Bash behave more closely to what the POSIX standard
specifies.
In Linux, a lot of work is done using a command line shell. Linux comes preinstalled with
Bash. Many other shells are available under Linux:
* tcsh - An enhanced version of csh, the C shell.
* ksh - The real, AT&T version of the Korn shell.
* csh - Shell with C-like syntax, standard login shell on BSD systems.
* zsh - A powerful interactive shell.
* scsh- An open-source Unix shell embedded within Scheme programming language.
Shell Scripting
Starting a Script With #!
1. It is called a shebang or a "bang" line.
2. It is nothing but the absolute path to the Bash interpreter.
3. It consists of a number sign and an exclamation point character (#!), followed by the full
path to the interpreter such as /bin/bash.
4. All scripts under Linux execute using the interpreter specified on the first line[1].
5. Almost all bash scripts begin with #!/bin/bash (assuming that Bash has been
installed in /bin).
6. This ensures that Bash will be used to interpret the script, even if it is executed under
another shell[2]. (A minimal example script follows this list.)
7. The shebang was introduced by Dennis Ritchie between Version 7 Unix and Version 8 at Bell
Laboratories. It was then also added to the BSD line at Berkeley [3].
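Putting the pieces together, a minimal first script might look like this (hello.sh is just an
illustrative file name, not part of the course material):
#!/bin/bash
# hello.sh - smallest possible demonstration of the shebang line
echo "Hello from $0"
Make it executable and run it:
$ chmod +x hello.sh
$ ./hello.sh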
Ignoring An Interpreter Line (shebang)
* If you do not specify an interpreter line, the default is usually /bin/sh. But it is
recommended that you set the #!/bin/bash line.
/bin/sh
For a system boot script, use /bin/sh:
#!/bin/sh
sh is the standard command interpreter for the system. The current version of sh is in the
process of being changed to conform with the POSIX 1003.2 and 1003.2a specifications for
the shell.
Did you know?
* It is the shell that lets you run different commands without having to type the full
pathname to them, even when they do not exist in the current directory.
* It is the shell that expands wildcard characters, such as * or ?, thus saving you
laborious typing.
* It is the shell that gives you the ability to run previously run commands without having
to type the full command again, by pressing the up arrow, or pulling up a complete list with
the history command.
* It is the shell that does input, output and error redirection.
Why shell scripting?
* Shell scripts can take input from a user or file and output them to the screen.
* Whenever you find yourself doing the same task over and over again you should use
shell scripting, i.e., repetitive task automation.
o Creating your own power tools/utilities.
o Automating command input or entry.
o Customizing administrative tasks.
o Creating simple applications.
o Since scripts are well tested, the chances of errors are reduced while configuring
services or system administration tasks such as adding new users.
Practical examples where shell scripting is actively used
* Monitoring your Linux system.
* Data backup and creating snapshots.
* Dumping Oracle or MySQL database for backup.
* Creating email based alert system.
* Find out what processes are eating up your system resources.
* Find out available and free memory.
List of bash keywords and built-in commands
* JOB_SPEC &
* (( expression ))
* . filename
* [[:]]
* [ arg... ]
* expression
* alias
* bg
* bind
* builtin
* caller
* case
* command
* compgen
* complete
* continue
* declare
* dirs
* disown
* echo
* enable
* eval
* exec
* exit
* export
* false
* fc
* fg
command1 && command2
OR
First_command && Second_command
command2 is executed if, and only if, command1 returns an exit status of zero (true). In
other words, run command1 and, if it is successful, then run command2.
Example
Type the following at a shell prompt:
$rm /tmp/filename && echo "File deleted."
The echo command will only run if the rm command exits successfully with a status of zero.
If the file is deleted successfully, the rm command sets the exit status to zero and the echo
command gets executed.
Lookup a username in /etc/passwd file
grep "^champu" /etc/passwd && echo "champu found in /etc/passwd"
Exit if a directory /tmp/foo does not exist
test ! -d /tmp/foo && { read -p "Directory /tmp/foo not found. Hit [Enter] to exit..." enter; exit
1; }
Syntax:
command1 || command2
OR
First_command || Second_command
command2 is executed if, and only if, command1 returns a non-zero exit status. In other
words, run command1 successfully or run command2.
Example
$cat /etc/shadow 2>/dev/null || echo "Failed to open file"
The cat command will try to display the /etc/shadow file and it (the cat command) sets the exit
status to a non-zero value if it fails to open /etc/shadow. Therefore, 'Failed to open file' will
be displayed if the cat command failed to open the file.
Find username else display an error
$grep "^champu" /etc/passwd || echo "User champu not found in /etc/passwd"
How Do I Combine Both Logical Operators?
Try it as follows:
$cat /etc/shadow 2>/dev/null && echo "File successfully opened." || echo "Failed to open
file."
Make sure only root can run this script:
$test $(id -u) -eq 0 && echo "You are root" || echo "You are NOT root"
OR
$test $(id -u) -eq 0 && echo "Root user can run this script." || echo "Use sudo or su to
become a root user."
SheII functions
* Sometimes shell scripts get complicated.
* To avoid large and complicated scripts use functions.
* You divide large scripts into small chunks/entities called functions.
* Functions make shell scripts modular and easy to use.
* Functions avoid repetitive code. For example, an is_root_user() function can be reused by
various shell scripts to determine whether the logged-on user is root or not (a sketch of it
follows this list).
* A function performs a specific task. For example, add or delete a user account.
* A function is used like a normal command.
* In other high level programming languages a function is also known as a procedure,
method, subroutine, or routine.
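As promised above, here is a minimal sketch of such an is_root_user() helper (the function
name is only an illustration, not a standard command; it reuses the id -u test shown earlier):
is_root_user() {
# root always has the numeric user ID 0
[ "$(id -u)" -eq 0 ]
}
is_root_user && echo "You are root" || echo "You are NOT root"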
Writing the hello() function
Type the following command at a shell prompt:
hello() { echo 'Hello world!' ; }
Invoking the hello() function
The hello() function can be used like a normal command. To execute, simply type:
hello
Passing the arguments to the hello() function
You can pass command line arguments to user defined functions. Define hello as follows:
hello() { echo "Hello $1, let us be a friend." ; }
You can call the hello function and pass an argument as follows:
hello champu
Sample outputs:
Hello champu, let us be a friend.
* One line functions inside { ... } must end with a semicolon. Otherwise you get an error
on screen:
$xrpm() { rpm2cpio "$1" | cpio -idmv }
Above will not work. However, the following will work (notice semicolon at the end):
$xrpm() { rpm2cpio "$1" | cpio -idmv; }
To display defined function names use the declare command. Type the following command
at a shell prompt:
$ declare -F
Sample outputs:
declare -f command_not_found_handle
declare -f genpasswd
declare -f grabmp3
declare -f hello
declare -f mp3
declare -f xrpm
Display Function Source Code
To view function names and source code, enter:
declare -f
OR
declare -f | less
The test command is used to check file types and compare values. Test is used in
conditional execution. It is used for:
* File attributes comparisons
* Perform string comparisons.
* Arithmetic comparisons.
test command syntax
test condition
OR
test condition && true-command
OR
test condition || false-command
OR
test condition && true-command || false-command
Type the following command at a shell prompt (is 5 greater than 2? ):
$test 5 > 2 && echo "Yes"
$test 1 > 2 && echo "Yes"
Sample Output:
Yes
Yes
Rather than test whether a number is greater than 2, you have used redirection to create an
empty file called 2 (see shell redirection). To test for greater than, use the -gt operator (see
numeric operator syntax):
test 5 -gt 2 && echo "Yes"
test 1 -gt 2 && echo "Yes"
Yes
You need to use the test command while make decision. Try the following examples and
note down its output:
$test 5 = 5 && echo Yes || echo No
$test 5 = 15 && echo Yes || echo No
$test 5 != 10 && echo Yes || echo No
$test -f /etc/resolv.conf && echo "File /etc/resolv.conf found." || echo "File /etc/resolv.conf
not found."
test -f /etc/resolv1.conf && echo "File /etc/resolv1.conf found." || echo "File /etc/resolv1.conf
not found."
Write Scripts
1.
#!/bin/bash
read -p "Enter # 5 : " number
if test $number == 5
then
echo "Thanks for entering # 5"
fi
if test $number != 5
then
echo " told you to enter # 5. Please try again."
fi
2.
#!/bin/bash
clear
echo -e "What is your name : \c"
read name
echo hello $name. Welcome to Shell programming
sleep 2
clear
echo -e "Would you like to see a listing of your files ? [y/n]: \c"
read yn
if [ $yn = y ]
then
ls
fi
sleep 1
echo -e "Would you like to see who all are logged in ? [y/n]: \c"
read yn
if [ $yn = y ]
then
who
fi
sleep 1
echo Would you like to see which dir you are in \?
read yn
if [ $yn = y ]
then
pwd
fi
3.
#!/bin/sh
clear
echo Enter file name to copy
read apple
echo Enter file name to copy to
read mango
if cp $apple $mango > /dev/null 2>&1
then
echo Files copied ok Congrats!!
else
echo Error !!!!!!!!!!!!!!! Contact Mr ABC at Ext 101
fi
4.
#!/bin/bash
# -lt, -le, -gt, -ge, -ne, -eq : Use these for numerical comparisons
# <, <=, >, >=, <>, = : Use these for string comparisons
clear
tput cup 10 10
echo -e "Enter a no from 1 to 5 : \c"
read num
if test $num -lt 6
then
tput cup 12 10
echo "ood"
else
tput cup 12 10
echo "Sorry only between 1 to 6"
fi
5.
#!/bin/bash
## see man test
clear
echo Enter file name
read filename
if [ -z "$filename" ]
then
echo You have to enter some file name
echo Exiting....
sleep 2
exit
fi
if [ -f "$filename" ]
then
echo The filename you entered exists !!
echo Deleting $filename .....
sleep 2
rm -f "$filename"
echo Deleted $filename .....
sleep 1
clear
else
echo The filename you entered does not exist !!!
fi
6.
#!/bin/bash
read -p "Enter a number : " n
if [ $n -gt 0 ]; then
echo "$n is a positive."
elif [ $n -lt 0 ]
then
echo "$n is a negative."
elif [ $n -eq 0 ]
then
echo "$n is zero number."
else
echo "Oops! $n is not a number."
fi
7.
#!/bin/bash
clear
echo -e "Enter a number from 1 to 3 : \c"
read num
case $num in
1) echo You have entered 1
;;
2) echo You have entered 2
;;
3) echo You have entered 3
;;
*) echo Between 1 to 3 only !!
;;
esac
8.
#!/bin/sh
echo Enter dog/cat/parrot
read animal
case $animal in
cat|kat) echo You have entered cat
;;
dog) echo You have entered dog
;;
parrot|crow) echo You have entered parrot or crow
;;
*) echo Invalid entry !!
;;
esac
RPM
Rpm is a powerful Package Manager for Red Hat, SUSE and Fedora
Linux. It can be used to build, install, query, verify, update, and
remove/erase individual software packages. A package consists of an
archive of files, and package information, including name, version,
and description:
The RPM Package Manager
--------------------------------
RPM is a recursive acronym for RPM Package Manager.
It used to be called the Red Hat Package Manager, but Red Hat
changed its name to emphasize that other distributions use it too.
The new official name is RPM Package Manager, and yes, that's a
self-referencing acronym (SRA), just like GNU.
- RPM is the default package manager for Red Hat Linux systems.
- RPM system consists of a local database, the rpm executable, rpm
package files.
- It deals with .rpm files, which contain the actual programs as
well as various bits of meta-information about the package: what it
is, where it came from, version information and info about package
dependencies.
- RPMs are the files (called packages) which contain the
installable software; typically they have
the .rpm suffix.
RPM FACTS
--------------------------------
1. RPM is free - GPL
The RPM Package Manager or RPM is a tool which was developed by Red
Hat Software, who still maintain it, but released under the GNU
General Public Licence (GPL); it has proven to be so popular that
a lot of other distribution manufacturers use it as well.
RPM is a very versatile program which solves a lot of problems that
a distributor of software typically faces:
- Management of source files
- Management of the build process
- A distribution method and format for binary files, including pre-
and post-install scripts. RPMs can be created by anyone, not only the
manufacturer of your distribution.
2. stores info about packages in a database /var/lib/rpm
/var/lib/rpm contains all the database files necessary for managing all
of the packages installed on your system.
The database stores information about installed packages such as
file attributes and package prerequisites.
When a certain system uses RPMs to install packages, a database of
installed packages is stored in /var/lib/rpm. The database itself
is in a binary format, so it cannot be read directly. You will have
to access the database using the rpm command.
Where to get RPMs
-
http://rpmseek.com
http://rpmfind.net
http://www.redhat.com
http://freshrpms.net
http://rpm.pbone.net
http://dag.wieers.com
http://rpmforge.net
http://filewatcher.com
Common Build Procedures
- source code install - tarball (.tar, .tar.gz, .tgz, tar.bz,
tar.tbz)
- Configure/make/make install
- Binary RPMs (.rpm)
- Source RPMs (.src.rpm)
Some Query Options
$ rpm -ivh <rpm-file> Install the package
$ rpm -ivh mozilla-mail-1..5-1.i56.rpm
$ rpm -ivh --test mozilla-mail-1..5-1.i56.rpm
$ rpm -Uvh <rpm-file> Upgrade package
$ rpm -Uvh mozilla-mail-1..6-12.i56.rpm
$ rpm -Uvh --test mozilla-mail-1..6-12.i56.rpm
$ rpm -Fvh upgrades to a later version
$ rpm -ev <package> Erase/remove an installed package
$ rpm -ev mozilla-mail
$ rpm -ev --nodeps <package> Erase/remove an installed package
without checking for dependencies
$ rpm -ev --nodeps mozilla-mail
$ rpm -qa Display list all installed packages rpm -qa
$ rpm -qa | less
$ rpm -qi <package> Display installed information along with
package version and short description
$ rpm -qi mozilla-mail
$ rpm -qf </path/to/file> Find out what package a file belongs
to i.e. find what package owns the file
$ rpm -qf /etc/passwd
$ rpm -qf /bin/bash
$ rpm -qc <package-name> Display list of configuration file(s)
for a package
$ rpm -qc httpd
$ rpm -qcf </path/to/file> Display list of configuration files
for a command
$ rpm -qcf /usr/X11R6/bin/xeyes
$ rpm -qa --last Display list of all recently installed RPMs
$ rpm -qa --last
$ rpm -qa --last | less
$ rpm -qpR <.rpm-file>
$ rpm -qR <package> Find out what dependencies a rpm file has
$ rpm -qpR mediawiki-1.4rc1-4.i56.rpm
$ rpm -qR bash
$ rpm -qlp foo.rpm Which files are installed with foo.rpm.
$ rpm -ivh --nodeps pants.rpm Installing package Ignoring
Dependencies
$ rpm -e foo ('e' for erase)
$ rpm -i --prefix /new/directory package.rpm The --prefix and --
relocate options should make the rpm command relocate a package to
a new
location.
$ rpm -K <file.rpm> We can verify that the MD5 checksum is OK
$ rpm --rebuilddb
Task
Download xmms-1.2.10-1.i36.rpm & try to install
$ rpm -ivh xmms-1.2.10-1.i36.rpm
Will ask for dependency, Download dep from the given sites above &
install the same.
Download
glib-1.2.10-62.i56.rpm
gtk-1.2.10-926.i56.rpm
gtk-32bit-1.2.10-926.x6_64.rpm
User Administration
Only root (i.e. the system administrator) can use the adduser command
to create new users. Other users are not allowed to use it.
adduser is a symlink to useradd, which is a binary in /usr/sbin.
We (root) can customise adduser by using another word & making it a
symlink of useradd.
Let's see:
[root@localhost root]# cd /usr/sbin
[root@localhost sbin]# ln -s useradd uad
Now uad is a symlink of useradd.
There are 3 types of users:
                  |
__________________|____________________
|                 |                    |
Super user    System user        Normal user
1. Superuser : Created at the time of Linux installation.
He has the right to create other users & his `userid' & `groupid' are zero
in the `/etc/passwd' file.
2. System user : These users are created by the system. They can't login
because their shell is `/sbin/nologin', the default in the seventh field of
the `/etc/passwd' file.
3. Normal user : These users are created by the superuser.
Let's see how the superuser makes a normal user:
[root@localhost root]# adduser john
[root@localhost root]# passwd john
Changing password for user john.
New password: (user password)
BAD PASSWORD: it is too short (shown if the password is less than six
characters, but it doesn't affect anything, so no need to worry)
Retype new password: (user password)
passwd: all authentication tokens updated successfully.
[root@localhost root]# userdel john
---- the `userdel' command deletes the user account, but the user's
data under /home remains there. It is
/usr/sbin/userdel.
[root@localhost root]# userdel -r john
---- userdel -r deletes the user as well as the data.
[root@localhost root]# usermod -G groupname username
i.e.
[root@localhost root]# usermod -G john eric
---- the `usermod -G' command makes the user eric a member
of the group john. It is /usr/sbin/usermod.
su ---- with the help of this command root can work as a
substitute user.
exit ---- with this command you come back out of the
substitute user's shell.
The adduser command refers to 2 files & updates 4 files:
Config. files
Refers
|----/etc/login.defs
|
|----/etc/default/useradd
Updates
|----/etc/passwd
|
|----/etc/group
|
|----/etc/shadow
|
|----/etc/gshadow
/etc/login.defs
1. /etc/login.defs : It keeps the information about the directory where
mailboxes reside (or the name of the file relative to the home directory),
password duration & how many users can login.
The "passwd file" & "group file" get the information about userid &
groupid from this file.
The "shadow file" & "gshadow file" get the information about user login &
password duration of the user from this file.
Min/max values for automatic uid selection in useradd:
UID_MIN 500
UID_MAX 60000
The id of a user starts from 500 & the max is 60000, which is the default
according to RED HAT, but we can customise it.
If there are two departments, ACCOUNTANT & MARKETING, in one office,
then I can start the userids for ACCOUNTANT from 1000 & for MARKETING from
2000, which is reliable.
The same way for groupid:
GID_MIN 500
GID_MAX 60000
PASSWORD AGING CONTROLS:
1. PASS_MAX_DAYS 99999 : The maximum number of days a password can
be used, i.e. max 99999 days.
2. PASS_MIN_DAYS 0 : The minimum number of days allowed between
password changes.
3. PASS_MIN_LEN 5 : The minimum length of the password, i.e. 5
characters.
4. PASS_WARN_AGE : Specifies the number of days of warning given to the
user before the password expires.
The above PASSWORD AGING information is the default according to RED HAT,
which we can customise.
/etc/default/useradd
2. /etc/default/useradd : It has information about the default group, the home
directory of users & which shell the user is using, in the following way:
1. GROUP=100 ---- It's the default group according to Red Hat,
which can be customised.
2. HOME=/home ---- It's the default directory for users as Red Hat ships it, to which
we can give any name, i.e. we can make `ghar' instead of `home' by
making the directory under /.
3. INACTIVE ---- It's the number of days after the password expires before the
user's account is disabled.
4. EXPIRE ---- It's the date on which the user's account will
expire.
5. SHELL=/bin/bash -- It's the path of the user's shell.
SKEL=/etc/skel --- When a user is created there are zero dirs or files, but
when you give the command `l.' it shows some hidden files, which come from
/etc/skel.
/etc/passwd
3. /etc/passwd : It keeps the record of a new user when created by the
superuser. Each line is the entry for one user. It is a
text file & has details of all system users.
It has seven fields for each user on each line, so
it is called the `system passwd database', & each field
is separated by : (colon), also called the "internal field
separator".
champu:x:500:500::/home/champu:/bin/bash
(fields 1 to 7, separated by colons)
1. field (champu) : It is the username.
2. field (x) : It indicates the user password, which is stored somewhere else (in /etc/shadow) if it
exists.
If we put * in place of x then the user can't login.
If we keep the second field blank then the user can login without a
password.
i.e. (x) --- password stored somewhere else.
(*) --- user can't login.
( ) --- user can login without a password.
3. field (500) : It contains the userid, which is unique. Further userids
are just one greater than the last user's.
4. field (500) : It contains the groupid, which is by default the same as the userid.
It is the user's own group.
5. field () : It is the comment field or GECOS (General Electric
Comprehensive Operating System) field; the user can keep his information here by
using the command `chfn', such as:
$ chfn
Name []:
Office []:
Office Phone []:
Home Phone []:
6. field (/home/champu) : It's the home of champu. /home is the directory
where all users' home directories are stored.
7. field (/bin/bash) : It contains the full path of the shell used by the
user. The shell interprets whatever we type and passes it to the kernel, and
presents whatever comes back from the kernel to us as text. (An example of
extracting these fields follows.)
/etc/group
4. /etc/group : This file keeps the information about groups. It has
four fields for each group on each line, so it is called the `system
group database'.
Members of a group share the access rights that are granted to that
group on the system.
A line in this file looks as follows:
Accounts:x:500:
    |    | |  |
    1    2 3  4
1. field (accounts) : It contains the name of the group, which is always the same
as the first member's username.
2. field (x) : It contains the group password, which is stored somewhere else if it
exists; this password is the same as that of the first member of the group.
3. field (500) : It contains the group id, which is the same as the id of the first
member of the group.
4. field : It contains the list of members of the group. By default in Red Hat
it is blank, but it can be filled by putting in the names of the members of the
group.
A user can be made a member of another group by using the command `usermod -
G', which is run only by root:
$ usermod -G groupname username
When the system admin creates users for the first time he can send a message
like `Thank you for using Red Hat Linux' through this, & the user gets this
mail whenever he logs in.
Command line options (a combined example follows this table)
Option Description
-c comment Comment for the user
-d home-dir
Home directory to be used instead of the
default /home/username/
-e date
Date for the account to be disabled in
the format YYYY-MM-DD
-f days
Number of days after the password
expires until the account is disabled.
(If 0 is specified, the account is
disabled immediately after the password
expires. If -1 is specified, the
account is not be disabled after the
password expires.)
-g group-name
Group name or group number for the
user's default group (The group must
exist prior to being specified here.)
-G group-list
List of additional (other than default)
group names or group numbers, separated
by commas, of which the user is a
member. (The groups must exist prior to
being specified here.)
-m
Create the home directory if it does
not exist
-M Do not create the home directory
-n
Do not create a user private group for
the user
-r
Create a system account with a UID less
than 500 and without a home directory
-p password The password encrypted with crypt
-s
User's login shell, which defaults to
/bin/bash
-u uid
User ID for the user, which must be
unique and greater than 499
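A hypothetical example combining several of these options (the user name raju and the
comment text are made up for illustration; the market group was created earlier in these
notes and the wheel group exists by default on Red Hat systems):
$ useradd -u 1501 -g market -G wheel -d /home/raju -s /bin/bash -c "Raju - accounts team" -m raju
$ passwd raju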
groupadd <group-name>
Command line options
Option Description
-g gid
Group ID for the group, which must be unique
and greater than 499
-r
Create a system group with a GID less than
500
-f
Force success: exit without an error if the group
already exists (the group is not altered). If -g and
-f are specified, but the group already
exists, the -g option is ignored
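A short illustration (the group name sales and the GID 1600 are arbitrary examples):
$ groupadd -g 1600 sales
$ grep "^sales:" /etc/group -> verify the new entry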
Password aging
$chage -l root
$chage -d 0 username
Change shell
chsh <username>
Finger information
chfn <username>
$ finger
PAM
The PAM library parses the config file and loads modules according to it
What operating systems support PAM?
PAM was first developed by Sun Microsystems in 1995 and is supported by the following
operating system versions (and higher):
RedHat 5.0
SUSE 6.2
Debian 2.2
Mandrake 5.2
Caldera 1.3
TurboLinux 3.6
PAM is the Pluggable Authentication Module, invented by Sun. It's a beautiful concept, but
it can be confusing and even intimidating at first. We're going to look at it on a RedHat
system, but other Linuxes will be similar - some details may vary, but the basic ideas will be
the same.
The first thing to understand is that PAM is NOT something like tcpd (tcp wrappers) or
xinetd that encloses and restricts access to some service. An application needs to be "PAM
aware"; it needs to have been written and compiled specifically to use PAM. There are
tremendous advantages in doing so, and most applications with any interest in security will
be PAM aware.
PAM is about security - checking to see that a service should be used or not. Most of us
first learned about PAM when we were told that login was using it, but PAM can do much
more than just validate passwords. A lot of applications now use PAM - even things like
SAMBA can call on PAM for authentication.
The big advantage here is that security is no longer the application's concern: if PAM says
it's OK, it's OK. That makes things easier for the application, and it makes things easier for
the system administrator. PAM consults text configuration files to see what security actions
to take for an application, and the administrator can add and subtract new rules at any time.
PAM is also extensible: should someone invent a device that can read your brain waves
and determine ill intent, all we need is a PAM module that can use that device. Change a
few files, and login now reads your mind and grants or denies access appropriately. We're
a bit away from that feature, but there are a tremendous number of available PAM modules
that administrators can use.
Configuration Files
On modern RedHat systems, the configuration files are found in /etc/pam.d, one file for
each PAM aware application (plus a special "other" file we'll get to later). One word of
warning: changes to these files take effect instantly. You aren't going to get logged out if
you make a mistake here, but if you DO screw up and blithely log out, you may not be able
to log back in. So test changes before you exit.
We're going to use a very simple example to get started here. In a number of articles here,
we've talked about SSH Security. Most of those articles have been about changes to ssh's
configuration files, but here we'll use PAM to add an additional restriction: the time of
day you are allowed to use ssh. To do this, we need a PAM module called pam_time.so -
it's probably in your /lib/security/ directory already. It uses a configuration file
"/etc/security/time.conf". That file is pretty well commented, so I'm not going to go into detail
about it and will just say that I added the line
sshd;*;*;!Al2200-0400
which says that sshd cannot be used between 10:00 PM and 4:00 AM. I'm usually rather
soundly asleep between those times, so why let ssh be used? I could still log in at the
console if I woke up with an urgent need to see an ls of my /tmp directory, but I couldn't ssh
in, period. Configuring the time.conf file by itself doesn't affect ssh; we need to add the PAM
module to /etc/pam.d/sshd. My file ends up looking like this:
#%PAM-1.0
account required pam_time.so
auth required pam_stack.so service=system-auth
auth required pam_nologin.so
account required pam_stack.so service=system-auth
password required pam_stack.so service=system-auth
session required pam_stack.so service=system-auth
session required pam_limits.so
session optional pam_console.so
I put the pam_time.so module first so that it is the very first thing that is checked. If that module
doesn't give sshd a green light, that's the end of it: no access. That's the meaning of
"required": the module HAS to say that it is happy. The "account" type is specified here.
That's a bit of a confusing thing: we have "account", "auth", "password" and "session". The
man page isn't all that helpful:
account - provide account verification types of service: has the user's
password expired?; is this user permitted access to the requested service?
authentication - establish the user is who they claim to be. Typically
this is via some challenge-response request that the user must satisfy: if you are who
you claim to be please enter your password. Not all authentications are of this type; there
exist hardware based authentication schemes (such as the use of smart-cards and biometric
devices), with suitable modules, these may be substituted seamlessly for more standard
approaches to authentication - such is the flexibility of Linux-PAM.
password - this group's responsibility is the task of updating authentication
mechanisms. Typically, such services are strongly coupled to those of the auth group.
Some authentication mechanisms lend themselves well to being updated with such a
function. Standard UN*X password-based access is the obvious example: please enter a
replacement password.
session - this group of tasks cover things that should be done prior to a service being
given and after it is withdrawn. Such tasks include the maintenance of audit trails and
the mounting of the user's home directory. The session management group is important
as it provides both an opening and closing hook for modules to affect the services
available to a user.
I think that the distinction between account and session in that man page is a little
confusing. I think it would be quite reasonable to think you should use "session" for this
module. Now, sometimes you have a man page for the module that shows you what to use,
but pam_time doesn't help us there. Technically, it's not up to the library: the application is
the one that is checking with account or session, but keep this in mind: session happens
AFTER authentication. I liked the older PAM manual better, which said:
auth modules provide the actual authentication, perhaps asking
for and checking a password, and they set "credentials" such
as group membership or kerberos "tickets."
account modules check to make sure that the authentication
is allowed (the account has not expired, the user is allowed
to log in at this time of day, and so on).
password modules are used to set passwords.
session modules are used once a user has been authenticated to allow them to
use their account, perhaps mounting the user's home directory or making their mailbox
available.
For me, that was more clear.
Stacking
In this case, I only wanted to apply this restriction to ssh. If I'm physically at the box, I want
no time restrictions. If I DID want these same restrictions, I'd make the same change to
/etc/pam.d/login. But what if there are a whole bunch of things I want to apply the same
rules to? RedHat has a special module "pam_stack". It functions much like an "include"
statement in any programming language. We saw it in my /etc/pam.d/sshd file:
auth required pam_stack.so service=system-auth
That says to look in /etc/pam.d/system-auth for other modules to use. Both login and sshd
have this line (as does just about every other file in /etc/pam.d/), so we can look in system-
auth to see what gets called by them:
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required /lib/security/$ISA/pam_env.so
auth sufficient /lib/security/$ISA/pam_unix.so likeauth nullok
auth required /lib/security/$ISA/pam_deny.so
auth required /lib/security/$ISA/pam_tally.so no_magic_root onerr=fail
account required /lib/security/$ISA/pam_unix.so
account required /lib/security/$ISA/pam_tally.so onerr=fail file=/var/log/faillog deny=1 no_magic_root even_deny_root_account
password required /lib/security/$ISA/pam_cracklib.so retry=3 type=
password sufficient /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow
password required /lib/security/$ISA/pam_deny.so
session required /lib/security/$ISA/pam_limits.so
session required /lib/security/$ISA/pam_unix.so
Therefore, if we really wanted our time restrictions to apply to just about everything, we could
add it to system-auth. Note the warning about authconfig though, and also consider that
you will be making sudden sweeping changes to a LOT of applications and services.
Other
What if a PAM aware app doesn't have a file in /etc/pam.d? In that case, it uses the "other"
file, which looks like this by default:
#%PAM-1.0
auth required /lib/security/$ISA/pam_deny.so
account required /lib/security/$ISA/pam_deny.so
password required /lib/security/$ISA/pam_deny.so
session required /lib/security/$ISA/pam_deny.so
That "deny" module is a flat-out no access, red light, stop you dead right here module that
is always going to say no. That's excellent from a security point of view, but can be a bit
harsh should you accidentally delete something like "login". Login would now use the
"other" file, and you couldn't login. That could be unpleasant.
There are many, many useful and clever PAM modules. While our brain wave interpreter
doesn't exist yet, many other possibilities are available to you. There are modules to
automatically black list hosts that have many failed logins, and much more. See
http://www.kernel.org/pub/linux/libs/pam/modules.html.
Use of pam_listfile.so module
This PAM module authenticates users based on the contents of a specified file. For
example, if the username exists in the file /etc/sshd/sshd.allow, sshd will grant login access.
How do I configure the pam_listfile.so module to deny access?
You want to block a user if the username exists in the /etc/sshd/sshd.deny file.
Open /etc/pam.d/ssh (or /etc/pam.d/sshd for RedHat and friends)
# vi /etc/pam.d/ssh
Append following line:
auth required pam_listfile.so item=user sense=deny file=/etc/sshd/sshd.deny
onerr=succeed
Save and close the file
Now add all usernames to the /etc/sshd/sshd.deny file. A user is denied login via sshd if
they are listed in this file:
# vi /etc/sshd/sshd.deny
Append username per line:
user1
user2
...
Restart sshd service:
# /etc/init.d/sshd restart
Understanding the config directives:
O auth required pam_listfile.so : Name of the module required while authenticating users.
O item=user : Check the username
O sense=deny : Deny the user if the username exists in the specified file
O file=/etc/sshd/sshd.deny : Name of the file which contains the list of users (one user per
line)
O onerr=succeed : If an error is encountered, PAM will return status PAM_SUCCESS.
How do I configure the pam_listfile.so module to allow access?
You want to ALLOW a user to use ssh if the username exists in the /etc/sshd/sshd.allow file.
Open /etc/pam.d/ssh (or /etc/pam.d/sshd for RedHat and friends)
# vi /etc/pam.d/ssh
Append following line:
auth required pam_listfile.so item=user sense=allow file=/etc/sshd/sshd.allow onerr=fail
Save and close the file.
Now add all usernames to the /etc/sshd/sshd.allow file. A user is allowed to log in via sshd
if they are listed in this file.
# vi /etc/sshd/sshd.allow
Append username per line:
tony
om
rocky
Restart sshd service (optional):
# /etc/init.d/sshd restart
Now if paul tries to log in using ssh he will get an error:
Permission denied (publickey,keyboard-interactive).
The following log entry is recorded in my log file (/var/log/secure or /var/log/auth.log):
tail -f /var/log/auth.log
Output:
Jul 30 23:07:40 p5www2 sshd[12611]: PAM-listfile: Refused user paul for service ssh
Jul 30 23:07:42 p5www2 sshd[12606]: error: PAM: Authentication failure for paul from
125.12.xx.xx
Understanding the config directives:
O auth required pam_listfile.so : Name of the module required while authenticating users.
O item=user : Check or specify the username
O sense=allow : Allow the user if the username exists in the specified file
O file=/etc/sshd/sshd.allow : Name of the file which contains the list of users (one user per
line)
O onerr=fail : If the file does not exist or the username formatting is not correct, it will not
allow the user to log in.
http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/
http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/Linux-PAM_MW.html
LVM
Create Partitions
For this Linux lvm example you need an unpartitioned hard disk
/dev/sdb. First you need to create physical volumes. To do this you
need partitions or a whole disk. It is possible to run pvcreate
command on /dev/sdb, but I prefer to use partitions and from
partitions I later create physical volumes.
Use your preferred partitioning tool to create partitions. In this
example I have used cfdisk.
Partitions are ready to use.
3. Create physical volumes
Use the pvcreate command to create physical volumes.
$ pvcreate /dev/sdb1
$ pvcreate /dev/sdb2
The pvdisplay command displays all physical volumes on your system.
$ pvdisplay
Alternatively the following command should be used:
$ pvdisplay /dev/sdb1
4. Create Volume Group
At this stage you need to create a volume group which will serve
as a container for your physical volumes. To create a volume group
with the name "mynew_vg" which will include the /dev/sdb1 partition,
you can issue the following command:
$ vgcreate mynew_vg /dev/sdb1
To include both partitions at once you can use this command:
$ vgcreate mynew_vg /dev/sdb1 /dev/sdb2
Feel free to add new physical volumes to a volume group by using
the vgextend command.
$ vgextend mynew_vg /dev/sdb2
5. Create Logical Volumes
From your big cake (volume group) you can cut pieces (logical
volumes) which will be treated as partitions for your Linux
system. To create a logical volume named "vol01" with a size of
400 MB from the volume group "mynew_vg", use the following command:
O create a logical volume of size 400 MB -L 400
O create a logical volume of size 4 GB -L 4G
$ lvcreate -L 400 -n vol01 mynew_vg
With the following example you will create a logical volume with a
size of 1 GB and the name vol02:
$ lvcreate -L 1000 -n vol02 mynew_vg
To remove a logical volume:
$ lvremove /dev/mynew_vg/vol02
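Once a logical volume exists, it still needs a filesystem before it can be used. A minimal sketch for vol01, assuming ext3 and a hypothetical mount point /mnt/vol01:
$ mkfs.ext3 /dev/mynew_vg/vol01
$ mkdir -p /mnt/vol01
$ mount /dev/mynew_vg/vol01 /mnt/vol01
$ df -h /mnt/vol01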
More Workaround
1. After creating all the partitions, change the ID of each LVM partition
   from ID 83 to ID 8e, which is the ID assigned for LVM
2. Don't format the LVM partition
3. As we can see, to access a partition in Linux we have to go through
   /dev/hda. Similarly, in LVM one cannot access the partition directly;
   you have to go through a PV (Physical Volume)
4. Since /dev/hda5 is our /home LVM partition, we have to create a
   physical volume on /dev/hda5
5. $ pvdisplay
6. $ pvcreate /dev/hda5
7. $ vgscan
8. $ vgcreate myvol /dev/hda5
9. $ vgdisplay
10. $ lvcreate -L <lvsize> -n lv1 myvol
11. $ lvdisplay
12. $ mke2fs -j /dev/myvol/lv1
13. $ mount /dev/myvol/lv1 /home
14. $ df -h
More Workaround
1. Add another HDD, connected to the motherboard as Primary Slave.
   That disk should be readable by Linux as /dev/hdb
2. $ fdisk /dev/hdb
   Create a single primary partition the size of the entire disk
   and change the ID of that partition to 8e
3. Create a PV on the new drive:
   $ pvcreate /dev/hdb1
4. Add this new PV into the existing VG (myvol):
   $ vgextend myvol /dev/hdb1
5. Extend your Logical Volume within the existing VG:
   $ lvextend -L +2000M /dev/myvol/lv1
   $ resize2fs /dev/myvol/lv1
6. The above commands extend the Logical Volume by 2 GB, which means our
   /home grows by about 2 GB, as it is mounted via the fstab entry
   /dev/myvol/lv1 /home ext3 defaults 1 2
7. $ mount -a
8. $ df -h
Increase your LV size to 6 GB
Note: the existing size is 2 GB, therefore to increase the LV to 6 GB:
$ lvextend -L +4000M /dev/myvol/lv1
$ resize2fs /dev/myvol/lv1
Reduce your LV size to 1 GB
Note: the LV is now 6 GB and has to be decreased to 1 GB, therefore:
$ umount /home
$ e2fsck -yc /dev/myvol/lv1
$ resize2fs /dev/myvol/lv1 1000M
$ lvreduce -L 1000M /dev/myvol/lv1
$ mount -a
$ df -h
The Linux Schedulers
cron - chronology - a sequence in chronological, date-wise order
Cron jobs are used to schedule commands to be executed periodically, i.e. to set up
commands which will repeatedly run at a set time.
crontab is the command used to install, deinstall or list the tables used to drive the cron
daemon in Vixie Cron. Each user can have their own crontab, and though these are files in
/var/spool/cron/crontabs, they are not intended to be edited directly. You need to use the
crontab command for editing or setting up your own cron jobs.
To edit your crontab file, type the following command:
$ crontab -e
Syntax of crontab
Your cron job looks like as follows:
1 2 3 4 5 /path/to/command arg1 arg2
Where,
O 1: Minute (0-59)
O 2: Hours (0-23)
O 3: Day of month (1-31)
O 4: Month (1-12 [12 == December])
O 5: Day of the week (0-7 [7 or 0 == Sunday])
O /path/to/command - Script or command name to schedule
Same above five fields structure can be easily remembered with following diagram:
* * * * * command to be executed
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)
Example(s)
If you wished to have a script named /root/backup.sh run every day at 3am, your crontab
entry would look as follows:
crond* -----> Binary or app server daemon
/etc/rc.d/init.d/crond -----> Initscript to start the crond server
/etc/crontab -----> System crontab file
mins hrs DOM MOY DOW
00-59 00-23 1-31 1-12 0-7 (0=Sun 1=Mon, 2=Tue, 3=Wed,4=Thu, 5=Fri, 6=Sat and
7=Sun)
Each of the time-related fields may contain:
O A '*', which matches everything, or matches any value
O A single integer, which matches exactly
O Two integers separated by a dash, matching a range of values, i.e.
8-10 in the hr field would match 8am, 9am and 10am.
8-10,13 would match 8am, 9am, 10am and 1pm
O A comma-separated series of ints or ranges, matching any listed value
O */2 in the hr field refers to midnight, 2am, 4am and so forth,
i.e. the cmd is executed every 2 hrs
O 0-10/2 in the hr field refers to midnight, 2am, 4am, 6am, 8am and 10am
Note:
O A crontab entry is considered to match the current time when the min and hr fields
match the curr time and the mth field matches the current month
O An entry is considered to match the current date when the day of month field [3rd]
matches the current day of the mth OR the day of week [5th] field matches the current
day of the week:
IT IS NOT NECESSARY THAT BOTH THE DAY OF THE MONTH AND THE DAY OF
THE WEEK MATCH!
O If both the time and date match the current time and date, the cmd is executed!
O Never put a '*' in the first field unless you want the cmd to run every minute
O You MAY hand-edit this file but it is never necessary since run-parts does everything.
Simply put a shell script in the appropriate /etc/cron.*/ dirs
Also, the crond* daemon need not be restarted. It will do just that every minute anyway.
Example: Users often forget to shut down their machines and go home. Hence, the machine
should auto-shutdown at 11 pm.
Add the following to /etc/crontab (note the extra user field, "root"; with # crontab -e you would omit it):
00 23 * * * root /sbin/shutdown -h now
b) Append the following entry (the 3am backup example from above):
0 3 * * * /root/backup.sh
Run five minutes after midnight, every day:
5 0 * * * /path/to/command
Run at 2:15pm on the first of every month:
15 14 1 * * /path/to/command
Run at 10 pm on weekdays:
0 22 * * 1-5 /path/to/command
Run 23 minutes after midnight, 2am, 4am ..., every day:
23 0-23/2 * * * /path/to/command
Run at 5 after 4 every Sunday:
5 4 * * sun /path/to/command
If you run many sites, you can use this tip to make managing your cron jobs easier. To
minimize the clutter, create a /etc/cron.5min directory and have crontab read this directory
every five minutes.
*/5 * * * * root run-parts /etc/cron.5min
45 * * * * /usr/bin/lynx -source http://example.com/cron.php
45 * * * * /usr/bin/wget -O - -q -t 1 http://www.example.com/cron.php
45 * * * * curl --silent --compressed http://example.com/cron.php
00 11,16 * * * /home/sadhiq/bin/incremental-backup
O 00 - 0th minute (top of the hour)
O 11,16 - 11 AM and 4 PM
O * - Every day of the month
O * - Every month
O * - Every day of the week
00 09-18 * * * /home/ramesh/bin/check-db-status
O 00 - 0th minute (top of the hour)
O 09-18 - 9 am, 10 am, 11 am, 12 noon, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
O * - Every day of the month
O * - Every month
O * - Every day of the week
*/10 * * * * /home/sadhiq/check-disk-space
Cron jobs are saved in /var/spool/cron/username
crontab -l --> To list your crontab jobs
$ crontab -r --> To remove or erase all crontab jobs
Use special strings to save time
Instead of the first five fields, you can use any one of eight special strings. It will not just
save you time but will also improve readability.
Special string Meaning
@reboot Run once, at startup.
@yearly Run once a year, "0 0 1 1 *".
@annually (same as @yearly)
@monthly Run once a month, "0 0 1 * *".
@weekly Run once a week, "0 0 * * 0".
@daily Run once a day, "0 0 * * *".
@midnight (same as @daily)
@hourly Run once an hour, "0 * * * *".
Run ntpdate every hour:
@hourly /path/to/ntpdate
Typical /etc/crontab file entries:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
Directory Description
/etc/cron.d/ Put all scripts here and call them from /etc/crontab file.
/etc/cron.daily/ Run all scripts once a day
/etc/cron.hourly/ Run all scripts once an hour
/etc/cron.monthly/ Run all scripts once a month
/etc/cron.weekly/ Run all scripts once a week
How do I use above directories to put scripts?
Here is a sample shell script (clean.cache) to clean up cached files every 10 days. This
script is created directly in the /etc/cron.daily/ directory, i.e. create a file called
/etc/cron.daily/clean.cache:
#!/bin/bash
CROOT="/tmp/cachelighttpd/"
DAYS=10
LUSER="lighttpd"
LGROUP="lighttpd"
# start cleaning
/usr/bin/find ${CROOT} -type f -mtime +${DAYS} | xargs -r /bin/rm
# if the directory was deleted by some other script, just get it back
if [ ! -d $CROOT ]
then
/bin/mkdir -p $CROOT
/bin/chown ${LUSER}:${LGROUP} ${CROOT}
fi
Cron Access Perms
/etc/cron.allow and /etc/cron.deny
If /etc/cron.allow exists, only the users listed in it may use crontab; all others are denied
If only /etc/cron.deny exists, the users listed in it are denied; all others are allowed/not affected
If cron.deny is merely touched (exists but is empty), nobody is denied, so all users are allowed
If cron.allow is merely touched (exists but is empty), no user is allowed to create a crontab
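A minimal sketch of restricting crontab to a single user (the username is illustrative):
# echo neo > /etc/cron.allow
Now only users listed in /etc/cron.allow may run crontab -e; everyone else is refused.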
AT
'at' executes a command once on a particular day, at a particular time. at will add a
particular command to be executed.
Examples:
$ at 21:30
You then type the commands you want executed then press the end-of-file character
(normally CTRL-D ). Also try:
$ at now + time
This will run at the current time + the hours/mins/seconds you specify (use at now + 1
hour to have command(s) run in 1 hour from now...)
You can also use the -f option to have at execute a particular file (a shell script).
$ at -f shell_script now + 1 hour
This would run the shell script 1 hour from now.
atq will list jobs currently in the queue for the user who executed it; if root executes atq it will list
all jobs in the queue for the at daemon. It doesn't need or take any options.
atrm Will remove a job from the 'at' queue.
Command syntax:
$ atrm job_no
Will delete the job "job_no" (use atq to find out the number of the job)
$ at -f myjobs.txt now + 1 hour
$ at -f myjob now + 1 min
$ at 10 am tomorrow
$ at 11:00 next month
$ at 22:00 today
$ at now + 1 week
$ at noon
Anacron
Anacron is another tool designed for systems which are not always on, such as home
computers.
While cron will not run if the computer is off, anacron will simply run the command when the
computer is next on (it catches up with things).
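For reference, a sketch of what an /etc/anacrontab entry looks like (period in days, delay in minutes, a job identifier, then the command; the backup job shown is hypothetical):
# period  delay  job-identifier  command
7         10     weekly.backup   /root/backup.sh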
Quota
Important Note:
1. Quotas can only be created for partitions.
2. Quota is of two types, user and group.
3. If a 1 MB quota is set for the partition /home, then every directory under /home or every
user on the system (since each directory in /home represents a user) can use a max of
1 MB.
Enabling Quotas
1. Go to /etc/fstab and in the options field, add ",usrquota" for the
partition where you want to enable quota, in our case /home.
Note: If you want to enable group quota then enter "grpquota" instead of "usrquota"
Reboot and directly jump to step 5! else...
2. Unmount and mount /home for the changes to take effect
$ umount /home
$ mount /home
or
$ mount -o remount /home
Note: If the system is rebooted after step 2, skip steps 3 & 4 and jump to step 5.
3. To scan /home and enable quota
$ quotacheck -vcu /home
4. To turn on quota on /home
$ quotaon -v /home
5. To check if quota is on or not
$ repquota -a
Implementing Quotas
6. To edit quota for a user
$ edquota -u <username>
Note: -u stands for user; for a group, type -g and give the group name
7. To edit grace period
$ edquota -t
8. To copy a quota setting of one user to another user
$ edquota -p <source_user> <user> OR
For all users
$ edquota -p <source_user> `awk -F: '$3>499 {print $1}' /etc/passwd`
Repairing the aquota.user file
9. Boot in single mode
10. Turn off quotas
$ quotaoff -v /home
Enable Quota on Filesystem -> /home
Condition: if there is no /home partition, apply quota on the / filesystem
Practical
$ vi /etc/fstab
/dev/hda7 /home ext3 defaults,usrquota 0 0
Remount the /home filesystem with usrquota parameters
$ mount -o remount /home
Confirm whether usrquota is applied
$ mount
It should look like this:
/dev/hda7 on /home type ext3 (rw,usrquota)
Create quota database file i.e aquota.user on /home
$ quotacheck -cuv /home --> This creates aquota.user under /home
Enable the quota on /home
$ quotaon /home
Set user level quota on user neo restricting the size below 70k
$ edquota -u neo
This opens up a temp file under /tmp with vi as the editor
Disk quotas for user neo (uid 529):
<---- file size quota ----------> | <------ No. of files quota -->
Filesystem blocks soft hard inodes soft hard
/dev/hda7 11 50 69 11 0 0
The quota implemented for the user gets updated in /home/aquota.user
Confirm quota really works or not
Login as neo
$ su - neo
$ dd if=/dev/zero of=/home/neo/data.tmp bs=1k count=70
This should show the below error
------------------------------------------------------
warning, user block quota exceeded.
dd: writing data.tmp: Disk quota exceeded
------------------------------------------------------
If user neo wants to view his own quota
$ quota
As the root user you would be interested in viewing the quota statistics on a per-user basis.
# repquota -a
How to enable grpquota i.e. Group Quota
$ vi /etc/fstab
/dev/hda7 /home ext3 defaults,usrquota,grpquota 0 0
Remount the /home filesystem with usrquota and grpquota parameters
$ mount -o remount /home
Confirm whether usrquota and grpquota are applied
$ mount
It should look like this:
/dev/hda7 on /home type ext3 (rw,usrquota,grpquota)
Create quota database file i.e aquota.group, aquota.user on /home
$ quotacheck -cugv /home
--> This creates aquota.group, aquota.user under /home
How to set grpquota
$ edquota -g ADMINS
How to disable quota
# quotaoff /home
How to apply the quota settings meant for user neo onto another user, e.g. jane
# edquota -p neo jane
Commands
O quota - display disk usage and limits
O rquota - implement quotas on remote machines
O fstab - static information about the filesystems
O edquota - edit user quotas
O setquota - set disk quotas (Command line editor)
O quotacheck - scan a filesystem for disk usage, create, check and repair quota files
O quotaon - turn filesystem quotas on
O quotaoff - turn filesystem quotas off
Kernel Compilation
If you want to update the kernel from new source code you have downloaded, or you have
applied a patch to add new functionality or hardware support, you will need to compile and
install a new kernel to actually use that new functionality. Compiling the kernel involves
translating the kernel's contents from human-readable code to binary form. nstalling the
kernel involves putting all the compiled files where they belong in /boot and /lib and making
changes to the bootloader.
The process of compiling the kernel is almost completely automated by the make utility as
is the process of installing. By providing the necessary arguments and following the steps
covered next, you can recompile and install a custom kernel for your use.
Basically, there are three types of kernel:
- Monolithic Kernel - Micro Kernel - ExoKernel
Monolithic: As the name itself suggests, the kernel has every service, like FS
management, memory management, process management, etc., in kernel space. They do not run as
separate processes, so, as you can guess, there is no context switching when you ask for a
service. But the probability of a monolithic kernel getting stuck is higher, because if there
is a bug in the kernel itself, nothing can rescue it. Linux and Windows are good examples of
monolithic kernels. Linux being a monolithic kernel, you can insert modules into the kernel
dynamically using the insmod command.
Micro Kernel: A micro kernel runs all the services as daemons in user space. So, if a
problem occurs in any of the services, the kernel will be able to decide what to do next. But
you pay the cost of switching to a service in this type of kernel. Micro kernels are somewhat
more difficult to design and build than monolithic kernels. There is always discussion
on the internet about the advantages and disadvantages of monolithic and micro
kernels.
Exo Kernel: The exo kernel is not yet stabilized. It's under design and research. The user mode
processes running in this type of kernel have the ability to access kernel resources like
process tables, etc. directly.
Structure of monolithic and microkernel-based operating systems, respectively
Compilation
Steps to compile kernel - Redhat 9
Install dependencies:
kernel-source-2.4.20-8.i386.rpm
binutils-2.13.90.0.18-9.i386.rpm
glibc-kernheaders-2.4-8.10.i386.rpm
cpp-3.2.2-5.i386.rpm
gcc-3.2.2-5.i386.rpm
glibc-2.3.2-11.9.i386.rpm
libgcc-3.2.2-5.i386.rpm
glibc-common-2.3.2-11.9.i386.rpm
ncurses-5.3-4.i386.rpm
glibc-devel-2.3.2-11.9.i386.rpm
ncurses-devel-5.3-4.i386.rpm
Once all these dependencies are installed:
1) Go to /usr/src/linux-2.4/
2) edit Makefile [Top-Level]
parameter EXTRAVERSION=-8champu
3) make mrproper [deletes the .config]
4) copy the config file matching your architecture [see uname -m]:
cp /usr/src/linux-2.4/configs/kernel-2.4.18-i686.config /usr/src/linux-2.4/.config
or simply
cp -p configs/kernel-2.4.18-i686.config .config
5) make oldconfig - To update the .config with running kernel parameters
6) make config / make menuconfig (for Text) / make xconfig - make necessary changes
enable ntfs disable sound & bluetooth
7) make dep - checks dependencies & constructs the MAKEFILE.
8) make clean - cleans unwanted files from memory loaded by the above commands.
9) make bzImage - Actual kernel compilation process
10) make modules - Actual KLM compilation process
11) make modules_install #check in /lib/modules/2.4.20-8champu/
12) cp /usr/src/linux-2.4.20-8/arch/i386/boot/bzImage
/boot/vmlinuz-2.4.20-8champu
13) cp /usr/src/linux-2.4.18-14/System.map
/boot/System.map-2.4.20-8champu
14) cp /usr/src/linux-2.4.20-8/.config /boot/config-2.4.20-8champu [OPTIONAL]
15) mkinitrd /boot/initrd-2.4.20-8champu.img 2.4.20-8champu
16) vi /etc/grub.conf #Add the new customized kernel entries
title REDHAT 9champu (customized)
root (hd0,8)
kernel /vmlinuz-2.4.20-8champu ro root=/dev/hda11 rhgb quiet
initrd /initrd-2.4.20-8champu.img
note: hd0,8 refers to the boot partition (/dev/hda9 - 1 = hd0,8) (/dev/hda11 is "/")
17) reboot
Centos 5 Steps to compile kernel 2.6 :-
1> Copy the kernel tarball file into the /usr/src/kernels/ location & untar it into that location
2> tar -jxvf /usr/src/kernels/linux-2.6.18.2.tar.bz2
3> cd /usr/src/kernels/linux-2.6.18.2
4> make gconfig (graphical)
make menuconfig (text)
5> make clean
6> make bzImage
7> make modules
8> make modules_install
9> cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.18-2
10> cp /usr/src/kernels/linux-2.6.18.2/System.map /boot/System.map-2.6.18-2
11> ln -s /boot/System.map-2.6.18-2 /boot/System.map
12> Create initrd :--
$ first check that /lib/modules/2.6.18.2 has been created,
then execute the next command
mkinitrd /boot/initrd-2.6.18.2.img 2.6.18.2
Final Steps
$ vi /etc/grub.conf
default=0
timeout=77
splashimage=(hd0,0)/grub/splash.xpm.gz
title Red Hat Enterprise Linux AS (2.6.9-34.EL)
root (hd0,0)
kernel /vmlinuz-2.6.9-34.EL ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.9-34.EL.img
title Red Hat Enterprise Linux AS (2.6.18.2)
root (hd0,0)
kernel /vmlinuz-2.6.18-2 ro root=LABEL=/ rhgb quiet
initrd /initrd-2.6.18.2.img
Kernel Definition
http://www.linfo.org/kernel.html
Kernel compilation
http://www.cyberciti.biz/tips/compiling-linux-kernel-26.html
http://book.opensourceproject.org.cn/distrib/ubuntu/unleashed/opensource/0672329093/ch35lev1sec7.html
http://wiki.centos.org/HowTos/Custom_Kernel
This is one of the essential and important tasks. Many times we upgrade our kernel and some
precompiled drivers won't work with Linux. Especially if you have unusual hardware, the
vendor may send you driver code aka C files to compile. Or you can even write your own
Linux kernel driver. Compiling a kernel driver is easy. Kernel 2.6.xx makes it even
easier. The following steps are required to compile a driver as a module:
1) You need the running kernel's source code; if you don't have the source code, download it from
kernel.org. Untar the kernel source code (tarball) in /usr/src using the tar command:
$ tar -zxvf kernel* -C /usr/src
To be frank, kernel headers are more than sufficient to compile kernel modules / drivers.
See how to install kernel headers under Debian / Ubuntu Linux or RHEL / CentOS / Fedora
Linux.
2) Next go to your kernel module source code directory and simply create the Makefile as
follows (assuming your kernel module name is foo):
$ vi Makefile
3) Add following text to it:
obj-m = foo.o
KVERSION = $(shell uname -r)
all:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules
clean:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) clean
4) Compile module using make command (module build can be done by any user) :
$ make
It will finally create the foo.ko module in the current directory. You can see the actual compile
commands stored in .foo* files in the same directory.
5) Once module compiled successfully, load it using insmod or modprobe command. You
need to be root user or privileged user to run insmod:
# insmod foo.ko
Example: hello.c module
1) hello.c C source code. Copy following code and save to hello.c
$ mkdir demo; cd demo
$ vi hello.c
2)Add following c source code to it:
#include <linux/module.h> /* Needed by all modules */
#include <linux/kernel.h> /* Needed for KERN_INFO */
#include <linux/init.h> /* Needed for the macros */
static int __init hello_start(void)
{
printk(KERN_INFO "Loading hello module...\n");
printk(KERN_INFO "Hello world\n");
return 0;
}
static void __exit hello_end(void)
{
printk(KERN_INFO "Goodbye Mr.\n");
}
module_init(hello_start);
module_exit(hello_end);
This is an example modified from the original source for demonstration purposes.
3) Save the file. Create new Makefile as follows:
$ vi Makefile
Append following make commands:
obj-m = hello.o
KVERSION = $(shell uname -r)
all:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules
clean:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) clean
4) Save and close the file.
5) Compile hello.c module:
$ make
6) Become a root user (use su or sudo) and load the module:
$ su -
$ insmod hello.ko
Note: you can see the message on screen if you are logged in as root under run level 3.
7) Verify that module loaded:
$ lsmod | less
8) See the message in the /var/log/messages file:
$ tail -f /var/log/messages
9) Unload the module:
$ rmmod hello
10) Load the module when the Linux system comes up. The file /etc/modules is used to load kernel
modules at boot time. This file should contain the names of kernel modules that are to be loaded at boot
time, one per line. First copy your module to /lib/modules/$(uname -r)/kernel/drivers.
Following are the suggested steps:
(a) Create directory for hello module:
$ mkdir -p /lib/modules/$(uname -r)/kernel/drivers/hello
(b) Copy module:
$ cp hello.ko /lib/modules/$(uname -r)/kernel/drivers/hello/
(c) Edit /etc/modules file under Debian Linux:
$ vi /etc/modules
(d) Add following line to it:
hello
(e) Reboot to see the changes. Use the lsmod or dmesg command to verify whether the module loaded or not.
$ cat /proc/modules
OR
$ lsmod | less
Most people have a fairly recent kernel. But since the kernel is constantly being updated,
people on modems (such as myself) don't like downloading the whole source every time a
new version of the kernel comes out... It is a pain to download 14+ megs of stuff when 95%
of it is the same stuff that you already have in your kernel source directory.
For this reason, kernel patches are released. Kernel patches contain only the files that have
changed since the last kernel, hence making it less of a pain to upgrade.
It is a good idea to back up your old kernel tree before you do anything to it, just in case
something messes up. To do this, do the following:
Become root and then go into your kernel source directory (for me it was /usr/src/linux-
2.2.10) and do a 'make clean' to clean it up so you don't compress a lot of crap you don't
need as follows
# cd /usr/src/linux-2.2.10
# make clean
Now you need to back up the tree. I did this by doing the following:
# cd /usr/src/
# tar zcvf linux-2.2.10-tree.tar.gz linux-2.2.10
Now with that backed up, you can go ahead and change the stuff with less worrying...
If you have kernel 2.2.10, like I did, and 2.2.12 is the current stable release (or at least it is
as I am writing this) you need all of the patch files after 2.2.10. So in my case, I needed to
get patch-2.2.11.gz and patch-2.2.12.gz
http://www.kernelnotes.org is where I got mine from, but I'm sure there are mirrors where
you can get the patches from; more on this is on www.kernelnotes.org.
Note: When I downloaded this file using netscape, it un-gzipped it for me as it was
downloading... so I didn't have to do the following step that you would have to do if you
were using a program such as 'ftp'
un-gzip the file by doing the following:
# gzip -d patch-2.2.11.gz
# gzip -d patch-2.2.12.gz
This will leave you with patch-2.2.11 and patch-2.2.12 (unless you downloaded the file with
netscape, and this step would already have been done for you)
Now move the files to your kernel source directory (using the mv command):
mv patch-2.2.* /usr/src/linux-2.2.10
Now change into your kernel source directory (/usr/src/linux-2.2.10 in my case)
Now you need to apply the patches to the source... Order is important here. Start with the
lowest and go to the highest, like the following:
# patch -p1 < patch-2.2.11
# patch -p1 < patch-2.2.12
Both of these commands will give you lots of output telling you what files are being patched,
etc.
After I applied the patches, I went ahead and renamed my source directory to reflect the
patches applied (mv /usr/src/linux-2.2.10 /usr/src/linux-2.2.12) and then removed the old
/usr/src/linux link and replaced it with the new location (rm /usr/src/linux and then ln -s
/usr/src/linux-2.2.12 /usr/src/linux)
Now just compile your kernel
Kernel Patch with .KO
http://wiki.centos.org/HowTos/BuildingKernelModules
http://www.cyberciti.biz/tips/compiling-linux-kernel-module.html
Patch
http://www.cyberciti.biz/tips/how-to-patch-running-linux-kernel.html
Patch O Matic
http://www.fifi.org/doc/iptables-dev/html/netfilter-extensions-HOWTO-2.html
Patch without reboots - Ksplice
http://www.cyberciti.biz/tips/debian-centos-redhat-hotfix-patch-linux-kernel.html
Faq
http://kernelnewbies.org/FAQ
Kernel Tuning
Kernel tuning with sysctl
The Linux kernel is flexible, and you can even modify the way it works on the fly by
dynamically changing some of its parameters, thanks to the sysctl command. Sysctl
provides an interface that allows you to examine and change several hundred kernel
parameters in Linux or BSD. Changes take effect immediately, and there's even a way to
make them persist after a reboot. By using sysctl judiciously, you can optimize your box
without having to recompile your kernel, and get the results immediately.
To start getting a taste of what sysctl can modify, run sysctl -a and you will see all the
possible parameters. The list can be quite long: in my current box there are 712 possible
settings.
$ sysctl -a
kernel.panic = 0
kernel.core_uses_pid = 0
kernel.core_pattern = core
kernel.tainted = 129
many lines snipped
If you want to get the value of just a single variable, use something like sysctl
vm.swappiness, or just sysctl vm to list all variables that start with "vm." Add the -n option to
output just the variable values, without the names; -N has the opposite effect, and produces
the names but not the values.
You can change any variable by using the -w option with the syntax sysctl -w
variable=value. For example, sysctl -w net.ipv6.conf.all.forwarding=1 sets the
corresponding variable to true (0 equals "no" or "false"; 1 means "yes" or "true") thus
allowing IPv6 forwarding. You may not even need the -w option -- it seems to be deprecated.
Do some experimenting on your own to confirm that.
sysctl values are loaded at boot time from the /etc/sysctl.conf file. This file can have blank
lines, comments (lines starting either with a "#" character or a semicolon), and lines in the
"variable=value" format. For example, my own sysctl.conf file is listed below. f you want to
apply it at any time, you can do so with the command sysctl -p.
# Disable response to broadcasts.
net.ipv4.icmp_echo_ignore_broadcasts = 1
# enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1
# enable ipV6 forwarding
net.ipv6.conf.all.forwarding = 1
# increase the number of possible inotify(7) watches
fs.inotify.max_user_watches = 65536
sysctl and the /proc directory
The /proc/sys virtual directory also provides an interface to the sysctl parameters, allowing
you to examine and change them. For example, the /proc/sys/vm/swappiness file is
equivalent to the vm.swappiness parameter in sysctl.conf; just forget the initial "/proc/sys/"
part, substitute dots for the slashes, and you get the corresponding sysctl parameter. (By
the way, the substitution is not actually required; slashes are also accepted, though it
seems everybody goes for the notation with the dots instead.) Thus, echo 10
>/proc/sys/vm/swappiness is exactly the same as sysctl -w vm.swappiness=10. But as a
rule of thumb, if a /proc/sys file is read-only, you cannot set it with sysctl either.
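To make the equivalence concrete, a quick sketch (the values shown are only illustrative; your defaults may differ):
$ sysctl vm.swappiness
vm.swappiness = 60
# sysctl -w vm.swappiness=10
# cat /proc/sys/vm/swappiness
10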
Linux network optimization with sysctl
Disabling the TCP options reduces the overhead of each TCP packet and might help to get
the last few percent of performance out of the server. Be aware that disabling these options
most likely decreases performance for high-latency and lossy links.
* net.ipv4.tcp_sack = 0
* net.ipv4.tcp_timestamps = 0
Increasing the TCP send and receive buffers will increase the performance a lot if (and only
if) you have a lot of large files to send.
* net.ipv4.tcp_wmem = 4096 65536 524288
* net.core.wmem_max = 1048576
If you have a lot of large file uploads, increasing the receive buffers will help.
* net.ipv4.tcp_rmem = 4096 87380 524288
* net.core.rmem_max = 1048576
# These ensure that TIME_WAIT ports either get reused or closed fast.
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_tw_recycle = 1
# TCP memory
net.core.rmem_max = 16777216
net.core.rmem_default = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
# you shouldn't be using conntrack on a heavily loaded server anyway, but these are
# suitably high for our uses, ensuring that if conntrack gets turned on, the box doesn't die
net.ipv4.ip_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
# increase Linux TCP buffer limits
echo 8388608 > /proc/sys/net/core/rmem_max
echo 8388608 > /proc/sys/net/core/wmem_max
# increase Linux autotuning TCP buffer limits
echo "4096 87380 8388608" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 8388608" > /proc/sys/net/ipv4/tcp_wmem
#echo 65536 > /proc/sys/fs/file-max # physical RAM * 256/4
echo "1024 65000" > /proc/sys/net/ipv4/ip_local_port_range
#echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 8192 > /proc/sys/net/ipv4/tcp_max_syn_backlog
# Decrease the time default value for tcp_fin_timeout connection
#echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
#echo 3 > /proc/sys/net/ipv4/tcp_syn_retries
#echo 2 > /proc/sys/net/ipv4/tcp_retries1
# Decrease the time default value for tcp_keepalive_time connection
#echo 1800 >/proc/sys/net/ipv4/tcp_keepalive_time
# Turn off tcp_window_scaling
echo 0 >/proc/sys/net/ipv4/tcp_window_scaling
#echo "67108864" > /proc/sys/kernel/shmmax
# Turn off the tcp_sack
echo 0 >/proc/sys/net/ipv4/tcp_sack # This disables RFC2018 TCP Selective
Acknowledgements
#Turn off tcp_timestamps
echo 0 >/proc/sys/net/ipv4/tcp_timestamps # This disables RFC1323 TCP timestamps
echo 5 > /proc/sys/kernel/panic # reboot 5 minutes later then kernel panic
A third example:
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
Reference
http://shebangme.blogspot.com/2010/07/kernel-sysctl-configuration-file-for.html
swappiness
http://www.linux.com/archive/feature/146599
Device driver
http://en.wikipedia.org/wiki/Device_driver
Lsmod
http://en.wikipedia.org/wiki/Lsmod
Modprobe
http://en.wikipedia.org/wiki/Modprobe
Oracle + sysctl
http://www.puschitz.com/TuningLinuxForOracle.shtml
http://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSHMMAXParameter
http://www.puschitz.com/TuningLinuxForOracle.shtml#TheSEMMSLParameter
Reference
http://www.linux.com/archive/feature/126718
http://www.fcicq.net/wp/?p=197
http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html
http://en.opensuse.org/Kernel_module_configuration
http://www.cyberciti.biz/tips/blade-server-disable-floppy-driver-module.html
Blacklist
Just open your /etc/modprobe.conf file and turn off auto loading using the
following syntax:
alias driver-name off
If you are using Debian / Ubuntu Linux,
open the /etc/modprobe.d/blacklist file and add the driver name using the following syntax:
blacklist driver-name
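For instance, to stop the floppy driver from auto-loading (a common example; substitute the module name for your own hardware):
alias floppy off          --> in /etc/modprobe.conf (RHEL/CentOS)
blacklist floppy          --> in /etc/modprobe.d/blacklist (Debian/Ubuntu)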
Linux Kernel Magic SysRq keys
The kernel offers you something that allows you to recover your system from a crash, or at the
least lets you perform a proper shutdown, using the Magic SysRq keys. The magic
SysRq key is a select key combination in the Linux kernel which allows the user to perform
various low level commands regardless of the system's state, using the SysRq key. It is
often used to recover from freezes, or to reboot a computer without corrupting the
filesystem.
How do I use the magic SysRq keys in emergency?
You need to use following key combination in order to reboot/halt/sync file system etc:
ALT+SysRq+COMMAND-KEY
The 'SysRq' key is also known as the 'Print Screen' key. COMMAND-KEY can be any one
of the following (all keys need to hit simultaneously) :
O 'b' : Will immediately reboot the system without syncing or unmounting your disks.
O 'o' : Will shutdown your system off (if configured and supported).
O 's': Will attempt to sync all mounted filesystems.
O 'u' : Will attempt to remount all mounted filesystems read-only.
O 'e' : Send a SIGTERM to all processes, except for init.
O 'h' : Show help; indeed this is the one you need to remember.
So when you need to tell your Linux computer to reboot, or when your X server has crashed, or
you don't see anything going across the screen, then just press:
ALT+SysRQ+s : (Press and hold down ALT, then SysRQ (Print Screen) key and press 's')
- Will try to sync all mounted filesystems
ALT+SysRQ+b : (Press and hold down ALT, then SysRQ (Print Screen) key and press 'b') -
Will reboot the system.
If you wish to shut down the system instead of rebooting, then press the following key combination:
ALT+SysRQ+o
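Whether the magic SysRq key is honoured at all is controlled by the kernel.sysrq sysctl; a quick sketch of checking it, enabling it, and triggering a key from the shell via /proc (handy over a remote session):
# cat /proc/sys/kernel/sysrq
# echo 1 > /proc/sys/kernel/sysrq
# echo s > /proc/sysrq-trigger    --> same effect as ALT+SysRq+s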
ipt_sysrq is a new iptables target that allows you to do the same as the magic sysrq key
on a keyboard does, but over the network. Sometimes a remote server hangs and only
responds to icmp echo request (ping). Every administrator of such machine is very unhappy
because (s)he must go there and press the reset button. It takes a long time and it's
inconvenient. So use the Network Magic SysRq and you will be able to do more than just
pressing a reset button. You can remotely sync disks, remount them read-only, then do a
reboot. And everything comfortably and only in a few seconds. Please see the Marek Zelem
page to enable the IP Tables network magic SysRq function.
b.sadhiq
120 www.altnix.com
The magic SysRq key basically has a key combination of <ALT> + <SysRq or Prnt Scrn> +
<Command key>.
The command key can be one of the following, providing a specific functionality:
'b' Will immediately reboot the system without syncing or unmounting your disks
'c' Will perform a kexec reboot in order to take a crashdump
'd' Shows all locks that are held
'e' Send a SIGTERM to all processes, except for init
'f' Will call oom_kill to kill a memory hog process
'g' Used by kgdb on ppc and sh platforms
'h' Will display help (actually any other key than those listed here will display help,
but 'h' is easy to remember)
'i' Send a SIGKILL to all processes, except for init
'k' Secure Access Key (SAK) Kills all programs on the current virtual console
NOTE: See important comments below in SAK section
'm' Will dump current memory info to your console
'n' Used to make RT tasks nice-able
'o' Will shut your system off (if configured and supported)
'p' Will dump the current registers and flags to your console
'q' Will dump a list of all running timers
'r' Turns off keyboard raw mode and sets it to XLATE
's' Will attempt to sync all mounted filesystems
't' Will dump a list of current tasks and their information to your console
'u' Will attempt to remount all mounted filesystems read-only
'v' Dumps Voyager SMP processor info to your console
'w' Dumps tasks that are in uninterruptible (blocked) state
'x' Used by xmon interface on ppc/powerpc platforms
'0'-'9' Sets the console log level, controlling which kernel messages will be printed
to your console ('0', for example, would make it so that only emergency messages like
PANICs or OOPSes would make it to your console)
Ref -
http://www.susegeek.com/general/linux-kernel-magic-sysrq-keys-in-opensuse-for-crash-
recovery/
http://www.cyberciti.biz/tips/reboot-linux-box-after-a-kernel-panic.html
http://www.cyberciti.biz/tips/reboot-or-halt-linux-system-in-emergency.html
In computing, a device driver or software driver is a computer program allowing higher-
level computer programs to interact with a hardware device.
A driver typically communicates with the device through the computer bus or
communications subsystem to which the hardware connects. When a calling program
invokes a routine in the driver, the driver issues commands to the device. Once the device
sends data back to the driver, the driver may invoke routines in the original calling program.
Drivers are hardware-dependent and operating-system-specific. They usually provide the
interrupt handling required for any necessary asynchronous time-dependent hardware
interface.
The mknod command
MAKEDEV is the preferred way of creating device files which are not present. However
sometimes the MAKEDEV script will not know about the device file you wish to create. This
is where the mknod command comes in. In order to use mknod you need to know the
major and minor node numbers for the device you wish to create. The devices.txt file in the
kernel source documentation is the canonical source of this information.
To take an example, let us suppose that our version of the MAKEDEV script does not know
how to create the /dev/ttyS0 device file. We need to use mknod to create it. We know from
looking at the devices.txt file that it should be a character device with major number 4 and
minor number 64. So we now know all we need to create the file.
# mknod /dev/ttyS0 c 4 64
# chown root.dialout /dev/ttyS0
# chmod 0644 /dev/ttyS0
# ls -l /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Oct 23 18:23 /dev/ttyS0
As you can see, many more steps are required to create the file. In this example you can
see the process required, however. It is unlikely in the extreme that the ttyS0 file would not
be provided by the MAKEDEV script, but it suffices to illustrate the point.
mknod /opt/champu b 3 10
mount /opt/champu /home
1. lsmod
2. insmod
3. rmmod
4. modprobe
5. modinfo
6. depmod
lsmod is a command on Linux systems which prints the contents of the /proc/modules file.
It shows which loadable kernel modules are currently loaded.
Abridged example output:
# lsmod
Module Size Used by
af_packet 27392 2
8139too 30592 0
snd_cs46xx 96872 3
snd_pcm_oss 55808 1
snd_mixer_oss 21760 2 snd_pcm_oss
ip6table_filter 7424 1
ip6_tables 19728 1 ip6table_filter
ipv6 290404 22
xfs 568384 4
sis900 18052 5
libata 169920 1 pata_sis
scsi_mod 158316 3 usb_storage,sd_mod,libata
usbcore 155312 6 ohci_hcd, usb_storage, usbhid
lsmod
The first column is the module name and the second column is the size of the module, i.e.
the output format is module name, size, use count, list of referring
modules.
modprobe is a Linux program originally written by Rusty Russell used to add a loadable
kernel module (LKM) to the Linux kernel or remove an LKM from the kernel. It is commonly
used indirectly as udev relies upon modprobe to load drivers for automatically detected
hardware.
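A minimal usage sketch, reusing the 8139too NIC driver from the lsmod output above as the example module:
# modprobe 8139too      --> load the module plus any modules it depends on
# modinfo 8139too       --> show its description and parameters
# modprobe -r 8139too   --> unload it (and its now-unused dependencies)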
Networking
TooIs
$ifconfig
$neat-tui
$/etc/sysconfig/network-scripts/ifcfg-eth0
$netconfig
$ethtool
$ip r l
$telnet
$nmap
$netstat
$ping
$route
$traceroute
$tcpdump n/w traffic tool
$iptraf - Monitor n/w traffic. Curses-based tool - self-explanatory
$ethereal - Network analyzer which does data capture and filtering
$tethereal - Captures and displays only the high level protocols
$ ifconfig --> Status of all interfaces
eth0 Link encap:Ethernet HWaddr 00:50:FC:2A:2C:48
inet addr:192.0.34.7 Bcast:192.0.34.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 b) TX bytes:240 (240.0 b)
Interrupt:11 Base address:0xf000
eth1 Link encap:Ethernet HWaddr 00:60:CC:AA:2C:9C
inet addr:192.168.0.20 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 b) TX bytes:240 (240.0 b)
Interrupt:11 Base address:0xc000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:1407 errors:0 dropped:0 overruns:0 frame:0
TX packets:1407 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:149180 (145.6 Kb) TX bytes:149180 (145.6 Kb)
$ ifconfig eth0 --> Status of eth0 interface
eth0 Link encap:Ethernet HWaddr 00:50:FC:2A:2C:48
inet addr:192.0.34.7 Bcast:192.0.34.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 b) TX bytes:240 (240.0 b)
Interrupt:11 Base address:0xf000
$ ifconfig eth0 IP --> Set eth0 to IP
$ ifconfig eth0:x IP --> Set a multiplexed (aliased) IP on eth0
$ ifconfig eth0 down --> Bring eth0 down
$ ifdown eth0 --> ditto
$ ifconfig eth0 up --> Bring eth0 up
$ ifup eth0 --> ditto
$ ifconfig eth0 -arp --> Disable use of arp protocol on this interface
$ ifconfig eth0 -allmulti
Enable or disable all-multicast mode. If selected, all multicast packets on the network will
be received by the interface.
$ ifconfig eth0 -promisc
Turn off promiscuous mode of the interface eth0. If on, it tells the interface to send all traffic
on the NW to the kernel, not just traffic addressed to the m/c. Check with ifconfig or netstat -i
$ ifconfig eth0 hw ether CC:CC:CC:CC:CC:CC
Changes the MAC address. Do an 'ifconfig eth0 down' first, change it, then 'ifconfig eth0 up'.
The MAC addr is changed.
$ ifconfig eth0 172.16.1.77 broadcast 172.16.1.255 netmask 255.255.0.0
Changes IP/BC/netmask all in one go!
$ ifconfig eth0 mtu 800
Change mtu to 800
ethtool - Display or change ethernet card settings
$ ethtool ethX
$ ethtool -h
$ ethtool -a ethX
$ ethtool -A ethX [autoneg on|off] [rx on|off] [tx on|off]
$ ethtool -c ethX
$ ethtool -C ethX [adaptive-rx on|off] [adaptive-tx on|off] [rx-usecs N] [rx-frames N] [rx-
usecs-irq N] [rx-frames-irq N] [tx-usecs N] [tx-frames N] [tx-usecs-irq N] [tx-frames-irq N]
[stats-block-usecs N] [pkt-rate-low N] [rx-usecs-low N] [rx-frames-low N] [tx-usecs-low N]
[tx-frames-low N] [pkt-rate-high N]
[rx-usecs-high N] [rx-frames-high N] [tx-usecs-high N]
[tx-frames-high N] [sample-interval N]
$ ethtool -g ethX
$ ethtool -G ethX [rx N] [rx-mini N] [rx-jumbo N] [tx N]
$ ethtool -i ethX
$ ethtool -d ethX
$ ethtool -e ethX
$ ethtool -k ethX
$ ethtool -K ethX [rx on|off] [tx on|off] [sg on|off]
$ ethtool -p ethX [N]
$ ethtool -r ethX
$ ethtool -S ethX
$ ethtool -t ethX [offline|online]
$ man ethtool
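For example (assuming the NIC and its driver support these operations):
$ ethtool eth0                                       -> show link status, speed and duplex
$ ethtool -i eth0                                    -> show driver name, version and firmware
$ ethtool -s eth0 speed 100 duplex full autoneg off  -> force 100 Mb/s full duplex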
ping - TCP/IP Diagnostic Tool
Send ICMP ECHO_REQUEST packets to network hosts.
There are two types of ping -
The std Unix ping, which sends an ICMP ECHO REQUEST and receives an ICMP ECHO
REPLY from the remote host if it is UP and running.
The other is to send a UDP or TCP pkt to port 7 [echo] of the remote host and see whether
whatever you type is echoed back. If it is, the host is UP.
$ telnet remote-host echo   (or port 7)
Whatever you type will be echoed back to you - the system is alive!
$ ping -c <count> -a -n IP/Hostname   [Count / Audible ping / No name resolution]
ping sends a packet of 64 bytes by default: 56 ICMP data bytes + 8 bytes of ICMP
header data.
$ ping -s 1600 203.12.10.20
By sending a larger pkt size than the MTU of Ethernet [1500], you can force fragmentation. You
can then identify low-level media issues or a congested NW. Since ping works at the IP
layer, no server process [HTTP/DNS] is reqd to be running on the target host - just a
running kernel.
Check the ICMP seq no to see that no pkts are dropped and that they arrive in sequence.
Run
$ traceroute <host> --> to get the path the pkt is taking, and then track down the
offending mid-way routers by pinging each in succession.
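A minimal diagnostic sequence (the target address is an example):
$ ping -c 4 192.0.34.1          -> 4 probes; watch icmp_seq, time and % packet loss
$ ping -c 4 -s 1600 192.0.34.1  -> oversize payload, forces fragmentation on Ethernet
$ traceroute 192.0.34.1         -> show every hop on the path
$ traceroute -n 192.0.34.1      -> same, without reverse DNS lookups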
$ route ['add'/'del'] [-net|-host] 'addr' {gw 'IP'} {netmask 'mask'}
'interface'
Default route:
/etc/sysconfig/network
GATEWAY=IP
or
$ route add default gw gateway-IP-addr
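For example, static routes can be added and removed by hand (addresses are illustrative):
$ route add -host 192.168.5.7 gw 192.168.0.1 eth1                  -> route to a single host
$ route add -net 10.1.0.0 netmask 255.255.0.0 gw 192.168.0.1 eth1  -> route to a whole network
$ route del -net 10.1.0.0 netmask 255.255.0.0                      -> remove it again
$ route add default gw 192.0.34.1                                  -> set the default route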
Routing determines the path a pkt takes from its source thru a maze of NWs to its dest.
It is like asking for directions in an unfamiliar place: one person may point you to the right city,
another to a street, another to the right bldg.
Routing is done at the IP layer.
When a pkt bound for some other host arrives, the path is found by matching the dest IP
addr against the Kernel Routing Table [KRT]. If it matches a route in the KRT, the pkt is
fwd'ed to the 'next-hop gateway' IP addr associated with the route.
Two special cases are possible here:
Case I: The pkt may be destined for some host on a directly connected NW. In this case
the 'next-hop gateway' IP addr in the KRT will be one of the localhost's own interfaces and
the pkt is sent directly to its dest. This type of route is what you normally create with the ifconfig
cmd when you configure an interface.
Case II: No route in the KRT matches the dest addr that the pkt wishes to reach. The
default route [gateway] is invoked, or an error is returned. Most NWs have only one way out and that
is the default route. On the Internet backbone, the routers do not have default routes - the
buck stops here. If they do not have a routing entry for a dest, the dest cannot be reached
and a "network unreachable" ICMP error is sent to the sender.
The KRT contains info like "To get to NW X from m/c Y, send the pkt to m/c Z with a cost of 1
[metric]", along with TTL and reliability values for that route.
Routing Policy:
- Static routes : for small, unconnected NWs
- Dynamic routes : many subnets, large NWs, connected to the Internet
- Static/Dynamic : a combination of both approaches
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.0.34.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
127.0.0.1       0.0.0.0         255.255.255.0   U     0      0        0 lo
0.0.0.0         192.0.34.1      0.0.0.0         UG    0      0        0 eth0
$ route -n
Kernel IP routing table
   Destination      Gateway          Genmask          Flags Metric Ref    Use Iface
1. 132.236.227.0    132.236.227.93   255.255.255.0    U     0      0        0 eth0
2. 132.236.212.0    132.236.212.1    255.255.255.192  U     0      0        0 eth1
3. 127.0.0.1        0.0.0.0          255.255.255.0    U     0      0        0 lo
4. default          132.236.227.1    0.0.0.0          UG    0      0        0 eth0
5. 132.236.220.64   132.236.212.6    255.255.255.192  UG    0      0        0 eth1
Routes 1 and 2 were added by ifconfig when the eth0 and eth1 interfaces were configured
at bootup
This means that to reach machine 132.236.227.93 on the NW 132.236.227.0, the GW is
machine 132.236.227.93 - the machine itself is its own GW, which implies it can be reached
directly on this NW and no other m/c has to be consulted.
Ditto for the next one (route 2).
Route 3 is the loopback interface, a pseudo-device that prevents pkts sent from the host to
itself from going out on the NW; instead, they are transferred directly.
Route 4 is the default route, added with:
$ route add default gw 132.236.227.1 eth0
It says:
Pkts not explicitly addressed to any of the 3 NWs listed [or to the m/c itself] will be
sent to the default GW host, 132.236.227.1.
Route 5 says:
To reach NW 132.236.220.64/26, pkts must be sent to GW host 132.236.212.6 thru eth1.
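By hand, that entry could be recreated (and removed) like this:
$ route add -net 132.236.220.64 netmask 255.255.255.192 gw 132.236.212.6 eth1
$ route del -net 132.236.220.64 netmask 255.255.255.192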
netstat - Monitoring your TCP/IP NW
Print network connections, routing tables, interface statistics, masquerade connections, and
multicast memberships.
$ netstat -a :
Displays the status of all active connections, including inactive [listening] servers waiting for
connections
$ netstat -l :
Show only inactive or listening connections, not established ones
$ netstat -p :
Show the PID and name of the program to which each socket belongs
$ netstat -o :
Include information related to networking timers
$ netstat -r :
Show the kernel routing table
$ netstat -vatnp | grep <servicename>
$ netstat -tulnp | grep <servicename>
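For example, to check whether the FTP daemon is up and listening (vsftpd is just an example service):
$ netstat -tulnp | grep :21     -> anything listening on the FTP control port?
$ netstat -vatnp | grep vsftpd  -> every TCP socket owned by vsftpd, with its PID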
State : TCP/IP connection [socket] state
ESTABLISHED
The socket has an established connection.
SYN_SENT
The socket is actively attempting to establish a connection to the remote
host
Debug Note :
If you find a connection that stays in this state, then a local process is trying very
hard to contact a non-existent or inaccessible NW server.
SYN_RECV
A connection request has been received from a remote host and is being
initialized
FIN_WAIT1
The socket is closed, and the connection is shutting down.
FIN_WAIT2
Connection is closed, and the socket is waiting for a shutdown from the remote
end.
TIME_WAIT
The socket is waiting after close to handle packets still in the network.
CLOSED The socket is not being used.
CLOSE_WAIT
The remote host end has shut down its connection, and the localhost is waiting
for the socket to close.
LAST_ACK
The remote end has shut down, and the socket is closed. Waiting for
acknowledgement.
LISTEN The socket is listening for incoming connections. Specify the -l option to see this.
CLOSING
Both sockets are shut down but we still don't have all our data sent.
UNKNOWN
The state of the socket is unknown.
USER The login ID of the user who owns the socket
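A handy one-liner to summarise how many sockets are in each state (a quick sketch; column 6 of the netstat -ant output holds the state):
$ netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn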
FTP
Active FTP
Passive FTP
Users
Regular FTP
Anonymous FTP
Vsftpd.conf
# Allow anonymous FTP?
anonymous_enable=YES
# The directory which vsftpd will try to change into after an anonymous login. (Default =
/var/ftp)
anon_root=/data/directory
# Uncomment this to allow local users to log in.
local_enable=YES
# Uncomment this to enable any form of FTP write command.
# (Needed even if you want local users to be able to upload files)
write_enable=YES
# Uncomment to allow the anonymous FTP user to upload files. This only
# has an effect if global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
# anon_upload_enable=YES
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
# anon_mkdir_write_enable=YES
# Activate logging of uploads/downloads.
xferlog_enable=YES
# You may override where the log file goes if you like.
# The default is shown below.
xferlog_file=/var/log/vsftpd.log
Other vsftpd.conf Options
There are many other options you can add to this file:
- Limiting the maximum number of client connections (max_clients)
- Limiting the number of connections by source IP address (max_per_ip)
- The maximum rate of data transfer per anonymous login (anon_max_rate)
- The maximum rate of data transfer per non-anonymous login (local_max_rate)
Descriptions of these and more can be found in the vsftpd.conf man pages.
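A minimal sketch of these limits in /etc/vsftpd/vsftpd.conf (all values below are examples; the two rates are in bytes per second):
# At most 50 simultaneous clients, 4 connections per source IP
max_clients=50
max_per_ip=4
# Cap anonymous transfers at 50 KB/s and local-user transfers at 500 KB/s
anon_max_rate=51200
local_max_rate=512000
$ service vsftpd restart   -> reload the daemon after editing the file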
Anonymous upload
$ mkdir /var/ftp/pub/upload
$ chmod 722 /var/ftp/pub/upload
# Change the login banner:
ftpd_banner=New Banner Here
# Make the server read-only (disable all FTP write commands):
write_enable=NO
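For a working anonymous-upload setup, by contrast, the relevant vsftpd.conf options pulled together would look roughly like this (a sketch, assuming the /var/ftp/pub/upload directory created above; uploads need write_enable=YES, unlike the read-only example):
# Allow anonymous logins and let them upload into writable directories
anonymous_enable=YES
write_enable=YES
anon_upload_enable=YES
# Optionally let anonymous users create directories too
anon_mkdir_write_enable=YES
# Log every upload/download
xferlog_enable=YES
$ service vsftpd restart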
Check files under the following:
$ cd /etc/vsftpd/
$ ls
ftpusers  user_list  vsftpd.conf  vsftpd.conf_migrate.sh
Types of FTP
From a networking perspective, the two main types of FTP are active and passive. In active
FTP, the FTP server initiates a data transfer connection back to the client. For passive FTP,
the connection is initiated by the FTP client. These are illustrated in Figure 15-1.
Figure 15-1: Active and Passive FTP Illustrated
From a user management perspective there are also two types of FTP: regular FTP, in
which files are transferred using the username and password of a regular user of the FTP server,
and anonymous FTP, in which general access is provided to the FTP server using a
well-known universal login method.
Take a closer look at each type.
Active FTP
The sequence of events for active FTP is:
1. Your client connects to the FTP server by establishing an FTP control connection to port
21 of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2. Whenever the client requests data over the control connection, the server initiates data
transfer connections back to the client. The source port of these data transfer
connections is always port 20 on the server, and the destination port is a high port
(greater than 1024) on the client.
3. Thus the ls listing that you asked for comes back over the port 20 to high port
connection, not the port 21 control connection.
FTP active mode therefore transfers data in a counterintuitive way relative to the TCP standard, as
it selects port 20 as its source port (not a random high port greater than 1024) and
connects back to the client on a random high port that has been pre-negotiated on the port
21 control connection.
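For example, the stock command-line ftp client starts out in active mode; its passive command toggles between the two modes (the host and file names here are hypothetical):
$ ftp ftp.example.com
ftp> passive        -> switch to passive mode (run it again to switch back)
ftp> ls             -> the listing now arrives over a client-initiated data connection
ftp> get notes.txt
ftp> bye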
Active FTP may fail in cases where the client is protected from the Internet via many-to-one
NAT (masquerading). This is because the firewall will not know which of the many hosts
behind it should receive the return connection.
Passive FTP
Passive FTP works differently:
1. Your client connects to the FTP server by establishing an FTP control connection to port
21 of the server. Your commands such as ls and get are sent over that connection.
2. Whenever the client requests data over the control connection, the client initiates the
data transfer connections to the server. The source port of these data transfer
connections is always a high port on the client with a destination port of a high port on
the server.
Passive FTP should be viewed as the server never making an active attempt to connect to
the client for FTP data transfers. Because the client always initiates the required connections,
passive FTP works better for clients protected by a firewall.
As Windows defaults to active FTP, and Linux defaults to passive, you'll probably have to
accommodate both forms when deciding upon a security policy for your FTP server.
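If you settle on passive FTP, vsftpd can pin the passive data connections to a fixed port range so a firewall only needs that range opened (a sketch; the range is an example):
# /etc/vsftpd/vsftpd.conf
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100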
Regular FTP
By default, the VSFTPD package allows regular Linux users to copy files to and from their
home directories with an FTP client using their Linux usernames and passwords as their
login credentials.
VSFTPD also has the option of allowing this type of access to only a group of Linux users,
enabling you to restrict the addition of new files to your system to authorized personnel.
The disadvantage of regular FTP is that it isn't suitable for general download distribution of
software as everyone either has to get a unique Linux user account or has to use a shared
username and password. Anonymous FTP allows you to avoid this difficulty.
Anonymous FTP
Anonymous FTP is the choice of Web sites that need to exchange files with numerous
unknown remote users. Common uses include downloading software updates and MP3s
and uploading diagnostic information for a technical support engineer's attention. Unlike
regular FTP where you login with a preconfigured Linux username and password,
anonymous FTP requires only a username of anonymous and your email address for the
password. Once logged in to a VSFTPD server, you automatically have access to only the
default anonymous FTP directory (/var/ftp in the case of VSFTPD) and all its subdirectories.
Good GUI ftp clients
- kasablanca
- ftpcube
- gftp
- iglooftp
- konqueror
- filezilla
Console ftp clients
- GNU Midnight Commander
- ftp
- yafc
- ncftp
Problems With FTP And Firewalls
FTP frequently fails when the data has to pass through a firewall, because firewalls are
designed to limit data flows to predictable TCP ports and FTP uses a wide range of
unpredictable TCP ports. You have a choice of methods to overcome this.
Note: The Appendix, "Codes, Scripts, and Configurations", contains examples of how to
configure the Linux firewall so that VSFTPD can function with both active and passive FTP.
Client Protected By A Firewall Problem
Typically firewalls don't allow any incoming connections at all, which frequently blocks
active FTP from functioning. With this type of FTP failure, the active FTP connection
appears to work when the client initiates an outbound connection to the server on port 21.
The connection then appears to hang, however, as soon as you use the ls, dir, or get
commands. The reason is that the firewall is blocking the return connection from the server
to the client (from port 20 on the server to a high port on the client). If a firewall allows all
outbound connections to the Internet, then passive FTP clients behind a firewall will usually
work correctly, as the clients initiate all the FTP connections.
Solution:
The table below shows the general rules you'll need to allow FTP clients through a firewall:
Client Protected by Firewall - Required Rules for FTP

Method           Source Address          Source Port  Destination Address     Destination Port  Connection Type
-- Allow outgoing control connections to server --
Control channel  FTP client/network      High (1)     FTP server (2)          21                New
                 FTP server (2)          21           FTP client/network      High              Established (3)
-- Allow the client to establish data channels to remote server --
Active FTP       FTP server (2)          20           FTP client/network      High              New
                 FTP client/network      High         FTP server (2)          20                Established (3)
Passive FTP      FTP client/network      High         FTP server (2)          High              New
                 FTP server (2)          High         FTP client/network      High              Established (3)

Notes:
(1) Greater than 1024.
(2) In some cases, you may want to allow all Internet users to have access, not just a specific
client, server or network.
(3) Many home-based firewall/routers automatically allow traffic for already established connections.
This rule may not be necessary in all cases.
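A companion sketch of these client-side rules in iptables terms (not the book's appendix examples): loading the kernel's FTP connection-tracking helper marks the data channels as RELATED, so one stateful rule covers both active and passive transfers.
$ modprobe ip_conntrack_ftp   -> FTP conntrack helper (called nf_conntrack_ftp on newer kernels)
$ iptables -A OUTPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
$ iptables -A INPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT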
Server Protected By A Firewall Problem
Typically firewalls don't let any connections come in at all. When an incorrectly configured
firewall protects an FTP server, the FTP connection from the client doesn't appear to work
at all, for both active and passive FTP.
Solution:
Table 15-2: Rules needed to allow FTP servers through a firewall

Method           Source Address          Source Port  Destination Address     Destination Port  Connection Type
-- Allow incoming control connections to server --
Control channel  FTP client/network (2)  High (1)     FTP server              21                New
                 FTP server              21           FTP client/network (2)  High              Established (3)
-- Allow server to establish data channel to remote client --
Active FTP       FTP server              20           FTP client/network (2)  High              New
                 FTP client/network (2)  High         FTP server              20                Established (3)
Passive FTP      FTP client/network (2)  High         FTP server              High              New
                 FTP server              High         FTP client/network (2)  High              Established (3)

(Footnotes 1, 2 and 3 are the same as for the previous table.)
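And a matching iptables sketch for the server side, again leaning on the conntrack helper so that both active-mode (server port 20 outbound) and passive-mode (client to a high server port) data channels are accepted as RELATED:
$ modprobe ip_conntrack_ftp
$ iptables -A INPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
$ iptables -A INPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A OUTPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT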