UNIX / Linux Tutorial: Linux Quick Command Reference

Using Linux is in many ways similar to using MS-DOS. In this book, the author has attempted to explain the many different commands within Linux in a simple manner, easy enough for anyone to comprehend and master. Linux Quick Command Reference is one of the best books for learning the fantastic Linux Operating System.
Chapter 1
1.3 First Steps Into Linux | 1.3.1 Moving Around | 1.3.2 Look At The Contents Of Directories | 1.3.3 Creating New Directories | 1.3.5 Moving Files | 1.3.6 Deleting Files And Directories | 1.3.7 Looking At Files | 1.3.8 Getting Online Help | 1.4 Accessing MS-DOS Files | 1.5 Summary Of Basic Unix Commands | 1.8 Wildcards | 1.9.3 Pipes | 1.9.4 Non-Destructive Redirection Of Output | 1.10 File Permissions | 1.10.1 Concepts Of File Permissions | 1.10.2 Interpreting File Permissions | 1.10.4 Changing Permissions | 1.11 Managing File Links | 1.11.1 Hard Links | 1.11.2 Symbolic Links | 1.12 Job Control | 1.12.1 Jobs And Processes | 1.12.2 Foreground And Background | 1.12.3 Backgrounding And Killing Jobs | 1.13.4 Deleting Text | 1.13.5 Changing Text | 1.13.6 Commands For Moving The Cursor | 1.13.7 Saving Files And Quitting vi | 1.13.8 Editing Another File

Chapter 2: System Administration
2.1 The Root Account | 2.2 Booting The System | 2.2.1 Using LILO | 2.3 Shutting Down | 2.4.1 Mounting File Systems | 2.4.2 Device Driver Names | 2.4.3 Checking File Systems | 2.5 Using A Swap File | 2.6.1 User Management Concepts | 2.6.2 Adding Users | 2.6.3 Deleting Users | 2.6.4 Setting User Attributes | 2.6.5 Groups | 2.7.3 Putting Them Together | 2.8 Using Floppies And Making Backups | 2.8.1 Using Floppies For Backups | 2.8.2 Backups With A Zip Drive | 2.9.4 Upgrading The Libraries | 2.9.5 Upgrading gcc | 2.9.6 Upgrading Other Software | 2.10 Miscellaneous Tasks
1.1 Introduction
If you’re new to Unix or Linux, you may be a bit intimidated by the size and apparent complexity of the
system before you. This chapter does not go into great detail or cover advanced topics. Instead, I want
you to hit the ground running.
I assume very little here about your background, except perhaps that you have some familiarity with
personal computer systems and MS-DOS. However, even if you’re not an MS-DOS user, you should be
able to understand everything here. At first glance, Unix and Linux look a lot like MS-DOS--after all, parts of MS-DOS were modeled on the CP/M Operating System, which in turn was modeled on Unix. However, only the most superficial features of Unix and Linux resemble
MS-DOS. Even if you’re completely new to the PC world, this tutorial should help.
Before we begin: Don't be afraid to experiment. The system won't bite you! You can't destroy anything by working on the system. Both operating systems have built-in security features to prevent "normal" users from damaging files that are essential to the system. Even so, the worst thing that can happen is that you may delete some or all of your own files and have to re-install the system. So, at this point, you have nothing to lose.
Note: From now on, I will refer to both operating systems as "Linux" unless noted otherwise. The majority of commands listed or referred to in this tutorial are interchangeable between the two systems.
1.2 Basic Concepts
Linux is a multitasking, multi-user Operating System, which means that many people can run many
different applications on one computer at the same time. This differs from MS-DOS, where only one
person can use the system at any one time. Under Linux, to identify yourself to the system, you must log
in, which entails entering your login name (the name the system uses to identify you), and entering
your password, which is your personal key for logging in to your account. Because only you know your
password, no one else can log into the system under your user name.
On traditional Unix Systems, the System Administrator assigns you a user name and an initial password
when you are given an account on the system. However, with Linux, if you’re the system administrator,
you must set up your own account before you can log in. For the following discussions, we’ll use the
imaginary user name, "patrick".
In addition, each system has a host name assigned to it. It is this host name that gives your machine a
name. The host name is used to identify individual machines on a network, but even if your machine isn’t
networked, it should have a host name. For our examples below, the system’s host name is "house".
1.2.1 Creating An Account
Before you can use a newly installed Linux system, you must set up a user account for yourself. It's usually
not a good idea to use the root account for normal use; you should reserve the root account for running
privileged commands and for maintaining the system as discussed below.
In order to create an account for yourself, log in as root and use the useradd or adduser command.
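For example, logged in as root you might create the account and give it an initial password with the two commands below; on some distributions, adduser will instead walk you through a series of prompts:
/# useradd patrick
/# passwd patrick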
1.2.2 Logging In
At login time, you’ll see a prompt resembling the following:
house login:
Enter your user name and press the Enter key. Our hero, patrick, would type:
house login: patrick
password:
Next, enter your password. The characters you enter won’t be echoed to the screen, so type carefully. If
you mistype your password, you’ll see the message:
login incorrect.
and you’ll have to try again.
Once you have correctly entered the user name and password, you are officially logged into the system
and are free to roam.
1.2.3 Virtual Consoles
The System’s console is the monitor and keyboard connected directly to the system. (Because Linux is a
multi-user Operating System, you may have other terminals connected to serial ports on your system, but
these would not be the console.)
Linux, like some other versions of Unix, provides access to virtual consoles, or VCs, which let you have more than one login session on the console at one time.
To demonstrate this, login to your system. Next, press Alt-F2. You should see the login: prompt again.
You’re looking at the second virtual console. To switch back to the first VC, press Alt-F1. Voila!!!
1.2.4 Shells And Commands

Most of your interaction with the system is through a shell, the program that reads each command you type at the prompt and carries it out. Suppose, for example, that you type the command make love at the shell prompt. The shell first checks whether make is one of its own internal commands (cd is one of these commands, and we will go into them later). The shell also checks to see if the command is an alias, or substitute name, for another command. If neither of these conditions apply, the shell looks for a program, on disk, having the specified name. If successful, the shell runs the program, sending the arguments specified on the command line.
In our example, the shell looks for a program called make and runs it with the argument love. Make is a
program often used to compile large programs and takes as arguments the name of a "target" to compile.
In the case of "make love", we instructed make to compile the target love. Because make can’t find a
target by this name, it fails with a humorous error message and returns us to the shell prompt.
What happens if we type a command to a shell and the shell can’t find a program having the specified
name? Well, we can try the following:
/home/patrick# eat dirt
eat: command not found
/home/patrick#
Quite simply, if the shell can't find a program having the name given on the command line (here, "eat"), it prints an error message. You'll often see this error message if you mistype a command (for example, if you had typed "mkae love" instead of "make love").
1.2.5 Logging Out
Before we delve much further, we should tell you how to log out of the System. At the shell prompt, use
the command:
/home/patrick# exit
to log out. There are other ways of logging out, but this is the most foolproof one for now.
1.2.6 Changing Your Password
You should also know how to change your password. The command passwd prompts you for your old
password and a new password. It also asks you to re-enter the new password for validation. Be careful
not to forget your password--if you do, you will have to ask the System Administrator to reset it for you.
(If you are the System Administrator, read on.)
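A typical passwd session looks roughly like the following; the exact wording of the prompts varies from system to system:
/home/patrick# passwd
Changing password for patrick
Old password:
New password:
Retype new password:
/home/patrick#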
1.2.7 Files And Directories
Under most Operating Systems (including Linux), there is the concept of a file, which is just a bundle of
information given a name (called a filename). Examples of files might be your history term paper, an
e-mail message, or an actual program that can be executed. Essentially, anything saved on disk is saved
in an individual file.
Files are identified by their file names. For example, you could name your history paper with the file
name history-paper. These names usually identify the file and its contents in some form that is
meaningful to you. There is no standard format for file names as there is under MS-DOS and some other
Operating Systems; in general, a file name can contain any character (except the / character -- see the discussion of path names, below) and is limited to 256 characters in length.
With the concept of files comes the concept of directories. A directory is a collection of files. You can
think of it as a folder that contains many different files. Directories are given names, with which you can
identify them. Furthermore, directories are maintained in a tree-like structure; that is, directories may
contain other directories.
Consequently, you can refer to a file by its path name, which is made up of the filename, preceded by the
name of the directory containing the file. For example, let’s say that Patrick has a directory called papers,
which contains three files: history-final, english-lit and masters-thesis. Each of these three files contains
information for three of Patrick's ongoing projects. To refer to the english-lit file, Patrick can specify the
file’s pathname, as in:
papers/english-lit
As you can see, the directory and filename are separated by a single slash (/). For this reason, filenames themselves cannot contain the / character. MS-DOS users will find this convention familiar, although in the MS-DOS world the backslash (\) is used instead.
As mentioned, directories can be nested within each other as well. For example, let's say that there is another directory within papers, called notes, and that notes contains the file cheat-sheet. The path name of this file is:
papers/notes/cheat-sheet
Therefore, a path name is really like a path to the file. The directory that contains a given subdirectory is
known as the parent directory. Here, the directory papers is the parent of the notes directory.
1.2.8 The Directory Tree
Most Linux Systems use a standard layout for files so that system resources and programs can be easily
located. This layout forms a directory tree, which starts at the "/" directory, also known as the "root
directory". Directly underneath / are important subdirectories:
/bin,/etc,/dev and /usr, among others. These directories in turn contain other directories which contain
System configuration files, programs and so on.
In particular, each user has a home directory, which is the directory set aside for that user to store their
files. In the examples above, all of Patrick’s files (like cheat-sheet and history-final) are contained in
Patrick’s home directory. Usually, user home directories are contained under /home and are named for
the user owning that directory.
Patrick’s home directory is: /home/patrick
1.2.9 The Current Working Directory
At any moment, commands that you enter are assumed to be relative to your current working directory.
You can think of your working directory as the directory in which you are currently "located". When you
first login, your working directory is set to your home directory--/home/patrick, in our case. Whenever
you refer to a file, you may refer to it in relationship to your current working directory, rather than
specifying the full pathname of the file.
Here’s an example. Patrick has the directory papers and papers contains the file history-final. If Patrick
wants to look at this file, he can use the command:
/home/patrick# more /home/patrick/papers/history-final
The more command simply displays a file, one screen at a time. However, because Patrick’s current
working directory is /home/patrick, he can instead refer to the file relative to his current location by using
the command:
/home/patrick# more papers/history-final
If you begin a filename (like papers/final) with a character other than /, you’re referring to the file in
terms relative to your current working directory. This is known as a relative path name.
On the other hand, if you begin a file name with a /, the system interprets this as a full path name--that is,
a path name that includes the entire path to the file, starting from the root directory, /. This is known as an
absolute path name.
1.3.1 Moving Around

The command used to move around the directory tree is cd, short for "change directory". To move into the papers directory, Patrick would type:
/home/patrick# cd papers
/home/patrick/papers#
To move back up to the parent directory, he can use:
/home/patrick/papers# cd ..
/home/patrick#
(Note the space between the "cd" and the (".."). Every directory has an entry named ".." which refers to the parent directory.
Similarly, every directory has an entry named "." which refers to itself. Therefore, the command:
/home/patrick/papers# cd .
will get us nowhere.
You can also use absolute pathnames with the cd command. To cd into Karl’s home directory, we can use the command:
/home/patrick/papers# cd /home/karl
/home/karl#
Also, using cd with no argument will return you to your own home directory.
/home/karl# cd
/home/patrick#
1.3.2 Look At The Contents Of Directories
Now that you know how to move around directories, you might think, "So what?" Moving around directories is fairly useless
by itself, so check out this new command, ls. The ls command displays a listing of files and directories, by default from your
current directory. For example:
/home/patrick# ls
mail
letters
papers
/home/patrick#
Here we can see that Patrick has three entries in his current directory: mail, letters and papers. This doesn't tell us much -- are these directories or files? We can use the -F option of the ls command to get more detailed information:
/home/patrick# ls -F
mail/
letters/
papers/
/home/patrick#
From the / appended to each filename, we know that these three entries are in fact subdirectories.
Using ls -F may also append * to the end of a filename in the resulting list, which would indicate that the file is an executable, or a program which can be run. If nothing is appended to the filename using ls -F, the file is a "plain old file"; that is, it's neither a
directory nor an executable.
In general, each Unix command may take a number of options in addition to other arguments. These options usually begin with
a - as demonstrated above with the -F option. The -F option tells ls to give more information about the type of the files
involved--in this case, printing a / after each directory name.
If you give ls a directory name, the System will print the contents of that directory.
/home/patrick# ls -F papers
english-lit
history-final
masters-thesis
notes/
/home/patrick#
Or, for a more interesting listing, let’s see what’s in the System’s /etc directory:
/home/patrick# ls /etc
If you're an MS-DOS user, you may notice that the filenames can be longer than 8
characters, and can contain periods in any position. You can even use more than one
period
in a filename.
Let's move to the top of the directory tree, and then down to another directory with
the
commands:
/home/patrick# cd ..
/home# cd ..
/# cd usr
/usr# cd bin
/usr/bin#
Try moving around various directories, using ls and cd. In some cases, you may
run into the foreboding "Permission denied" error message. This is simply UNIX
security kicking in: in order to use the ls or cd commands, you must have permission
to
do so.
1.3.3 Creating New Directories

It's time to learn how to create directories. This involves the use of the mkdir command. Try the following:
/home/patrick# mkdir foo
/home/patrick# ls -F
Mail/
foo/
letters/
papers/
/home/patrick# cd foo
/home/patrick/foo# ls
/home/patrick/foo#
Congratulations! You made a new directory and moved into it. Since there aren't any
files in this new directory, let's learn how to copy files from one place to another.
/home/patrick/foo# cp /etc/termcap .
/home/patrick/foo# cp /etc/shells .
/home/patrick/foo# ls -F
shells termcap
/home/patrick/foo#
The cp command copies the files listed on the command line to the file or directory
given as the last argument. Notice that we use "." to refer to the current directory.
1.3.5 Moving Files

The mv command moves files, rather than copying them. The syntax is very straightforward:
/home/patrick/foo# mv termcap sells
/home/patrick/foo# ls -F
sells shells
/home/patrick/foo#
Notice that the termcap file has been renamed sells. You can also use the mv
command to move a file to a completely new directory.
Note: mv and cp will overwrite a destination file having the same name without asking
you. Be careful when you move a file into another directory. There may already be a
file
having the same name in that directory, which you'll overwrite!
1.3.6 Deleting Files And Directories

You now have an ugly rhyme developing with the use of the ls command. To delete a file, use the rm command, which stands for "remove", as shown here:
/home/patrick/foo# rm sells
/home/patrick/foo# ls -F
shells
/home/patrick/foo#
We're left with nothing but shells, but we won't complain. Note that rm by default
won't prompt you before deleting a file—so be careful.
1.3.7 Looking At Files

The commands more and cat are used for viewing the contents of files. more displays a file one screen at a time, while cat displays the whole file at once. To look at the file shells, for example, use the command:
/home/patrick/foo# more shells
In case you're interested in what shells contains, it's a list of valid shell programs on your system. On most systems, this includes /bin/sh, /bin/bash and /bin/csh. We'll talk about these different types of shells later.
While using more, press Space to display the next page of text, and b to display
the previous page. There are other commands available in more as well, these are just
the
basics. Pressing q will quit more.
Quit more and try cat /etc/termcap. The text will probably fly by too quickly
for you to read it all. The name "cat" actually stands for "concatenate",
which is the real use of the program. The cat command can be used to concatenate the
contents
of several files and save the result to another file. You will see this use of cat again later.
1.3.8 Getting Online Help

Almost every UNIX system, including Linux, provides a facility known as manual pages. These manual pages contain online documentation for system commands, resources, configuration files and so on.
The command used to access manual pages is man. If you're interested in learning
about other options of the ls command, you can type:
/home/patrick# man ls
Unfortunately, most manual pages are written for those who already have some idea of
what the command or resource does. For this reason, manual pages usually contain only
the
technical details of the command, without much explanation. However, manual pages can
be an invaluable resource for jogging your memory if you forget the syntax of a
command.
Manual pages will also tell you about commands that we don't cover in this book.
I suggest that you try man for the commands that we've already gone over and whenever
I introduce a new command. Some of these commands won't have manual pages, for several reasons. For example, the command could be an internal shell command or an alias, neither of which has a manual page of its own. One such command is cd: the shell itself actually processes cd—there is no separate program that implements this command.
1.4 Accessing MS-DOS Files

If, for some twisted and bizarre reason, you want to access files from MS-DOS, it's easily done under Linux.
The usual way to access MS-DOS files is to mount an MS-DOS partition or floppy under Linux, allowing you to access the files directly through the file system. For example, if you have an MS-DOS floppy in /dev/fd0, the command:
/home/patrick# mount -t msdos /dev/fd0 /mnt
mounts it on the directory /mnt.
You can also mount an MS-DOS partition of your hard drive for access. If you have an MS-DOS partition on /dev/hda1, the command:
/home/patrick# mount -t msdos /dev/hda1 /mnt
mounts it. Be sure to umount the partition when you're done using it. You can have an MS-DOS partition automatically mounted at boot time if you include the entry in /etc/fstab. The following line in /etc/fstab will mount an MS-DOS partition on /dev/hda1 on the directory /dos:
/dev/hda1  /dos  msdos  defaults  0  0
You can also mount the VFAT file systems that are used by Windows 95/98:
/home/patrick# mount -t vfat /dev/hda1 /mnt
This allows access to the long filenames of Windows 95. This only applies to
partitions
that actually have the long filenames stored. You can't mount a normal FAT16 file
system
and use this to get long filenames.
The Mtools software may also be used to access MS-DOS files. The commands mcd,
mdir, mcopy all behave like their MS-DOS counterparts. If you install Mtools, there
should be manual pages available for these commands.
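For example, assuming Mtools is installed and its default configuration maps drive a: to your first floppy drive, commands along these lines list the floppy's contents and copy a (hypothetical) file named report.txt into the current directory:
mdir a:
mcopy a:report.txt .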
Accessing MS-DOS files is one thing; running MS-DOS programs is another. There is
an MS-DOS Emulator under development for Linux; it is widely available, and included
in
most distributions. It can also be retrieved from a number of locations, including
the various
Linux FTP sites listed in Appendix B. The MS-DOS Emulator is reportedly powerful
enough to run a number of applications, including WordPerfect, from Linux. However,
Linux and MS-DOS are vastly different operating systems. The power of any MS-DOS
emulator under UNIX is limited. In addition, a Microsoft Windows emulator that runs
under X Windows is under development. And there's always KDE and GNOME.
1.5 Summary Of Basic Unix Commands

This section introduces some of the most useful basic commands of a UNIX system, including those that are covered in the previous sections.
Note that options usually begin with "-", and in most cases you can specify more than
one option with a single "-". For example, rather than use the command ls -l -F, you
can use ls -lF.
Rather than listing every option for each command, we only present useful or
important
commands at this time. In fact, most of these commands have many options that you'll
never use. You can use man to see the manual pages for each command, which list all
of
the available options.
Also note that many of these commands take as arguments a list of files or
directories,
denoted in this table by " file1 ... fileN". For example, the cp command takes as
arguments a list of files to copy, followed by the destination file or directory.
When copying
more than one file, the destination must be a directory.
cd Changes the current working directory.
Syntax: cd directory
Where directory is the directory which you want to change to. ("." refers to the current directory, ".." to the parent directory. If no directory is specified, cd defaults to your home directory.)
Example: cd ../foo sets the current directory up one level, then back down to foo.
mv Moves one or more files to another file or directory. This command does the equivalent of a copy followed by the deletion of the original file. You can use it to rename files, as with the MS-DOS command RENAME.
Syntax: mv files destination
Where files lists the files to move, and destination is the destination file
or directory.
rm Deletes files. Note that when you delete a file under UNIX, it is unrecoverable (unlike MS-DOS, where you can usually "undelete" the file).
Syntax: rm files
Where files describes the filenames to delete.
The -i option prompts for confirmation before deleting the file.
rmdir Deletes empty directories. When using rmdir, the current working directory
must not be within the directory to be deleted.
Syntax: rmdir dirs
man Displays the manual page for the given command or resource (that is,
any system utility that isn't a command, such as a library function.)
Syntax: man command
Where command is the name of the command or resource to get help on.
more Displays the contents of the named files, one screen at a time.
Syntax: more files
Where files lists the files to display.
cat Officially used to concatenate files, cat is also used to display the contents of a file on screen.
Syntax: cat files
Where files lists the files to display.
grep Displays every line in one or more files that matches the given pattern.
Syntax: grep pattern files
Where pattern is a regular expression pattern, and files lists the files to
search.
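As a quick illustration of grep, the following command prints every line of /etc/passwd that mentions patrick; any other pattern or file could be used in its place:
/home/patrick# grep patrick /etc/passwd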
1.6 Exploring The File System

You now have the skills and the knowledge to understand the Linux file system, and you have a roadmap.
First, change to the root directory (cd /), and then enter ls -F to display a listing
of its contents. You'll probably see the following directories:
bin, dev, etc, home, install, lib, mnt, proc, root, tmp, user, usr,and var.
Note:
You may see others, and you might not see all of them. Every release of Linux
differs in some respects.
/bin
/bin is short for "binaries", or executables, where many essential system
programs reside. Use ls -F /bin to list the files here. If you
look down the list you may see a few commands that you recognize,
such as cp, ls and mv. These are the actual programs for these commands.
When you use the cp command, for example, you're running the
program /bin/cp.
Using ls -F, you'll see that most (if not all) of the files in /bin have
an asterisk ("*") appended to their filenames. This indicates that the files
are executables.
/dev
The "files" in/dev are device files—they access system devices and
resources like disk drives, modems, and memory. Just as your system can
read data from a file, it can also read input from the mouse by accessing
/dev/mouse.
Filenames that begin with fd are floppy disk devices. fd0 is the first
floppy disk drive, and fd1 is the second. You may have noticed that there
are more floppy disk devices than the two listed above: these represent
specific types of floppy disks. For example, fd1H1440 accesses high-density,
3.5" diskettes in drive 1.
Some of the most commonly used device files are described below. Even though you may not have some of the physical devices listed, chances are that you'll have entries in /dev for them anyway.
The various /dev/ttyS and /dev/cua devices are used for accessing
serial ports. /dev/ttyS0 refers to "COM1" under MS-DOS.
The /dev/cua devices are "callout" devices, and used with
a modem.
Device names that begin with sd are SCSI drives. If you have a
SCSI hard drive, instead of accessing it through /dev/hda, you
would access /dev/sda. SCSI tapes are accessed via st devices,
and SCSI CD-ROM via sr devices.
/etc
/etc contains a number of miscellaneous system configuration files.
These include /etc/passwd (the user database), /etc/rc (the system
initialization script), and so on.
/sbin
/sbin contains essential system binaries that are used for system administration.
/home
/home contains users' home directories. For example, /home/patrick
is the home directory for the user "patrick". On a newly installed system,
there may not be any users in this directory.
/lib
/lib contains shared library images, which are files that contain code
which many programs share in common. Rather than each program using
its own copy of these shared routines, they are all stored in one common
place, in /lib. This makes executable files smaller, and saves space on
your system.
/proc
/proc supports a "virtual file system", where the files are stored in
memory, not on disk. These "files" refer to the various processes running
on the system, and let you get information about the programs and
processes that are running at any given time.
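For instance, the kernel presents processor and memory information as ordinary readable files under /proc, so you can view them with cat; the exact fields shown depend on your kernel version:
/home/patrick# cat /proc/cpuinfo
/home/patrick# cat /proc/meminfo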
/tmp
Many programs store temporary information in files that are deleted when the program has finished executing. The standard location for these files is /tmp.
/usr
/usr is a very important directory which contains subdirectories that
contain some of the most important and useful programs and configuration
files used on the system.
The various directories described above are essential for the system to
operate, but most of the items found in /usr are optional. However, it is
these optional items that make the system useful and interesting. Without
/usr, you'd have a boring system that supports only programs like cp
and ls. /usr contains most of the larger software packages and the
configuration files that accompany them.
/usr/X11R6
/usr/X11R6 contains The X Window System, if you installed it. The
X Window System is a large, powerful graphical environment that provides
a large number of graphical utilities and programs, displayed in
"windows" on your screen. If you're at all familiar with the Microsoft
Windows or Macintosh environments, X Windows will look familiar.
The /usr/X11R6 directory contains all of the X Windows executables, configuration files, and support files. This is covered in more detail in a later chapter.
/usr/bin
/usr/bin is the real warehouse for software on any Linux system,
containing most of the executables for programs not found in other places,
like /bin.
/usr/etc
Just as /etc contains essential miscellaneous system programs and
configuration files, /usr/etc contains miscellaneous utilities and files,
that in general, are not essential to the system.
/usr/include
/usr/include contains include files for the C compiler. These files
(most of which end in .h, for "header") declare data structure names,
subroutines, and constants used when writing programs in C. Files in
/usr/include/sys are generally used when programming on the
UNIX system level. If you are familiar with the C programming language,
here you'll find header files like stdio.h, which declare functions
like printf().
/usr/g++-include
/usr/g++-include contains include files for the C++ compiler
(much like /usr/include).
/usr/lib
/usr/lib contains the "stub" and "static" library equivalents for the
files found in /lib. When compiling a program, the program is "linked"
with the libraries found in /usr/lib, which then directs the program
to look in /lib when it needs the actual code in the library. In addition,
various other programs store configuration files in /usr/lib.
/usr/local
/usr/local is much like /usr—it contains various programs and
files not essential to the system, but which make the system fun and exciting.
In general, programs in /usr/local are specialized for your
system—consequently, /usr/local differs greatly between Linux
systems.
/usr/man
This directory contains manual pages. There are two subdirectories in it for every manual page "section" (use the command man man for details). For example, /usr/man/man1 contains the source
(that is, the unformatted original) for manual pages in section 1, and
/usr/man/cat1 contains the formatted manual pages for section 1.
/usr/src
/usr/src contains the source code (the uncompiled instructions) for
various programs on your system. The most important directory here
is /usr/src/linux, which contains the source code for the Linux
kernel.
/var
/var holds directories that often change in size or tend to grow. Many
of those directories used to reside in /usr, but since those who support
Linux are trying to keep it relatively unchangeable, the directories
that change often have been moved to /var. Some Linux distributions
maintain their software package databases in directories under /var.
/var/log
/var/log contains various files of interest to the system administrator, specifically system logs, which record errors or problems with the
system. Other files record logins to the system as well as failed login
attempts. This will be covered in Chapter 4.
/var/spool
/var/spool contains files which are "spooled" to another program.
For example, if your machine is connected to a network, incoming mail
is stored in /var/spool/mail until you read or delete it. Outgoing
or incoming news articles are in /var/spool/news, and so on.
Many of the features we'll cover in this section are features provided by the shell
itself.
Be careful not to confuse Linux (the actual operating system) with a shell—a shell is
just
an interface to the underlying system. The shell provides functionality in addition
to Linux
itself.
A shell is not only an interpreter for the interactive commands you type at the
prompt,
but also a powerful programming language. It lets you write shell scripts, which "batch" several shell commands together in a file. If you know MS-DOS, you'll recognize the similarity to "batch files". Shell scripts are a very powerful tool that will let you automate and expand your use of Linux.
There are several types of shells in the Linux world. The two major types are the
"Bourne shell" and the "C shell". The Bourne shell uses a command syntax like the
original
shell on early UNIX systems, like System III. The name of the Bourne shell on most
Linux
systems is /bin/sh (where sh stands for "shell"). The C shell (not to be confused
with
sea shell) uses a different syntax, somewhat like the programming language C, and on
most
Linux systems is named /bin/csh.
Under Linux, several variations of these shells are available. The two most commonly
used are the Bourne Again Shell, or "Bash" (/bin/bash), and "Tcsh" (/bin/tcsh).
bash is a form of the Bourne shell that includes many of the advanced features found
in
the C shell. Because bash supports a superset of the Bourne shell syntax, shell
scripts
written in the standard Bourne shell should work with bash. If you prefer to use the
C
shell syntax, Linux supports tcsh, which is an expanded version of the original C
shell.
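If you're not sure which shell you're using, one quick check is to print the SHELL environment variable, which is normally set to your login shell; on many Linux systems this will show something like /bin/bash:
/home/patrick# echo $SHELL
/bin/bash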
The type of shell you decide to use is mostly a religious issue. Some people prefer
the Bourne shell syntax with the advanced features of bash, and some prefer the more
structured C shell syntax. As far as normal commands such as cp and ls are concerned,
the shell you use doesn't matter—the syntax is the same. Only when you start to write
shell
scripts or use advanced features of a shell do the differences between shell types
begin to
matter.
As we discuss the features of the various shells, we'll note differences between
Bourne
and C shells. However, for the purposes of this manual most of those differences are
minimal. (If you're really curious at this point, read the man pages for bash and
tcsh).
1.8 Wildcards
A key feature of most Linux shells is the ability to refer to more than one file by
name
using special characters. These wildcards let you refer to, say, all file names that
contain
the character "n".
The wildcard "*" specifies any character or string of characters in a file name. When
you use the character "*" in a file name, the shell replaces it with all possible
substitutions from file names in the directory you're referencing.
Here's a quick example. Suppose that Patrick has the files frog, joe and stuff in his current directory:
/home/patrick# ls
frog joe stuff
/home/patrick#
To specify all files containing the letter "o" in the filename, use the command:
/home/patrick# ls *o*
frog joe
/home/patrick#
As you can see, each instance of "*" is replaced with all substitutions that match the wildcard from filenames in the current directory.
The use of "*" by itself simply matches all filenames, because all characters match
the
wildcard.
/home/patrick# ls *
frog joe stuff
/home/patrick#
Here are a few more examples:
/home/patrick# ls f*
frog
/home/patrick# ls *ff
stuff
/home/patrick# ls *f*
frog stuff
/home/patrick# ls s*f
stuff
/home/patrick#
The process of changing a "*" into a series of filenames is called wildcard expansion
and is done by the shell. This is important: an individual command, such as ls, never
sees
the "*" in its list of parameters. The shell expands the wildcard to include all
filenames that match. So, the command:
/home/patrick# ls *o*
One important note about the "*" wildcard: it does not match file names that begin
with a single period ("."). These files are treated as hidden files—while they are
not really hidden, they don't show up on normal ls listings and aren't touched by the
use of
the "*" wildcard.
Here's an example. We mentioned earlier that each directory contains two special
entries:
"." refers to the current directory, and ".." refers to the parent directory.
However, when you use ls, these two entries don't show up:
/home/patrick# ls
frog joe stuff
/home/patrick#
If you use the -a switch with ls, however, you can display filenames that begin with ".". Observe:
/home/patrick# ls -a
.  ..  .bash_profile  .bashrc  frog  joe  stuff
/home/patrick#
The listing contains the two special entries, "." and "..", as well as two other "hidden" files—.bash_profile and .bashrc. These two files are startup files used by bash when patrick logs in.
Note that when you use the "*" wildcard, none of the filenames beginning with "." are displayed:
/home/patrick# ls *
frog joe stuff
/home/patrick#
This is a safety feature: if the "*" wildcard matched filenames beginning with ".", it would also match the directory names "." and "..". This can be dangerous when using certain commands.
Another wildcard is "?". The "?" wildcard expands to only a single character. Thus, "ls ?" displays all one-character filenames. Here are a few examples using Patrick's files:
/home/patrick# ls j?e
joe
/home/patrick# ls f??g
frog
/home/patrick# ls ????f
stuff
/home/patrick#
As you can see, wildcards let you specify many files at one time. The cp and mv commands can actually copy or move more than one file at a time. For example, the command:
/home/patrick# cp /etc/s* /home/patrick
copies all filenames in /etc beginning with "s" to the directory /home/patrick.
The format of the cp command is really:
cp files destination
where files lists the filenames to copy, and destination is the destination file or
directory.
mv has an identical syntax.
If you are copying or moving more than one file, the destination must be a directory.
You can only copy or move a single file to another file.
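For example, using Patrick's files from earlier, a command along these lines copies both frog and joe into the existing papers directory:
/home/patrick# cp frog joe papers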
Many Linux commands get input from what is called standard input and send their
output to standard output (often abbreviated as stdin and stdout). Your shell sets
things up so that standard input is your keyboard, and standard output is the screen.
Here's an example using the cat command. Normally, cat reads data from all of the files specified on the command line, and sends this data directly to stdout. Therefore, the command:
/home/patrick/papers# cat history-final masters-thesis
displays the contents of the file history-final followed by the contents of masters-thesis.
However, if you don't specify a filename, cat reads data from stdin and sends it back to stdout. Here's an example:
/home/patrick/papers# cat
Hello there.
Hello there.
Bye.
Bye.
Ctrl-D
/home/patrick/papers#
Each line that you type is immediately echoed back by cat. When reading from standard input, you indicate that the input is "finished" by sending an EOT (end-of-transmission) character, generally generated by pressing Ctrl-D.
Here's another example. The sort command reads lines of text (again, from stdin,
unless you specify one or more filenames) and sends the sorted output to stdout. Try
the
following:
/home/patrick/papers# sort
bananas
carrots
apples
Ctrl-D
apples
bananas
carrots
/home/patrick/papers#
Now, let's say that you want to send the output of sort to a file, to save your shopping list on disk. The shell lets you redirect standard output to a file by using the ">" symbol. Here's how it works:
/home/patrick/papers# sort > shopping-list
bananas
carrots
apples
Ctrl-D
/home/patrick/papers#
As you can see, the result of the sort command isn't displayed; instead it's saved to the file named shopping-list. Let's look at this file:
/home/patrick/papers# cat shopping-list
apples
bananas
carrots
/home/patrick/papers#
Now you can sort your shopping list, and save it, too! But let's suppose that you are storing the unsorted, original shopping list in the file items. One way of sorting the information and saving it to a file would be to give sort the name of the file to read, in lieu of standard input, and redirect standard output as we did above, as follows:
/home/patrick/papers# sort items > shopping-list
/home/patrick/papers# cat shopping-list
apples
bananas
carrots
/home/patrick/papers#
However, there's another way to do this. Not only can you redirect standard output, you can redirect standard input as well, using the "<" symbol:
/home/patrick/papers# sort < items
apples
bananas
carrots
/home/patrick/papers#
Technically, sort < items is equivalent to sort items, but it demonstrates the following point: sort < items behaves as if the data in the file items were typed to standard input. The shell handles the redirection; sort wasn't given the name of the file (items) to read. As far as sort is concerned, it is still reading from standard input as if you had typed the data from your keyboard.
This introduces the concept of a filter. A filter is a program that reads data from
standard
input, processes it in some way, and sends the processed data to standard output.
Using
redirection, standard input and standard output can be redirected to and from files. As
mentioned
above, stdin and stdout default to the keyboard and screen respectively. sort is a
simple filter. It sorts the incoming data and sends the result to standard output.
cat is even
simpler. It doesn't do anything with the incoming data, it simply outputs whatever is
given
to it.
1.9.3 Pipes
We already demonstrated how to use sort as a filter. However, these examples assume
that you have data stored in a file somewhere or are willing to type the data from
the
standard input yourself. What if the data that you wanted to sort came from the
output of
another command, like ls?
The -r option to sort sorts the data in reverse-alphabetical order. If you want to list the files in your current directory in reverse order, one way to do it is as follows:
/home/patrick/papers# ls
english-lit
history-final
masters-thesis
notes
/home/patrick/papers#
Now redirect the output of the ls command into a file called file-list, and run sort -r on that file:
/home/patrick/papers# ls > file-list
/home/patrick/papers# sort -r file-list
notes
masters-thesis
history-final
english-lit
/home/patrick/papers#
Here, you save the output of ls in a file, and then run sort -r on that file. But this is unwieldy and uses a temporary file to save the data from ls. The solution is a pipe. A pipe, created with the "|" symbol, connects the standard output of one command directly to the standard input of another, with no temporary file in between:
/home/patrick/papers# ls | sort -r
notes
masters-thesis
history-final
english-lit
/home/patrick/papers#
The command:
/home/patrick/papers# ls /usr/bin
displays a long list of files, most of which fly past the screen too quickly for you to read. So, let's pipe the output through more to display the list of files in /usr/bin one screen at a time:
/home/patrick/papers# ls /usr/bin | more
Now you can page down the list of files at your leisure.
But the fun doesn't stop here! You can pipe more than two commands together. The command head is a filter that displays the first lines from an input stream (in this case, input from a pipe). If you want to display the last filename in alphabetical order in the current directory, use commands like the following:
/home/patrick/papers# ls | sort -r | head -1
notes
/home/patrick/papers#
where head -1 displays the first line of input that it receives (in this case, the
stream of
reverse-sorted data from ls).
Using ">" to redirect output to a file is destructive: in other words, the command:
overwrites the contents of the file file-list. If instead, you redirect with the
symbol
">>", the output is appended to (added to the end of) the named file instead of
overwriting
it. For example:
Keep in mind that redirection and pipes are features of the shell—the shell supports the use of ">", ">>" and "|". They have nothing to do with the commands themselves.
1.10 File Permissions

1.10.1 Concepts Of File Permissions

Because there is typically more than one user on a Linux system, Linux provides a mechanism known as file permissions, which protect user files from tampering by other users. This mechanism lets files and directories be "owned" by a particular user. For example, because Patrick created the files in his home directory, Patrick owns those files and has access to them.
Linux also lets files be shared between users and groups of users. If Patrick desired, he could cut off access to his files so that no other user could access them. However, on most systems the default is to allow other users to read your files but not modify or delete them in any way.
Every file is owned by a particular user. However, files are also owned by a
particular
group, which is a defined group of users of the system. Every user is placed into at
least
one group when that user's account is created. However, the system administrator may
grant the user access to more than one group.
Groups are usually defined by the type of users who access the machine. For example,
on a University Linux system users may be placed into the groups student, staff,
faculty or guest. There are also a few system-defined groups (like bin and admin)
which are used by the system itself to control access to resources—very rarely do
actual
users belong to these system groups.
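If you want to see which groups your own account belongs to, the groups command lists them; on our example system it would simply print:
/home/patrick# groups
users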
Permissions fall into three main divisions: read, write, and execute. These
permissions
may be granted to three classes of users: the owner of the file, the group to which
the file
belongs, and to all users, regardless of group.
Read permission lets a user read the contents of the file, or in the case of
directories, list
the contents of the directory (using ls). Write permission lets the user write to and
modify
the file. For directories, write permission lets the user create new files or delete
files within
that directory. Finally, execute permission lets the user run the file as a program
or shell
script (if the file is a program or shell script). For directories, having execute
permission
lets the user cd into the directory in question.
1.10.2 Interpreting File Permissions

Let's look at an example that demonstrates file permissions. Using the ls command with the -l option displays a "long" listing of the file, including file permissions.
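For Patrick's file stuff, such a listing might look something like the following; the size and date shown here are only placeholders, and yours will differ:
/home/patrick# ls -l stuff
-rw-r--r--  1 patrick  users  505 Mar 13 19:05 stuff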
The first field in the listing represents the file permissions. The third field is
the owner
of the file (patrick) and the fourth field is the group to which the file belongs
(users).
Obviously, the last field is the name of the file (stuff). We'll cover the other
fields later.
This file is owned by patrick and belongs to the group users. The string
-rw-r--r-- lists, in order, the permissions granted to the file's owner, the file's
group,
and everybody else.
The first character of the permissions string ("-") represents the type of file. A
"-"
means that this is a regular file (as opposed to a directory or device driver). The
next three
characters ("rw-") represent the permissions granted to the file's owner, patrick.
The
"r" stands for "read" and the "w" stands for "write". Thus, patrick has read and
write
permission to the file stuff.
As mentioned, besides read and write permission, there is also "execute" permission—
represented by an "x". However, a "-" is listed here in place of an "x", so Patrick
doesn't
have execute permission on this file. This is fine, as the file stuff isn't a program
of any
kind. Of course, because Patrick owns the file, he may grant himself execute
permission for
the file if he so desires. (This will be covered shortly).
The next three characters, ("r--"), represent the group's permissions on the file.
The
group that owns this file is users. Because only an "r" appears here, any user who
belongs to the group users may read this file.
The last three characters, also ("r--"), represent the permissions granted to every
other
user on the system (other than the owner of the file and those in the group users).
Again,
because only an "r" is present, other users may read the file, but not write to it or
execute
it.
Here are some other examples of permission strings and what they mean:
-rwxr-xr-x The owner of the file may read, write, and execute the file. Users in the file's group, and all other users, may read and execute the file.
-rw------- The owner of the file may read and write the file. No other user can access the file.
-rwxrwxrwx All users may read, write, and execute the file.
1.10.3 Permissions Dependencies

The permissions of a directory affect access to the files within it: to access a file at all, you must have execute access to all directories along the file's pathname, and read (or execute) access to the file itself.
Typically, users on a Linux system are very open with their files. The usual set of
permissions given to files is -rw-r--r--, which lets other users read the file but
not
change it in any way. The usual set of permissions given to directories is
-rwxr-xr-x,
which lets other users look through your directories, but not create or delete files
within
them.
However, many users wish to keep other users out of their files. Setting the
permissions
of a file to -rw------- will prevent any other user from accessing the file.
Likewise, setting
the permissions of a directory to -rwx------ keeps other users out of the directory
in question.
1.10.4 Changing Permissions

The command chmod is used to set the permissions on a file. Only the owner of a file may change the permissions on that file. The syntax of chmod is:
chmod {a,u,g,o}{+,-}{r,w,x} filenames
Briefly, you supply one or more of all, user, group, or other. Then you specify
whether
you are adding rights (+) or taking them away (-). Finally, you specify one or more
of
read, write, and execute. Some examples of legal commands are:
chmod +r stuff
Gives all users read access to the file (with no class letter given, a for "all" is implied).
chmod u+rwx stuff
Lets the owner of the file read, write, and execute the file.
chmod o-rwx stuff
Removes read, write, and execute permission from users other than the owner and users in the file's group.
1.11 Managing File Links

Links let you give a single file more than one name. Files are actually identified by
the
system by their inode number, which is just the unique file system identifier for the
file.
A directory is actually a listing of inode numbers with their corresponding
filenames. Each
filename in a directory is a link to a particular inode.
1.11.1 Hard Links

The ln command is used to create multiple links for one file. For example, let's say
that you have a file called foo in a directory. Using ls -i, you can look at the
inode
number for this file:
/home/patrick# ls -i foo
22192 foo
/home/patrick#
Here, foo has an inode number of 22192 in the file system. You can create another link to foo, named bar, as follows:
/home/patrick# ln foo bar
With ls -i, you see that the two files have the same inode:
/home/patrick# ls -i foo bar
22192 foo  22192 bar
/home/patrick#
Now, specifying either foo or bar will access the same file. If you make changes to
foo,
those changes appear in bar as well. For all purposes, foo and bar are the same file.
These links are known as hard links because they create a direct link to an inode.
Note
that you can hard-link files only when they're on the same file system; symbolic
links (see
below) don't have this restriction.
When you delete a file with rm, you are actually only deleting one link to a file. If you use the command:
/home/patrick# rm foo
then only the link named foo is deleted; bar will still exist. A file is only truly
deleted
on the system when it has no links to it. Usually, files have only one link, so using
the rm
command deletes the file. However, if a file has multiple links to it, using rm will
delete
only a single link; in order to delete the file, you must delete all links to the
file.
The command ls -l displays the number of links to a file (among other information).
The second column in the listing, "2", specifies the number of links to the file.
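For the two hard-linked files above, the long listing might look roughly like this (again, the size and date are placeholders), with the link count of 2 in the second column:
/home/patrick# ls -l foo bar
-rw-r--r--  2 patrick  users  12 Aug  5 16:51 bar
-rw-r--r--  2 patrick  users  12 Aug  5 16:51 foo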
1.11.2 Symbolic Links

Symbolic links, or symlinks, are another type of link, which are different from hard links. A symbolic link lets you give a file another name, but doesn't link the file by inode.
The command ln -s creates a symbolic link to a file. For example, the command:
/home/patrick# ln -s foo bar
creates a symbolic link named bar that points to foo. Using ls -l, you can see that bar is a symlink: the first character of its permissions field is an "l", and the listing ends with "bar -> foo", showing which file it points to.
The file permissions on a symbolic link are not used (they always appear as
rwxrwxrwx). Instead, the permissions on the symbolic link are determined by the
permissions on the target of the symbolic link (in our example, the file foo).
Functionally, hard links and symbolic links are similar, but there are differences.
For
one thing, you can create a symbolic link to a file that doesn't exist; the same is
not true
for hard links. Symbolic links are processed by the kernel differently than are hard
links,
which is just a technical difference but sometimes an important one. Symbolic links
are
helpful because they identify the file they point to; with hard links, there is no
easy way to
determine which files are linked to the same inode.
Links are used in many places on the Linux system. Symbolic links are especially
important to the shared library images in /lib.
1.12 Job Control

1.12.1 Jobs And Processes

Job control is a feature provided by many shells (including bash and tcsh) that
let you control multiple running commands, or jobs, at once. Before we can delve much
further, we need to talk about processes.
Every time you run a program, you start what is called a process. The command ps
displays a list of currently running processes, as shown here:
/home/patrick# ps
  PID TTY STAT TIME COMMAND
   24   3 S    0:03 (bash)
  161   3 R    0:00 ps
/home/patrick#
The PID listed in the first column is the process ID, a unique number given to every
running process. The last column, COMMAND, is the name of the running command. Here,
we're looking only at the processes which Patrick is currently running. (There are
many other processes running on the system as well—"ps -aux" lists them all.) These
are bash (Patrick's shell), and the ps command itself. As you can see, bash is
running
concurrently with the ps command. bash executed ps when Patrick typed the command.
After ps has finished running (after the table of processes is displayed), control is
returned
to the bash process, which displays the prompt, ready for another command.
A running process is also called a job. The terms process and job are
interchangeable.
However, a process is usually referred to as a "job" when used in conjunction with
job
control—a feature of the shell that lets you switch between several independent jobs.
In most cases users run only a single job at a time—whatever command they last typed
to the shell. However, using job control, you can run several jobs at once, and
switch
between them as needed.
How might this be useful? Let's say you are editing a text file and want to interrupt
your editing and do something else. With job control, you can temporarily suspend the
editor, go back to the shell prompt and start to work on something else. When you're
done,
you can switch back to the editor and be back where you started, as if you didn't
leave the
editor. There are many other practical uses of job control.
Jobs can either be in the foreground or in the background. There can only be one
job in the foreground at a time. The foreground job is the job with which you
interact. It
receives input from the keyboard and sends output to your screen, unless, of course,
you
have redirected input or output. On the other hand, jobs in the background do not
receive
input from the terminal—in general, they run along quietly without the need for
interaction.
Some jobs take a long time to finish and don't do anything interesting while they are
running. Compiling programs is one such job, as is compressing a large file. There's
no
reason why you should sit around being bored while these jobs complete their tasks;
just
run them in the background. While jobs run in the background, you are free to run
other
programs.
Jobs may also be suspended. A suspended job is a job that is temporarily stopped.
After
you suspend a job, you can tell the job to continue in the foreground or the
background
as needed. Resuming a suspended job does not change the state of the job in any way.
The
job continues to run where it left off.
Suspending a job is not equal to interrupting a job. When you interrupt a running
process
(by pressing the interrupt key, which is usually Ctrl-C ), the process is killed, for
good.
Once the job is killed, there's no hope of resuming it. You must run the command
again. Also, some programs trap the interrupt, so that pressing Ctrl-C won't
immediately
kill the job. This is to let the program perform any necessary cleanup operations
before exiting. In fact, some programs don't let you kill them with an interrupt at
all.
Let's begin with a simple example. The command yes is a seemingly useless command
that sends an endless stream of y's to standard output. (This is actually useful. If
you piped
the output of yes to another command which asked a series of yes and no questions,
the
stream of y's would confirm all of the questions.)
Try it out:
/home/patrick# yes
y
y
y
y
y
The y's will continue ad infinitum. You can kill the process by pressing the interrupt key,
which is usually Ctrl-C.(3) So that we don't have to put up with the annoying stream of
y's, let's redirect the standard output of yes to /dev/null. As you may remember,
/dev/null acts as a "black hole" for data. Any data sent to it disappears. This is
a very effective method of quieting an otherwise verbose program. Try it:
/home/patrick# yes > /dev/null
Ah, much better. Nothing is printed, but the shell prompt doesn't come back. This is
because yes is still running, and is sending those inane y's to /dev/null. Again, to
kill
the job, press the interrupt key.
(3)You can set the interrupt key with the stty command.
Let's suppose that you want the yes command to continue to run but wanted to get
the shell prompt back so that you can work on other things. You can put yes into the
background, allowing it to run, without need for interaction.
One way to put a process in the background is to append an "&" character to the
end of the command:
/home/patrick# yes > /dev/null &
[1] 164
/home/patrick#
As you can see, the shell prompt has returned. But what is this "[1] 164"? And is the
yes command really running?
The "[1]" represents the job number for the yes process. The shell assigns a job
number to every running job. Because yes is the one and only job we're running, it is
assigned job number 1. The "164" is the process ID, or PID, number given by the system
to the job. You can use either number to refer to the job, as you'll see later.
You now have the yes process running in the background, continuously sending a
stream of y's to /dev/null. To check on the status of this process, use the internal
shell
command jobs:
/home/patrick# jobs
[1]+  Running                 yes >/dev/null &
/home/patrick#
Sure enough, there it is. You could also use the ps command as demonstrated above to
check on the status of the job.
To terminate the job, use the kill command. This command takes either a job number
or a process ID number as an argument. This was job number 1, so using the command:
/home/patrick # kill %1
kills the job. When identifying the job with the job number, you must prefix the
number
with a percent ("%") character.
Now that you've killed the job, use jobs again to check on it:
/home/patrick# jobs
[1]+  Terminated              yes >/dev/null
/home/patrick#
The job is in fact dead, and if you use the jobs command again nothing should be
printed.
You can also kill the job using the process ID (PID) number, displayed along with the
job ID when you start the job. In our example, the process ID is 164, so the command:
/home/patrick# kill 164
is equivalent to:
/home/patrick# kill %1
You don't need to use the "%" when referring to a job by its process ID.
1.12.4 Stopping And Restarting Jobs
First, start the yes process in the foreground, as you did before:
/home/patrick# yes > /dev/null
Again, because yes is running in the foreground, you shouldn't get the shell prompt
back.
Now, rather than interrupt the job with Ctrl-C , suspend the job. Suspending a job
doesn't kill it: it only temporarily stops the job until you restart it. To do this,
press the
suspend key, which is usually Ctrl-Z.
Ctrl-Z
[1]+  Stopped                 yes >/dev/null
/home/patrick#
While the job is suspended, it's simply not running. CPU time isn't used for the job.
However, you can restart the job, which causes the job to run again as if nothing
ever
happened. It will continue to run where it left off.
To restart the job in the foreground, use the fg command (for "foreground").
/home/patrick# fg
yes >/dev/null
The shell displays the name of the command again so you're aware of which job you
just
put into the foreground. Stop the job again with Ctrl-Z . This time, use the bg
command
to put the job into the background. This causes the command to run just as if you
started
the command with "&" as in the last section.
/home/patrick# bg
[1]+ yes >/dev/null &
/home/patrick#
And you have your prompt back. The jobs command should report that yes is indeed running,
and you can kill the job with kill as we did before.
How can you stop the job again? Using Ctrl-Z won't work, because the job is in the
background. The answer is to put the job in the foreground with fg, and then stop it.
As it
turns out, you can use fg on either stopped jobs or jobs in the background.
There is a big difference between a job in the background and a job that is stopped.
A
stopped job is not running—it's not using any CPU time, and it's not doing any work
(the
job still occupies system memory, although it may have been swapped out to disk). A
job
in the background is running and using memory, as well as completing some task while
you do other work.
However, a job in the background may try to display text on your terminal, which can
be annoying if you're trying to work on something else. For example, if you used the
command:
/home/patrick# yes &
without redirecting the output to /dev/null, a stream of y's would be written to your
screen while you tried to work on something else, and you would have to use kill to stop it.
Another note: the fg and bg commands normally affect the job that was last stopped
(indicated by a "+" next to the job number when you use the jobs command). If you are
running multiple jobs at once, you can put jobs in the foreground or background by
giving
the job ID as an argument to fg or bg, as in:
/home/patrick# fg %2
(to put job number 2 into the foreground), or:
/home/patrick# bg %3
(to put job number 3 into the background). You can't use process ID numbers with fg or
bg.
You can also refer to a job using the job number alone, preceded by "%". The command:
/home/patrick# %2
is equivalent to:
/home/patrick# fg %2
Just remember that using job control is a feature of the shell. The fg, bg and jobs
commands are internal to the shell. If for some reason you use a shell that doesn't
support
job control, don't expect to find these commands available.
In addition, there are some aspects of job control that differ between bash and tcsh.
In fact, some shells don't provide job control at all—however, most shells available
for
Linux do.
A text editor is a program used to edit files that are composed of text: a letter, C
program, or a system configuration file. While there are many such editors available
for
Linux, the only editor that you are guaranteed to find on any UNIX or Linux system is
vi— the "visual editor." vi is not the easiest editor to use, nor is it very
self-explanatory.
However, because vi is so common in the UNIX/Linux world, and sometimes necessary,
it deserves discussion here.
Your choice of an editor is mostly a question of personal taste and style. Many users
prefer the baroque but powerful Emacs—an editor with more features
than any other single program in the UNIX world. For example, Emacs has its own
built-in
dialect of the LISP programming language, and has many extensions (one of which is an
Eliza-like artificial intelligence program). However, because Emacs and its support
files
are relatively large, it may not be installed on some systems. vi, on the other hand,
is
small and powerful but more difficult to use. However, once you know your way around
vi, it's actually very easy.
This section presents an introduction to vi. We will not discuss all of its features,
only
the ones you need to know to get started. You can refer to the man page for vi if
you're
interested in learning more about this editor's features. Alternatively, you can read
the book Learning the vi Editor from O'Reilly and Associates, or the VI Tutorial from
Specialized Systems Consultants (SSC) Inc.
1.13.1 Concepts
While using vi, at any one time you are in one of three modes of operation. These
modes are called command mode, insert mode and last line mode.
When you start up vi, you are in command mode. This mode lets you use commands
to edit files or change to other modes. For example, typing "x" while in command mode
deletes the character underneath the cursor. The arrow keys move the cursor around
the file
you're editing. Generally, the commands used in command mode are one or two
characters
long.
You actually insert or edit text within insert mode. When using vi, you'll probably
spend most of your time in this mode. You start insert mode by using a command such
as
"i" (for "insert") from command mode. While in insert mode, you can insert text into
the
document at the current cursor location. To end insert mode and return to command
mode,
press Esc .
Last line mode is a special mode used to give certain extended commands to vi. While
typing these commands, they appear on the last line of the screen (hence the name).
For
example, when you type ":" in command mode, you jump into last line mode and can use
commands like "wq" (to write the file and quit vi), or "q!" (to quit vi without
saving
changes). Last line mode is generally used for vi commands that are longer than one
character. In last line mode, you enter a single-line command and press Enter to
execute
it.
1.13.2 Starting vi
The best way to understand these concepts is to fire up vi and edit a file. The example
"screens" below show only a few lines of text, as if the screen were only six lines high
instead of twenty-four.
Start up vi by typing vi filename, where filename is the name of the file to edit. For
the examples below, we'll edit a new file named test:
/home/patrick# vi test
The column of "˜" characters indicates that you are at the end of the (empty) file, and
the cursor sits at the beginning of the first line. To begin inserting text, press i to
enter insert mode and type away:
Now is the time for all good men to come to the aid
of the party.
˜
˜
˜
˜
˜
Type as many lines as you want (pressing Enter after each). You may correct mistakes
with the Backspace key.
There are several ways to insert text other than the i command. The a command inserts
text beginning after the current cursor position, instead of at the current cursor
position.
For example, use the left arrow key to move the cursor between the words "good"
and "men.".
Now is the time for all good men to come to the aid
of the party.
˜
˜
˜
˜
˜
Press a to start insert mode, type "wo", and then press Esc to return to command
mode:
Now is the time for all good women to come to the aid
of the party.
˜
˜
˜
˜
˜
To begin inserting text at the next line, use the o command. Press o and enter
another
line or two:
From command mode, the x command deletes the character under the cursor. If you
press x five times, you'll end up with:
You can delete entire lines using the command dd (that is, press d twice in a row).
If
the cursor is on the second line and you type dd, you'll see:
To delete the word that the cursor is on, use the dw command. Place the cursor on the
word "good", and type dw:
You can replace sections of text using the R command. Place the cursor on the first
letter in "party", press R , and type the word "hungry".
Using R to edit text is like the i and a commands, but R overwrites, rather than
inserts,
text.
The r command replaces the single character under the cursor. For example, move the
cursor to the beginning of the word "Now", and press r followed by C, you'll see:
The "˜" command changes the case of the letter under the cursor from upper- to
lowercase,
and back. For example, if you place the cursor on the "o" in "Cow" above and
repeatedly press ˜ , you'll end up with:
˜
˜
˜
˜
˜
You already know how to use the arrow keys to move around the document. In addition,
you can use the h, j, k and l commands to move the cursor left, down, up, and right,
respectively. This comes in handy when (for some reason) your arrow keys aren't
working
correctly.
The w command moves the cursor to the beginning of the next word; the b command
moves it to the beginning of the previous word.
The 0 command (that's the zero key) moves the cursor to the beginning of the current
line, and the $ command moves it to the end of the line.
When editing large files, you'll want to move forwards or backwards through the file
a screen at a time. Pressing Ctrl-F moves the cursor one screen forward, and
Ctrl-B moves it a screen back.
To move the cursor to the end of the file, press G. You can also move to an arbitrary
line; for example, typing the command 10G would move the cursor to line 10 in the
file.
To move to the beginning of the file, use 1G.
You can couple moving commands with other commands, such as those for deleting
text. For example, the d$ command deletes everything from the cursor to the end of
the
line; dG deletes everything from the cursor to the end of the file, and so on.
To quit vi without making changes to the file, use the command :q!. When you press
the ":", the cursor moves to the last line on the screen and you'll be in last line
mode:
In last line mode, certain extended commands are available. One of them is q!, which
quits vi without saving. The command :wq saves the file and then exits vi. The
command
ZZ (from command mode, without the ":") is equivalent to :wq. If the file has not
been
changed since the last save, it merely exits, preserving the modification time of the
last
change. Remember that you must press Enter after a command entered in last line
mode.
To edit another file, use the :e command. For example, to stop editing test and edit
the file foo instead, use the command:
:e foo
If you use :e without saving the file first, you'll get an error message similar to:
No write since last change
which means that vi doesn't want to edit another file until you save the first one. At this
point, you can use :w to save the original file, and then use :e, or you can use the
command:
:e! foo
The "!" tells vi that you really mean it—edit the new file without saving changes to the
first.
You can include the contents of another file in the buffer you are editing with the :r
command. For example, the command:
:r foo.txt
inserts the contents of the file foo.txt in the text at the location of the cursor.
You can also run shell commands within vi. The :r! command works like :r, but
rather than read a file, it inserts the output of the given command into the buffer
at the
current cursor location. For example, if you use the command:
:r! ls -F
the output of the ls -F command is inserted into the buffer at the cursor location.
You can also "shell out" of vi, in other words, run a command from within vi, and
return to the editor when you're done. For example, if you use the command:
:!ls -F
the ls -F command will be executed and the results displayed on the screen, but not
inserted into the file you're editing. If you use the command:
:shell
vi starts an instance of the shell, letting you temporarily put vi "on hold" while
you execute other commands. Just log out of the shell (using the exit command) to
return to
vi.
vi doesn't provide much in the way of interactive help (most Linux programs don't),
but you can always read the man page for vi. vi is a visual front-end to the ex
editor;
which handles many of the last-line mode commands in vi. So, in addition to reading
the
man page for vi, see ex as well.
As mentioned before, different shells use different syntaxes when executing shell
scripts. For example, tcsh uses a C-like syntax, while Bourne shells use another type
of syntax. In this section, we won't be encountering many differences between the two, but
we will assume that shell scripts are executed using the Bourne shell syntax.
Let's say that you use a series of commands often and would like to save time by
grouping all of them together into a single "command". For example, the three
commands:
/home/patrick# cat chapter1 chapter2 chapter3 > book
/home/patrick# wc -l book
/home/patrick# lp book
could be used to prepare and print a document. The first command concatenates the files
chapter1, chapter2, and chapter3 and places the result in the file book. The second
command displays a count of the number of lines in book, and the third command, lp book,
prints book.
Rather than type all these commands, you can group them into a shell script. The shell
script used to run all these commands might look like this:
#!/bin/sh
# A shell script to create and print the book
cat chapter1 chapter2 chapter3 > book
wc -l book
lp book
Shell scripts are just plain text files; you can create them with an editor such as
emacs or
vi.
Let's look at this shell script. The first line, "#!/bin/sh", identifies the file
as a shell script and tells the shell how to execute the script. It instructs the
shell to
pass the script to /bin/sh for execution, where /bin/sh is the shell program itself.
Why is
this important? On most Linux systems, /bin/sh is a Bourne-type shell, like bash. By
forcing the shell script to run using /bin/sh, you ensure that the script will run
under
a Bourne-syntax shell (rather than a C shell). This will cause your script to run
using the
Bourne syntax even if you use tcsh (or another C shell) as your login shell.
The second line is a comment. Comments begin with the character "#" and continue
to the end of the line. Comments are ignored by the shell—they are commonly used to
identify the shell script to the programmer and make the script easier to understand.
The rest of the lines in the script are just commands, as you would type them to the
shell directly. In effect, the shell reads each line of the script and runs that line
as if you
had typed it at the shell prompt.
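For example, assuming the script above is saved in a file called makebook (the name used
in the next paragraph), one way to run it is to pass it to the shell explicitly:
/home/patrick# sh makebook
Making the script directly executable, as described next, is usually more convenient.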
Permissions are important for shell scripts. If you create a shell script, make sure
that
you have execute permission on the script in order to run it. When you create text
files,
the default permissions usually don't include execute permission, and you must set
them
explicitly. Briefly, if this script were saved in the file called makebook, you could
make it executable and then run it with the commands:
/home/patrick# chmod u+x makebook
/home/patrick# makebook
tcsh, as well as other C-type shells, uses a different mechanism for setting variables
than is described here. This discussion assumes the use of a Bourne shell like bash. See
the tcsh manual page for details.
When you assign a value to a variable (using the "=" operator), you can access the
variable by prepending a "$" to the variable name, as demonstrated below:
/home/patrick# foo="hello there"
The variable foo is given the value hello there. You can then refer to this value by
the variable name prefixed with a "$" character. For example, the command:
/home/patrick# echo $foo
hello there
/home/patrick#
displays the value of foo.
These variables are internal to the shell, which means that only the shell can access
them. This can be useful in shell scripts; if you need to keep track of a filename,
for
example, you can store it in a variable, as above. Using the set command displays a
list
of all defined shell variables.
However, the shell lets you export variables to the environment. The environment is
the set of variables that are accessible by all commands that you execute. Once you
define
a variable inside the shell, exporting it makes the variable part of the environment
as well.
Use the export command to export a variable to the environment.
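For example, to place the variable foo from the earlier example into the environment:
/home/patrick# export foo
After this, any program started from the shell can read the value of foo.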
Again, bash and tcsh differ here. If you use tcsh, another syntax is
used for setting environment variables (the setenv command is used). See the tcsh
manual page for more information.
The environment is very important to the UNIX system. It lets you configure certain
commands just by setting variables which the commands know about.
Here's a quick example. The environment variable PAGER is used by the man command
and it specifies the command to use to display manual pages one screen at a time.
If you set PAGER to the name of a command, it uses that command to display the man
pages, instead of more (which is the default).
Set PAGER to "cat". This causes output from man to be displayed to the screen all at
once, without pausing between pages.
/home/patrick# PAGER=cat
Try the command man ls. The man page should fly past your screen without pausing for
you.
Now, if we set PAGER to "more", the more command is used to display the man page:
/home/patrick# PAGER=more
Note that we don't have to use the export command after we change the value of PAGER.
We only need to export a variable once; any changes made to it thereafter will
automatically
be propagated to the environment.
It is often necessary to quote strings in order to prevent the shell from treating
various
characters as special. For example, you need to quote a string in order to prevent
the shell
from interpreting the special meaning of characters such as "*", "?" or a
space. There are many other characters that may need to be protected from
interpretation.
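As a quick illustration, assume the directory contains the book files from the earlier
example. Quoting the "*" passes the literal character to echo, while leaving it unquoted
lets the shell expand it into the matching file names:
/home/patrick# echo "*"
*
/home/patrick# echo *
book chapter1 chapter2 chapter3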
A detailed explanation and description of quoting is described in SSC's Bourne Shell
Tutorial.
The manual pages for a particular command tell you if the command uses any
environment
variables. For example, the man man page explains that PAGER is used to specify the
pager command.
Some commands share environment variables. For example, many commands use the
EDITOR environment variable to specify the default editor to use when one is needed.
The environment is also used to keep track of important information about your login
session. An example is the HOME environment variable, which contains the name of your
home directory:
/home/patrick# echo $HOME
/home/patrick
Another interesting environment variable is PS1, which defines the main shell prompt.
For example:
/home/patrick# PS1="Your command, please: "
Your command, please:
To set the prompt back to one which contains the current working directory followed by a
"#" symbol:
Your command, please: PS1="\w# "
/home/patrick#
The bash manual page describes the syntax used for setting the prompt.
Another important environment variable is PATH. When you use the ls command, how does the
shell find the ls executable itself? In fact, ls is in /bin on most systems. The shell uses
the environment variable PATH to locate executable files for commands you type. For
example, patrick's PATH might be set to:
/bin:/usr/bin:/usr/local/bin:.
This is a list of directories for the shell to search, each directory separated by a
":". When you use the command ls, the shell first looks for /bin/ls,
then /usr/bin/ls,and so on.
Note that the PATH has nothing to do with finding regular files. For example, if you
use the command:
/home/patrick# cp foo bar
the shell does not use PATH to locate the files foo and bar—those filenames are
assumed
to be complete. The shell only uses PATH to locate the cp executable.
This saves you time, and means that you don't have to remember where all the command
executables are stored. On many systems, executables are scattered about in many
places, such as /usr/bin, /bin, or /usr/local/bin. Rather than give the command's
full pathname (such as /usr/bin/cp), you can set PATH to the list of directories
that you want the shell to automatically search.
Notice that PATH contains ".", which is the current working directory. This lets you
create a shell script or program and run it as a command from your current directory
without
having to specify it directly (as in ./makebook). If a directory isn't in your PATH,
then
the shell will not search it for commands to run; this also includes the current
directory.
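For example, in a Bourne-style shell you might add a personal bin directory to the search
path with commands along these lines (the directory name here is only an illustration):
/home/patrick# PATH="$PATH:/home/patrick/bin"
/home/patrick# export PATH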
The initialization scripts themselves are simply shell scripts. However, they
initialize
your environment by executing commands automatically when you log in. If you always
use the mail command to check your mail when you log in, you place the command in
the initialization script so it will execute automatically.
Both bash and tcsh distinguish between a login shell and other invocations of the
shell. A login shell is a shell invoked when you log in. Usually, it's the only shell
you'll
use. However, if you "shell out" of another program like vi, you start another
instance
of the shell, which isn't your login shell. In addition, whenever you run a shell
script, you
automatically start another instance of the shell to execute the script.
The initialization files used by bash are: /etc/profile (set up by the system
administrator
and executed by all bash users at login time), $HOME/.bash_profile
(executed by a login bash session), and $HOME/.bashrc (executed by all non-login
instances of bash). If .bash_profile is not present, .profile is used instead.
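For example, a simple $HOME/.bash_profile along the lines described above might contain
(the contents are purely illustrative):
# Sample .bash_profile
PATH="$PATH:$HOME/bin"
export PATH
PS1="\w# "
# Check for new mail at login, as described above.
mail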
tcsh uses the following initialization scripts: /etc/csh.login (executed by all
tcsh users at login time), $HOME/.tcshrc (executed at login time and by all new
instances
of tcsh), and $HOME/.login (executed at login time, following .tcshrc).
If .tcshrc is not present, .cshrc is used instead.
A complete guide to shell programming would be beyond the scope of this book. See
the manual pages for bash or tcsh to learn more about customizing the Linux/UNIX
environment.
This chapter should give you enough information for basic Linux use. The manual
pages are indispensable tools for learning about Linux. They may appear confusing at
first, but if you dig beneath the surface, there is a wealth of information.
We also suggest that you read a general Linux reference book. Linux has more features
than first meet the eye. Unfortunately, many of them are beyond the scope of this
book.
Other recommended Linux books are listed in the Appendix.
System Administration
This chapter covers the most important things that you need to know about system
administration under Linux in sufficient detail to start using the system
comfortably. In order
to keep the chapter manageable, it covers just the basics and omits many important
details.
The Linux System Administrator's Guide, by Lars Wirzenius (see Appendix A) provides
considerably more detail on system administration topics. It will help you understand
better
how things work and hang together. At least, skim through the SAG so that you know
what it contains and what kind of help you can expect from it.
Linux differentiates between different users. What they can do to each other and the
system is regulated. File permissions are arranged so that normal users can't delete
or
modify files in directories like /bin and /usr/bin. Most users protect their own
files
with the appropriate permissions so that other users can't access or modify them.
(One
wouldn't want anybody to be able to read one's love letters.) Each user is given an
account
that includes a user name and home directory. In addition, there are special, system
defined
accounts which have special privileges. The most important of these is the root
account,
which is used by the system administrator. By convention, the system administrator is
the
user, root.
There are no restrictions on root. He or she can read, modify, or delete any file on
the system, change permissions and ownerships on any file, and run special programs
like
those which partition a hard drive or create file systems. The basic idea is that a
person who
cares for the system logs in as root to perform tasks that cannot be executed as a
normal
user. Because root can do anything, it is easy to make mistakes that have
catastrophic
consequences.
If a normal user tries inadvertently to delete all of the files in /etc, the system
will not
permit him or her to do so. However, if root tries to do the same thing, the system
doesn't
complain at all. It is very easy to trash a Linux system when using root. The best
way to
prevent accidents is:
Sit on your hands before you press Enter for any command that is
non-reversible.
If you're about to clean out a directory, re-read the entire command to make sure
that
it is correct.
Use a different prompt for the root account. root's .bashrc or .login file
should set the shell prompt to something different than the standard user prompt.
Many people reserve the character "#" in prompts for root and use the prompt
character "$" for everyone else.
Log in as root only when absolutely necessary. When you have finished your
work
as root, log out. The less you use the root account, the less likely you are to
damage the system. You are less likely to confuse the privileges of root with those
of a normal user.
Picture the root account as a special, magic hat that gives you lots of power, with
which you can, by waving your hands, destroy entire cities. It is a good idea to be a
bit
careful about what you do with your hands. Because it is easy to wave your hands in
a destructive manner, it is not a good idea to wear the magic hat when it is not
needed,
despite the wonderful feeling.
Some people boot Linux with a floppy diskette that contains a copy of the Linux
kernel.
This kernel has the Linux root partition coded into it, so it knows where to look for
the root
file system. This is the type of floppy created by Slackware during installation, for
example.
To create your own boot floppy, locate the kernel image on your hard disk. It should
be
in the file /vmlinuz or /vmlinux. In some installations, /vmlinuz is a soft link to
the actual kernel, so you may need to track down the kernel by following the links.
Once you know where the kernel is, set the root device of the kernel image to the
name
of your Linux root partition with the rdev command. The format of the command is:
# rdev kernel-name root-device
where kernel-name is the name of the kernel image, and root-device is the name of the
Linux root partition. For example, to set the root device in the kernel /vmlinuz to
/dev/hda2, use the command:
# rdev /vmlinuz /dev/hda2
rdev can set other options in the kernel, like the default SVGA mode to use at boot
time.
The command:
# rdev -h
prints a help message on the screen. After setting the root device, simply copy the
kernel
image to the floppy. Before copying data to any floppy, however, it's a good idea to
use the
MS-DOS FORMAT.COM or the Linux fdformat program to format the diskette. This
lays down the sector and track information that is appropriate to the floppy's
capacity.
Device driver files, as mentioned earlier, reside in the /dev directory. To copy the
kernel image /vmlinuz to the floppy in /dev/fd0, use the command:
# cp /vmlinuz /dev/fd0
LILO is a separate boot loader which resides on your hard disk. It is executed when
the
system boots from the hard drive and can automatically boot Linux from a kernel image
stored there.
LILO can also be used as a first-stage boot loader for several operating systems,
which
allows you to select the operating system you want to boot, like Linux or MS-DOS. With
LILO,
the default operating system is booted unless you press Shift during the boot-up
sequence,
or if the prompt directive is given in the lilo.conf file. In either case, LILO provides
you with a boot prompt, where you type the name of the operating system to boot (such as
"linux" or "msdos"). If you press Tab at the boot prompt, a list of
operating systems that the system knows about will be provided.
The easy way to install LILO is to edit the configuration file, /etc/lilo.conf. The
command:
# /sbin/lilo
rewrites the modified lilo.conf configuration to the boot sector of the hard disk,
and
must be run every time you modify lilo.conf.
The LILO configuration file contains a "stanza" for each operating system that you
want to boot. The best way to demonstrate this is with an example. The lilo.conf file
below is for a system which has a Linux root partition on /dev/hda1 and a MS-DOS
partition on /dev/hda2 :
# This forces LILO to prompt you for the OS you want to boot.
# A 'TAB' key at the LILO: prompt will display a list of the OSs
# available to boot according to the names given in the 'label='
# directives below.
prompt
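# A configuration for the system described above might continue along these
# lines; exact option names and values vary by installation.
boot = /dev/hda
vga = normal
# The Linux stanza:
image = /vmlinuz
  root = /dev/hda1
  label = linux
# The MS-DOS stanza:
other = /dev/hda2
  label = msdos
  table = /dev/hda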
The first operating system stanza is the default operating system for LILO to boot.
Also
note that if you use the "root =" line, above, there's no reason to use rdev to
set the root partition in the kernel image. LILO sets it at boot time.
The Microsoft Windows '95/98 installer will overwrite the LILO boot manager. If you
are going to install Windows '95/98 on your system after installing LILO, make sure
to
create a boot disk first. With the boot disk, you can boot Linux and re-install
LILO after the Windows '95 installation is completed. This is done simply by typing,
as
root, the command /sbin/lilo, as in the step above. Partitions with Windows '95/98
are
configurable to boot with LILO using the same lilo.conf entries that are used to boot
the MS-DOS partition.
Shutting down a Linux system can be tricky. You should never simply turn off the
power or press the reset switch. The kernel keeps track of the disk read/write data
in
memory buffers. If you reboot the system without giving the kernel a chance to write
its
buffers to disk, you can corrupt the file systems.
Other precautions are taken during shutdown as well. All processes are sent a signal
that allows them to die gracefully (by first writing and closing all files, for
example). File
systems are unmounted for safety. If you wish, the system can also alert users that
the
system is going down and give them a chance to log off.
The easiest way to shut down is with the shutdown command. The format of the
command is:
# shutdown time warning-message
The time argument is the time to shut down the system (in the format hh:mm:ss), and
warning-message is a message displayed on all users' terminals before shutdown.
Alternately, you can specify the time as "now", to shut down immediately. The -r
option
may be given to shutdown to reboot the system after shutting down.
For example, to shut down and reboot the system at 8:00 pm, use the command:
# shutdown -r 20:00
The command halt may be used to force an immediate shutdown without any warning
messages or grace period. halt is useful if you're the only one using the system and
want
to shut down and turn off the machine.
Don't turn off the power or reboot the system until you see a message similar to:
The system is halted
It is very important that you shut down the system cleanly, using the shutdown or
halt command. On some systems, pressing Ctrl - Alt - Del will be trapped and
cause a shutdown. On other systems, using the "Vulcan nerve pinch" will reboot the
system immediately and cause disaster.
After startup, init remains quietly in the background, monitoring and if necessary
altering the running state of the system. There are many details that the init
program
must see to. These tasks are defined in the /etc/inittab file. A sample /etc/inittab
file is shown below.
Modifying the /etc/inittab file incorrectly can prevent you from logging in to
your system. At the very least, when changing the /etc/inittab file, keep on hand
a copy of the original, correct file, and a boot/root emergency floppy in case you
make
a mistake:
#
# inittab
# This file describes how the INIT process should set up
# the system in a certain run-level.
#
# Version: @(#)inittab 2.04 17/05/93 MvS
# 2.10 02/10/95 PV
#
# Author:       Miquel van Smoorenburg,
# Modified by:  Patrick J. Volkerding,
# Minor modifications by: Robert Kiesling,
#
# Default runlevel.
id:3:initdefault:
# What to do at Ctrl-Alt-Del
ca::ctrlaltdel:/sbin/shutdown -t5 -rfn now
# If power comes back in single user mode, return to multi user mode.
ps:S:powerokwait:/sbin/init 5
# Serial lines
# s1:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt100
s2:12345:respawn:/sbin/agetty -L 9600 ttyS1 vt100
# Dialup lines
d1:12345:respawn:/sbin/agetty -mt60 56000,38400,19200,9600,2400,1200 ttyS0 vt100
#d2:12345:respawn:/sbin/agetty -mt60 56000,38400,19200,9600,2400,1200 ttyS1 vt100
# End of /etc/inittab
Briefly, init steps through a series of run levels, which correspond to various
operational
states of the system. Run level 1 is entered immediately after the system boots,
run levels 2 and 3 are the normal, multi-user operation modes of the system, run
level 4
starts the X Window System via the X display manager xdm, and run level 6 reboots the
system. The run level(s) associated with each command are the second item in each
line of
the /etc/inittab file.
For example, the line:
s2:12345:respawn:/sbin/agetty -L 9600 ttyS1 vt100
from the sample file above will maintain a login prompt on a serial terminal for runlevels
1–5. The "s2" before
the first colon is a symbolic identifier used internally by init. respawn is an init
keyword
that is often used in conjunction with serial terminals. If, after a certain period
of time,
the agetty program, which spawns the terminal's login: prompt, does not receive input
at the terminal, the program times out and terminates execution. "respawn" tells init
to re-execute agetty, ensuring that there is always a login: prompt at the terminal,
regardless of whether someone has logged in. The remaining parameters are passed
directly
to agetty and instruct it to spawn the login shell, the data rate of the serial line,
the
serial device, and the terminal type, as defined in /etc/termcap or /etc/terminfo.
The /sbin/agetty program handles many details related to terminal I/O on the
system. There are several different versions that are commonly in use on Linux
systems.
Consider the dialup entry from the sample file:
d1:12345:respawn:/sbin/agetty -mt60 56000,38400,19200,9600,2400,1200 ttyS0 vt100
which allows users to log in via a modem connected to serial line /dev/ttyS0, the
/sbin/agetty parameters "-mt60" allow the system to step through all of the modem
speeds that a caller dialing into the system might use, and to shut down /sbin/agetty
if there is no connection after 60 seconds. This is called negotiating a connection.
The
supported modem speeds are enumerated on the command line also, as well as the serial
line to use, and the terminal type. Of course, both of the modems must support the
data
rate which is finally negotiated by both machines.
Many important details have been glossed over in this section. The tasks that
/etc/inittab maintains would comprise a book of their own. For further information,
the manual pages of the init and agetty programs, and the Linux Documentation
Project's Serial HOW TO, are starting points.
The same is true of file systems on the hard drive. The system automatically mounts
file
systems on your hard drive at bootup time. The so-called "root file system" is
mounted
on the directory /. If you have a separate file system for /usr, it is mounted on
/usr. If you
only have a root file system, all files (including those in /usr) exist on that file
system.
mount and umount (not unmount) are used to mount and unmount file systems. The
command:
# mount -av
executed at boot time from /etc/rc or /etc/init.d/boot, mounts the file systems listed
in the file /etc/fstab, which describes one file system per line.
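A small /etc/fstab consistent with the devices used in the examples below might look
something like this (your device names and file system types will differ):
# device      directory     type      options
/dev/hda2     /             ext2      defaults
/dev/hda3     /usr          ext2      defaults
/dev/hda4     none          swap      sw
/proc         /proc         proc      defaults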
The first field, device, is the name of the partition to mount. The second field is
the
mount point. The third field is the file system type, like ext2 (for ext2fs) or minix
(for
Minix file systems). Table 4.1 lists the various file system types that are mountable
under
Linux.
Not all of these file system types may be available on your system, because the
kernel must have support for them compiled in.
The last field of the fstab file is the mount options. This is normally set to
defaults.
Swap partitions are included in the /etc/fstab file. They have a mount directory
of none, and type swap. The swapon -a command, which is executed from /etc/rc
or /etc/init.d/boot, is used to enable swapping on all of the swap devices that are
listed in /etc/fstab.
The /etc/fstab file contains one special entry for the /proc file system. The /proc
file
system is used to store information about system processes, available memory, and so
on.
If /proc is not mounted, commands like ps will not work.
The mount command may be used only by root. This ensures security on the system.
You wouldn't want regular users mounting and unmounting file systems on a whim.
Several
software packages are available which allow non-root users to mount and unmount file
systems, especially floppies, without compromising system security.
The mount -av command actually mounts all of the file systems other than the
root file system (in the table above, /dev/hda2). The root file system is
automatically
mounted at boot time by the kernel.
Instead of using mount -av, you can mount a file system by hand. The command:
# mount -t ext2 /dev/hda3 /usr
is equivalent to mounting the file system with the entry for /dev/hda3 in the example
/etc/fstab file, above.
In addition to the partition names listed in the /etc/fstab file, Linux recognizes a
number of fixed and removable media devices. They are classified by type, interface,
and
the order they are installed. For example, the first hard drive on your system, if it
is an
IDE or older MFM hard drive, is controlled by the device driver pointed to by
/dev/hda. The
first partition on the hard drive is /dev/hda1, the second partition is /dev/hda2,
the
third partition is /dev/hda3, and so on. The first partition of the second IDE drive
is
often /dev/hdb1, the second partition /dev/hdb2, and so on. The naming scheme for
the most commonly installed IDE drives for Intel-architecture, ISA and PCI bus
machines,
is given in Table 4.2.
CD-ROM and tape drives which use the extended IDE/ATAPI drive interface also use
these device names.
Note that SCSI CD-ROM and tape drives are named differently than SCSI hard drives.
Removable SCSI media, like the Iomega Zip drive, follow the naming conventions for
non-removable SCSI drives: the drives themselves are /dev/sda, /dev/sdb, and so on, with
partitions /dev/sda1, /dev/sda2, etc. SCSI CD-ROM drives typically appear as /dev/scd0,
/dev/scd1, and so on, and SCSI tape drives as /dev/st0, /dev/st1, and so on.
Floppy disk drives use still another naming scheme, outlined elsewhere in this
tutorial.
It is usually a good idea to check your file systems for damaged or corrupted files
every
now and then. Some systems automatically check their file systems at boot time (with
the
appropriate commands in /etc/rc or /etc/init.d/boot).
The command used to check a file system depends on the type of the file system. For
ext2fs file systems (the most commonly used type), this command is e2fsck. For example,
the command:
# e2fsck -av /dev/hda2
checks the ext2fs file system on /dev/hda2 and automatically corrects any errors.
It is usually a good idea to unmount a file system before checking it, and necessary,
if
e2fsck is to perform any repairs on the file system. The command:
# umount /dev/hda2
unmounts the file system on /dev/hda2. The one exception is that you cannot unmount
the root file system. In order to check the root file system when it's unmounted, you
should
use a maintenance boot/root diskette. You also cannot unmount a file system if any of
the
files which it contains are "busy"—that is, in use by a running process.
For example, you cannot unmount a file system if any user's current working directory
is on
that file system. You will instead receive a "Device busy" error message.
Other file system types use different forms of the e2fsck command, like efsck and
xfsck. On some systems, you can simply use the command fsck, which automatically
determines the file system type and executes the appropriate command.
If e2fsck reports that it performed repairs on a mounted file system, you must reboot
the system immediately. You should give the command shutdown -r to perform the
reboot. This allows the system to re-synchronize the information about the file
system after
e2fsck modifies it.
The /proc file system never needs to be checked in this manner. /proc is a memory
file system and is managed directly by the kernel.
Instead of reserving a separate partition for swap space, you can use a swap file.
However,
you need to install Linux and get everything running before you create the swap file.
With Linux installed, you can use the following commands to create a swap file. The
command below creates a swap file of size 8208 blocks (about 8 MB):
# dd if=/dev/zero of=/swap bs=1024 count=8208
This command creates the swap file, /swap. The "count=" parameter is the size of the
swap file in blocks:
# mkswap /swap 8208
This command initializes the swap file. Again, replace the name and size of the swap file
with the appropriate values. Now enable swapping on the new file:
# sync
# swapon /swap
Now the system is swapping on the file /swap. The sync command ensures that the file
has been written to disk.
One major drawback to using a swap file is that all access to the swap file is done
through the file system. This means the blocks which make up the swap file may not be
contiguous. Performance may not be as good as a swap partition, where the blocks are
always contiguous and I/O requests are made directly to the device.
Another drawback of large swap files is the greater chance that the file system will
be
corrupted if something goes wrong. Keeping the regular file systems and swap
partitions
separate prevents this from happening.
Swap files can be useful if you need to use more swap space temporarily. If you're
compiling a large program and would like to speed things up somewhat, you can create a
temporary swap file and use it in addition to the regular swap space. When you no longer
need the swap file, disable it and remove it:
# swapoff /swap
# rm /swap
Each swap file or partition may be as large as 128 megabytes, but you may use up to 8
swap files or partitions on your system.
Each user should have his or her own account. It is seldom a good idea to have
several
people share the same account. Security is an issue, and accounts uniquely identify
users to
the system. You must be able to keep track of who is doing what.
The system keeps track of the following information about each user:
user name This identifier is unique for every user. Example user names are patrick,
karl, and mdw. Letters and digits may be used, as well as "_" (underscore) and "."
(period). User names are usually limited to 8 characters in length.
user ID
This number, abbreviated UID, is unique for every user. The system
generally keeps track of users by UID, not user name.
group ID
This number, abbreviated GID, is the user's default group. Each user belongs
to one or more groups as defined by the system administrator.
password
This is the user's encrypted password. The passwd command is used to
set and change user passwords.
full name
The user's "real name," or "full name," is stored along
with the username. For example, the user schmoj may be "Joe Schmo" in real life.
home directory
This is the directory the user is initially placed in at login, and where his
or her personal files are stored. Every user is given a home directory,
which is commonly located under /home.
login shell
The shell that is started for the user at login. Examples are /bin/bash
and /bin/tcsh.
This information is stored in the file /etc/passwd. Each line in the file has the
format:
user name:encrypted password:UID:GID:full name:home directory:login shell
An example entry looks like this:
kiwi:Xv8Q981g71oKK:102:100:Laura Poole:/home/kiwi:/bin/bash
The first field, "kiwi", is the user name.
The next field, "Xv8Q981g71oKK", is the encrypted password. Passwords are not
stored on the system in human-readable format. The password is encrypted using itself
as
the secret key. In other words, one must know the password in order to decrypt it.
This
form of encryption is reasonably secure.
The third field, "102", is the UID. This must be unique for each user. The fourth
field,"100", is the GID. This user belongs to the group numbered 100. Group
information is stored in the file /etc/group.
The fifth field is the user's full name, "Laura Poole". The last two fields are the
user's home directory (/home/kiwi), and login shell (/bin/bash), respectively. It is
not required that the user's home directory be given the same name as the user name.
It
simply helps identify the directory.
When adding users, several steps must be taken. First, the user is given an entry in
/etc/passwd, with a unique user name and UID. The GID, full name, and other information
must be specified. The user's home directory must be created, and the
permissions
on the directory set so that the user owns the directory. Shell initialization files
must be
installed in the home directory, and other files must be configured system-wide (for
example,
a spool for the user's incoming e-mail).
It is not difficult to add users by hand, but when you are running a system with many
users, it is easy to forget something. The easiest way to add users is to use an
interactive
program which updates all of the system files automatically. The name of this program
is
useradd or adduser, depending on what software is installed.
The adduser command takes its information from the file /etc/adduser.conf,
which defines a standard, default configuration for all new users.
# If QUOTAUSER is set, a default quota will be set from that user with
# 'edquota -p QUOTAUSER newuser'
QUOTAUSER=""
If you'd like to temporarily "disable" a user from logging in to the system without
deleting his or her account, simply prepend an asterisk ("*") to the password field in
/etc/passwd. For example, changing kiwi's /etc/passwd entry to:
kiwi:*Xv8Q981g71oKK:102:100:Laura Poole:/home/kiwi:/bin/bash
will prevent kiwi from logging in.
After you have created a user, you may need to change attributes for that user, like
the
home directory or password. The easiest way to do this is to change the values
directly in
/etc/passwd. To set a user's password, use passwd. The command :
# passwd patrick
will change Patrick's password. Only root may change other users' passwords in this
manner. Users can change their own passwords, however.
On some systems, the commands chfn and chsh allow users to set their own full
name and login shell attributes. If not, the system administrator must change these
attributes
for them.
2.6.5 Groups
As mentioned above, each user belongs to one or more groups. The only real
importance of group relationships pertains to file permissions. As you'll recall,
each file has a "group ownership" and a set of group permissions which defines how
users in that group may access the file.
There are several system-defined groups, like bin, mail and sys. Users should not
belong to any of these groups; they are used for system file permissions. Instead,
users
should belong to an individual group like users. You can also maintain several groups
for users, like student, staff and faculty.
The file /etc/group contains information about groups. The format of each line is:
group name:password:GID:other members
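A few entries consistent with the groups described below might look like this (the GIDs
shown for guest and other are only illustrative):
root::0:
users::100:mdw,patrick
guest::200:
other::250:kiwi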
The first group, root, is a special system group reserved for the root account. The
next
group, users, is for regular users. It has a GID of 100. The users mdw and patrick
have access to this group. Remember that in /etc/passwd each user was given
a default GID. However, users may belong to more than one group, by adding their user
names to other group lines in /etc/group. The groups command lists what groups
you are given access to.
The third group, guest, is for guest users, and other is for "other" users. The user
kiwi is given access to this group as well.
The commands addgroup or groupadd may be used to add groups to your system.
Usually, it's easier just to add entries in /etc/group yourself, as no other
configuration
needs to be done to add a group. To delete a group, simply delete its entry in
/etc/group.
Because the system administrator has so much power and responsibility, when some
users have their first opportunity to log in as root, either on a Linux system or elsewhere,
the tendency is to abuse root's privileges. I have known so-called "system
administrators"
who read other users' mail, delete users' files without warning, and generally behave
like
children when given such a powerful "toy".
Because the administrator has such power on the system, it takes a certain amount of
maturity and self-control to use the root account as it was intended—to run the
system.
There is an unspoken code of honor which exists between the system administrator and
the
users on the system. How would you feel if your system administrator was reading your
e-mail or looking over your files? There is still no strong legal precedent for
electronic
privacy on time-sharing computer systems. On UNIX systems, the root user has the
ability to forego all security and privacy mechanisms on the system. It is important
that the
system administrator develop a trusting relationship with his or her users. I can't
stress that
enough.
System administrators can take two stances when dealing with abusive users: they can
be either paranoid or trusting. The paranoid system administrator usually causes more
harm
than he or she prevents. One of my favorite sayings is, "Never attribute to malice
anything
which can be attributed to stupidity." Put another way, most users don't have the
ability
or knowledge to do real harm on the system. Ninety percent of the time, when a user
causes
trouble on the system (for instance, by filling up the user partition with large
files,
or running multiple instances of a large program), the user is simply unaware that he or she is creating a problem. I have come down on users who were causing a great deal of trouble, but they were simply acting out of ignorance, not malice.
When you deal with users who cause potential trouble, don't be accusatory. The burden
of proof is on you; that is, the rule of "innocent until proven guilty" still holds.
It is
best to simply talk to the user and question him or her about the trouble instead of
being
confrontational. The last thing you want is to be on the user's bad side; that only raises suspicion about whether you, the system administrator, are running the system correctly. If a user believes that you distrust or dislike them, they might accuse you of deleting
files or
breaching privacy on the system. This is certainly not the kind of position you want
to be
in.
The best way to run a system is not with an iron fist. That may be how you run the
military,
but Linux is not designed for such discipline. It makes sense to lay down a few
simple
and flexible guidelines. The fewer rules you have, the less chance there is of
breaking them.
Even if your rules are perfectly reasonable and clear, users will still at times
break them
without intending to. This is especially true of new users learning the ropes of the
system.
It is not patently obvious that you shouldn't download a gigabyte of files and mail
them to
everyone on the system. Users need help to understand the rules and why they are
there.
If you do specify usage guidelines for your system, make sure also that the rationale
for
a particular guideline is clear. If you don't, users will find all sorts of creative
ways to
get around the rule, and not know that they are breaking it.
We don't tell you how to run your system down to the last detail. That depends on how
you're using the system. If you have many users, things are much different than if you have only a few users, or if you're the only user on the system. However, it's always a
good
idea—in any situation—to understand what being the system administrator really means.
Being the system administrator doesn't make you a Linux wizard. There are many administrators
who know very little about Linux. Likewise, many "normal" users know more about
Linux than any system administrator. Also, being the system administrator does not
allow
one to use malice against users. Just because the system gives administrators the
ability to
mess with user files does not mean that they have a right to do so.
Being the system administrator is not a big deal. It doesn't matter if your system is
a
tiny 386 or a Cray supercomputer. Running the system is the same, regardless. Knowing
the root password isn't going to earn you money or fame. It will allow you to
maintain
the system and keep it running. That's it.
Before we can talk about backups, we need to introduce the tools used to archive
files
on Unix systems.
The tar command is most often used to archive files. Its command syntax is:
tar options files
where options is the list of commands and options for tar, and files is the list of files to add or extract from the archive.
For example, the command:
# tar cvf backup.tar /etc
packs all of the files in /etc into the tar archive backup.tar. The first argument to tar, "cvf", is the tar "command": c tells tar to create a new archive file, v forces tar to use verbose mode, printing each file name as it is archived, and the "f" option tells tar that the next argument, backup.tar, is the name of the archive to create. The rest of the arguments to tar are the file and directory names to add to the archive.
The command:
# tar xvf backup.tar
extracts the files from backup.tar. Old files with the same name are overwritten when extracting files into an existing directory.
Before extracting tar files it is important to know where the files should be unpacked.
Let's say that you archive the following files: /etc/hosts, /etc/group,
and /etc/passwd. If you use the command:
# tar cvf backup.tar /etc/hosts /etc/group /etc/passwd
the directory name /etc/ is added to the beginning of each file name. In order to extract the files to the correct location, use:
# cd /
# tar xvf backup.tar
because files are extracted with the path name saved in the archive file. However, if you archive the same files with the commands:
# cd /etc
# tar cvf backup.tar hosts group passwd
the directory name is not saved in the archive file. Therefore, you need to "cd /etc" before extracting the files. As you can see, how the tar file is created makes a large difference in where you extract it. The command:
# tar tvf backup.tar
can be used to display a listing of the archive's files without extracting them. You
can see
what directory the files in the archive are stored relative to, and extract the
archive in the
correct location.
gzip is a relatively new tool in the UNIX community. For many years, the compress command was used instead. However, because of several factors, including a software patent dispute over the compression algorithm used by compress, and the fact that gzip is much more efficient, compress is being phased out.
Files output by compress end in .Z; backup.tar.Z is the compressed version of backup.tar, while backup.tar.gz is the gzipped version. The uncompress command is used to expand a compressed file; it is equivalent to "compress -d". gunzip knows how to handle compressed files as well.
To archive a group of files and compress the result, use the commands:
# tar cvf backup.tar /etc
# gzip -9 backup.tar
The result is backup.tar.gz. To unpack this file, use the reverse commands:
# gunzip backup.tar.gz
# tar xvf backup.tar
Always make sure that you are in the correct directory before unpacking a tar file.
You can use some Linux cleverness to do the archiving and compression on one command line:
# tar cvf - /etc | gzip -9c > backup.tar.gz
Here, we send the tar file to "-", which stands for tar's standard output. This is piped to gzip, which compresses the incoming tar file. The -c option tells gzip to send its output to standard output, which is redirected to backup.tar.gz. To unpack such an archive in one step, use:
# gunzip -c backup.tar.gz | tar xvf -
Again, gunzip uncompresses the contents of backup.tar.gz and sends the resulting tar file to standard output. This is piped to tar, which reads "-", this time referring to tar's standard input.
Note:
For some time, the extension .z (lowercase "z") was used for gzipped files. The conventional gzip extension is now .gz.
Happily, the tar command also includes the z option to automatically compress and uncompress files on the fly, using the gzip compression algorithm. The command:
# tar cvfz backup.tar.gz /etc
is equivalent to:
# tar cvf backup.tar /etc
# gzip backup.tar
and, when extracting, a command such as:
# tar xvfz backup.tar.Z
replaces the two steps:
# uncompress backup.tar.Z
# tar xvf backup.tar
Refer to the tar and gzip manual pages for more information.
Floppies are often used as backup media. If you don't have a tape drive connected
to your system, floppy disks can be used (although they are slower and somewhat less
reliable).
Before use, floppies must be formatted with either the MS-DOS FORMAT.COM program or the Linux fdformat program. This lays down the sector and track information that is appropriate to the floppy's capacity.
A few of the device names and formats of floppy disks which are accessible by Linux
are given in Table 2.4.
Device names which begin with fd0 refer to the first floppy drive, known as the A: drive under MS-DOS. Device names for the second floppy drive begin with fd1. Generally,
the Linux kernel can detect the format of a diskette that has already been formatted—
you can simply use /dev/fd0 and let the system detect the format. But when you first
use completely new, unformatted floppy disks, you may need to use the format-specific device name if the system can't detect the diskette's type.
A complete list of Linux devices and their device driver names is given in Linux
Allocated
Devices, by H. Peter Anvin.
You can also use floppies to hold individual file systems and mount the floppy to
access
the data on it. See section 2.8.4.
The easiest way to make a backup using floppies is with tar. The command:
# tar cvfM /dev/fd0 /
will make a complete backup of your system using the floppy drive /dev/fd0. The "M" option to tar allows the backup to span multiple volumes; that is, when one floppy is full, tar will prompt for the next. The command:
# tar xvfM /dev/fd0
restores the complete backup. This method can also be used with a tape drive connected to your system.
Several other programs exist for making multiple-volume backups; the backflops
program found on tsx-11.mit.edu may come in handy.
Making a complete backup of the system with floppies can be time and resource
consuming. Many system administrators use an incremental backup policy. Every
month, a complete backup is made, and every week only those files which have
been modified in the last week are backed up. In this case, if you trash your system
in the middle of the month, you can simply restore the last full monthly backup, and
then restore the last weekly backups as needed.
The find command is useful for locating files which were modified after a certain
date.
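For example, with GNU find and tar, the files changed in the last week could be gathered into a list and written to floppies (the directory and list file name are just illustrations):
# find /home -mtime -7 -type f -print > /tmp/weekly.list
# tar cvfM /dev/fd0 -T /tmp/weekly.list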
Several scripts for managing incremental backups can be found on sunsite.unc.edu.
Making backups to a Zip drive is similar to making floppy backups, but because Zip disks have a capacity of roughly 100 MB, it is feasible to use a single, mounted Zip disk for a single backup archive.
Zip drives are available with three different hardware interfaces: a SCSI interface,
an
IDE interface and a parallel port PPA interface. Zip drive support is not included as
a
pre-compiled Linux option, but it can be specified when building a custom kernel for
your
system.
Both the SCSI and the PPA (parallel port) Zip drives appear to the system as SCSI devices and follow the naming conventions for other SCSI devices.
Zip disks are commonly pre-formatted with a MS-DOS file system. You can either use
the existing MS-DOS filesystem, which must be supported by your Linux kernel, or use
mke2fs or a similar program to write a Linux file system to the disk.
It is often convenient to provide a separate mount point for Zip file systems; for
example,
/zip. The following steps, which must be executed as root, would create the mount
point:
# mkdir /zip
Then you can use /zip for mounting the Zip file system. With the Zip disk mounted on /zip, a backup of /etc could be made with a command such as:
# tar czf /zip/etc.tgz /etc
This command could be executed from any directory because it specifies absolute path names. The archive name etc.tgz is necessary if the Zip drive contains a MS-DOS file system, because any files written to the disk must have names which conform to MS-DOS 8+3 naming conventions; otherwise, the file names will be truncated.
To create, for example, an ext2 file system on a Zip drive, you would give the
command
(for an unmounted Zip disk) :
# mke2fs /dev/sda4
With a Zip drive mounted in this manner, with an ext2 file system, it is possible to
back
up entire file systems with a single command :
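For example, assuming the ext2-formatted Zip disk is mounted on /zip (the directory being backed up and the archive name are just illustrations):
# tar czf /zip/home.tar.gz /home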
Note that backing up with tar is still preferable in many cases to simply making an
archival copy with the cp -a command, because tar preserves the original files'
modification
times.
Floppy tape drives use the floppy drive controller interface and are controlled by
the
ftape device driver, which is covered below.
To archive the /etc directory to a tape device with tar, use the command:
# tar cvf /dev/qft0 /etc
Similarly, to extract the files from the tape, use the commands:
# cd /
# tar xvf /dev/qft0
These tapes, like diskettes, must be formatted before they can be used. The ftape
driver
can format tapes under Linux. To format a QIC-40 format tape, use the command:
# ftformat --omit-erase --discard-header
Other tape drives have their own formatting software. Check the hardware
documentation
for the tape drive or the documentation of the Linux device driver associated with
it.
Before tapes can be removed from the drive, they must be rewound and the I/O buffers
written to the tape. This is analogous to unmounting a floppy before ejecting it,
because
the tape driver also caches data in memory. The standard Unix command to control tape
drive operations is mt. Your system may not provide this command, depending on
whether
it has tape drive facilities. The ftape driver has a similar command, ftmt, which is
used to
control tape operations.
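For example, with the qft0 ftape device, a command of this form rewinds the tape before removal:
# mt -f /dev/qft0 rewind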
Of course, substitute the correct tape device driver for your system.
It is also a good idea to retension a tape after writing to it, because magnetic tapes are susceptible to stretching. The command:
# mt -f /dev/qft0 retension
winds the tape to the end and back to even out the tension. To obtain the status of the tape device, with a formatted tape in the drive, give the command:
# mt -f /dev/qft0 status
You can create a file system on a floppy as you would on a hard drive partition. For
example:
# mke2fs /dev/fd0 1440
creates a file system on the floppy in /dev/fd0. The size of the file system must
correspond
to the size of the floppy. High-density 3.5" disks are 1.44 megabytes, or 1440
blocks,
in size. High-density 5.25" disks are 1200 blocks. It is necessary to specify the
size of
the file system in blocks if the system cannot automatically detect the floppy's
capacity.
In order to access the floppy, you must mount the file system contained on it. The
command:
# mount /dev/fd0 /mnt
will mount the floppy in /dev/fd0 on the directory /mnt. Now, all of the files on the
floppy will appear under /mnt on your drive.
The mount point, the directory where you're mounting the file system, must exist
when you use the mount command. If it doesn't exist, create it with mkdir.
See earlier pages for more information on file systems, mounting, and mount points.
Note that any I/O to the floppy is buffered the same as hard disk I/O is. If you
change
data on the floppy, you may not see the drive light come on until the kernel flushes
its
I/O buffers. It's important that you not remove a floppy before you unmount it with
the
command:
# umount /dev/fd0
Do not simply switch floppies as you would on a MS-DOS system. Whenever you change
floppies, umount the first floppy and mount the next.
Another duty of the system administrator is the upgrading and installation of new
software.
Linux system development is rapid. New kernel releases appear every few weeks, and
other software is updated nearly as often. Because of this, new Linux users often
feel the
need to upgrade their systems constantly to keep up with the rapidly changing pace.
This is
unnecessary and a waste of time. If you kept up with all of the changes in the Linux
world,
you would spend all of your time upgrading and none of your time using the system.
Some people feel that you should upgrade when a new distribution release is made; for
example, when Slackware comes out with a new version. Many Linux users completely
reinstall their system with the newest Slackware release every time.
The best way to upgrade your system depends on the Linux distribution you have.
Debian, S.u.S.E., Caldera and Red Hat Linux all have intelligent package management
software which allows easy upgrades by installing a new package. For example, the C
compiler, gcc, comes in a pre-built binary package. When it is installed, all of the
files of
the older version are overwritten or removed.
For the most part, senselessly upgrading to "keep up with the trend" is not important
at all. This isn't MS-DOS or Microsoft Windows. There is no important reason to run
the
newest version of all of the software. If you find that you would like or need
features that a
new version offers, then upgrade. If not, don't upgrade. In other words, upgrade only
what
you must, when you must. Don't upgrade for the sake of upgrading. This wastes a lot
of
time and effort.
The Linux kernel sources may be retrieved from any of the Linux FTP sites.
On sunsite.unc.edu, for instance, the kernel sources are found
in /pub/Linux/kernel, organized into subdirectories by version number.
Kernel sources are released as a gzipped tar file. For example, the file containing
the
2.0.33 kernel sources is linux-2.0.33.tar.gz.
Kernel sources are unpacked in the /usr/src directory, creating the directory
/usr/src/linux. It is common practice for /usr/src/linux to be a soft link to
another directory which contains the version number, like /usr/src/linux-2.0.33.
This way, you can install new kernel sources and test them out before removing the
old
kernel sources. The commands to create the kernel directory link are:
# cd /usr/src
# mkdir linux-2.0.33
# rm -r linux
# ln -s linux-2.0.33 linux
# tar xzf linux-2.0.33.tar.gz
When upgrading to a newer patchlevel of the same kernel version, kernel patch files
can
save file transfer time because the kernel source is around 7MB after being
compressed by
gzip. To upgrade from kernel 2.0.31 to kernel 2.0.33, you would download the patch
files
patch-2.0.32.gz and patch-2.0.33.gz, which can be found at the same FTP
site as the kernel sources. After you have placed the patches in the /usr/src
directory,
apply the patches to the kernel in sequence to update the source. One way to do this
would
be:
# cd /usr/src
# gzip -cd patch-2.0.32.gz | patch -p0
# gzip -cd patch-2.0.33.gz | patch -p0
After the sources are unpacked and any patches have been applied, you need to make
sure
that three symbolic links in /usr/include are correct for your kernel distribution.
To
create these links use the commands:
# cd /usr/include
# rm -rf asm linux scsi
# ln -s /usr/src/linux/include/asm-i386 asm
# ln -s /usr/src/linux/include/linux linux
# ln -s /usr/src/linux/include/scsi scsi
After you create the links, there is no reason to create them again when you install
the next
kernel patch or a newer kernel version.
In order to compile the kernel, you must have the gcc C compiler installed on your
system. gcc version 2.6.3 or a more recent version is required to compile the 2.0
kernel.
After configuring the kernel with make config, run the command make dep to update all of the source dependencies. This is an important step. make clean removes old binary files from the kernel source tree.
The command:
make zImage
compiles the kernel and writes the compressed image to /usr/src/linux/arch/i386/boot/zImage. If the kernel is too large for this format, the build stops with the message:
Kernel Image Too Large
If this happens, try the command make bzImage, which uses a boot and compression scheme that supports larger kernels. The kernel is then written to /usr/src/linux/arch/i386/boot/bzImage.
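Putting these steps together, a typical build session using the commands described above looks like this:
# cd /usr/src/linux
# make config
# make dep
# make clean
# make zImage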
Once you have the kernel compiled, you need to either copy it to a boot floppy ( with
a
command like "cp zImage /dev/fd0") or install the image so LILO will boot from
your hard drive.
Support for the Iomega Zip drive, like many other devices, is not generally compiled
into
stock Linux distribution kernels—the variety of devices is simply too great to
support all
of them in a usable kernel. However, the source code for the Zip parallel port device
driver
is included as part of the kernel source code distribution. This section describes
how to add
support for an Iomega Zip parallel port drive and have it co-exist with a printer
connected
to a different parallel port.
You must have installed and successfully built a custom Linux kernel, as described in
the previous section.
Selecting the Zip drive ppa device as a kernel option requires that you answer Y to
the
appropriate questions during the make config step, when you determine the
configuration
of the custom kernel. In particular, the ppa device requires answering "Y" to three
options:
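In a 2.0-series kernel these typically correspond to the following options (the prompt wording is approximate and may differ between kernel versions):
SCSI support (CONFIG_SCSI)
SCSI disk support (CONFIG_BLK_DEV_SD)
IOMEGA Parallel Port ZIP drive SCSI support (CONFIG_SCSI_PPA)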
After you have successfully run make config with all of the support options you want included in the kernel, and have run make dep, make clean, and make zImage to build the kernel, you must tell the kernel how to configure the driver. This is done via a command line passed to the kernel by the LILO boot loader. As described in section 2.2.1, the LILO configuration file,
file,
/etc/lilo.conf has "stanzas" for each operating system that it knows about, and also
directives for presenting these options to the user at boot time.
Another directive that LILO recognizes is "append=", which allows you to add
boot-time
information required by various device drivers to the command line. In this case, the
Iomega Zip ppa driver requires an unused interrupt and I/O port address. This is
exactly
analogous to specifying separate printer devices like LPT1 : and LPT2 : under MS-DOS.
For example, if your printer uses the hexadecimal (base 16) port address 0x378 (see
the installation manual for your parallel port card if you don't know what the
address is)
and is polled (that is, it doesn't require an IRQ line, a common Linux
configuration), you
would place the following line in your system's /etc/lilo.conf file:
append="lp=0x378,0".
It is worth noting that Linux automatically recognizes one /dev/lp port at boot time, but when specifying a custom port configuration, these boot-time instructions are needed.
The "0" after the port address tells the kernel not to use a IRQ (interrupt request)
line for the printer. This is generally acceptable because printers are much slower
than CPUs,
so a slower method of accessing I/O devices, known as polling, where the kernel
periodically
checks the printer status on its own, still allows the computer to keep up with the
printer.
However, devices that operate at higher speeds, like serial lines and disks, each
require
an IRQ, or interrupt request, line. This is a hardware signal sent by the device to
the
processor whenever the device requires the processor's attention; for example, if the
device
has data waiting to be input to the processor. The processor stops whatever it is
doing
and handles the interrupt request of the device. The Zip drive ppa device requires a
free
interrupt, which must correspond to the interrupt that is set on the printer card
that you
connect the Zip drive to. At the time of this writing, the Linux ppa device driver
does not
support "chaining" of parallel port devices, and separate parallel ports must be used
for the Zip ppa device and each printer.
To determine which interrupts are already in use on your system, the command:
# cat /proc/interrupts
displays a list of devices and the IRQ lines they use. However, you also need to be
careful
not to use any automatically configured serial port interrupts, which may not be listed in the /proc/interrupts file. The Linux Documentation Project's Serial
HOWTO, available from the sources listed in Appendix A, describes in detail the
configuration
of serial ports.
You should also check the hardware settings of various interface cards on your
machine,
by opening the machine's case and visually checking the jumper settings if necessary,
to
ensure that you are not co-opting an IRQ line that is already in use by another
device.
Multiple devices fighting for an interrupt line is perhaps the single most common
cause of
non-functioning Linux systems.
For example, on a typical system the output of cat /proc/interrupts might look like this:
0: 6091646 timer
1: 40691 keyboard
2: 0 cascade
4: 284686 + serial
13: 1 math error
14: 192560 + ide0
The first column is of interest here. These are the numbers of the IRQ lines that are
in
use on the system. For the ppa driver, we want to choose a line which is not listed.
IRQ
7 is often a good choice, because it is seldom used in default system configurations.
We
also need to specify the port address which the ppa device will use. This address
needs
to be physically configured on the interface card. Parallel I/O ports are assigned
specific
addresses, so you will need to read the documentation for your parallel port card. In
this
example, we will use the I/O port address 0x278, which corresponds to the LPT2:
printer
port under MS-DOS. Adding both the IRQ line and port address to our boot-time command
line, above, yields the following statement as it would appear in the appropriate
stanza of
the /etc/lilo.conf file:
append="lp=0x378,0 ppa=0x278,7"
These statements are appended to the kernel's start-up parameters at boot time. They
ensure that any printer attached to the system does not interfere with the Zip
drive's
operation. Of course, if your system does not have a printer installed, the "lp="
directive can and should be omitted.
After you have installed the custom kernel itself, as described in section 2.2.1, and
before you reboot the system, be sure to run the command:
# /sbin/lilo
to install the new LILO configuration on the hard drive's boot sector.
At the time of this writing, the most recent version of ftape is 3.04d. You can
retrieve
the package from the sunsite.unc.edu FTP archive.
After unpacking the ftape archive in the /usr/src directory, typing make install in the top-level ftape directory will compile the ftape driver modules and utilities, if necessary, and install them. If you experience compatibility problems between the pre-built ftape files and your system's kernel or libraries, executing the commands make clean and make install will ensure that the modules are compiled on your system.
To use this version of the ftape driver, you must have module support compiled into
the kernel, as well as support for the kerneld kernel daemon. However, you must not
include the kernel's built-in ftape code as a kernel option, as the more recent ftape
module completely replaces this code.
make install also installs the device driver modules in the correct directories. On
standard Linux systems, modules are located in the directory:
/lib/modules/kernel-version
If your kernel version is 2.0.30, the modules on your system are located in /lib/modules/2.0.30. The make install step also ensures that these modules can be located, by adding the appropriate statements to the modules.dep file, located in the top-level directory of the module files, in this case /lib/modules/2.0.30. The
ftape installation adds the following modules to your system (using kernel version
2.0.30
in this example):
/lib/modules/2.0.30/misc/ftape.o
/lib/modules/2.0.30/misc/zft-compressor.o
/lib/modules/2.0.30/misc/zftape.o
The instructions to load the modules also need to be added to the system-wide
module configuration file. This is the file /etc/conf.modules on many systems.
To automatically load the ftape modules on demand, add the following lines to the
/etc/conf.modules file:
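The lines typically look something like the following (illustrative only; check the ftape package's documentation for the exact statements for your version):
# illustrative example -- see the ftape documentation
alias char-major-27 zftape
pre-install zftape /sbin/swapout 5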
The first statement loads all of the ftape related modules if necessary when a device
with the
major number 27 (the ftape device) is accessed by the kernel. Because support for the
zftape
module (which provides automatic data compression for tape devices) requires the
support
of the other ftape modules, all of them are loaded on demand by the kernel. The
second line
specifies load-time parameters for the modules. In this case, the utility
/sbin/swapout,
which is provided with the ftape package, ensures that sufficient DMA memory is
available
for the ftape driver to function.
To access the ftape device, you must first place a formatted tape in the drive.
As mentioned before, most of the software on the system is compiled to use shared
libraries, which contain common subroutines shared among different programs.
If you see a message about an incompatible or missing library version when attempting to run a program, you need to upgrade to the version of the libraries which the program requires. Libraries are backward compatible: a program compiled
to use an older version of the libraries should work with the new version of the
libraries
installed. However, the reverse is not true.
The newest version of the libraries can be found on Linux FTP sites. On
sunsite.unc.edu, they are located in /pub/linux/gcc. The "release" files there
should explain what files you need to download and how to install them. Briefly, you
should get the files image-version.tar.gz and inc-version.tar.gz where version
is the version of the libraries to install, such as 4.4.1. These are tar files
compressed
with gzip. The image file contains the library images to install in /lib and
/usr/lib.
The inc file contains include files to install in /usr/include. Installing the libraries also places each library's .a and .sa files in /usr/lib. These are the libraries used at compilation time.
In addition, the shared library image files, libc.so.version are installed in /lib.
These are the shared library images loaded at run time by programs using the
libraries.
Each library has a symbolic link using the major version number of the library in
/lib.
The libc library version 4.4.1 has a major version number of 4. The file containing
the library is libc.so.4.4.1. A symbolic link of the name libc.so.4 is also placed
in /lib pointing to the library. You must change this symbolic link when upgrading
the
libraries. For example, when upgrading from libc.so.4.4 to libc.so.4.4.1, you
need to change the symbolic link to point to the new version.
You must change the symbolic link in one step, as described below. If you delete the symbolic link libc.so.4, then programs which depend on the link (including basic utilities like ls and cat) will stop working. Use the following command to update the symbolic link libc.so.4 to point to the file libc.so.4.4.1 in one step:
# ln -sf /lib/libc.so.4.4.1 /lib/libc.so.4
You also need to change the symbolic link libm.so.version in the same manner. If you
are upgrading to a different version of the libraries, substitute the appropriate
file names,
above. The library release notes should explain the details.
The gcc C and C++ compiler is used to compile software on your system, most
importantly
the kernel. The newest version of gcc is found on the Linux FTP sites. On
sunsite.unc.edu, it is found in the directory /pub/Linux/GCC (along with the
libraries).
There should be a release file for the gcc distribution detailing what files you
need to download and how to install them. Most distributions have upgrade versions
that
work with their package management software. In general, these packages are much
easier
to install than "generic" distributions.
When you are looking for software on an FTP site, downloading the ls-lR index file from the FTP site and using grep to find the files you want is the easiest way to locate software. If
you have
archie available to you, it can be of assistance as well. There are also other
Internet
resources which are devoted specifically to Linux. See Appendix A for more details.
Believe it or not, there are a number of housekeeping tasks for the system
administrator
which don't fall into any major category.
Note:
If you don't have archie, you can telnet to an archie server such as
archie.rutgers.edu,
login as "archie" and use the command "help".
Extension    Description
.a           Archived file or assembler code
.au          Audio file
.c           C language source file
.csh         C shell script
.enc         Encrypted file
.F           FORTRAN source code before processing
.gif         Graphics Interchange Format file
.gl          Animation file
.gz          File compressed with gzip
.h           C program header file
.jpg, .jpeg  Joint Photographic Experts Group format file
Linux Glossary
CORBA (Common Object Request Broker Architecture)
A specification developed by the Object Management
Group detailing how object messages are handled
across different platforms.
Daemon Descriptions
ftpd File transfer protocol daemon
inetd Internet daemon
lockd Network lock daemon
lpd Line Printer daemon
named Internet domain name server daemon
nfsd NFS daemon
pppd Point-to-point protocol daemon
uucpd UUCP daemon
D.1.1 Games
Would you like to have a little bit of random wisdom revealed to you when you log in? fortune is your program. Fun-loving system administrators can add fortune to users' .login files, so that the users get their dose of wisdom each time they log in.
[W]
You should install the gnuchess package if you would like to play
chess on your computer. You'll also need to install the curses
package. If you'd like to use a graphical interface with GNUchess,
you'll also need to install the xboard package and the X Window
System.
Games for the K Desktop Environment. Included with this package are: kabalone, kasteroids, kblackbox, kmahjongg, kmines, konquest, kpat, kpoker, kreversi, ksame, kshisen, ksokoban, ksmiletris, ksnake, and ksirtet.
D.1.2 Graphics
This section lists packages that provide graphics that are fun to
look at.
The xdaliclock program displays a digital clock, with digits that merge
into the new digits as the time changes. Xdaliclock can display the time
in 12 or 24 hour modes and will display the date if you hold your
mouse button down over it. Xdaliclock has two large fonts built in, but
is capable of animating other fonts.
[S]
You should install postgresql if you want to create and maintain your own PostgreSQL databases and/or run your own PostgreSQL server.
[S]
GNU Wget is a file retrieval utility which can use either the HTTP or
FTP protocols. Wget features include the ability to work in the
background while you're logged out, recursive retrieval of directories,
file name wildcard matching, remote file timestamp storage and
comparison, use of REST with FTP servers and Range with HTTP
servers to retrieve files over slow or unstable connections,
support for Proxy servers, and configurability.
[W]
X-Chat is yet another IRC client for the X Window System, using the
Gtk+ toolkit. It is pretty easy to use compared to the other Gtk+ IRC
clients and the interface is quite nicely designed.
alias
alias l='ls -alt'
Options Descriptions
-t Makes the Korn shell remember the full path name for the aliased command, which allows it to be found quickly. You can then issue the command from any directory. Tracked aliases are the same as hashed commands in the Bourne shell.
Options Descriptions
-x Exports the alias so that you can
use it in shell scripts.
alias
C Shell Syntax
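In the C shell, the alias name and its value are separated by whitespace rather than an equal sign; for example:
alias l 'ls -alt'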
at
Option Description
OPTIONS1
-f filename Executes the commands contained in
filename.
-m Sends mail to the user when the job is complete, even if the job produced no output.
OPTIONS2
-l [jobs] Reports all jobs, or if jobs is
specified, reports on them.
For instance:
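An illustrative session (report.sh is a hypothetical file of commands, and the exact output format varies between at implementations):
at -f report.sh -m 0730 tomorrow
job 12 at Tue Mar  6 07:30:00 2001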
The last line above contains the job number and the time at which at will run the job.
du
du [options] [directories]
Options Descriptions
-a Shows the totals for all files and
subdirectories.
-f Shows totals for files and directories
in the current file system only.
-k Gives totals in kilobytes.
-L All symbolic links are followed.
-P Symbolic links are not followed.
-r Displays a "cannot open" message for files and directories it cannot read.
-s Prints to screen a sum total for each
filename and directory name.
-u Ignores files with more than one link.
-x Prints to screen totals for files and
directories in the current file system
only.
Examples:
du /home/patrick
echo
echo
Options and Arguments
echo sequences
Sequence Descriptions
\b Backspace
\c Suppress final newline
(same as - n option)
\f Formfeed
\n Newline
\r Carriage return
\t Tab
\v Vertical tab
\\ Backslash
\0n Octal number, specified by n
Examples
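For example (some shells, such as bash, require echo -e before these escape sequences are interpreted):
echo "Backup finished\nFiles copied:\t42\c"
This prints two lines, indents the count with a tab, and suppresses the final newline.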
KnowledgeWorks, Inc
364 Green Street
P.O. Box 2701
Gainesville, GA 30503
© copyright KnowledgeWorks, Inc. (2001)