Intermediate Unix Training
Most unix users use csh as their primary working environment. The csh gives
you the prompt and reads your commands. However, /bin/csh has many
powerful features including history, job control, aliasing, foreach statements,
and many other things. These are accessed either via the command line, your
~/.cshrc configuration file, or csh scripts (which we will cover later in this
course (p. 35)).
Some users use shells other than csh for their day-to-day use, the most common
being tcsh. We will not cover these other shells.
Using shell history 1: Recalling old commands
The csh has a history manager which remembers old commands. The number
of old commands it remembers is set with the history shell variable,
which you probably want to set to a large number with a command like
set history = 200
in your ~/.cshrc file. Each command is then given a number. You can see
your previous commands and their numbers with the command history #
where # is the number of commands you want to see. For example,
loki(pwalker).30 % history 5
26 idl
27 setenv DISPLAY hopper:0
28 idl
29 more ~/.exrc
30 history 5
You can recall previous commands with three basic mechanisms. The first is
to use !# where # is the number of the command of interest, eg
loki(pwalker).31 % !27
setenv DISPLAY hopper:0
Note the command is echoed out to the console after being recalled. This
behaviour is common.
The second method is to use !pattern where pattern is a pattern to be matched
against a previous command. eg,
loki(pwalker).32 % !set
setenv DISPLAY hopper:0
or, here
loki(pwalker).33 % !s
setenv DISPLAY hopper:0
Finally, many people use !! to repeat their last command. I mention this
here, but it really belongs in the next section.
Using shell history 2: Bits of old commands
As well as recalling old commands, you can recall segments of your previous
command very easily with the csh. The syntax for this is !x where x is a
special character. The following are the useful values of x, and how they would
operate on the example string
a.out arg1 arg2 arg3
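(The standard csh designators are listed here; the right-hand column shows
what each gives for the example above.)
!^     the first argument          arg1
!$     the last argument           arg3
!*     all the arguments           arg1 arg2 arg3
!:n    the nth argument            !:2 gives arg2
!:0    the command name itself     a.out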
An example of when these can be useful follows (with the output
suppressed).
hopper(pwalker).48 % ls /afs/ncsa/common/doc/ftp
/afs/ncsa/common/doc/web
...
hopper(pwalker).49 % ls -l !*
ls -l /afs/ncsa/common/doc/ftp /afs/ncsa/common/doc/web
where I have added a line break in the first line to improve readability. Note
the contraction !$/subdirectory, which is always useful.
Using shell history 3: Changing old commands
Often, you will want to change a previous command a little. This is also fairly
straightforward in the csh, although many people prefer to cut and paste with
the mouse. Well, that's because they didn't have to work on a vt100 when they
were undergrads (or maybe because they did!). So, here is how you modify
your old commands.
To modify your previous command in one place, use the ^old^new construction,
which replaces the old with the new. For example,
hopper(pwalker).52 % mire imagemap.txt
mire - Command not found
hopper(pwalker).53 % ^ir^or
more imagemap.txt
You can also apply substitutions to any command in your history with the
:s and :gs modifiers, as in !#:s:old:new (replace the first occurrence) and
!#:gs:old:new (replace once in each word). For example, if command 212 was
foo input,
farside(pwalker).214 % !212:s:in:out
foo output
farside(pwalker).215 % !214:gs:o:a
fao autput
Putting jobs in the background
To run a job in the background, add an & to the end of the command line, eg
a.out &
Then a.out will run, but you will have your prompt back. Note the output from
a.out will still come to your console. We will discuss how to avoid this in the
pipes and redirection (p. 13) section of the course.
The other method is to suspend the job and then background it. To suspend
(stop) a job, use ^Z where I'm using ^ to mean control. This will stop the job.
Then place the job in the background with the command bg.
Managing jobs in the background
You can easily have several jobs backgrounded, and you often want to bring
one of them to the foreground, kill one, or otherwise manipulate them.
The jobs command will tell you which jobs you have running. For instance,
hopper(pwalker).77 % jobs
[1] + Running xdvi EH_V3.dvi
[2] - Running xemacs src/ReadData.c src/SfcEvolve.c
This display has several bits of information. The job number is in the []. The
+ indicates which job is the current default for the fg and bg commands. The
job status and the job name are also shown.
You can bring a job to the foreground with fg %# where # is the number
indicated in the output of jobs, eg:
hopper(pwalker).78 % fg %2
xemacs src/ReadData.c src/SfcEvolve.c
You can then stop this job with ^Z and jobs will show it as stopped.
hopper(pwalker).80 % fg %2
xemacs src/ReadData.c src/SfcEvolve.c
<---- I pressed ^Z here, but it didn't show up!
Stopped
hopper(pwalker).81 % jobs
[1] - Running xdvi EH_V3.dvi
[2] + Stopped xemacs src/ReadData.c src/SfcEvolve.c
Aliases
The alias command lets you define your own shorthand for other commands.
A simple example is
alias rm "rm -i"
which will alias the command rm to rm -i, eg, to prompt you before it removes
any files.
However, you can easily write advanced aliases by making use of command line
history. For instance,
alias fc "fgrep \!* src/*.[ch] "
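Here the \!* in the alias definition is replaced by whatever arguments you give
the alias when you invoke it. So typing, for instance,
fc malloc
would run
fgrep malloc src/*.[ch]
(malloc is just an illustrative search string).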
You could create an alias called, for instance, vic with the command
alias vic "vi \!:1; cp \!:1 \!:2"
which will vi the first argument, then copy the first argument into the second
argument. So the two commands vi myfile; cp myfile ~/mydir could be
executed as vic myfile ~/mydir.
Wildcards (globbing)
The csh expands wildcard patterns (globs) such as *, ?, and [a-z] against the
names of existing files. This is best shown with a series of examples. So here
are some examples!
Expression                 Matches                          Doesn't Match
A*.html                    Albert.html                      Robert.html, Alhtml
A*rt.?                     Albert.c                         Albert.html, A Cat.html
A?[1-5].dat                Ax3.dat, Ax1.dat                 Albert1.dat, Ax6.dat
[A-C]*[1-5][2468]?.html    Albert163.html, Charlie58x.html  Charlie52.html, Charlie621.html
foreach
The interactive syntax of the foreach loop is
foreach VARIABLE (FILES)
where VARIABLE is anything you want and FILES is any acceptable file list or
glob. The csh then prompts you with a ? for the commands to run on each item,
and you finish the loop with end. For instance, to print the name of each html
file in a directory, and follow that with the number of lines, you could use
foreach C (*.html)
? echo $C
? wc -l $C
? end
BackTicks
Backticks (`) return the output of a command in a form so that you can use
it in a command you construct. Take, for instance, the command uname -n
which returns the name of the current machine.
hopper(pwalker).62 % uname -n
hopper
hopper(pwalker).63 % pwd
/tmp
hopper(pwalker).64 % mkdir /tmp/`uname -n`
hopper(pwalker).65 % cd /tmp/`uname -n`
hopper(pwalker).66 % pwd
/tmp/hopper
hopper(pwalker).67 %
The utility of this function is limitless, but I won't mention it beyond this
simple example.
Grouping commands with ()
In the csh you can group sets of commands with (). The commands in these
parens have their own shell, and can do shell things, such as set environment
variables, redirect, and pipe, independently of the main shell.
The clearest example of this behaviour is with setenv which sets an environ-
ment variable for your current shell. Note that the setenv outside the parens
affects the entire session, but inside, affects only the commands inside.
hopper(pwalker).17 % setenv SUMPIN Hi
hopper(pwalker).18 % echo $SUMPIN
Hi
hopper(pwalker).19 % (setenv SUMPIN Lo; echo $SUMPIN)
Lo
hopper(pwalker).20 % echo $SUMPIN
Hi
What is a stream
Unix has a well-defined concept of a data stream. There are three well-defined
data streams in unix for all processes: stdin, stdout and stderr.
The streams are merely flows of data. stdin is the standard input for a process
and is usually generated via the keyboard. stdout is the standard output for a
process and is usually sent to the screen. stderr is the standard error channel
for a process, and is also usually sent to the screen. However, stderr only
contains error messages.
In many ways, you can consider every unix process a box which reads stdin
or command line arguments and, based on that, creates something on stdout
or in a file.
What is a pipe
A Pipe is used to connect the stdout of one process to the stdin of another
process. The jargon for this action is Piping into, eg pipe the output of com-
mand1 into command2
What is a redirect
A redirect is a method to turn a file into a stream or a stream into a file.
Redirects are used in two places, either at the beginning of a pipe to pipe a
file into the stdin of a process or at the end of a pipe to pipe the stdout of
the process into a file.
How do I use a pipe
The symbol for a pipe is | (the vertical bar, often shift-backslash). The essential
syntax for using pipes is
command1 | command2
which sends the stdout of command1 to the stdin of command2. In the csh you
can also use |& to pipe both stdout and stderr of command1 into command2.
This is often useful when commands make screenfuls of stderr, such as when
compiling codes.
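For instance (these particular commands are just illustrations), you can page
through a long directory listing, or page through everything a compile prints,
errors included, with
ls -l /usr/bin | more
make |& more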
How do I use a redirect (/bin/csh version)
Redirects are different in different shells. Here, I will use the /bin/csh version.
The 4 basic redirects in the csh are:
<    Turn a file into stdin
>    Turn stdout into a file
>>   Append stdout onto a file
>&   Turn stdout and stderr into a file
For example:

a.out < input                  The stdin of a.out is read from the file input
a.out > output                 The stdout of a.out is placed in the file output
a.out < input > output         The stdin of a.out is the file input and the output goes into output
a.out >> output                Append the output of a.out to output
a.out < input >& output        stdin from input, and stdout and stderr to output
(a.out > out_std) >& out_ste   Place stdout in out_std and stderr in out_ste
Some further equivalences are worth noting. For instance,
cat nice.gif | xv -     is the equivalent of    xv nice.gif
cat file1 > file2       is the equivalent of    cp file1 file2
and the following three commands all do the same thing:
more a.file
more < a.file
cat a.file | more
Imagine a.out spews tonnes of information to your screen when given certain
parameters. You could observe these with
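something along these lines (the parameter and log-file names here are
arbitrary):
a.out -myparams >& a.out.log &
tail -f a.out.log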
This runs a command and then watches the output accumulate in a file.
Now let us look at grep acting on a file. The examples below use a web server's
agent_log file, which records the browser (user agent) responsible for each
access.
The next set involves using grep in pipes. Here are some examples.
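For instance (the particular patterns here are only illustrations):
grep Mac agent_log | wc -l
grep Mozilla agent_log | grep -v Mac |
     wc -l
grep Mac agent_log | sort | uniq -c |
     sort -rn | head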
where I have added line breaks to improve readability; you should have the
entire pipe on one line.
As you can see, using grep and wc together works very well, and gives a good
way to figure out things like the fraction of accesses to a server coming from
Macintosh (grep Mac agent_log | wc -l) out of the total accesses
(wc -l agent_log). Grep is so useful, I could go on forever, but I won't. I think
there is enough information here to get you started!
awk
awk is a powerful pattern-scanning language, but here we will only use it to
pull columns out of text. The basic form of the awk commands used here is
awk -Fc '{print cols}' file
where -Fc is optional and specifies the field separator c. Without it, the default
separator is whitespace. cols is replaced with the columns of interest, addressed
as $1, $2 etc... for column 1, 2 etc... file is an optional file. Without it,
stdin is parsed.
Example 1. Numerical Data Imagine you have a text file with data in polar
coordinates as
r theta
r theta
r theta
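To turn these into Cartesian x y pairs, you could use something like the
following (a sketch, assuming theta is in radians and the data sit in a file
called polar.dat):
awk '{print $1*cos($2), $1*sin($2)}' polar.dat > cartesian.dat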
Example 2. System files Many system files in unix are : delimited files. For
instance, /etc/passwd has user information separated into columns with :.
(more /etc/passwd or man 4 passwd on your system for more info). You can
extract subsets of this information using awk. For instance, to extract users
(column 1), user numbers (column 3) and home directories (column 6) from
/etc/passwd, you could use
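a command like this (a minimal sketch of what is being described):
awk -F: '{print $1, $3, $6}' /etc/passwd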
sort
The sort command sorts the lines of its input; see man sort if you actually
want to master this potentially useful utility.
tee
tee creates a sort of "T" junction in your pipe. It takes a file as an argument.
The action of tee file is to take stdin and pipe it to both stdout and file.
This allows you to see intermediary results in pipes. For example:
grep walker /etc/passwd | tee all_the_walkers | grep paul
> all_the_paul_walkers
OK, now we have to use awk twice. The first time we will turn the string into
the form userno,name,office,phone. This is done with
| awk -F: '{print $3 "," $5}' |
And finally we want to split this on , and print out the first, second, and fourth
field.
| awk -F, '{print $1, $2, $4}' |
and for good measure, send it into more. So our final superpipe is:
grep wal /etc/passwd | grep "/bin/csh" | tee walpwd |
awk -F: '{print $3 "," $5}' |
awk -F, '{print $1, $2, $4}' | more
I've put in the line wraps for ease of viewing, but they should not be there in
your script.
OK, and let's see what this does on one of our machines:
farside(pwalker).238 % grep wal /etc/passwd | grep "/bin/csh" |
tee walpwd | awk -F: '{print $3 "," $5}' |
awk -F, '{print $1, $2, $4}' | more
15299 Paul Walker 217-244-1144
where once again, I've added line breaks for ease of reading.
And this is, indeed, a super-pipe!
Emacs
Many people, mostly stubborn emacs users, claim that the only thing you need
to know about vi is how to get out of it! This belief is due to the fact that
emacs is a widely available, extensible, wonderfully fabulous editor. Correctly
configured, it can do color-sensitive highlighting of your text, indent and align
text cleverly, have different behaviours based on the type of file you are editing,
and many other features, none of which vi has.
If you plan to do a lot of editing in a Unix environment, let me encourage
you to use emacs for all your serious needs, since it is undeniably an infinitely
superior editor.
However, you have to know vi. You need to know vi because it is universal,
quick, and often, you are kicked into it by some program. Even the most
convinced emacs users use vi for small tasks, and so, you need to know some
simple tips and tricks, the purpose of this chapter.
Marks and yanking/moving blocks of text
vi has the concept of a mark. A mark is a line in your document to which
you assign a special tag which is a letter. Once you have set a mark, you can
use that mark as a place keeper. vi can also use numbers as place keepers.
A number used as a place keeper specifies a line number. Finally, vi has two
special marks $ which is used for the end of document and . which is used for
the current position.
A mark is set using the vi command mx where x is your mark tag.
Two of the most useful things you can do with marks are to yank or delete text
between two marks. yanked or deleted text can then be restored to the current
cursor location by pulling it with p.
To yank text, use the command :'a,'by where a and b are marks. Note, a
and b can be numbers or a special mark, but then they do not need the quote.
This text will remain in your buffer but also be in your kill ring so it can be
pulled and thus copied. To delete a block of text, use :'a,'bd, which will delete
your area and put it in your kill ring.
Some examples are best, and it is probably easiest if you try along with these
(which is what we will do in class).
Examples
To delete everything between the current position and the end of the document, do
:.,$d
or between the beginning of the document and the current position
:1,.d
A quick way to delete a chunk of text is to position the cursor at the beginning
of the text and make a mark with
md
then move to the end of the text and type
d'd
which will delete everything from the current position back to the mark d which
we made with the md command. This text could then be recovered by moving
to another location and issuing p
Finally, to mark and delete a block of text, set 2 marks, then use
:'a,'bd
where a and b are marks or line numbers (without the ').
Search and replace works over ranges in the same way. The command
:'a,'bs/old/new/g
replaces old with new between the marks a and b (again, a and b can be line
numbers without the '). The optional g means to replace everywhere. Without
it, only the first occurrence on each line will be replaced.
Examples
To replace green with blue in your entire document, use
:1,$s/green/blue/g
You can also jump to a particular line by giving : and the line number,
eg :13.
You can display the line numbers of your current file beside the lines by using
:set number and turn it off with :set nonumber.
These are very useful when editing and debugging programs.
Using your .exrc (NOT your .virc)
There are a few useful environment variables and defaults which you can set
to affect vi. Defaults are set in your ~/.exrc file, and environment variables
are set in the standard fashion.
The one environment variable of interest is ESCDELAY, which you may want to
set to a value of about 1500. This variable determines the amount of time
allowed between when ESC is pressed and the rest of a command sequence is
issued. Since arrow keys send ESC sequences, if the sequences are generated too
slowly, an arrow key can, instead of moving your cursor, insert some garbage
into your file (which often looks like ^[A). Try some values of ESCDELAY to fix
this.
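In the csh, for instance, you could put a line like
setenv ESCDELAY 1500
in your ~/.cshrc.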
You can set any vi-settable thing in your ~/.exrc file. For instance, if you
always want numbers on, you should put in your ~/.exrc
set number
You should consult man vi for more info, but a very useful thing to set is
set ai
which causes an auto-indent feature. This means your cursor lines up at the
tab stop of your previous line when you create a new line. This is very useful for
program editing with vi. Of course, if you are doing serious program editing,
you should probably use emacs...
The Free Software Foundation has written a commonly used compression pro-
gram, gnu zip, or gzip, which creates compressed .gz files. gzip also reads .Z
files. gzip is pretty much faster, more efficient, better etc... You should use it
given the choice. To compress and uncompress with gzip, use
gzip bigfile.ps
gunzip bigfile.ps.gz
Note, gzip has levels of compression. If you don't mind waiting a little longer
to compress your files, and want better compression ratios, use gzip -9.
Handling multiple files with tar
Often, it is useful to put multiple files into a single file before compressing,
sending to collaborators, or backing up. The easiest way to do this is using
tar. tar was originally designed to access tape drives, and it still does that
very well, but most users will use it to make archive files from multiple files.
The basic syntax for creating an archive with tar is
cd /usr/people/me/mydir
tar cvf mydir.tar .
This will create a file called mydir.tar which contains all the files in and below
the current directory (.). It will also show you the files being added. You can
remember the flag cvf by thinking of create verbose file.
To extract a tar archive, you want to use
cd /usr/people/me/mynewdir
tar xvf mydir.tar
which will recreate the directories stored within mydir.tar and put them un-
derneath mynewdir. For xvf think eXtract verbose file.
If you simply want to see the contents of a tar file, you can use tvf, eg:
tar tvf mydir.tar
Finding files with find
The find command has the basic syntax
find path condition action_list
find will then do action_list on all files at or below path matching condition.
Now, each of these things can have many settings, and you should consult man
find to see the possibilities. However, I am only going to mention a few of the
options.
condition is the most complex of the options. The two most useful condition
flags are -name "glob" and -mtime +/-#. -name "glob" matches anything
matching glob. -mtime +# matches any file modified more than # days ago,
and -mtime -# matches any file modified within the last # days.
action_list also has many options. The two I will mention are -print and
-exec. -print simply prints all the matches to stdout with a pathname. -exec
is used to execute a command on the matched file and has the syntax
-exec command {} \;
where {} is replaced with the name of the current match.
Examples
The best way to understand find is through some examples.
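A first example (one reasonable form of the command being described):
find . -name "*.[ch]" -print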
This prints all files below the current directory which end in .c or .h.
This shows all occurrences of mosaic in all html files, as well as printing the
names of all the .html files.
find . -name "*.html" -print -exec grep mosaic {} \;
Finally, these remove all files more than 7 days old, and print all files modified
within the last day.
find . -mtime +7 -exec rm {} \;
find . -mtime -1 -print
Be very careful with commands such as the first of these!
tar, gzip, and ftp with pipes
Of course, all of these commands are available in pipes, and many are very
useful. I will give here the examples I use the most often.
zcat and gunzip -c uncompress a file and send the output to stdout, leaving
the file compressed on disk. This is often useful if you want to simply view
a file. For instance, imagine that you have compressed your postscript file,
Fig.ps and now you want to see it without uncompressing it. You could use:
zcat Fig.ps.Z | gv -
gunzip -c Fig.ps.gz | gv -
depending on which compressor you used to compress it. Of course, you could
pipe this output into anything you wanted!
tar can create a file onto stdout, and extract a tar file from stdin, by replacing
the output or input tar file with -. That is, the following pairs are identical:
tar xvf mydir.tar
tar xvf - < mydir.tar
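and, going the other way (a sketch mirroring the pair above):
tar cvf mydir.tar .
tar cvf - . > mydir.tar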
One of the most useful applications of this is to move a directory as well as all
its files and subdirectories to another location. This is done by creating a tar
file and piping it into a subshell which runs in a different directory and untars
stdin. That is,
tar cf - . | (cd ~/newdir ; tar xf -)
Note I left out the v option to both tars so the code will run silently.
Finally, you can compress stdin and send it to stdout, or uncompress stdin and
send it to stdout, using gzip and gunzip alone in a pipe. That is, these pairs
are equivalent.
gunzip -c foo.gz > bar
cat foo.gz | gunzip > bar
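as are these two for the compression direction (a sketch):
gzip -c foo > foo.gz
cat foo | gzip > foo.gz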
Now this combination is very useful due to a little-known property of ftp. ftp
allows you to specify pipes as source or receiving files. For instance, you can
get and view a gif image from an ftp site with
ftp> get file.gif "| xv -"
This is useful, but you can also use this trick to create a tar file onto an ftp site
without making that tar file on your local disk. This is invaluable for backup
processes. An example of this is
ftp> put "| tar cvf - ." myfile.tar
Or, to send and compress a tar le onto an ftp site, you can use this:
ftp> put "| tar cvf - . | gzip " myfile.tar.gz
That is, ftp performs a transfer file dest by effectively taking the
stdout of cat file and piping this into dest.
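NFS File Permissions
On ordinary (non-AFS) file systems, permissions are attached to each file and
are viewed with ls -l. Consider a listing like this one (the particular file is
only illustrative):
hopper(pwalker).19 % ls -l a.out
-rwxr-xr-x   1 pwalker  user    13308 Oct 13 10:48 a.out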
The first string here shows the permissions given to the classes. The first
character indicates the file type. - means a normal file. The next three are
the permissions for the user. Here the user has read, write, and execute per-
missions. The group permissions are the next three, and here are read and
execute. The world permissions are the last set, here read and execute again.
This means that I can modify the file but anyone can copy or execute it.
You can change the permissions on a file using chmod. chmod has the syntax
chmod permissions files. permissions can be stated numerically (we will
skip that; see man chmod for more) or, more intuitively, with strings.
The permission strings have the syntax class +/- permission, eg u+rwx to
give the user read, write, and execute, or go-rwx to remove read, write, and
execute from group and other. An example is:
hopper(pwalker).20 % chmod go-rx a.out
hopper(pwalker).21 % ls -l a.out
-rwx------ 1 pwalker user 13308 Oct 13 10:48 a.out
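AFS File Permissions I: ACLs
On AFS (for instance, in your NCSA home directory), permissions are instead
attached to directories as access control lists (ACLs). You can list the ACL on
a directory with fs la, eg (the entries shown here are only illustrative):
hopper(pwalker).50 % fs la .
Access list for . is
Normal rights:
  system:administrators rlidwka
  system:anyuser l
  pwalker rlidwka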
This gives the lists of users and AFS groups which have various permissions.
Each character of the string indicates a permission, with the crucial ones being
r      Read
l      Lookup (eg, stat)
idwk   Various parts needed to write and create
a      Administer (eg, change acl's)
The first column indicates to whom those permissions belong. These holders
may be users (eg, pwalker), groups (eg, pwalker:twodwave, projects.genrel.admin),
or special users (system:administrators, system:anyuser and system:authuser
for the administrators, any user whether or not they hold a token, and any
user with a token, respectively).
You can set the AFS permissions on a directory using fs sa. For instance, if I
want to give user johnsonb write permission to my home directory, I could do:
hopper(pwalker).55 % cd /afs/ncsa/user/pwalker
hopper(pwalker).56 % fs sa . johnsonb rlidwk
This would allow Ben to write to my home directory. (I undid this example
immediately after issuing this command).
AFS provides shorthand for the most commonly used acl sets. read = rl.
write = rlidwk. all = rlidwka. So above, I could have said "fs sa . johnsonb
write".
ACLs allow users a great deal of flexibility in setting permissions on directories.
In order to maximize efficiency, though, we want to use AFS groups, to which
we turn our attention next.
AFS File Permissions II: Groups
Setting acls can be a great way to allow groups of users access to various
parts of your afs file system. In order to maximize this ease of use, though,
you can create AFS Groups. Any user can create groups, and then assign
directory acl's to those groups rather than to individuals.
The pts membership group command tells you who is in a group. pts
creategroup group creates a group, pts adduser user group adds a user to
it, and pts removeuser user group removes a user from it.
An example is the easiest way to see how this works. Imagine I have a directory
called BigPerlProject and I want to set acls to that directory. I could use
the following:
farside(pwalker).19 % pts creategroup pwalker:big_p_proj
group pwalker:big_p_proj has id -750
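The group is then given members and rights on the directory with commands
along these lines (a sketch of the intervening steps, assuming we are sitting in
BigPerlProject):
farside(pwalker).20 % pts adduser johnsonb pwalker:big_p_proj
farside(pwalker).21 % pts adduser royh pwalker:big_p_proj
farside(pwalker).22 % pts adduser sbrandt pwalker:big_p_proj
farside(pwalker).23 % fs sa . pwalker:big_p_proj write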
farside(pwalker).24 % fs la .
Access list for . is
Normal rights:
pwalker:big_p_proj rlidwk
system:administrators rlidwka
system:anyuser l
pwalker rlidwka
farside(pwalker).25 %
so now johnsonb, royh, and sbrandt can write to this directory. As the project
personnel change or expand, I only have to modify this group, not each directory
to which this group has permissions. As you can see, this is really a powerful
feature for collaboration.
Chapter 6
Scripting with the csh is something you probably don't want to do too much
of. As a scripting language, the csh lacks many powerful features available in
real languages like perl and in more powerful shells such as sh and bash.
However, short csh scripts can ease repetitive tasks and allow you to use your
already existing csh tricks in simple scripts. Also, configuration files such as
your ~/.cshrc use csh constructs, so understanding scripting can help you
modify them. In this section, we will cover scripting basics, command line
parsing, and simple control flow in the csh. By the end of this chapter, you
should be able to read and write basic csh scripts.
In the rest of this chapter, I will use shell script and csh script interchangeably.
Most people think of a shell script as /bin/sh, so this is sloppy language on my
part, but you've been warned!
Basic Ideas
The basic idea of scripting with the shell is that you have a set of commands,
and perhaps some control statements, which make the shell do a series of
commands. Basically there is no difference between a script and a program.
There are a few basic syntactical elements you need to know.
First, every shell script begins with the line
#!/bin/csh (optional arguments)
which tells unix to run the rest of the file through /bin/csh. For instance, here
is a very simple script:
#!/bin/csh
# A very simple csh script!
/bin/ls *.o
/bin/ls -l a.out
To make this script executable, save it as my_script and issue the command
chmod u+x my_script (for more on chmod see the section on file permissions
(p. 31)). Then run it with my_script alone on a line.
Here are a couple of things to note
1. Even this simple script has a comment!
2. Note I used /bin/ls rather than just plain ls. This means that any
aliases for ls will be ignored, which will make the behaviour of the script
portable. However, if you have ls aliased and don't realize it (which is
the case with many people) this may produce unexpected results.
Command Line Arguments
Arguments given on the command line when your script is invoked are available
inside the script as $1, $2, etc..., and all together as $argv. For example:
#!/bin/csh
# An example of showing command line arguments with echo, and
# doing things with them with /bin/ls
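# The body below is only a sketch of the sort of thing this example showed:
echo This script was given $#argv arguments.
echo They were: $argv
echo The first one was: $1
/bin/ls -l $argv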
Get it?
Using Variables
You can use variables in the csh for many things, but the three uses I will
discuss here are
1. Getting environment information
2. Storing user-dened information
3. Getting command output with backticks
Getting information from the environment is trivial. If you want to use an
environment variable FOO in your script, simply address it as $FOO, just like
you would on the command line.
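For instance, a trivial script using two standard environment variables might
look like this (a sketch):
#!/bin/csh
echo Hello $USER, your home directory is $HOME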
Using variables to save user-defined information, such as executable locations,
is a good habit since it eases portability and modification of your script. The
archetypical example is to set a variable to point at an executable; then, if the
executable moves, your script only needs to change in one spot. A short example
of this is
#!/bin/csh
# Run /usr/people/pwalker/mycmd from a script
set MYCMD = /usr/people/pwalker/mycmd
$MYCMD
Conditionals with if
Conditionals allow you to execute parts of code only if certain conditions are
true. The conditional construct we will consider in the csh is:
if (condition) then
statements
else
statements
endif
For example, consider a script which starts with
set ME = `whoami`
set IWISHIWERE = "pwalker"
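# A sketch of how the rest of this example might read:
if ($ME == $IWISHIWERE) then
  echo Congratulations, you are $IWISHIWERE
else
  echo Sorry, you are merely $ME
endif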
and the script prints the appropriate message, as expected.
Loops with foreach
Using foreach in a csh script is just as simple as using it on the command line,
as described in the csh tricks and tips section (p. 10). You simply will not be
prompted with a ? but will put your commands between the foreach() and the
end.
Here is a simple example
#!/bin/csh
foreach C (`ls`)
echo $C
end
which echoes each file name in the current directory, just as expected.
Writing to stdin with <<
The final aspect of shell scripting I want to mention is writing to the stdin of
a process using the << construct. The basic construction here is
command <<TAG
stuff sent to stdin of command
TAG
There are many uses for this, including controlling programs which read stdin.
The example I'm going to give here, though, will show how to originate an
item of mail to a user based on a command line argument.
#!/bin/csh
# DemoEndDoc
# Send the size of $1 to user $2
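# (A sketch of the lines needed at this point: capture the ls -l output,
#  then start a mail to the user named in the second argument.)
set RESULT = `/bin/ls -l $1`
mail $2 << ENDMAIL
Subject: Some information about $1
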
Here is ls -l for $1
$RESULT
- Paul
ENDMAIL
# End of script
You should be aware that there are some subtleties involved in whether or
not you enclose your TAG in quotes. If you do, then variables will not be
expanded in your input. However, if you are running into this sort of problem,
you're probably doing something far too tricky for csh programming, and should
rewrite it in /bin/sh or perl!
Also note the sneaky way I fake up the Subject: header in the outgoing mail.
Finally, here are a few aliases which many people find handy:
alias ls ls -FC
alias rm rm -i
alias a alias
alias h history
alias m more
Seeing what is running with ps
The ps command shows you the processes running on a machine. On a BSD-type
machine, you can show all processes with ps -aux. The most important fields
are the user, the PID (column 2), the time used (second-to-last), and the process
name.
On a SYSV-type machine, you can show all processes with ps -ef. eg,
farside(pwalker).20 % ps -ef | grep pwalker
pwalker 24308 24307 0 09:06:08 pts/4 0:03 -csh
pwalker 29735 29734 2 12:42:14 pts/10 0:01 -csh
pwalker 4253 29735 9 14:39:08 pts/10 0:00 ps -ef
pwalker 4254 29735 1 14:39:08 pts/10 0:00 grep pwalker
Once again, columns 1 and 2 are the user and PID. Column 3 is the parent
(ie, calling process's) PID. The last two are time used and command.
The PID of a process is very useful, since you can use the PID in conjunction
with kill to kill a process. Variants of kill are kill PID to kill the process with
that PID, kill -9 PID to kill it now (which is often unclean, but always effective)
and kill -9 -1 which kills all your processes, usually including your current
shell. So, I could kill one of my csh processes on farside with kill -9 29735.
See who's on with finger and who
finger user@host gives you information about user on host. For instance,
farside(pwalker).179 % finger pwalker@loki
[loki.ncsa.uiuc.edu]
Login name: pwalker In real life: Paul Walker
Phone: 217-244-3008
Directory: /u/ncsa/pwalker Shell: /bin/csh
On since Oct 17 13:04:31 on ttyq37 from hopper.ncsa.uiuc.edu
1 hour 16 minutes Idle Time
No Plan.
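Checking the load with uptime and rup
uptime reports how long the machine has been up, how many users are logged
on, and the load averages. For example (the numbers here are purely
illustrative):
farside(pwalker).180 % uptime
  2:45pm  up 23 days,  4:07,  11 users,  load average: 1.15, 1.32, 1.40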
The most important number reported here is the load average, averaged over
1, 5, and 15 minutes (usually). The load average gives an idea of the number
of processes running at once. On a single processor machine, a load of 1
is maximum efficient utilization. Loads more than the number of processors
mean the machine is too heavily loaded.
rup host gives uptime information about a remote host in effectively the same
format.
Seeing where it lives with which
Many commands you use are in your path. Sometimes it is useful to know
where in your path they are. Hence the command which. For example,
farside(pwalker).195 % which xemacs
/usr/ncsa/bin/xemacs
This is often useful when a system has several copies of an executable, or you
are trying to see when an executable was last modied. Note which will source
your .cshrc before starting so it can handle aliases, eg
farside(pwalker).197 % which rm
rm aliased to "rm -i"
/bin/rm
df -k tells you about the kB free on NFS-mounted or local file systems. The
usual report contains kbytes available and used, as well as a mount point (eg,
directory name) for the file system.
fs lq dir tells you the afs quota available in dir.
Of course, the material in this course did not cover even close to all the aspects
of the concepts or commands mentioned. For that, I would suggest three
resources.
First, just look over the shoulder of other Unix users you know. This is how
you pick up many neat tricks. Be sure you ask them if it is OK first, of course!
Second, read the man pages for these commands. After this course, you should
be able to understand them pretty easily. Remember, all the information you
could possibly want about a command is just a man command away!
Finally, I highly recommend the book Unix in a Nutshell published by O'Reilly
and Associates. They have it at all the bookstores in town, and it only costs
about $10. This reference has all the information in this course plus much more
in an easy-to-use, concise reference. I would be lost without a copy floating
around my office!