DAEMON:
A daemon is a process that detaches itself from the terminal and runs, disconnected,
in the background, waiting for requests and responding to them. It can also be
defined as the background process that does not belong to a terminal session.
Examples: init, crond, sendmail, inetd, httpd, nfsd, sshd, named, and
lpd.
ZOMBIE PROCESS:
When a process finishes execution, it will have an exit status to report to its parent
process. Because of this last little bit of information, the process will remain in the
operating system’s process table as a zombie process, indicating that it is not to be
scheduled for further execution, but that it cannot be completely removed (and its
process ID cannot be reused) until it has been determined that the exit status is no
longer needed.
When a child exits, the parent process will receive a SIGCHLD signal to indicate that
one of its children has finished executing; the parent process will typically call the
wait() system call at this point. That call will provide the parent with the child’s exit
status, and will cause the child to be reaped, or removed from the process table.
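A minimal sketch in shell (the command and timing are illustrative): the parent backgrounds a child and later reaps it with wait.
#!/bin/sh
sleep 2 &                  # child process runs in the background
child_pid=$!               # $! holds the PID of the last background command
wait $child_pid            # parent collects the child's exit status (reaps it)
echo "child $child_pid exited with status $?"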
Orphan:
If a parent process dies but its children have not, those children become orphan
processes; they are adopted by init.
.PROFILE:
A profile file is a start-up file of a UNIX user, like the autoexec.bat file of
DOS. When a UNIX user logs in to his account, the operating system
executes a number of system files to set up the user account before returning the
prompt to the user.
To achieve this in UNIX, at the end of the login process the operating system
executes a file at the user level, if present. This file is called the profile file.
The settings a UNIX user typically keeps in the profile are:
Setting of environment variables
Setting of aliases (though it is always recommended to keep the aliases in
a separate file)
Setting of the PATH variable or any other path variables
.profile - running it this way spawns a new child shell to execute the script.
. ~/.profile - sourcing it this way executes the script in the present shell.
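A minimal sketch of what a .profile might contain (the values below are illustrative, not from the original):
# ~/.profile - executed at login
export PATH=$PATH:$HOME/bin        # add a personal bin directory to the search path
export EDITOR=vi                   # preferred editor
alias ll='ls -lrt'                 # aliases are often kept in a separate file instead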
PATH:
The PATH variable tells the operating system (OS) which directories to search
whenever a command is given.
When the ls command is executed, the OS starts searching for an executable
named ls.
Where does it search? It searches in all the directories mentioned in the PATH
variable.
The moment it finds the executable, it executes the command and the output
is displayed.
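For example (the directories shown are illustrative):
$ echo $PATH
/usr/bin:/bin:/usr/local/bin
$ export PATH=$PATH:$HOME/bin      # append a personal bin directory to the search path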
SHELL:
A Unix shell is a command-line interpreter that provides a user interface for
the Unix operating system and for Unix-like systems.
The UNIX shell is a program that serves as the interface between the user and the
UNIX operating system. It is not part of the kernel, but communicates directly with
the kernel. The shell translates the commands you type in to a format which the
computer can understand. It is essentially a command line interpreter.
Shebang(#!):
The #! line states the interpreter to be used to run the script.
The shebang line tells the system which interpreter to use for the rest of the
script.
Running it as ./script.sh will make the kernel read the first line (the
shebang), and then invoke the named interpreter (for example, the Korn shell) to
interpret the script.
$ line=/dir1/dir2/gr3/file.ksh
$ basename $line
file.ksh
$ dirname $line
/dir1/dir2/gr3
#!/bin/sh
set -- "1" "2" "3" "4" "5"
echo $*
shift
echo $*
shift
echo $*

[blstst1a]:/home/bliss/kannan
$ ./"shift_ex.sh"
1 2 3 4 5
2 3 4 5
3 4 5

#!/bin/ksh
echo "Printing second args"
until [[ $# -eq 0 ]];do
echo $1
shift
done

$ ./printer first second "third forth"
Output:
Printing second args
first
second
third forth
Bourne shell:
var=value
export var
INODE:
Every UNIX file has its description stored in a structure called an 'inode'. The inode
contains information about the file size, its location, time of last access, time of last
modification, permissions and so on.
Directories are also represented as files and have an associated inode. In
addition to descriptions about the file, the inode contains pointers to the data
blocks of the file.
If the file is large, the inode has an indirect pointer to a block of pointers to
additional data blocks (this aggregates further for still larger files). A block is
typically 8K.
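For example, the inode number of a file can be seen with ls -i (file name and number are illustrative):
$ ls -i soft.txt
9962464 soft.txt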
special variables:
Variable  Description
$*        All the arguments taken as one double-quoted string. If a script receives two
          arguments, "$*" is equivalent to "$1 $2".
$@        All the arguments individually double quoted. If a script receives two
          arguments, "$@" is equivalent to "$1" "$2".
$$        The process number of the current shell. For shell scripts, this is the
          process ID under which they are executing.
$!        The process number of the last background command.
Difference between $* and $@:
$* and $@ both will act the same unless they are enclosed in double quotes, "".
Example:
#!/usr/bin/ksh
echo "Printing \$@"
for i in "$@"
do
echo i is: $i
done

Invoked as ./script.sh a b "c d" e, the output is:
Printing $@
i is: a
i is: b
i is: c d
i is: e
With an unquoted $* (or unquoted $@), the argument "c d" would be split into two
words, giving i is: c and i is: d on separate lines.
Example:
cat food >file 2>&1
The shell sees >file first and redirects stdout to the file. Next, 2>&1 sends fd 2 (stderr) to
the same place fd 1 is going, that is, to the file.
Example 2:
cat food 2>&1 >file //output: cat: can't open food
The shell sees 2>&1 first. That means "make the standard error (file descriptor 2) go to the
same place as the standard output (fd 1) is going." There is no effect, because both fd 2 and
fd 1 are already going to the terminal. Then >file redirects fd 1 (stdout) to the file, but
fd 2 (stderr) is still going to the terminal.
>/dev/null redirects standard output to /dev/null, i.e. throws the output away. 2>&1
then redirects standard error to standard output; in this case it means all the
error output is thrown away as well.
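For example (some_command is a placeholder, not a real program from the original):
$ some_command > /dev/null 2>&1     # discard both normal output and error output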
LINKS IN UNIX
Links in UNIX are pointers pointing to a file or a directory. Creating links is a
kind of shortcut to access a file.
The two different types of links in UNIX are:
1. Soft Links or Symbolic Links
2. Hard links
Soft link:
ln -s soft.txt soft1.txt
Soft Links can be created across file systems.
Soft link has a different inode number than the original file.
On deleting the original file, soft link cannot be accessed.
Soft link needs extra memory to store the original file name as its data.
Source file need not exist for soft link creation.
Can be created on a file or on a directory.
Access to the file is slower due to the overhead of resolving the link.
The file size of a soft link is the length of the name of the original file,
because the link stores that name as its data. In the listing below, soft2.txt
points to soft.txt, whose name is 8 characters long, so the link's size is 8.
Example:
[Diagram: soft link representation. The link file and the original file have different
inode numbers (e.g. 9962464 and 9962471); each inode in turn points to the data location.]
Hard Link:
ln hard.txt hard1.txt
[blstst1a]:/home/bliss/kannan/unix_work $ ls -lrt
total 4
lrwxrwxr-x 1 bliss bliss 8 Dec 4 22:47 soft2.txt ->
soft.txt
-rw-rw-r-- 2 bliss bliss 143 Dec 4 22:50 hard.txt
-rw-rw-r-- 2 bliss bliss 143 Dec 4 22:50 hard1.txt
ln hard.txt hard2.txt
[blstst1a]:/home/bliss/kannan/unix_work $ ls -lrt
total 6
lrwxrwxr-x 1 bliss bliss 8 Dec 4 22:47 soft2.txt ->
soft.txt
-rw-rw-r-- 3 bliss bliss 176 Dec 4 22:51 hard.txt
-rw-rw-r-- 3 bliss bliss 176 Dec 4 22:51 hard1.txt
-rw-rw-r-- 3 bliss bliss 176 Dec 4 22:51 hard2.txt
[blstst1a]:/home/bliss/kannan/unix_work $ rm hard.txt
[blstst1a]:/home/bliss/kannan/unix_work $ ls -lrt
total 4
lrwxrwxr-x 1 bliss bliss 8 Dec 4 22:47 soft2.txt ->
soft.txt
-rw-rw-r-- 2 bliss bliss 176 Dec 4 22:51 hard1.txt
-rw-rw-r-- 2 bliss bliss 176 Dec 4 22:51 hard2.txt
Hard links can be created only within the same file system.
Hard links have the same inode number as the original file.
On deleting the original file, the hard-linked file can still be accessed.
Hard links need no extra space, since a hard link is just another directory entry
pointing to the same inode.
The source file must exist.
Can be created only on files, not on directories.
Access to the file is faster compared to a soft link.
Example:
[Diagram: hard link representation. Both file names point to the same inode (9962464),
which in turn points to the data location.]
FIND COMMAND
find path_list selection_criteria action
The + sign is used to search for greater than, the - sign for less than, and no sign
for an exact match.
To list all files that are greater than 10,000 bytes but less than 50,000 bytes, see the
example below.
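One way to write this (the c suffix to -size means bytes):
find . -size +10000c -size -50000c -print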
To find the smallest file in the current directory and sub directories
find . -type f -exec ls -s {} \; | sort -n | head -1
find . -name "H*" | xargs ls -l    constructs an argument list from the output of the find
command and passes it to ls.
UMASK:
Whenever you create files and directories, the default permissions assigned to
them depend on the system's umask setting.
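For example, with a umask of 022 (a common default), new files are created with mode 644 and new directories with 755; a quick check (file names are illustrative):
$ umask 022
$ touch newfile; mkdir newdir
$ ls -ld newfile newdir       # newfile: rw-r--r--, newdir: rwxr-xr-x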
TOUCH:
Sometimes you need to set the modification and access times of a file to predefined values.
Syntax:
touch option expression filename
When touch is used without an option or expression, both times are set to the
current time.
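For example, to set both times to a given timestamp (format [[CC]YY]MMDDhhmm; the file name is illustrative):
$ touch -t 202501011200 report.txt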
TEE:
The tee command is used to store and view (both at the same time) the output of
any other command.
tee writes to standard output and to a file at the same time. By default tee
overwrites the file.
o $ ls | tee file
You can instruct the tee command to append to the file using the -a option as shown
below.
o $ ls | tee -a file
TYPE:
type is a Unix command that describes how its arguments would be interpreted if
used as command names.
type will display the command name's path. Possible command types are:
o shell built-in
o function
o alias
o hashed command
o keyword
The command returns a non-zero exit status if command names cannot be found.
Example:
$ type test
test is a shell builtin
$ type cp
cp is /bin/cp
$ type unknown
-bash: type: unknown: not found
$ type type
type is a shell builtin
$ type -a gzip
gzip is /opt/local/bin/gzip
gzip is /usr/bin/gzip
USER ENVIRONMENT
ENV:
env - run a program in a modified environment
env is used to either print a list of environment variables or run another utility in an
altered environment without having to modify the currently existing environment.
Using env, variables may be added or removed, and existing variables may be
changed by assigning new values to them.
FINGER:
finger is a program you can use to find information about computer users. It usually
lists the login name, the full name, the home directory, and the shell of a
particular user, as shown below.
finger [email protected]
ID:
prints the user or group identifier of the account by which the program is executed
The root account has a UID of 0
id -un # Where `-u` refers to `--user` and `-n` refers to `--name`
SU:
The su command, also referred to as substitute user, super user, or switch user,
allows a computer operator to change the current user account associated with the
running virtual console.
When run from the command line, su asks for the target user's password, and if
authenticated, grants the operator access to that account and the files and
directories that account is permitted to access.
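For example (the account name is illustrative):
$ su - oracle      # prompts for oracle's password, then starts a login shell as that user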
FOLD:
Fold is a Unix command used for making a file with long lines more readable
on a limited width terminal.
Most Unix terminals have a default screen width of 80, and therefore reading
files with long lines could get annoying.
The fold command inserts a line break every X characters if it does not reach a
newline before that point. With the -w argument, the user can set the maximum
length of a line.
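For example (the width and file name are illustrative):
$ fold -w 60 longlines.txt     # wrap lines longer than 60 characters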
JOIN:
Join command is used to combine two files based on a matching fields in the files.
join [options] file1 file2
Option:
-1 field number : Join on the specified field number in the first file
-2 field number : Join on the specified field number in the second file
-j field number : Equivalent to -1 fieldnumber and -2 fieldnumber
-o list : displays only the specified fields from both the files
-t char : input and output field delimiter
-a filenumber : Prints non matched lines in a file
-i : ignore case while joining
The basic usage of the join command is to join two files on the first field. By default,
join matches the files on the first field when the field numbers are not specified
explicitly.
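A small sketch (the file names and contents are illustrative; both files are already sorted on the join field):
$ cat emp.txt
101 Jim
102 Ana
$ cat dept.txt
101 Sales
102 HR
$ join emp.txt dept.txt
101 Jim Sales
102 Ana HR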
PASTE:
The paste command merges the lines from multiple files. It writes the corresponding
lines from each file, separated by a TAB delimiter, to the terminal.
Option:
-d : Specify of a list of delimiters.
-s : Paste one file at a time instead of in parallel.
> cat file1
Unix
Linux
Windows

> paste file1 file2
Unix    Dedicated server
Linux   Virtual server
Windows

The paste command uses the tab delimiter by default for merging the files. You can
change the delimiter to any other character by using the -d option. You can merge the
files sequentially using the -s option: the paste command then reads each file in turn,
merging all the lines from a single file into a single line.
SPLIT:
The split command splits the file into multiple files with 1000 lines into each output
file by default.
Since the input file does not contain 1000 lines, all the contents are put into only one
output file, "xaa". By default, the output files generated contain the prefix "x" and
the suffixes "aa", "ab", "ac" and so on.
Use the -d option to name the files with number suffixes 00, 01, 02 and so on,
instead of aa, ab, ac.
$ split -d testfile
$ ls
testfile x00 x01 x02
FILTERS:
CMP COMMAND:
The two files are compared byte by byte, and the location of the first mismatch is echoed
to the screen.
When invoked without options, cmp does not report subsequent mismatches.
CUT COMMAND:
The head and tail commands slice a file horizontally; to slice a file vertically, use the
cut command. cut identifies both character columns and fields.
cut command to print characters by position:
cut -c4 file.txt
x
u
l

To print more than one character at a time, specify the character positions in a
comma separated list:
cut -c4,6 file.txt
xo
ui
ln

To print characters by range:
cut -c4-7 file.txt
x or
unix
linu

To print the first six characters in a line, omit the start position and specify only
the end position:
cut -c-6 file.txt
unix o
is uni
is lin

To print the characters from the tenth position to the end, specify only the start
position and omit the end position. If you omit both the start and end positions,
then the cut command prints the entire line.
SORT COMMAND:
sort [options] filename
Option Significance
-t char Uses delimiter char to identify field
-u Remove duplicate lines
-n Sort numerically
-r Reverse sort order
-c Check if the file is sorted
-k Sorts file based on the data in the specified field
positions.
-M Sorts based on months. Considers only first 3 letters as
month
-b Ignores leading spaces in each line
-o file1  Places output in file file1
-m list   Merges sorted files in list
-k Option:-
You can specify the field positions using the -k option of sort command.
To sort based on the data in the second field, run the below command:
> sort -k2 order.txt
Unix distributed 05 server
Unix distributed 05 server
Distributed processing 6 system
Linux virtual 3 server
-u Option:-
To get only unique values in the output, use the -u option of the sort command.
> sort -u order.txt
Distributed processing 6 system
Linux virtual 3 server
Unix distributed 05 server
To sort numerically in reverse order on the second field, using | as the field delimiter:
> sort -t'|' -nrk2 delim_sort.txt
Declast|12
Mayday|4
Janmon|1
Sort the data month-wise using the -M option of the sort command. This is shown
below:
> sort -M delim_sort.txt
Janmon|1
Mayday|4
Declast|12
UNIQ COMMAND:
The uniq command can eliminate or count duplicate lines in a presorted file. It reads in
lines and compares the previous line to the current line. Depending on the options
specified on the command line it may display only unique lines or one occurrence of
repeated lines or both types of lines.
Options
-u Print only lines which are not repeated (unique) in the original file
-d Output only the lines that are repeated in the input (one copy of each).
-c Generate an output report in default style except that each line is
preceded by a count of the number of times it occurred. If this option is
specified, the -u and -d options are ignored if either or both are also
present.
-i Ignore case differences when comparing lines
-f Ignore a number of fields in a line
-s Skips a number of characters in a line
-w Specifies the number of characters to compare in lines, after any
characters and fields have been skipped
--help Displays a help message
--version Displays version number on stdout and exits.
uniq file simply fetches one copy of each run of repeated adjacent lines and writes it to the
standard output. Consider a file foo with the following contents:
davel
davel
davel
jeffy
jones
jeffy
mark
mark
mark
chuck
bonnie
chuck
Sorting the file and piping it through uniq (sort foo | uniq) gives you a truly unique
list. However, it's also a useless use of uniq, since sort(1) has an argument, -u, to do
this very common operation:
% sort -u foo
bonnie
chuck
davel
jeffy
jones
mark
-d tells uniq to eliminate all lines with only a single occurrence (delete unique lines),
and print just one copy of repeated lines:
% sort foo | uniq -d
chuck
davel
jeffy
mark

-u tells uniq to eliminate all duplicated lines and show only those which appear
once (only the unique lines):
% sort foo | uniq -u
bonnie
jones

-c tells uniq to count the occurrences of each line:
% sort foo | uniq -c
1 bonnie
2 chuck
3 davel
2 jeffy
1 jones
3 mark

I often pipe the output of "uniq -c" to "sort -n" (sort in numeric order) to get
the list in order of frequency:
% sort foo | uniq -c | sort -n
1 bonnie
1 jones
2 chuck
2 jeffy
3 davel
3 mark
LINE NUMBERING: nl
The nl command has elaborate schemes for numbering lines.
nl uses a tab as the default separator between the number and the line.
-w --- specifies the width of the number format.
-s --- specifies the separator between the line number and the text.
GREP:
The name grep is a combination of editor command characters.
It is from the editor command :g/RE/p, which translates to global Regular
Expression print.
In fgrep the f stands for fixed strings: fgrep matches literal strings, not regular expressions.
egrep handles extended regular expressions such as alternation; the same effect can be
achieved with plain grep by using multiple -e options.
o egrep 'bliss|ofr' emp.txt
Option Significance
-c Display count of number of occurrences
-l Display list of file name only
-n Display line numbers along with lines
-v Doesn’t display lines matching expression
-i Ignore case when matching
-h Omits filenames when handling multiple files
-w Match complete word only
-o Show only the matched part of the line (the string matching the pattern)
-b Display the block number at the beginning of each
line
-s Silent mode
-r search recursively i.e. read all files under each
directory for a string
-q Quiet mode; output is not displayed on the terminal, only the exit status is set
[blstst1a]:/home/bliss/ $ grep -q loop *.sh
[blstst1a]:/home/bliss/ $ grep loop *.sh
sample_prt.sh:loop
sample_prt.sh:end loop;
PATTERN MATCHES
* Zero or more occurrences of previous character
. A single character
+ Matches one or more occurrences of the previous
character
? Matches zero or one occurrence of the previous
character
^ (caret) For Matching at the beginning
$ For matching at the end
[abc] A single character a,b or c
[a-c] A character between a to c
[^abc] A single character which is not a,b or c
Command Significance
i,a,c Insert, append and change text
d Delete lines
1,4d Delete lines 1 to 4
r foo Places contents of file foo after line
w bar Write address line to file bar
3,$p Print line 3 to end
$!p Print all the line except last line
/begin/,/end/p Print lines enclosed between begin and end
q Quit after reading up to addressed line
s/s1/s2/ Replace the first occurrence of string s1 in each
line with string s2
-e Use multiple -e options to perform multiple substitutions
[[:space:]] simply a special keyword that tells sed to match either a TAB or a
space.
^ *$           '*' indicates 0 or more occurrences of the previous character, so
               '^ *$' indicates a line containing zero or more spaces (a blank line).
$              The dollar sign denotes the last line of the input file when used as an address.
Note: each regular expression inside parentheses can be back-referenced with \1, \2 and
so on. A backslash at the end of a line is used here only to break long commands for
readability; remove it before running the command.
By default sed prints all the lines on the standard output in addition to the lines affected
by the action, so the addressed lines (the first two) are printed twice.
Delete the first line AND the last line of a file, i.e, the header and trailer line of a file.
sed '1d;$d' file
How about writing only the changes to another file for future reference?
With the w flag, only the changed lines are written to the new file:
sed 's/baby/dady/w abc.txt' tem.txt
How about reducing it further by using ; (the continuation operator) for the same
task?
sed 's/Surendra/bca/;s/mouni/mca/;s/baby/bba/' tem.txt
$ cat file
Cygwin
Unix
Linux
Solaris
AIX
Delete the lines NOT containing the pattern 'Unix':
$ sed '/Unix/!d' file
Unix

Delete the lines containing the pattern 'Unix' OR 'Linux':
$ sed '/Unix\|Linux/d' file
Cygwin
Solaris
AIX
Note: The OR condition is specified using the | operator. In order not to get
the pipe (|) interpreted as a literal, it is escaped using a backslash.
Delete all lines which are entirely in capital letters:
$ sed '/^[A-Z]*$/d' file
Cygwin
Unix
Linux
Solaris
Delete the last line ONLY if it contains either the pattern 'AIX' or 'HPUX':
$ sed '${/AIX\|HPUX/d;}' file
Cygwin
Unix
Linux
Solaris

Delete the line containing the pattern 'Unix' and also the next line:
$ sed '/Unix/{N;d;}' file
Cygwin
Solaris
AIX
Note: The N command reads the next line into the pattern space. d deletes the entire
pattern space, which contains the current and the next line.
Delete only the next line after the line containing the pattern 'Unix', not that line itself:
$ sed '/Unix/{N;s/\n.*//;}' file
Cygwin
Unix
Solaris
AIX
Using the substitution command s, we delete from the newline character till the end,
which effectively deletes the next line after the line containing the pattern Unix.
Example: spool a query result with SQL*Plus, add a header line with sed, and mail the report:
export FILE_NAME=/home/bliss/kannan/unix_work/report.txt
export OUT_FILE=/home/bliss/kannan/unix_work/output.txt
export REPORT_FILE=/home/bliss/kannan/unix_work/final_report.txt
export MAILLIST="[email protected]"
rm -f $FILE_NAME
rm -f $OUT_FILE
$ORACLE_HOME/bin/sqlplus -s $ORACLE_LOGIN <<!>>$OUT_FILE
set echo off
set head off
set feedback off
spool $FILE_NAME;
select CUSTOMER_ORDER_ID||'|'||COE_ID||'|'||LINE_TYPE_CD||'|'||
TELEPHONE_NUM from telephone_number where customer_order_id=198425;
spool off;
exit;
!
sed '1i\
CUSTOMER_ORDER_ID|COE_ID|LINE_TYPE_CD|TELEPHONE_NUM' $FILE_NAME
>$REPORT_FILE
rm -f $FILE_NAME
if [ -s $REPORT_FILE ]; then
echo $REPORT_FILE "File sending to my id"
uuencode $REPORT_FILE $REPORT_FILE |mailx -s "Test Report" $MAILLIST
fi
exit 0
AWK:
awk is one of the most powerful utilities used in the unix world. Whenever it comes
to text parsing, sed and awk do some unbelievable things.
The selection criteria (a form of addressing) filter the input and select lines for the
action component to act on.
The general form is: awk 'pattern { action }' file
where the pattern indicates the condition on which the action is to be
executed for every line matching the pattern.
In case of a pattern not being present, the action will be executed for every line
of the file.
In case of the action part not being present, the default action of printing the line
will be done.
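For example (the file name and patterns are illustrative):
$ awk '/Unix/ {print $0}' file     # pattern and action: print lines containing Unix
$ awk '{print $1}' file            # action only: runs for every line, prints the first field
$ awk '/Unix/' file                # pattern only: the default action prints the matching line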
Built-In Variables:
Variable Function
NR Cumulative number of lines read
FS Input field separator
OFS Output field separator
NF Number of fields in current line
FILENAME Current input file
ARGC Number of arguments in command line
ARGV List of arguments
-F Command-line option used to specify the input field separator
Built-In Functions:
Function Significance
int(x) Returns integer value of x
sqrt(x) Returns square root of x
length Return length of the complete line
substr(stg,m,n) Returns the portion of string stg of length n, starting from
position m.
index(s1,s2) Returns position of string s2 in string s1
split(stg,arr,ch) Splits string stg into array arr using ch as delimiter;
system("cmd") Runs the UNIX command cmd and returns its exit status.
Pass a variable to awk which contains the double quote. Print the quote,
line, quote.
[blstst1a]:/home/bliss/kannan/unix_work/awk_prg $ awk -v
q="'" -F"," '{print q $1 q}' file1
'Name'
'Deepak'
'Neha'
'Vijay'
'Guru'
To double quote the contents, pass the variable within single quotes
[blstst1a]:/home/bliss/kannan/unix_work/awk_prg $ awk -v
q='"' -F"," '{print q $1 q}' file1
"Name"
"Deepak"
"Neha"
"Vijay"
"Guru"
The uuencode command takes the named SourceFile (default standard input) and
produces an encoded version on the standard output.
The encoding uses only printable ASCII characters, and includes the mode of the file and
the OutputFile filename used for recreation of the binary image on the remote system.
-m Encode the output using the MIME Base64 algorithm. If -m is not specified, the old
uuencode algorithm will be used.
Example:
uuencode $REPORT_FILE $REPORT_FILE |mailx -s "Test Report" $MAILLIST
PROCESS:
Option Significance
-f Full listing showing the PPID of each process
-e All process including user and system processes
-u usr Processes of user usr only
-a Processes of all users excluding processes not
associated with terminal
-l A long listing showing memory related information
-t term Processes running on terminal term (tty03)
The & is the shell’s operator used to run a process in the background.
$ sort -o emp.lst &
550 // the PID of the background process
nice is a built-in command in the C shell, where it has a default value of 4. nice
values are system-dependent and typically range from 1 to 19.
nice -5 wc -l <file> &   // nice value increased by 5 units
SIGNALS:
The following are some of the more common signals you might encounter and want to
use in your programs:
Signal Number   Signal Name   Description
1               SIGHUP        Hang up; sent when the controlling terminal is closed
2               SIGINT        Interrupt; sent when the user presses Ctrl+C
3               SIGQUIT       Quit; sent when the user presses Ctrl+\
9               SIGKILL       Kill; cannot be caught or ignored
15              SIGTERM       Software termination; the default signal sent by kill
JOB CONTROL:
Relegate a job to the background (bg )
Bring it back to the foreground (fg)
List the active jobs (jobs)
Suspend a foreground job ( [Ctrl-z] )
Kill a job ( kill )
CRONTAB:
The crontab command is used to schedule jobs to be run in the future, usually on some
regular schedule (such as every week). The command is run with one of three command
line arguments:
crontab -l View crontab file, if any
crontab -r Remove crontab file, if any
crontab -e Edit (or create) user's crontab file (starts the editor automatically)
crontab file Replace existing crontab file (if any) with file
A crontab entry has six fields:
Field 1: minute (0-59)
Field 2: hour (0-23)
Field 3: day of the month (1-31)
Field 4: month (1-12)
Field 5: day of the week (0-6, with 0 = Sunday)
Field 6: command to be executed
Entry                    Description                                                          Equivalent To
@yearly (or @annually)   Run once a year at midnight in the morning of January 1              0 0 1 1 *
@monthly                 Run once a month at midnight in the morning of the first of the month  0 0 1 * *
@weekly                  Run once a week at midnight in the morning of Sunday                  0 0 * * 0
@daily                   Run once a day at midnight                                           0 0 * * *
@hourly                  Run once an hour at the beginning of the hour                        0 * * * *
@reboot                  Run at startup                                                       @reboot
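A sample crontab entry (the script path is illustrative): run a backup script at 2:00 a.m. every day and discard its output:
0 2 * * * /home/bliss/backup.sh > /dev/null 2>&1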
cron permissions
The following two files play an important role:
/etc/cron.allow - If this file exists, it must contain your username for you to use
cron jobs.
/etc/cron.deny - If the cron.allow file does not exist but the /etc/cron.deny file
does exist then, to use cron jobs, you must not be listed in the /etc/cron.deny file.
Default Actions:
Every signal has a default action associated with it. The default action for a signal is
the action that a script or program performs when it receives a signal.
Some of the possible default actions are:
Terminate the process.
Ignore the signal.
Dump core. This creates a file called core containing the memory image of the
process when it received the signal.
Stop the process.
Continue a stopped process
Sending Signals:
There are several methods of delivering signals to a program or script. One of the
most common is for a user to type CONTROL-C or the INTERRUPT key while a script
is executing.
When you press the Ctrl+C key a SIGINT is sent to the script and as per defined
default action script terminates.
The other common method for delivering signals is to use the kill command whose
syntax is as follows:
$kill -signal pid
Here signal is either the number or name of the signal to deliver and pid is the
process ID that the signal should be sent to.
For Example:
$ kill -1 1001
Sends the HUP or hang-up signal to the program that is running with process ID
1001.
To send a kill signal to the same process, use the following command:
$ kill -9 1001
Trapping Signals:
When you press the Ctrl+C or Break key at your terminal during execution of a shell
program, normally that program is immediately terminated, and your command
prompt returned.
This may not always be desirable. For instance, you may end up leaving a bunch of
temporary files that won't get cleaned up.
The syntax is:
$ trap "commands" signals
Here commands can be any valid Unix command, or even a user-defined function, and
signals can be a list of any number of signals you want to trap.
Common uses for trap in shell scripts are:
1. Clean up temporary files
2. Ignore signals
For example:
$ trap "rm $WORKDIR/work1$$ $WORKDIR/dataout$$; exit" 2
From the point in the shell program that this trap is executed, the two files work1$$
and dataout$$ will be automatically removed if signal number 2 is received by the
program.
So if the user interrupts execution of the program after this trap is executed, you can
be assured that these two files will be cleaned up. The exit command that follows
the rm is necessary because without it execution would continue in the program at
the point that it left off when the signal was received.
$ trap "rm $WORKDIR/work1$$ $WORKDIR/dataout$$; exit" 1 2
Now these files will be removed if the line gets hung up or if the Ctrl+C key gets
pressed.
The commands specified to trap must be enclosed in quotes if they contain more
than one command. Also note that the shell scans the command line at the time that
the trap command gets executed and also again when one of the listed signals is
received.
So in the preceding example, the value of WORKDIR and $$ will be substituted at the
time that the trap command is executed. If you wanted this substitution to occur at the
time that either signal 1 or 2 was received you can put the commands inside single
quotes:
$ trap 'rm $WORKDIR/work1$$ $WORKDIR/dataout$$; exit' 1 2
Ignoring Signals:
If the command listed for trap is null, the specified signal will be ignored when received.
For example, the command:
$ trap '' 2
Specifies that the interrupt signal is to be ignored. You might want to ignore certain
signals when performing some operation that you don't want interrupted. You can specify
multiple signals to be ignored as follows:
$ trap '' 1 2 3 15
Note that the first argument must be specified for a signal to be ignored and is not
equivalent to writing the following, which has a separate meaning of its own:
$ trap 2
If you ignore a signal, all subshells also ignore that signal. However, if you specify an
action to be taken on receipt of a signal, all subshells will still take the default action
on receipt of that signal.
Resetting Traps:
After you've changed the default action to be taken on receipt of a signal, you can change
it back again with trap if you simply omit the first argument; so
$ trap 1 2
resets the action to be taken on receipt of signals 1 or 2 back to the default.
#!/bin/sh
# trap1a
trap 'my_exit; exit' SIGINT SIGQUIT
count=0

my_exit()
{
    echo "you hit Ctrl-C/Ctrl-\, now exiting.."
    # cleanup commands here if any
}

while :
do
    sleep 1
    count=$(expr $count + 1)
    echo $count
done
output:
[blstst1a]:/home/bliss/kannan/unix_work $ ./trap2.sh
1
2
3
4
5
6
7
you hit Ctrl-C/Ctrl-\, now exiting..
SHELL SCRIPT:
Any variable can become an environment variable. First it must be defined as usual;
then it must be exported with the command:
export varnames
If statement:
if [ $# -le 2 ];
then
    echo "need 2 argument"
    exit 1
fi

Shift:
#!/bin/sh
while [ $# -gt 1 ]; do
    echo $1
    shift
done

For:
for lst in `ls -l`; do
    echo $lst
done

Case:
#!/bin/sh
set -x
read input
case $input in
    1) ls -l ;;
    2) ls ;;
    *) ls -lrt
       exit ;;
esac
Conditional Test:
String operations
string1 = string2    True if the strings are equal.
string1 != string2   True if the strings are not equal.
-z string            True if the length of string is zero.
string               True if the length of string is non-zero.
-n string            True if the length of string is non-zero.
string1 == string2   (Bash only) True if the strings are equal.
str =~ regex         (Bash in [[ ]] only) True if regex matches str. BASH_REMATCH[0] =
                     entire match, BASH_REMATCH[i] = i-th paren submatch.
-o optname           True if shell option optname is enabled. See the list of options under
                     the description of the -o option to the set builtin.
Numeric operations
arg1 OP arg2         OP is one of -eq, -ne, -lt, -le, -gt, or -ge. These arithmetic binary
                     operators return true if arg1 is equal to, not equal to, less than, less than
                     or equal to, greater than, or greater than or equal to arg2, respectively.
                     arg1 and arg2 may be positive or negative integers.
string1 < string2    True if string1 sorts before string2 lexicographically in the current locale.
string1 > string2    True if string1 sorts after string2 lexicographically in the current locale.
File operations
-e file True if file exists.
-d file True if file exists and is a directory.
-f file True if file exists and is a regular file.
-L file True if file exists and is a symbolic link.
-r file True if file exists and is readable.
-w file True if file exists and is writable.
-x file True if file exists and is executable.
file1 -nt file2 True if file1 is newer (according to modification date) than file2.
file1 -ot file2 True if file1 is older than file2.
file1 -ef file2 True if file1 and file2 have the same device and inode numbers.
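A short sketch using some of these tests (the file names are illustrative):
#!/bin/sh
if [ -f data.txt ] && [ -r data.txt ]; then
    echo "data.txt exists and is readable"
fi
if [ new.txt -nt old.txt ]; then
    echo "new.txt is newer than old.txt"
fi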
Debugging Scripts
Running it as ./script.sh will make the kernel read the first line (the shebang), and
then invoke bash to interpret the script. Running it as sh script.sh uses whatever
shell your system defaults sh to
. ./my_script.ksh executes the script within your current (probably login) shell.
sh my_script.ksh creates a new shell in a child process and executes within that.
If you run sh file.ksh , you're running "sh" (which may be linked to ksh or bash or
whatever) with file.ksh as input. It's run in a child process, though, so variables you
set are not available later in your current shell.
If you run ./file.ksh, the first few bytes of the file (the file "magic") are read and
identify it as a script; the interpreter named after the shebang, if available, is then
run with the file fed to it as input. This also runs in a child process, so no variables are
available after control is passed back to your current login shell.
What is “Library”?
A file that contains only functions is called a library. Usually libraries contain no main
code.
GETOPTS:
The parameters to your script can be passed as -n 15 -x 20. Inside the script, you
iterate through the options with while getopts n:x option; on each pass the variable
$option contains the letter of the option just parsed.
optstring - the string which contains the list of options expected in the
command line
name - the variable name which is used to read the command line options
one by one.
Env Variables:
getopts command makes use of 2 environment variables:
OPTARG : contains the argument value for a particular command line option.
OPTIND : contains the index of the next command line option.
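A minimal sketch of a script invoked as ./myscript.sh -n 15 -x 20 (option letters follow the text above; variable names are illustrative):
#!/bin/sh
while getopts n:x: option
do
    case $option in
        n) num=$OPTARG ;;                          # OPTARG holds the value given after -n
        x) max=$OPTARG ;;
        *) echo "usage: $0 -n value -x value"; exit 1 ;;
    esac
done
shift $((OPTIND - 1))                              # OPTIND indexes the next argument to process
echo "n=$num x=$max"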
TR (TRANSLATE)
tr is an abbreviation of translate or transliterate, indicating its operation of replacing
or removing specific characters in its input data set.
Example:
[blstst1a]:/bliss/ $ echo abcdefghijklmnopqrstuvwxyz |tr a-z A-Z
ABCDEFGHIJKLMNOPQRSTUVWXYZ
TAR:
c -- create a new archive; x -- extract from an archive; v -- verbose output.
f -- file: tells tar that the next argument is the name of the tar file to be created (or read).
z -- zip: tells tar to create (or read) the archive using gzip.
j -- tells tar to use bzip2 for compression.
Example:
tar -cvf kans.tar *
tar -xvf kans.tar
SCP Command:
scp [options] username1@source_host:directory1/filename1
username2@destination_host:directory2/filename2
use scp with the -r option. This tells scp to recursively copy the source directory and its
contents.
scp -r script_prg
[email protected]:/bliss/home/kannan/
Therefore, to copy all the .txt files from the revenge directory on your deathstar.com
account to your revenge directory on empire.gov, enter:
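A sketch of such a command (the username is an assumption, not from the original):
scp 'yourusername@deathstar.com:revenge/*.txt' 'yourusername@empire.gov:revenge/'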
without password:
Step 1 : local host
> ssh-keygen -t rsa
and when prompted for a pass phrase, hit Enter. The files id_rsa and id_rsa.pub are then
created in the <usershome>/.ssh directory.
[blstst1a]:/home/bliss/.ssh $ ls -lrt
-rw------- 1 bliss bliss 963 Jul 17 23:50 id_rsa
-rw-r--r-- 1 bliss bliss 224 Jul 17 23:50 id_rsa.pub
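Step 2 : remote host (a sketch; user@remote_host is a placeholder). Append the public key to the remote authorized_keys file, for example:
> cat ~/.ssh/id_rsa.pub | ssh user@remote_host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
After this, ssh and scp to that host should no longer prompt for a password.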
Sticky bit:-
It is a user ownership access-right flag that can be assigned to files and directories
on Unix systems.
The sticky bit can be set using the chmod command, either with its octal
mode 1000 or with its symbol t (s is already used by the setuid bit):
chmod +t /usr/local/tmp
chmod 1777 /usr/local/tmp
1. T The sticky bit is set (mode 1000), but not execute or search permission.
2. t The sticky bit is set (mode 1000), and is searchable or executable.
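For example, a world-writable directory with the sticky bit set typically shows a trailing t (illustrative listing):
$ ls -ld /tmp
drwxrwxrwt  12 root  root  4096 Jan  1 12:00 /tmp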
The file mode printed under the -l option consists of the entry type (the first character) and
the permissions (nine characters). The entry type character describes the type of file, as follows:
1. - Regular file.
2. b Block special file (stored in /dev).
3. c Character special file (stored in /dev).
4. d Directory.
5. l Symbolic link.
6. p FIFO.
7. s Socket.
8. w Whiteout.
Setting the setgid permission on a directory (chmod g+s) causes new files and
subdirectories created within it to inherit its group ID, rather than the primary
group ID of the user who created the file (the owner ID is never affected, only the
group ID).
Example (fragments showing a [[ ]] string test and an array-length test):
if [[ ${PRODUCT_NAME} = "app_rel" ]]; then
while [[ $I -lt ${#App_Dirset[*]} ]]; do
[blstst1a]:/bliss/ofc/script $ uname -n
blstst1a
[blstst1a]:/bliss/ofc/script $ uname
HP-UX
[blstst1a]:/bliss/ofc/script $ uname -a
HP-UX blstst1a B.11.00 U 9000/893 371349291 unlimited-user license
String reverse:
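A sketch of the awk command that the explanation below describes (assuming the input string is "Welcome"):
$ echo "Welcome" | awk '{ for (i = length; i != 0; i--) x = x substr($0, i, 1); } END { print x }'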
Output:
Emoclew
Explain:
The length command gives the length of the argument passed to it.
With no argument, length gives the length of the current line which is $0.
The substr command in awk extracts one character at a time, which is appended
to the resultant variable x; x is printed at the end in the END block.
---------------------------------
#!/bin/bash
x="welcome"
len=`echo ${#x}`
while [ $len -ne 0 ]
do
    y=$y`echo $x | cut -c $len`
    ((len--))
done
echo $y
---------------------------------
cat sample.sh
#! /bin/sh
# reverse a string
STR="a b c d e f g h i j k l m n o p q r s t u v w x y z"
len=${#STR}
REV=""
for (( i=$len ; i>0 ; i-- ))
do
    REV=$REV""${STR:$i-1:$i}
    STR=${STR%${STR:$i-1:$i}}
done
echo "Reversed string"
echo $REV
output:
[~/temp]$ ./sample.sh
Reversed string
z y x w v u t s r q p o n m l k j i h g f e d c b a
Note: typeset has an option -Z which is used for zero padding (only in ksh).

$ x=23
$ echo $x | awk '{printf "%04d\n", $0;}'
Output:
0023

x=23
while [ ${#x} -ne 4 ];
do
    x="0"$x
done
echo $x
Output:
0023
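A sketch of the ksh typeset approach that the note mentions:
$ typeset -Z4 x=23
$ echo $x
0023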
$ sed 's/.*Fedora.*/Cygwin\n&/' file
Output:
Linux
Solaris
Cygwin
Fedora
Ubuntu
AIX
HPUX
Note: On finding the pattern 'Fedora', substitute with 'Cygwin' followed by the
pattern matched.

#!/usr/bin/bash
while read line
do
echo $line | grep -q "Fedora"
[ $? -eq 0 ] && echo "Cygwin"
echo $line
done < file
Note: A line is read. grep -q is silent grep where the result, if any, will not be
displayed. The status ($?) will be 0 if a match is found and hence 'Cygwin' is
printed.
#!/usr/bin/bash
while read line
do
echo $line
echo $line | grep -q "Fedora"
[ $? -eq 0 ] && echo "Cygwin"
done < file
Different ways to display the contents of a file:-
File name: a

$ cat a
a
b
c

$ cat < a
a
b
c

$ paste a
a
b
c
Note: paste is meant to paste contents from multiple files, but when it is used without
any options against a single file, it does print out the contents.

$ grep '.*' a
a
b
c

$ while read line
do
echo $line
done < a
a
b
c

$ xargs -L1 < a
a
b
c
Note: xargs takes input from standard input. By default, it suppresses the newline
character; the -L1 option retains the newline after every (1) line.

$ sed '' a
a
b
c
Note: sed without any command inside the single quotes just prints the file.

$ sed -n 'p' a
a
b
c

$ sed -n '1,$p' a
a
b
c

$ awk '1' a
a
b
c
Note: 1 means true, and the default action is to print; hence every line encountered
is simply printed.

$ awk '{print $0;}' a
a
b
c
Note: The print command of awk prints the line read, which is by default $0.