Shell Scripting Preparation

The document provides an overview of Linux command line basics and shell scripting. It discusses common commands like ls, cd, cp etc. and how they work with files and directories. It also covers shell configuration files, variables, processes, I/O redirection, substitution and quoting, which are fundamental aspects of writing shell scripts. The document is intended to teach basic Linux commands and shell scripting concepts.

Uploaded by njoy986
© Attribution Non-Commercial (BY-NC)

Chapter 1: Command and Shell Basics:

Command: a program that we can execute.
Complex command: a simple command plus the arguments it accepts.
Compound command: multiple commands separated by ;

Shells:
1. Bourne shell family (sh): written in a style influenced by ALGOL.
   a. Bourne Shell: no filename completion, no command history, difficulty in running multiple background jobs.
   b. Korn Shell (ksh): has all the features of sh and the C shell.
   c. Bourne Again Shell (bash): a mix of ksh and C shell features, but with the same syntax as the Bourne shell (sh).
2. C shell family: written in C.
   a. C Shell (csh): lacks functions, has confusing syntax due to its lazy interpreter, weak I/O controls.
   b. POSIX Shell

Chapter 2: Script Basics:

A utility is the name of the program being executed, whereas a command is the program plus the arguments that change its behavior. So a simple command without arguments is the same as a utility.

The kernel is the heart of a Unix system. It provides the means of accessing system memory and executing commands. When the system shuts down, both commands and the kernel are stored on the system's hard disk. When the system reboots, the kernel is loaded into memory, whereas commands are loaded into memory only when they are executed. They remain in memory for a few minutes so that frequently used commands execute quickly.

After a reboot, the getty program prompts for login and password and passes the details to the login command. login searches the /etc/passwd file; if the entry is found it starts the shell and exits, otherwise it gives control back to getty and exits.

The shell is uninitialized when it starts; it is initialized in a two-step process by reading parameters from the following files, in this order:
1. /etc/profile
2. $HOME/.profile

Interactive mode: the shell takes input from the user and executes the commands. At login the shell starts in interactive mode.
Non-interactive mode: the shell takes input from a file (keep all the commands in a file and execute the file).

Initialization files set variables such as:
TERM: the terminal type
PATH: the path where the shell searches for commands
MANPATH: the path to search for man pages

To run a script:
1. Make the file executable.
2. Specify the correct shell (#!/bin/ksh): the "magic" first line names the shell used to execute the script instead of the current shell. It starts a new shell and executes the file in it.

man pages: to get help on a command. Sections:
1. Name: name of the command with a short description
2. Synopsis: the command in its different forms, i.e. with its different arguments
3. Description: description and explanation of the different arguments
4. Example: examples
5. See also: related commands to refer to
6. Notes: known bugs for this command

Chapter 3: Working with Files:

Ordinary files: files that hold text, data, or code.
Special files: files that access devices such as a CD-ROM, plus aliases and links.
Directories: contain ordinary or special files.

ls: -a hidden files, -r reverse order, -F (appends / to directory names), -l long listing, -t sort by modification time, -ut sort by access time
cp source dest: if there are multiple source files, dest must be a directory. -i interactive
mv source dest: if there are multiple source files, dest must be a directory. -i interactive
wc filename: output is number-of-lines number-of-words number-of-chars. -l lines only, -w words only, -c/-m chars only
cat filename, or cat filename1 filename2 etc.: view a file
rm filename, or rm filename1 filename2 etc.: remove files
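The wc options above are easy to check; a small sketch (the file name wc_demo.txt is just an illustration):

```shell
# Create a three-line sample file, then count lines, words, and characters.
printf 'one fish\ntwo fish\nred fish blue fish\n' > wc_demo.txt

wc -l wc_demo.txt   # number of lines: 3
wc -w wc_demo.txt   # number of words: 8
wc -c wc_demo.txt   # number of characters (bytes), including newlines

rm -f wc_demo.txt   # clean up
```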

Chapter 4: Working with Directories:

Absolute path: path to the file starting from the root.
Relative path: path to the file starting from the current directory.
cd: change directory (cd, cd .., cd pathname)
ls dirname: lists the files in that directory.
ls dir1 dir2 f1 etc.: lists all the files in the specified directories plus the individual files specified.
mkdir dirname: create a directory. -p also creates any missing parent directories when you specify a path. Eg: mkdir -p /home/jo/profiles/scripts/
cp -r file1 dir1 dir2 destdir -- copies the listed files and the contents of the source directories into the destination directory.
mv file1 dir1 dir2 destdir -- moves the listed files and directories into the destination directory.
rmdir / rm -r: remove a directory / remove a directory and its contents recursively.

Chapter 5: Manipulating File Attributes:

File types (first character of the ls -l output): - regular file, d directory, l symbolic link, p named pipe, c character special file, b block special file, s socket

ln -s source dest: create a symbolic link. rm removes links.
Pipes take the output of one command and feed it as input to another command on the command line. To do the same between two processes, we use named pipes (inter-process communication).
Sockets are files used for inter-process communication between two different systems connected by a network, usually for web applications and browsers.
Unix accesses devices by writing into / reading from device files; these are the access points to the devices. The two types of device files are:
1. Character special file: communicates with the device one character at a time. Has major and minor numbers which help locate the device driver.
2. Block special file: communicates blocks of data at a time. Also has major and minor numbers which help locate the device driver.

Permissions apply to owner, group, and others, each with read, write, execute.
SUID, SGID: allow access to protected files with the privileges of the owner of the program instead of the privileges of the user running it.
For example, changing a password writes to the /etc/shadow file with the privileges of the passwd command, not with your privileges. SGID works the same way with the group of the program instead of the user's group.

The SUID bit appears in the owner's execute-permission slot, eg: -rwsr-xr-x filename

chmod: change access permissions
1. Symbolic: r read, w write, x execute; u user, g group, o others; + to add, - to remove, = to set exactly.
2. Octal: 4 read, 2 write, 1 execute per digit; in the leading (fourth) digit, 4 and 2 set SUID and SGID respectively.
chgrp group filename: change group
chown owner filename, or chown owner:group filename: change owner (and group). -R recursively changes the ownership of all the files in a directory.

Chapter 6: Processes:

Every program/command under execution is a process.
1. Background process (started with &)
2. Foreground process
fg: move a background process to the foreground. Default: fg moves the most recent background job. fg %jobid
bg: move a suspended foreground process to the background. Default: bg resumes the most recently suspended job. bg %jobid
nohup: run a process detached from the terminal; even if we log off, the process continues.
wait: wait for a process or job to finish. wait %jobnumber or wait process_id. wait without arguments waits for all background processes to finish.
jobs: lists all background, stopped, and suspended jobs. Does not show foreground processes.
ps: shows the currently running processes.
ps -f: full listing
ps -e: list every process (including daemons and nohup processes, which have no terminal)
ps -u user: list the processes of a specific user
ps -a: list most processes associated with a terminal
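The link and permission commands above can be tried in a scratch directory (all file names are illustrative):

```shell
mkdir -p attr_demo && cd attr_demo

echo "hello" > original.txt
ln -s original.txt link.txt        # symbolic link; ls -l shows file type 'l'
cat link.txt                       # reads through the link: prints "hello"

touch script.sh
chmod 754 script.sh                # octal: owner rwx (4+2+1), group r-x, others r--
ls -l script.sh                    # shows -rwxr-xr--
chmod u-x script.sh                # symbolic: remove execute from the owner
ls -l script.sh                    # shows -rw-r-xr--

cd .. && rm -r attr_demo           # clean up
```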

kill: kill a process or a job. kill %jobid or kill pid. kill -9 jobid/pid: force kill.
Every process becomes a child process of ksh (the current shell), which in turn is a child process of the in.telnetd process (the telnet daemon) or of the operating system.
Subshell: #!/bin/ksh creates a new shell to execute the script, called a subshell; it exits from the subshell at the end of the script's execution. We can create a subshell manually by typing ksh or csh at the command prompt, and leave it with the exit command.
exec: overlays the current process. If we run exec ksh, it replaces the original shell with ksh instead of creating a subshell. The same applies to any process.

Chapter 7: Variables:

Variable: a temporary location in memory which holds a value.
var=value (no spaces around =). Only when accessing the variable do we refer to it as $var; do not use $ when assigning.
readonly: make a variable read-only. Syntax: readonly varname
unset: tell the shell to remove the variable from the list of variables it tracks. We cannot unset a readonly variable.

Arrays: indices need not be used in numerical order; the shell keeps track of only those indices that have values. arr[1]=10
Initialization: set -A arr 1 2 3 4 (ksh), or arr=(1 2 3 4) (bash)
Accessing: echo ${arr[2]}
${arr[@]} or ${arr[*]} displays all the array elements. The difference shows up inside double quotes: if arr=(apple grape banana peach fruit), then "${arr[*]}" expands to a single word containing the whole list, while "${arr[@]}" expands to 5 separate words.

Variables are of 3 types:
1. Local variables: like the ones above; not visible to processes started by the shell.
2. Environment variables: set the environment for the shell; available to any program started by the shell.
3. Shell variables: set by the shell itself and required by the shell to work properly.
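A short sketch of the "${arr[@]}" vs "${arr[*]}" difference; since plain sh lacks arrays, the snippet is fed to bash explicitly (ksh's set -A form works analogously):

```shell
# Arrays need ksh or bash; here the snippet is run through bash.
bash <<'EOF'
arr=(apple grape banana peach fruit)

echo "${arr[2]}"        # indexing starts at 0: prints banana

# "${arr[@]}" keeps the 5 elements separate: the loop runs 5 times.
count=0
for item in "${arr[@]}" ; do count=$((count + 1)) ; done
echo "$count"           # prints: 5

# "${arr[*]}" joins everything into a single word: the loop runs once.
count=0
for item in "${arr[*]}" ; do count=$((count + 1)) ; done
echo "$count"           # prints: 1
EOF
```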

Chapter 8: Substitution:

Filename substitution:
1. * - zero or more occurrences of any character
2. ? - one occurrence of any character
3. [] - one occurrence of anything inside the brackets. Eg: [1234] or [1-4] (using the range operator)
4. Negating a set: [!A-Z] -- matches anything except the uppercase letters.

Variable substitution:
1. ${parameter:-word} -- if parameter is null or unset, word is substituted for it. The value of parameter does not change.
2. ${parameter:=word} -- if parameter is null or unset, parameter is assigned the value of word.
3. ${parameter:?message} -- if parameter is null or unset, the error message is printed.
4. ${parameter:+word} -- if parameter is set and not null, word is substituted for it. The value of parameter does not change.

Command substitution: lets us execute a command and substitute its output into another command. Put the command in backquotes, `cmd`, to execute it in a script. Eg: now=`date`

Arithmetic substitution: $((expr)) performs integer arithmetic without using additional programs like expr or bc in shell scripts. Works in ksh and bash; not available in sh. Eg: $(( (5+2*3)-6 ))

Chapter 9: Quoting:

1. Backslash (\)
2. Single quote (')
3. Double quote (")

Summary of the quoting rules:
1. A backslash takes away the special meaning of the character that follows it.
2. The character doing the quoting is removed before command execution.
3. Single quotes remove the special meaning of all enclosed characters.
4. Quoting regular characters is harmless.
5. A single quote cannot be inserted within single quotes.
6. Double quotes remove the special meaning of most enclosed characters. (The only characters still special inside double quotes are $ for variable substitution, ` for command substitution, and \ before $, `, \, or ".)
7. Quoting can ignore word boundaries.

8. Different types of quoting can be combined in one command.
9. Quote spaces to embed them in a single argument.
10. Quote the newline to continue a command on the next line.
11. Use quoting to access filenames that contain special characters.
12. Quote regular expression wildcards.
13. Quote the backslash to enable echo escapes. Eg: echo "Jyothi\nRaghu" prints Jyothi and Raghu on 2 lines.

Chapter 10: Flow control:

If statement. Syntax:
if condition
then stmt1
elif cond2
then stmt2
else stmt3
fi
Use ; to combine the statements on a single line.

test: the condition used in if. Syntax: test condition, or [ condition ]
test -opt condition / [ -opt condition ]
1. file tests:
-b block special file
-c character special file
-r readable file
-w writable file
-x executable file
-e file exists
-s size greater than zero
-f regular file
-h symbolic link
-p named pipe
-u SUID bit set
-g SGID bit set
2. string tests:
-z str -- string of zero length
-n str -- string of non-zero length
str1 = str2 (== also works in the ksh/bash [[ ]] form)
str1 != str2

3. numeric tests:
-eq equal
-ne not equal
-ge greater or equal
-gt greater
-le less or equal
-lt less
[ ! condition ] negates the condition.
Compound tests: -a and, -o or

Case statement:
case word in
mat1) list1 ;;
mat2|mat3) list2 ;;
esac

Chapter 11: Loops:

Loops let us execute a series of commands more than once.
While loop:
while condition
do
list
done
Until loop (the opposite of while):
until condition
do
list
done
For loop:
for i in 1 2 3 4 5
do
statements
done
Select loop: presents the user with a numbered menu and lets them choose an option.
select name in word1 word2 word3

do
statements
done

Eg:
select component in comp1 comp2 comp3 all none
do
case $component in
comp1|comp2|comp3) comp ;;
all) fun2 ;;
none) break ;;
*) echo wrong selection ;;
esac
done

Infinite loops: loops that execute forever. Eg:
while :
do
stmts
done
break: stop the execution of the loop and exit it.
continue: stop the current pass of the loop and go to the next iteration.

Chapter 12: Parameters:

Special variables:
$0 - name of the command or script
$n - the parameters passed: $1 first arg, $2 second arg, etc.
$? - exit status of the last command executed
$$ - process id of the present shell
$! - pid of the last background process
$# - number of args passed
$@ or $* - the list of args passed

Arguments: all the parameters passed to the script or command.
Options: the arguments that change the behavior of the command or script.
Two ways to process the arguments:
1. Manually process the arguments using the case statement.
2. Use getopts: getopts options variable
Eg: getopts e:o:v option
case $option in
e) list1 ;;
o) list2 ;;

v) list3 ;;
*) echo $usage ;;
esac

If an option requires an additional parameter, we indicate this with : after the option letter, as in the example above. The parameter's value is saved in the OPTARG variable, and the index of the last argument processed by getopts is saved in OPTIND.

Chapter 13: Input and Output:

Output redirection, input redirection.
echo vs printf: echo has no format string to format the output, and it appends a newline at the end of the output by default; with printf we have to add it manually.
printf format specifiers: %-m.nx, where x is one of the following:
s - string
f - floating point
d - integer (decimal) number
o - octal
x - hexadecimal
e - exponential notation
c - character
m.n gives the field width and precision; - means the output is left-justified (by default it is right-justified).

File descriptors: numbers associated with a file, used to read from or write into the file. Also called file handles.
0 - standard input
1 - standard output
2 - standard error
Redirecting errors to the same place as the output: 2>&1
exec 4>file.txt -- makes 4 a file handle for file.txt in write mode.
read: read data from the user.
> output redirection, >> append, < input redirection
Eg: program1.sh 1>file1 2>file2, or program1.sh >file1 2>file2
Changing the file descriptors:
exec n<&0
statements
exec 0<&n n<&-

In the above example we redirect the standard input to file handle n and then change it back after some statements. n<&- means closing that file descriptor, so that we can no longer read from or write to the file.

Chapter 14: Functions:

Functions group a list of statements or commands to perform a specific operation. They give modularity and reusability.
Definition:
fun1() {
statements
}
Call: fun1
Passing arguments: fun1 arg1 arg2
Inside the called function, $1, $2, etc. are the arguments passed, $# is the number of args passed, and $@ / $* are the full list of args passed.
Calling one function from another:
fun1() { stmt }
fun2() { fun1 }

IFS: the input field separator. If we want to use another field separator temporarily, we can change it and restore the default value later, as in the code below.
OLDIFS="$IFS"
IFS=:
for DIR in $PATH ; do echo $DIR ; done
IFS="$OLDIFS"
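A small sketch of function definition, arguments, and one function calling another (the function names are illustrative):

```shell
#!/bin/sh
# greet: builds a greeting from its first argument.
greet() {
    echo "hello, $1 (got $# args)"
}

# greet_all: loops over all of its own arguments, calling greet for each.
greet_all() {
    for name in "$@" ; do
        greet "$name"
    done
}

greet_all Jyothi Raghu
# prints:
# hello, Jyothi (got 1 args)
# hello, Raghu (got 1 args)
```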

Chapter 15: Text Filters:


1. head: displays the topmost lines of a file or of a command's output. Default 10 lines.
Syntax: head -n filename (n is the number of lines, eg head -5 file.txt)
2. tail: displays the bottom lines of a file or of a command's output. Default 10 lines.
Syntax: tail -n filename
3. grep (Global Regular Expression Print): searches for a pattern in a list of files.
-i ignore case
-v non-matched lines
-n line numbers where the match was found
-l file names where the match was found
4. tr (transliterate): replaces one set of characters with another set, or deletes a set of characters.
Syntax: tr set1 set2 < filename. Eg: tr A-Z a-z < filename
tr -s char: squeeze repeats of the specified characters in the output. Eg: echo feed | tr -s e -- produces the output fed.
tr -d char < file > newfile: delete the characters that match.
Character classes to use in tr: tr '[:class:]' set2
alnum letters and digits
digit digits
punct punctuation marks
cntrl control chars
xdigit hexadecimal digits
lower lowercase letters
upper uppercase letters
blank horizontal spaces
alpha letters
space horizontal and vertical spaces
print printable chars
5. sort: sorts the output. Syntax: sort opt filename
-n numeric sort
-r reverse order
-u unique sort
-k start,end sort on the specified fields, from field start to field end
Eg: sort -nr -k 2,2 file4.txt sorts on the second field.
6. uniq: lists the unique lines, ignoring adjacent duplicates.
Syntax: uniq opt filename
-c count of duplications of each line

Chapter 16: Filtering text using Regular Expressions:

Filtering text uses sed and awk. Both use the same invocation syntax, both execute their script on each line of the input file, and both use regular expressions for matching.
Syntax: command 'script' filenames
Script: /pattern/action

Regular expressions match sequences of letters, digits, or symbols:
. matches any single char except the newline
* matches zero or more occurrences of the previous char
[] matches any one of the chars in the brackets
[^ ] matches any char not in the brackets
^ matches the start of the line
$ matches the end of the line
\ escapes the special meaning

sed: Stream editor
Syntax: sed '/pattern/action' files
Pattern: a regular expression. Actions:
p print
d delete
s/pat1/pat2/ substitutes pat1 with pat2
Eg: sed '/a*c/p' file.txt prints every input line, plus an extra copy of the lines that match the pattern. So the output has one copy of the lines with no match and two copies of the lines that match.
-n prints only the lines that have a match. Eg: sed -n '/a*c/p' file.txt
sed '/a*c/d' file.txt deletes the matching lines.
sed 's/UNIX/Unix/' file.txt performs the substitution only for the first match on each line.
sed 's/UNIX/Unix/g' file.txt performs the substitution for all the matches on each line, globally.
sed -e '/a*c/p' -e 's/UNIX/Unix/g' file.txt executes multiple actions.
ls -ltr | sed -e '/a*c/d' -e 's/UNIX/Unix/g' takes input from the pipe instead of from a file.
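The filters from this chapter and the previous one can be checked quickly (the file contents are made up):

```shell
printf 'Pear\napple\npear\nAPPLE\napple\n' > filter_demo.txt

# Chapter 15 tools: lowercase, sort so duplicates become adjacent, count them.
tr 'A-Z' 'a-z' < filter_demo.txt | sort | uniq -c

# sed p without -n: matching lines appear twice, others once.
sed '/apple/p' filter_demo.txt

# sed -n with p: print only the matching lines ("APPLE" does not match).
sed -n '/apple/p' filter_demo.txt        # apple, apple

# s///: only the first match per line changes unless g is given.
echo 'aaa' | sed 's/a/b/'                # prints: baa
echo 'aaa' | sed 's/a/b/g'               # prints: bbb

rm -f filter_demo.txt
```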

Chapter 17: Filtering text with AWK:

awk 'script' files
Script: /pattern/{action}

Field editing:
awk '{print;}' file.txt simply prints all the input lines
awk '{print $3, $4;}' file.txt prints the 3rd and 4th fields of all input lines
awk '{printf "%32s %s\n", $3, $4;}' file.txt gives formatted output

Pattern-specific actions:
awk '/\$[0-9]\.[0-9][0-9]*/ { print $3 ; }
/\$0\.[0-9][0-9]*/ { print $3, "*" ; }' file.txt

Comparison operators: awk '$3 > 10 { print $5 ; }' file.txt
>, <, >=, <=, !=, ==, value ~ /pattern/, value !~ /pattern/
Compound expressions: awk '($3 > 10) && ($4 < 100) { print $6 ; }' file.txt
&& and, || or

next command: tells awk to proceed to the next input line.
awk '($3 <= 10) { printf "%s reorder\n", $5 ; next ; }
($3 > 10) { print $5 ; }' file.txt
Once the first pattern has matched and next has run, awk does not go on to the second condition check for that line.

Variables: variable=value
Operators: +, -, *, /, % modulo, ^ exponentiation. Eg: x=x+1
Shorthand notation is also available for all the above operators: +=, -=, etc. Eg: x+=1
awk '{ x+=1 ; print x ; }'

BEGIN and END blocks:
awk 'BEGIN {statements}
/pattern/ {statements}
END {statements}' file.txt
The statements in the BEGIN block execute before the first line of input is read, and the END block executes after the script has read all the input lines.

Built-in variables:
FILENAME: name of the input file passed; cannot be changed
NR: number of the current record/line
NF: number of fields in the input line
OFS: output field separator
FS: input field separator
ORS: output record separator
RS: input record separator

flow control:

if statement:
if (condition) {stmt1}
else if (condition) {stmt2}
else {stmt3}

While loop:
while (condition) {
stmts
}
Eg:
awk 'BEGIN{ x=0 ; while (x < 5) { x+=1 ; print x ; } }'

Do loop:
do {
stmts
} while (condition)

For loop:
for (initialization; comparison; increment) {
stmts;
}
Eg:
awk '{ for (x=1;x<=NF;x+=1) { printf "%s ",$x ; } printf "\n" ; }'

Using shell variables in awk: in order for awk to use the shell variables, you have to convert them to awk variables on the command line. The basic syntax for setting variables on the command line is
awk 'script' awkvar1=value awkvar2=value ... files
Eg:
#!/bin/sh
NUMFRUIT="$1"
if [ -z "$NUMFRUIT" ] ; then NUMFRUIT=75 ; fi
awk '$3 <= numfruit { print ; }' numfruit="$NUMFRUIT" fruit_prices.txt
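A quick runnable sketch of the awk features above (the input data is made up):

```shell
printf 'pear 10\napple 25\ngrape 8\n' > awk_demo.txt

awk '{ print $1 ; }' awk_demo.txt            # prints the first field of each line
awk '$2 > 9 { print $1 ; }' awk_demo.txt     # only lines whose 2nd field > 9: pear, apple

# BEGIN runs before any input, END after all of it; here we total field 2.
awk 'BEGIN { sum = 0 ; }
     { sum += $2 ; }
     END { print sum ; }' awk_demo.txt       # prints: 43

rm -f awk_demo.txt
```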

Chapter 18: Miscellaneous Tools:

There are two types of miscellaneous tools:
1. Built-in shell commands: the shell can access them directly, without reading them from disk, so they have fast response times (eval, :, type).
2. External commands: these exist as binary programs on disk (expr, bc, find, xargs, remsh, sleep).

1. eval: tells the shell to process the command line a second time.
Syntax: eval any_unix_cmd
Eg: op='>op.out'
echo hello $op -- gives the output: hello >op.out
eval echo hello $op -- creates a file named op.out and prints the message hello into that file.
2. : (no-op): a complete shell command that does nothing but return a zero exit code. We can use it for infinite loops. Eg:
while :
do
stmts
done
3. type: gives the complete path of any command. Syntax: type command
Apart from the path, it also reports:
a. whether this is a keyword or reserved word
b. whether this is a shell built-in command
c. whether this is an alias; if the argument is an alias, it also gives the original command the alias points to.
4. sleep: pauses the process for the specified time. Syntax: sleep n stops the process for n seconds.
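The eval example above is easy to verify (the file name op.out is from the text):

```shell
mkdir -p eval_demo && cd eval_demo

op='>op.out'
echo hello $op          # redirection is NOT re-parsed: prints "hello >op.out"
eval echo hello $op     # the second pass parses '>' as redirection: creates op.out

cat op.out              # prints: hello

cd .. && rm -r eval_demo
```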
5. find: a power tool that lists all the files matching particular criteria.

Syntax: find start_dir options actions

a. find . -name fname -print (searches for the name fname in all directories starting from the present directory and prints the list of files that match; -print is the action here. Keep the file name in single quotes when you use wildcard chars like ? and *.)
b. find / -name fname -print (the starting position is the root directory)
c. find . -type f -name fname -print (searches for files which are regular files with the name fname and prints them)
The different types:
f regular file
d directory
l symbolic link
p named pipe
c character special file
b block special file
d. find . -mtime -5 -print (prints the files modified less than 5 days ago)
-n less than n days ago, n exactly n days, +n more than n days ago
e. find . -ctime -n -print (inode-modified time less than n days ago; the inode gets modified when the file is created, and later whenever the owner/group, permissions, or file size change)
-n less than n days ago, n exactly n days, +n more than n days ago
f. find . -atime (access time): same options as above.
g. find . -size +n -exec rm -f {} \; (size greater than n blocks; delete all those files. -exec executes a specific command on each file that matched the criteria)
-n less than n blocks, n exactly n blocks, +n more than n blocks
h. find . -size +n | xargs rm -f (xargs takes the output of find and sends it as arguments to rm; it works the same way as -exec but is more efficient)
Combining options: you can combine two options with -o for logical or.
Eg: find . \( -size -2000 -o -name fname \) -print
Negating: to act on the unmatched list.
Eg: find . ! \( -size -2000 -o -name fname \) -print

6. xargs: accepts a list of words from the standard input and provides them as arguments to another command.
Eg: cat filename | xargs rm -- we cannot pipe the output of cat to rm directly; we use xargs for that purpose.
cat filename | xargs -n 20 rm -- provides 20 arguments per command line, i.e. deletes 20 files at a time.
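A self-contained find/xargs sketch (all names are made up):

```shell
# Build a small tree to search.
mkdir -p find_demo/sub
touch find_demo/a.txt find_demo/b.log find_demo/sub/c.txt

# Quote the wildcard so the shell does not expand it before find runs.
find find_demo -type f -name '*.txt' -print     # a.txt and sub/c.txt

# Same kind of match, but hand the list to rm via xargs.
find find_demo -type f -name '*.log' -print | xargs rm -f
ls find_demo                                    # b.log is gone

rm -r find_demo
```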
7. expr: used for integer arithmetic.
Syntax: expr integer1 operator integer2
Eg: expr 3 + 4
+ addition
- subtraction
\* multiplication
/ division
There must be spaces between the operator and the operands, otherwise the command will not perform the calculation. We can use it in shell scripts like this:
cnt=`expr $cnt + 2`
This tool also gives the number of characters matched by a regular expression:
expr "$ABC" : '[0-9]*'
If part of the regular expression pattern is grouped in escaped parentheses, expr returns the portion of the string matched by the parenthesized part:
$ expr abcdef : '..\(..\)..'
cd
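expr in action, including the regular-expression form:

```shell
expr 3 + 4                     # prints: 7
expr 3 \* 4                    # * must be escaped from the shell: prints 12

cnt=5
cnt=`expr $cnt + 2`
echo $cnt                      # prints: 7

ABC=123abc
expr "$ABC" : '[0-9]*'         # number of leading digits matched: prints 3

expr abcdef : '..\(..\)..'     # the \(..\) group captures: prints cd
```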
8. bc: does calculations, and is not limited to integers.
Eg:
bc
scale=4
8/3
2.6666
quit
In the above example we set the scale to 4, so bc calculates to 4 digits after the decimal point.
Eg: x=`echo "scale=4; $price/$units" | bc`
It also allows conversion between different number bases. Eg:
bc
obase=16
ibase=8
400
100
quit
Here obase=16 means the output will be in hexadecimal, and ibase=8 means the input is in octal. Octal 400 is hexadecimal 100.
9. remsh: if several Unix systems are connected through a network, this command invokes a shell on a remote system, executes the command there, and displays the output on your system.
Syntax: remsh remotesys cmd
Eg: remsh acron who
remsh/rsh/remote/rcmd -- the command name changes with the Unix version.
cd source_dir
find . -print | cpio -ocva | remsh acron "(cd destdir ; cpio -icdum)"
will take the files from the source directory of your source system and copy them into the destination directory of the remote system.

Chapter 19: Dealing with Signals:

Signals are software interrupts sent to a program to indicate that an important event has occurred. The events vary from user requests to illegal memory access.
Different signals:
SIGHUP 1 hang-up detected on the controlling terminal or death of the controlling process
SIGINT 2 interrupt from the keyboard
SIGQUIT 3 quit from the keyboard
SIGKILL 9 kill signal
SIGALRM 14 alarm signal
SIGTERM 15 termination signal
To check the list of signals available on the system, use the commands below:
Linux: man 7 signal
Solaris: man -s 5 signal
HP-UX: man 5 signal
Another easy way to check is: kill -l
These signals are defined in the signal.h header file. We can send signals using
Syntax: kill -signalno pid, or kill -s signame pid
Default actions when a signal arrives:
1. Terminate the process
2. Dump core: creates a file called core which has the memory image of the stopped process
3. Stop the process
4. Continue a stopped process
5. Ignore the signal
If we do not specify any option with the kill command, by default it terminates the process. So kill pid and kill -s SIGTERM pid are equal.

Handling signals: the three ways to handle a signal are:
1. Do nothing and let the signal take its default action -- the easy way
2. Ignore the signal -- needs some code added to the script
3. Catch the signal and perform some signal-specific action -- needs a code block that runs the routine performing the signal-specific action

trap: catch signals with the trap command and perform the specific action.
Syntax: trap name signal_list (name is the command / function name)
If you do not specify a command, trap restores the default action for the signals.
Eg: trap cleanup 1 2 5
trap "rm -f $TMPF; exit 2" 1 2 3 15
To ignore signals: trap "" signal_list
To enable the signals again: trap signal_list

Chapter 20: Debugging:

Debugging is useful for 2 reasons: 1. syntax checking, 2. shell tracing.
Enabling debugging: execute the script from the command line:
/bin/sh -option script arg1 arg2 arg3
or include the option in the #!/bin/sh line of the script:
#!/bin/sh -option
Here option is any of the following:
-n read all the commands, but do not execute them
-v print each line of the script as it is read
-x display all the commands along with their arguments as they execute; also called shell tracing
Eg: /bin/sh -n script1.sh
or #!/bin/sh -n
Enable/disable debugging with set options: sometimes we don't want to debug the entire code, only a small part of it. We can enable or disable debugging for part of the code using the set options:
set -option to enable debugging
set +option to disable debugging
eg:
set -n
fun1
set +n
set - disables all the debugging modes that are enabled for a script.

Debugging hooks: there are two ways to enable/disable the debug modes:
1. Use the set command in the script.
2. Check the value of the environment variables DEBUG=true or TRACE=true (debugging hooks).
Eg: develop a function using the environment variables and use it as a debugging hook:
Debug() {
if [ "$DEBUG" = "true" ] ; then
if [ "$1" = "on" -o "$1" = "ON" ] ; then
set -x
else
set +x
fi
fi
}
To activate debugging, use: Debug on
To deactivate debugging, use either: Debug, or Debug off
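Both trap and shell tracing can be demonstrated in a few lines (the temp-file name is illustrative):

```shell
#!/bin/sh
# Catch SIGHUP/SIGINT/SIGTERM, clean up the temp file, and exit.
TMPF=/tmp/trap_demo.$$
trap 'rm -f "$TMPF"; echo "cleaned up"; exit 2' 1 2 15

date > "$TMPF"

# Shell tracing for just this region: while -x is on, each command
# is echoed to stderr prefixed with '+'.
set -x
wc -l < "$TMPF"
set +x

rm -f "$TMPF"
trap - 1 2 15        # restore the default signal actions
echo "done"
```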

Chapter 21: Problem solving with Functions:

Sometimes we reuse a few functions across scripts; in such cases, instead of copying the code into every script, it is a better idea to create a library of all such common functions and use the library wherever required.
Library: contains only functions.
Eg: creating a library messages.sh as below:
#!/bin/sh
echo_error() { echo "ERROR:" $@ >&2 ; }
echo_warning() { echo "WARNING:" $@ >&2 ; }

Main code / script: will have both functions and normal commands.

To include the functions from a library (messages.sh) we have to include that library in the script using the period operator.

Eg:

#!/bin/ksh
. $HOME/lib/sh/messages.sh

MSG="hello"
echo_error $MSG    -- function from the messages.sh library

Chapter 22: Problem solving with shell scripts:

File system: a file system is used by UNIX to store files and directories. Usually a file system corresponds to a hard drive or a hard drive partition.
Tar file: a tape archive file created by the tar command. A tar file can contain both files and directories, making it similar to a zip file, but it does not compress the files.
The mv command will not work to move a file from one file system to another. To do that, follow these steps:
1. Remove the dest dir (rm -rf destdir)
2. Copy all files and dirs recursively from source to dest (cp -r source dest)
3. Remove the source dir (rm -rf source)
But this might not always work properly, as links are not copied properly; the original files are copied instead. The other issue is that the owner, group, and permissions also get changed with a few cp versions. To make sure the files are copied exactly as they are from source to dest on a different file system, use tar:
1. Create a tar file of all the files in the source dir
2. Change to the dest dir
3. Extract the tar file in dest
4. Remove the source
But this time the tar file is created on the hard disk, and deleting the tar file is an additional step. For that we have an additional feature: tar can write to STDOUT and take input from STDIN through a pipe.
Eg: to create a tar file, use the following command:
tar -cpf - source
Here source is the pathname of a directory. The options tell tar to create a tar file, and the - indicates that the tar file should be written to STDOUT.
To extract a tar file from STDIN, use the command:
tar -xpf -
The final command to achieve the result is:
tar -cpf - source | (cd dest ; tar -xpf -)

Chapter 23: Scripting for Portability:

There are two main families of Unix systems:
1. BSD: Berkeley Software Distribution -- derived from the AT&T Bell Labs code by the University of California, Berkeley.
2. System V: also called SysV -- the later version of Unix from AT&T.
The syntax of a few commands differs between the Unix families. To know which system yours is, use the uname command.
uname: gives the details of your Unix system.
Syntax: uname -option
The different options are:
-a all info
-r release
-n hostname
-s operating system
-m hardware type
Hardware / Description:
9000/xxx Hewlett-Packard 9000 series workstation. Some common values of xxx are 700, 712, 715, and 750.
i386 Intel 386-, 486-, Pentium-, or Pentium II-based workstation.
sun4x A Sun Microsystems workstation. Some common values of x are c (SparcStation 1 and 2), m (SparcStation 10 and 20), and u (UltraSparc).
Alpha A workstation based on the Digital Equipment Corporation ALPHA microprocessor.

Use the methods below to write scripts portable to all Unix systems:
1. Conditional execution -- use conditions to execute the proper commands based on the system type.
2. Abstraction -- a technique that hides the differences between the versions of UNIX inside shell functions. The overall flow of the shell script is not affected; when a function is called, it decides which commands to execute.

Chapter 24: Shell programming FAQs: read all the FAQs in Chapter 24.
