Unit-II DevOps - Shell Scripting
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/envvar
Environment Variables
• set | more – shows all the environment variables that
exist
• Change
– PS1='\u>'
– PATH=$PATH:/home/pe16132/bin1
– IFS=':'
– IFS is Internal Field Separator
• Sample
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/envvar
$* and $@
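The slide above carries only its title in this extract; a minimal sketch of the difference (argument values are illustrative):

```shell
# Two positional parameters, the first containing a space.
set -- "one two" three

printf '[%s]\n' "$*"   # "$*" joins all parameters into ONE word
printf '[%s]\n' "$@"   # "$@" keeps each parameter as its own word
```

With `"$*"` the output is a single bracketed string `[one two three]`; with `"$@"` each parameter is printed separately, so word splitting of the original arguments is preserved.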
Format            Meaning
read answer       Reads a line from stdin into the variable answer
read first last   Reads a line from stdin, putting the first word in first and the rest of the line into last
read              Reads a line from stdin and assigns it to REPLY
read -a arrayname Reads a list of words into an array called arrayname
read -p prompt    Prints a prompt, waits for input, and stores the input in REPLY
read -r line      Allows the input to contain a backslash
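A runnable sketch of the `read first last` row from the table (the input line is illustrative):

```shell
# The first word lands in $first, everything after it in $last.
printf 'John Quincy Adams\n' | {
  read first last
  echo "first=$first"
  echo "last=$last"
}
```

This prints `first=John` and `last=Quincy Adams`.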
Shortcut to Display Lots of Words
• Here file:
– You give it the end token at the start
– Type a list
– Type the end token to end
– cat << Here
words
Here
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/nosy
Numbers
• Assumes variables are strings
• Math operations on strings are essentially
ignored
– normalvar=1
– 3+$normalvar yields 3+1
• Must force consideration as a number
– Create the variable with declare -i
– Surround your mathematical statement with (( ))
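A short sketch of the two bullets above (`declare -i` is a bash builtin):

```shell
normalvar=1
echo 3+$normalvar     # string context: prints 3+1, no math happens

declare -i num=1      # bash: mark the variable as an integer
echo $(( 3 + num ))   # arithmetic context inside (( )): prints 4
```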
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/numbers
Different Base Nums: Octal, Hex
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/ifscript
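The slide title mentions octal and hex but the extract has no body; a sketch of bash's base notation inside `$(( ))`:

```shell
echo $(( 0x1F ))    # hexadecimal          -> 31
echo $(( 017 ))     # leading zero = octal -> 15
echo $(( 2#1010 ))  # base#digits (bash)   -> 10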
Using test For Numbers And Strings – Old
Format
if test expression
then
command
fi
or
if [ string/numeric expression ]
then
command
fi
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/ifscript
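A runnable sketch of both old-format spellings above (the value of count is illustrative):

```shell
count=5
if test "$count" -gt 3
then
  echo "count is greater than 3"
fi

if [ "$count" -eq 5 ]    # [ is a synonym for test; note the spaces
then
  echo "count is 5"
fi
```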
Using test For Strings – New Format
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/ifscript
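The slide body is only a download link in this extract; a minimal sketch of the new `[[ ]]` format (a bash/ksh construct, here with an illustrative variable):

```shell
name="devops"
if [[ $name == dev* ]]    # [[ ]] allows pattern matching on strings
then
  echo "name starts with dev"
fi
```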
Testing Strings vs Numbers
Comparing numbers
• remember (( ))
• -eq , -ne, -gt, -ge, -lt, -le
Comparing strings
• Remember [[ ]]
• Remember space after [
• =
• !=
• Unary string tests
– [ string ] (not null)
– -z (0 length)
– -n (some length)
– -l returns the length of the string
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/ifscriptnum
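A sketch tying the unary string tests and the numeric `(( ))` comparison together (values are illustrative):

```shell
str=""
[ -z "$str" ] && echo "str is empty"

str="hello"
[ -n "$str" ] && echo "str is non-empty"

a=7 b=3
(( a > b )) && echo "a is bigger"   # (( )) for numeric comparison (bash)
```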
Echo
The echo command is much appreciated when trying to debug scripts.
Syntax: echo {options} string
Options: -e : expand \ (backslash) special characters
         -n : do not output a newline at the end.
String can be a "weakly quoted" or a 'strongly quoted' string. In weakly quoted strings, references to variables are replaced by the values of those variables before the output. In addition to the variables, some special backslash-escaped symbols are expanded during the output. If such expansions are required, the -e option must be used.
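A sketch of the quoting and option behaviour described above (`-e` and `-n` as implemented by the bash builtin echo):

```shell
name="world"
echo "hello $name"       # weak quotes: $name is expanded
echo 'hello $name'       # strong quotes: printed literally
echo -e "line1\nline2"   # -e expands backslash escapes
echo -n "no trailing newline"
```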
test Command Operators – String Test
Test Operator Tests True if
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/forscript
while Command
• The while command evaluates the command following it and, if its exit status is 0, the commands in the body of the loop are executed.
• The loop continues until the exit status is nonzero.
• Format:
while command
do
command(s)
done
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/numm
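A minimal runnable instance of the format above:

```shell
i=1
while [ "$i" -le 3 ]    # loop while the test command exits 0
do
  echo "pass $i"
  i=$(( i + 1 ))
done
```

This prints `pass 1`, `pass 2`, `pass 3`.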
The until Command
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/runit
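The slide body is only a download link in this extract; a sketch of until, which is while with the test inverted:

```shell
i=1
until [ "$i" -gt 3 ]    # loop body runs while the test FAILS
do
  echo "count $i"
  i=$(( i + 1 ))
done
```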
Commands Used With select
• select will automatically repeat and has no mechanism of its own to terminate. For this reason, the exit command is used to terminate.
• We use break to force an immediate exit from
a loop (but not the program).
• We use shift to shift the parameter list one
or more places to the left, removing the
displaced parameters.
wget http://home.adelphi.edu/~pe16132/csc271/note/scripts/dater
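select is interactive, so this sketch feeds the menu choices "1" then "3" on stdin; the numbered menu itself goes to stderr (menu items are illustrative):

```shell
printf '1\n3\n' | bash -c '
  select fruit in apple banana quit
  do
    if [ "$fruit" = quit ]; then break; fi   # break exits the loop
    echo "you picked $fruit"
  done
' 2>/dev/null
```

This prints `you picked apple`: choice 1 selects apple, choice 3 selects quit and break ends the loop.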
SELECT for a menu
• Variables
• Decision - If / case / select (embedded while)
– Numbers vs Strings
– Unary tests
– File tests
• Loop – for/ while / until
– File IO
• Functions
• Trap
I/O and Redirection
Standard I/O
• Standard Output (stdout)
– default place to which programs write
• Standard Input (stdin)
– default place from which programs read
• Standard Error (stderr)
– default place where errors are reported
• To demonstrate -- cat
– Echoes everything you typed in with an <enter>
– Quits when you press Ctrl-d at a new line -- (EOF)
Redirecting Standard Output
• cat file1 file2 > file3
– concatenates file1 and file2 into file3
– file3 is created if not there
• cat file1 file2 >! file3
– file3 is clobbered if there
• cat file1 file2 >> file3
– file3 is created if not there
– file3 is appended to if it is there
• cat > file3
– file3 is created from whatever user provides from
standard input
Redirecting Standard Error
• Generally direct standard output and standard
error to the same place:
obelix[1] > cat myfile >& yourfile
• If myfile exists, it is copied into yourfile
• If myfile does not exist, the error message
cat: myfile: No such file or directory is copied into yourfile
• In tcsh, to write standard output and standard
error into different files:
obelix[2] > (cat myfile > yourfile) >& yourerrorfile
• In sh (for shell scripts), standard error is
redirected differently
– cat myfile > yourfile 2> yourerrorfile
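A runnable sketch of the sh-style form above, using a temporary directory (the file names yourfile/yourerrorfile come from the slide; myfile deliberately does not exist):

```shell
tmp=$(mktemp -d)
cat "$tmp/myfile" > "$tmp/yourfile" 2> "$tmp/yourerrorfile" || true
# cat fails, so yourfile is empty and the message landed in yourerrorfile:
grep -c "No such file" "$tmp/yourerrorfile"
rm -r "$tmp"
```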
Redirecting Standard Input
• obelix[1] > cat < oldfile > newfile
• A more useful example:
– obelix[2] > tr string1 string2
• Read from standard input.
• Character n of string1 translated to character n
of string2.
• Results written to standard output.
– Example of use:
obelix[3] > tr aeoiu eoiua
obelix[4] > tr a-z A-Z < file1 > file2
/dev/null
• /dev/null
– A virtual file that is always empty.
– Copy things to here and they disappear.
• cp myfile /dev/null
• mv myfile /dev/null
– Copy from here and get an empty file.
• cp /dev/null myfile
– Redirect error messages to this file
• (ls -l > recordfile) >& /dev/null
• Basically, all error messages are discarded.
Filters
• grep patternstr:
– Read stdin and write lines containing patternstr to stdout
obelix[1] > grep "unix is easy" < myfile1 > myfile2
– Write all lines of myfile1 containing phrase unix is easy to
myfile2
• wc:
– Count the number of chars/words/lines on stdin
– Write the resulting statistics to stdout
• sort:
– Sort all the input lines in alphabetical order and write to the
standard output.
Pipes
• The pipe:
– Connects stdout of one program with stdin of another
– General form:
command1 | command2
– stdout of command1 used as stdin for command2
– Example:
obelix[1] > cat readme.txt | grep unix | wc -l
• An alternative way (not efficient) is to:
obelix[2] > grep unix < readme.txt > tmp
obelix[3] > wc -l < tmp
• Can also pipe stderr: command1 |& command2
Redirecting and Pipes
• Note: The name of a command always comes first
on the line.
• There may be a tendency to say:
obelix[1] > readme.txt > grep unix | wc -l
– This is WRONG!!!
– Your shell will go looking for a program named
readme.txt
• To do it correctly, many alternatives!
obelix[1] > cat readme.txt | grep unix | wc -l
obelix[2] > grep unix < readme.txt | wc -l
obelix[3] > grep unix readme.txt | wc -l
obelix[4] > grep -c unix readme.txt
The ‘grep’ Command
What is grep?
• ‘-i’
– Ignores case.
• ‘-v’
– Inverts the matching. When used, grep will print
out lines that do not match the pattern
• ‘-e pattern’
– Pattern is the pattern. This can be used to specify
multiple patterns, or if the pattern starts with a ‘-’.
A line only has to contain one of the patterns to be
matched.
Examples using ‘-i’, ’-v’, and ‘-e’
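The example slide survives only as a title here; a sketch with illustrative data in a temporary file:

```shell
tmp=$(mktemp)
printf 'Unix is easy\nunix is fun\nawk is neat\n' > "$tmp"

grep -i UNIX "$tmp"          # -i: case-insensitive, matches both unix lines
grep -v unix "$tmp"          # -v: lines NOT containing lowercase "unix"
grep -e unix -e awk "$tmp"   # -e twice: a line matching either pattern
rm "$tmp"
```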
• ‘-n’
– Prefixes each line of output with the line number
from the input file the match was found on
• ‘-H’
– Prefix each line of output with the input file name
that the match was found in
• ‘-T’
– Makes sure that the actual line content (or
whatever content comes after the ‘-T’) lands on a
tab stop
Examples using ‘-H’, ’-n’, and ‘-T’
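Again only the title of the example slide survives; a sketch of the prefix options (input data is illustrative; -T is a GNU grep option):

```shell
tmp=$(mktemp)
printf 'one\ntwo unix\nthree\n' > "$tmp"

grep -n unix "$tmp"     # line-number prefix: 2:two unix
grep -Hn unix "$tmp"    # file-name prefix as well
rm "$tmp"
```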
• ‘-A num’
– Print num lines of trailing context after matching
lines
• ‘-B num’
– Print num lines of leading context before matching
lines
• ‘-C num’ or ‘-num’
– Print num lines of leading and trailing output
context
Examples using ‘-A’, ’-B’, and ‘-C’
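A sketch of the context options with illustrative data:

```shell
tmp=$(mktemp)
printf 'a\nb\nMATCH\nc\nd\n' > "$tmp"

grep -A1 MATCH "$tmp"   # match plus 1 line after:  MATCH, c
grep -B1 MATCH "$tmp"   # match plus 1 line before: b, MATCH
grep -C1 MATCH "$tmp"   # 1 line on either side:    b, MATCH, c
rm "$tmp"
```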
• ‘-G’
– Interpret pattern as basic regular expression (BRE).
This is the default.
• ‘-E’
– Interpret pattern as extended regular expression
(ERE)
• When using basic regular expressions, some special characters (like '?' in the previous example) lose their special meaning and must be preceded by a '\', the escape character.
BRE and ERE Difference
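A sketch of the difference using '?' (behaviour as in GNU grep; in plain BRE the '?' is a literal character):

```shell
tmp=$(mktemp)
printf 'color\ncolour\n' > "$tmp"

grep -E 'colou?r' "$tmp"            # ERE: ? makes the u optional -> both lines
grep -G 'colou?r' "$tmp" || true    # BRE: ? is literal -> no match (exit 1)
rm "$tmp"
```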
AWK Programming
Introduction
AWK is a great language. Awk is geared towards text processing and report generation, yet offers many well-designed features that allow for serious programming. And, unlike some languages, awk's syntax is familiar and borrows some of the best parts of languages like C, python, and bash (although, technically, awk was created before both python and bash). Awk is one of those languages that, once learned, will become a key part of one's strategic coding arsenal.
Awk stands for the names of its authors: "Aho, Weinberger, and Kernighan".
SYNTAX
[Note: Either search pattern or action(s) are optional, but not both.]
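The syntax box itself is missing from this extract; an awk program is a list of `pattern { action }` rules, and either part may be omitted (data below is illustrative):

```shell
# pattern and action: for lines matching /hello/, print field 2
printf 'hello world\nbye world\n' | awk '/hello/ { print $2 }'

# pattern only: the default action prints the whole matching line
printf 'hello world\nbye world\n' | awk '/hello/'

# action only: runs on every input line
printf 'hello world\nbye world\n' | awk '{ print $1 }'
```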
THE first awk
Explanation
• Space is considered the default OFS, i.e. Output Field Separator.
Note/Remarks
• The following command won't retrieve the kth column of a CSV file, because awk splits on whitespace by default:
awk '{print $k}' input_csv_file
Printing Multiple Fields of Specially Delimited Files
Purpose
• Printing multiple columns of files with delimiters other than space, say comma. Files having comma as the delimiter are called CSV files.
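A sketch: -F sets the input field separator, so commas split CSV records correctly (the records below are illustrative):

```shell
printf '1,alice,eng\n2,bob,ops\n' |
awk -F',' '{ print $2, $3 }'
```

This prints `alice eng` and `bob ops`.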
END BLOCK
• Awk executes this block after all lines in the input file have been processed. Typically, the END block is used to perform final calculations or print summaries that should appear at the end of the output stream.
Regular expressions & blocks
NR – the number of the current input record (line)
FS – the input field separator (whitespace by default)
OFS – the output field separator (a single space by default)
Awk also allows the use of the boolean operators "||" (for "logical or") and "&&" (for "logical and") to allow the creation of more complex boolean expressions:
( $1 == "foo" ) && ( $2 == "bar" ) { print }
This example will print only those lines where field one equals foo and field two
equals bar.
NUMERIC variables
(find number of blank lines in a file)
So far, we've either printed strings, the entire line, or specific fields. However,
awk also allows us to perform both integer and floating point math. Using
mathematical expressions, it's very easy to write a script that counts the
number of blank lines in a file. Here's one that does just that:
BEGIN { x=0 }
/^$/ { x=x+1 }
END { print "I found " x " blank lines. :)" }
In the BEGIN block, we initialize our integer variable x to zero. Then, each time
awk encounters a blank line, awk will execute the x=x+1 statement,
incrementing x. After all the lines have been processed, the END block will
execute, and awk will print out a final summary, specifying the number of blank
lines it found.
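The blank-line counter above can be run as a one-liner; the sample input has three blank lines:

```shell
printf 'a\n\nb\n\n\n' |
awk 'BEGIN { x=0 } /^$/ { x=x+1 } END { print "I found " x " blank lines. :)" }'
```

This prints `I found 3 blank lines. :)`.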
Plenty of operators
Another nice thing about awk is its full package of mathematical operators. In
addition to standard addition, subtraction, multiplication, and division, awk
allows us to use the exponent operator "^", the modulo (remainder) operator
"%", and a bunch of other handy assignment operators borrowed from C.
These include pre- and post-increment/decrement ( i++, --foo ),
add/sub/mult/div assign operators ( a+=3, b*=2, c/=2.2, d-=6.2 ). But that's not
all -- we also get handy modulo/exponent assign ops as well ( a^=2, b%=4 ).
Looping: Calculate squares of numbers from 1 to 10
• awk 'BEGIN { for(i=1;i<=10;i++) print "square of", i, "is", i*i; }'
String functions in awk
awk -F"," 'BEGIN {print "EmpID\tQualification\tInstitute";}
{print $1,"\t",$2,"\t",$3;}
END {print "REPORT GENERATED.\n";}' list1.txt
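The heading above mentions string functions, but no examples survive in this extract; a sketch of a few of awk's built-in string functions:

```shell
awk 'BEGIN {
  s = "devops"
  print length(s)         # number of characters: 6
  print toupper(s)        # DEVOPS
  print substr(s, 1, 3)   # 3 characters starting at position 1: dev
  print index(s, "ops")   # position of the substring: 4
}'
```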
THANK YOU