Mastering Linux Shell Scripting - Sample Chapter

Mastering Linux Shell Scripting
Master the complexities of Bash shell scripting and unlock the power of shell for your enterprise
Andrew Mallett
Preface
Mastering Linux Shell Scripting will become your bible and handbook for creating and
editing bash shell scripts in Linux, OS X, or Unix. Starting with the fundamentals, we
quickly move on to helping you create useful scripts with practical examples. In this
way, your learning becomes effective and quick. With each chapter, we provide
explanations of the code along with code examples, so that more than a learning book,
this becomes a ready reference you can turn to whenever you need to understand how to
program a specific task.
Chapter 5, Alternative Syntax, tells us how we can abbreviate the test command to just
a single [; we can also use [[ and ((, depending on our needs.
Chapter 6, Iterating with Loops, covers how loops are also conditional statements.
We can repeat a block of code while a condition is true or false. In this way, using
for, while, or until we can have the script complete the repetitive code sequences.
Chapter 7, Creating Building Blocks with Functions, covers how functions can
encapsulate the code that we need to repeat within the script. This can help
with readability and make a script easier to maintain.
Chapter 8, Introducing sed, the stream editor, tells us how sed can be used to edit files
dynamically and how to implement it within scripts. In this chapter, we look at how to
use and work with sed.
Chapter 9, Automating Apache Virtual Hosts, covers practical recipes that we can
take away as we create a script to set up virtual hosts on an Apache HTTPD
server. We use sed within the scripts to edit the template used to define virtual hosts.
Chapter 10, Awk Fundamentals, looks at how we can start to process text data from the
command line using awk, another very powerful tool in Linux.
Chapter 11, Summarizing Logs with Awk, tells us about the first practical example we
look at with awk, allowing us to process log files on the web server. It also looks at
how to report the IP address that has accessed the server most often, as well as how
many errors occur and of which type.
Chapter 12, A Better lastlog with Awk, looks at more examples that we can use in awk
to filter and format data provided by the lastlog command. It drills down to the
specific information that we want and removes information we do not need.
Chapter 13, Using Perl as a Bash Scripting Alternative, introduces the Perl scripting
language and the advantages that it can offer. We are not restricted to just using
bash; we also have Perl as a scripting language.
Chapter 14, Using Python as a Bash Scripting Alternative, introduces you to Python
and the Zen of Python that will help you with all programming languages. Like Perl,
Python is a scripting language that can extend the functionality of your scripts.
Bash vulnerabilities
For this book, I will be working entirely on a Raspberry Pi 2 running Raspbian, a
Linux distribution similar to Debian and Ubuntu; although, in reality, the operating
system you choose to work with is immaterial, as is the version of bash.
The bash version I am using is 4.2.37(1). If you are using the OS X operating system,
the default command-line environment is bash.
To return the operating system being used, type the following command if it
is installed:
$ lsb_release -a
The easiest way to determine the version of bash that you are using is to print the
value of a variable. The following command will display your bash version:
$ echo $BASH_VERSION
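For example, on the Raspbian system used for this book, this would print something similar to the following (the exact value will vary with your distribution and patch level):
4.2.37(1)-release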
In 2014, there was a well-publicized bug within bash that had been there for many
years: the shell-shock bug. If your system is kept up to date, then it is not likely to
be an issue, but it is worth checking. The bug allows malicious code to be executed
from within a malformed function. As a standard user, you can run the following
code to test for the vulnerabilities on your system. This code comes from Red Hat
and is not malicious, but if you are unsure then please seek advice.
The following is the code from Red Hat to test for the vulnerability:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
If your system is free from this first vulnerability, the output should be as shown in
the following screenshot:
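On a patched system, neither injected function body is executed and only the final command runs, so the output should simply contain the word test and must not contain the word vulnerable; depending on the exact patch level, you may also see a harmless warning about an ignored function definition:
test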
To test for the last vulnerability from this bug, we can use the following test, which is
again from Red Hat:
$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
The output from a patched version of bash should look like the following screenshot:
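On a patched system, the result is expected to be along these lines; the /tmp/echo file is never created, so cat reports that it does not exist (again, a warning about an ignored function definition may also appear, depending on the patch level):
date
cat: /tmp/echo: No such file or directory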
If the output from either of these command lines is different, then your system may
be vulnerable to shell-shock and I would update bash or at least take further advice
from a security professional.
Command type
For example, if we type and enter ls to list files, it is reasonable to think that
we are running that command. It is possible, but often we will be running an alias
instead. Aliases exist in memory as a shortcut to commands, or to commands with options;
these aliases are used before we even check for the file. The bash shell built-in
command type can come to our aid here. The type command will display the type
of command for a given word entered at the command line. The types of command are
listed as follows:
Alias
Function
Shell built in
Keyword
File
This list also represents the order in which they are searched. As we can
see, it is not until the very end that we search for the executable file ls.
We can extend this further to display all the matches for the given command:
$ type -a ls
ls is aliased to `ls --color=auto'
ls is /bin/ls
If we need just the type in the output, we can use the -t option. This is useful when
we need to test the command type from within a script and only need the type to be
returned. It excludes the superfluous information, so the result is easy to evaluate
programmatically. Consider the following command and output:
$ type -t ls
alias
The output is clear and simple and just what a computer or script requires.
The built-in type can also be used to identify shell keywords such as if, case,
function, and so on. The following command shows type being used against multiple
arguments and types:
$ type ls quote pwd do id
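The output will vary between systems; on a Debian-based distribution where quote is provided as a shell function, it may look something like the following:
ls is aliased to `ls --color=auto'
quote is a function
(the definition of the quote function is printed here)
pwd is a shell builtin
do is a shell keyword
id is /usr/bin/id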
You can also see that when type encounters a function, the function definition
is printed.
Command PATH
Linux will check for executables in the PATH environment variable only when the full or
relative path to the program is not supplied. In general, the current directory is not
searched unless it is in the PATH. It is possible to include our current directory within
the PATH by adding the directory to the PATH variable. This is shown in the following
code example:
$ export PATH=$PATH:.
This appends the current directory to the value of the PATH variable; each item in the
PATH is separated using a colon. Now, your PATH is updated to include the current
working directory and, each time you change directories, the scripts can be executed
easily. In general, organizing scripts into a structured directory hierarchy is probably
a great idea. Consider creating a subdirectory called bin within your home directory
and adding the scripts to that folder. Adding $HOME/bin to your PATH variable will
enable you to run the scripts by name, without supplying the file path.
The following command-line list will only create the directory, if it does not
already exist:
$ test -d $HOME/bin || mkdir $HOME/bin
Although the above command-line list is not strictly necessary, it does show that
scripting in bash is not limited to the actual script and we can use conditional
statements and other syntax directly at the command line. From our viewpoint, we
know that the preceding command will work whether you have the bin directory
or not. The use of the $HOME variable ensures that the command will work without
considering your current file system context.
As we work through the book, we will add scripts into the $HOME/bin directory so
that they can be executed regardless of our working directory.
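To make this arrangement persistent, we could add lines such as the following to the $HOME/.bashrc file; this is a minimal sketch, assuming a standard bash setup, and the exact startup file may differ on your system:
# Create a personal bin directory if it does not already exist
test -d $HOME/bin || mkdir $HOME/bin
# Append it to the PATH so that scripts can be run by name alone
export PATH=$PATH:$HOME/bin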
Configuring vim
Editing at the command line is often a must and is a part of my everyday life. Setting
up common options that make life easier in the editor gives us the reliability and
consistency we need, a little like scripting itself. We will set some useful options in
the vi or vim editor file, $HOME/.vimrc.
The options we set are detailed in the following list:
nohlsearch: Does not highlight the words that we have searched for
autoindent: We indent our code often; this allows us to return to the last
indent level rather than the start of a new line on each carriage return
expandtab: Converts tabs to spaces, which is useful when the file moves to
other systems
syntax on: Note that this does not use the set command and is used to turn
on syntax highlighting
When these options are set, the $HOME/.vimrc file should look similar to this:
set showmode nohlsearch
set autoindent tabstop=4
set expandtab
syntax on
Configuring nano
The nano text editor is increasing in importance and it is the default editor in many
systems. Personally, I don't like its navigation, or rather the lack of navigation features
that it has. It can be customized in the same way as vim. This time, we will edit the
$HOME/.nanorc file. Your edited file should look something like the following:
set autoindent
set tabsize 4
include /usr/share/nano/sh.nanorc
Configuring gedit
Graphical editors, such as gedit, can be configured using the preferences menu and
are pretty straightforward.
Setting the tab width to 4 spaces and enabling the conversion of tabs to spaces can be
done using the Preferences | Editor tab, as shown in the following screenshot:
Another very useful feature is found on the Preferences | Plugins tab. Here, we can
enable the Snippets plugin that can be used to insert code samples. This is shown in
the following screenshot:
For the rest of the book, we will be working on the command line and in vim; feel
free to use the editor that you work with best. We have now laid the foundations to
create good scripts, and although whitespace, tabs, and spaces are not significant in
bash scripts, a well-laid-out file with consistent spacing is easy to read. When we look
at Python later in the book, you will realize that in some languages the whitespace is
significant to the language, so it is better to adopt good habits early.
Hello World!
As you know, it is almost obligatory to begin with a hello world script and we will
not disappoint as far as this is concerned. We will begin by creating a new script
$HOME/bin/hello1.sh. The contents of the file should read as in the following
screenshot:
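Based on the line-by-line explanation that follows, the three lines of $HOME/bin/hello1.sh should be:
#!/bin/bash
echo "Hello World"
exit 0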
I am hoping that you haven't struggled with this too much; it is just three lines, after
all. I encourage you to run through the examples as you read, to really help instill the
information with good hands-on practice.
#!/bin/bash: Normally, this is always the first line of the script and is
known as the shebang. The shebang starts with a comment, but the system
still uses this line. A comment in a shell script starts with the # symbol. The shebang
tells the system which interpreter to use to execute the script. We use bash for
shell scripts, and we may use PHP or Perl for other scripts, as required. If we
do not add this line, then the commands will be run within the current shell;
this may cause issues if we are running another shell.
echo "Hello World": The echo command will be picked up in a built-in
shell and can be used to write a standard output, STDOUT, this defaults to
exit 0: The exit command is a shell built-in and is used to leave or exit the
script. The exit code is supplied as an integer argument. A value of anything
other than 0 will indicate some type of error in the script's execution.
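At this point, the script is not yet executable, so one way to run it is to hand it to bash directly:
$ bash $HOME/bin/hello1.sh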
We should be rewarded with the Hello World text being displayed back on our
screens. This is not a long-term solution, as we need to have the script in the $HOME/
bin directory, specifically, to make the running of the script easy from any location
without typing the full path. We need to add in the execute permissions as shown in
the following code:
$ chmod +x $HOME/bin/hello1.sh
We should now be able to run the script simply, as shown in the following screenshot:
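Assuming $HOME/bin is in your PATH, running the script by name should produce something like the following:
$ hello1.sh
Hello World
The exit status of one command can be used to decide whether a second command runs, by chaining them into a command-line list with the || or && operators, for example:
$ command1 || command2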
In the preceding example, command2 is executed only if command1 fails in some way.
To be specific, command2 will run if command1 exits with a status code other than 0.
Similarly, in the following extract:
$ command1 && command2
We will only execute command2 if command1 succeeds and issues an exit code of 0.
To read the exit code from our script explicitly, we can view the $? variable, as shown
in the following example:
$ hello1.sh
$ echo $?
The expected output is 0, as this is what we have added to the last line of the file and
there is precious little else that can go wrong causing us to fail in reaching that line.
We can use the type command to check that the name we have chosen for the script is
unique and does not clash with an existing command:
$ type -a hello1.sh
Hello Dolly!
It is possible that we might need a little more substance in the script than a simple
fixed message. Static message content does have its place but we can make this script
much more useful by building some flexibility.
In this chapter, we will look at positional parameters or arguments that we can
supply to the script and in the next chapter we will see how we can make the script
interactive and also prompt the user for input at runtime.
If we pass an argument to the script, for example, $ hello1.sh fred, the script will
still run and will not produce an error. The output will not change either and will
still print Hello World. The arguments that a script can reference are identified
as follows:
Argument Identifier   Description
$0                    The name of the script itself; often used in usage statements.
$1                    A positional argument; this is the first argument passed to the script.
${10}                 Where two or more digits are needed to represent the argument position, braces must enclose the number; this refers to the tenth argument.
$#                    The argument count; especially useful when we need to check how many arguments have been supplied.
$*                    Refers to all arguments.
For the script to make use of the argument, we can change the script content a
little. Let's first copy the script, add in the execute permissions, and then edit the
new hello2.sh:
$ cp $HOME/bin/hello1.sh $HOME/bin/hello2.sh
$ chmod +x $HOME/bin/hello2.sh
We need to edit the hello2.sh file to make use of the argument as it is passed at the
command line. The following screenshot shows the simplest use of command-line
arguments, allowing us to have a custom message.
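A version of hello2.sh consistent with the examples that follow would contain the following lines; only the echo line differs from hello1.sh:
#!/bin/bash
echo "Hello $1"
exit 0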
Run the script now; we can provide an argument, as shown in the following:
$ hello2.sh fred
The output should now say Hello fred. If we do not provide an argument then
the variable will be empty and will just print Hello. You can refer to the following
screenshot to see the execution argument and output:
If we adjust the script to use $*, all the arguments will print. We will see Hello and
then a list of all the supplied arguments. If we edit the script and replace the echo
line as follows:
echo "Hello $*"
We can then execute the script, passing several arguments, for example:
$ hello2.sh betty barney
The output would be Hello betty barney, printed on a single line.
If we want to print Hello <name>, each on separate lines, we will need to wait a little
until we cover the looping structures. A for loop will work well to achieve this.
As we have seen, using the double quotes echo "Hello $1" will result in Hello fred
or whatever the supplied value is. Whereas, if we use single quotes echo 'Hello
$1' the printed output on the screen will be Hello $1, where we see the variable
name and not its value.
The idea of the quotes is to protect special characters, such as the space between
the two words; both types of quote protect the space from being interpreted, as the
space is normally read by the shell as a default field separator. With single quotes,
all characters are read by the shell as literals with no special meaning. This has the
knock-on effect of the $ symbol printing in its literal form rather than allowing bash
to expand its value; the bash shell is prevented from expanding the variable's value,
as it is protected by the single quotes.
This is where the double quote comes to our rescue. The double quote will protect all
the characters except the $, allowing bash to expand the stored value.
If we ever need to use a literal $ within the quoted string along with variables that
need to be expanded; we can use double quotes but escape the desired $ with the
backslash (\). For example, echo "$USER earns \$4" would print as Fred earns $4
if the current user was Fred.
Try the following examples at the command line using all quoting mechanisms. Feel
free to up your hourly rate as required:
$ echo "$USER earns $4"
$ echo '$USER earns $4'
$ echo "$USER earns \$4"
Edit your script so that it reads as the following complete code block for $HOME/bin/
hello2.sh:
#!/bin/bash
echo "You are using $0"
echo "Hello $*"
exit 0
If we prefer not to print the path and only want the name of the script to show, we
can use the basename command, which extracts the name from the path. We adjust
the script so that the second line now reads as follows:
echo "You are using $(basename $0)"
The $(...) syntax is used to evaluate the output of the inner command. We first run
basename $0 and feed the result into an unnamed variable represented by the $.
The new output will appear as seen in the following screenshot:
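With that change in place, running the script should produce output similar to the following:
$ hello2.sh fred
You are using hello2.sh
Hello fred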
It is possible to achieve the same result using back quotes; this is less easy to read,
but we mention it as you might have to understand and modify scripts that have
been written by others. The alternative to the $(...) syntax is shown in the
following example:
echo "You are using `basename $0`"
Please note that the characters used are back quotes and NOT single quotes. On UK
and US keyboards, these are found in the top-left section next to the number 1 key.
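Debugging your scripts
The simplest way to debug a script is to run it through bash with options that show what the shell is doing. The -v option displays each line of the script verbosely as it is read and evaluated; a minimal example, using the hello2.sh script from earlier, would be:
$ bash -v $HOME/bin/hello2.sh fred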
This is especially useful in this example as we can see how each element of the
embedded basename command is processed. The first step is removing the quotes
and then the parentheses. Take a look at the following output:
More commonly used is the -x option, which displays the commands as they get
executed. It's useful to know the decision branch that has been chosen by the script.
The following shows this in use:
$ bash -x $HOME/bin/hello2.sh fred
We again see that the basename is evaluated first, but we do not see the more
detailed steps involved in running that command. The screenshot that follows
captures the command and output:
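On the author's Raspberry Pi, the trace would look something like the following (assuming the script resides in /home/pi/bin; the path on your system will differ):
$ bash -x $HOME/bin/hello2.sh fred
++ basename /home/pi/bin/hello2.sh
+ echo 'You are using hello2.sh'
You are using hello2.sh
+ echo 'Hello fred'
Hello fred
+ exit 0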
Summary
This marks the end of the chapter and I am sure that you have found it useful.
Especially for those making a start with bash scripting, this chapter should have
built a firm foundation on which you can build your knowledge.
We began by ensuring that bash is secure and not susceptible to the shell-shock
exploit in embedded functions. With bash secured, we considered the execution
hierarchy in which aliases, functions, and so on are checked before the command;
knowing this can help us plan a good naming structure and a path to locate the scripts.
Soon we were writing simple scripts with static content but we saw how easy it was
to add flexibility using arguments. The exit code from the script can be read with the
$? variable and we can create a command line list using || and &&, which depends
on the success or failure of the preceding command in the list.
Finally, we closed the chapter by looking at debugging the script. It's not
really required when the script is trivial, but it will be useful later when
complexity is added.
In the next chapter, we will be creating interactive scripts that read the user's input
during script execution.