
Chapter I

Introduction to Linux Networking

1.0 Objective
1.1 Introduction
1.2 What is an OS?
1.3 Users & Groups Administration
1.4 Task Scheduling
1.5 RAID Implementation
1.6 Logical Volume Management (LVM)
1.7 Installing & Managing Packages in Linux
1.8 Configuring DHCP
1.9 Installing Server components
1.10 Summary
1.11 Check Your Progress Answers

1.0 Objective
After studying this chapter you will be able to understand:
 Architecture of an operating system and the different boot loaders used in Linux,
 Creating and managing the user and group rights,
 Implementing the different RAID configurations,
 Managing the disk space,
 Using YUM and RPM to install packages on Linux,
 Configuring the network and installing different server components.

1.1 Introduction
Linux is a generic term that commonly refers to Unix-like computer operating systems that
use the Linux kernel. Linux is one of the most prominent examples of free software and open
source development; typically, all the underlying source code can be used, freely modified,
and redistributed by anyone.

1.2 What is an OS?


An operating system (commonly abbreviated OS or O/S) is the infrastructure software
component of a computer system; it is responsible for the management and coordination of
activities and the sharing of the limited resources of the computer. The operating system acts
as a host for the applications that run on the machine. As a host, one of the purposes of
an operating system is to handle the details of the operation of the hardware. Almost all
computers, including handheld computers, desktop computers, supercomputers, and even
video game consoles, use an operating system of some type. Some devices, however, use an
embedded operating system, which may be contained on a compact disc or other data
storage device.

Figure- 1.1

What is Linux?
The name "Linux" comes from the Linux kernel, originally written in 1991 by Linus Torvalds.
The system's utilities and libraries usually come from the GNU operating system, announced in
1983 by Richard Stallman. The GNU contribution is the basis for the alternative name
GNU/Linux.

1.2.1 What is Kernel?


In computer science, the kernel is the central component of most computer operating systems
(OS). Its responsibilities include managing the system's resources (the communication
between hardware and software components). As a basic component of an operating system, a
kernel provides the lowest-level abstraction layer for the resources (especially memory,
processors, and I/O devices) that application software must control to perform its function. It
typically makes these facilities available to application processes through inter-process
communication mechanisms and system calls.

Figure- 1.2 Linux file system and directory hierarchy:

A simple description of the UNIX system, also applicable to Linux, is this:

"On a UNIX system, everything is a file; if something is not a file, it is a process."
This statement holds even though there are special files that are more than just files (named
pipes and sockets, for instance); to keep things simple, saying that everything is a file is an
acceptable generalization. A Linux system, just like UNIX, makes no distinction between a file
and a directory, since a directory is just a file containing the names of other files. Programs,
services, texts, images, and so forth are all files. Input and output devices, and generally all
devices, are considered to be files, according to the system.
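This "everything is a file" view can be checked directly from the shell; /dev/null, for example, is a character-device file that lives in the same namespace as regular files and directories:

```shell
# Regular file, directory, and device node all appear in one namespace
ls -ld /etc/passwd /tmp /dev/null
# test -c confirms that /dev/null is a character-device file
test -c /dev/null && echo "/dev/null is a character device"
```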

For starters, there is only a single hierarchical directory structure.


Everything starts from the root directory, represented by '/', and then expands into sub-
directories. Where DOS/Windows had various partitions and then directories under those
partitions, Linux places all the partitions under the root directory by 'mounting' them under
specific directories.
The closest thing to root under Windows would be C:\.
In general, the hierarchy looks something like this:

/
/bin
/boot
/dev
/etc
/home
/initrd
/lib
/lost+found
/media
/mnt
/opt
/proc
/root
/sbin
/usr
/var
/srv
/tmp
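The 'mounting' mentioned above is how each partition is attached somewhere under this single tree. A quick look (the mount of a new partition is shown commented out, since the device name /dev/sdb1 is hypothetical and the command needs root):

```shell
# The kernel lists every currently mounted file system in /proc/mounts
# (device, mount point, file system type, options)
cat /proc/mounts
# A new partition (hypothetical /dev/sdb1, needs root) would be attached with:
# mount /dev/sdb1 /mnt
```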

1.2.2 Boot Loaders and Linux Boot Loaders


Computing systems powered by a central processor (or a set of processors) can only execute
code found in the operating memory, also known as system memory, which may be
implemented in several technologies covered by two general types: Read-Only Memory (ROM)
and Random Access Memory (RAM). Modern operating system and application
program code and data are stored on nonvolatile (persistent) local or remote peripheral
memories or mass storage devices. Typical examples of such persistent storage devices are
hard disks, CDs, DVDs, USB flash drives, and floppy disks. When a computer is first powered on,
it must initially rely only on the code and data stored in the nonvolatile portion of the system
memory map, such as ROM, NVRAM, or CMOS RAM. Persistent code and data residing in the
system memory map represent the bare minimum needed to access peripheral persistent
devices and load into system memory all of the missing parts of the operating system.
Strictly speaking, at power-on time the computing system does not have an operating system in
memory. Among other things, the computer's hardware alone (processor and system
memory) cannot perform many complex system actions, of which loading program files from
disk-based file systems is one of the most important.

The program that starts the "chain reaction" which ends with the entire operating system
being loaded is known as the bootstrap loader. Early computer designers recognized that
before being ready to "run" a computer program, a small initiating program, called a bootstrap
loader, bootstrap, or boot loader, had to run first. This program's only job is to load other
software so the operating system can start. Often, multiple-stage boot loaders are used, in
which several small programs of increasing complexity are loaded one after another in a
process of chain loading, until the last of them loads the operating system.
In case of Linux there are two types of Boot Loaders:
1. LILO
2. GRUB

LILO:
LILO was originally developed by Werner Almesberger, while its current developer is John
Coffman. LILO does not depend on a specific file system, and can boot an operating system
(e.g., Linux kernel images) from floppy disks and hard disks. One of up to sixteen different
images can be selected at boot time. Various parameters, such as the root device, can be set
independently for each kernel. LILO can be placed either in the master boot record (MBR) or
the boot sector of a partition. In the latter case something else must be placed in the MBR to
load LILO.

At system start, only the BIOS drivers are available for LILO to access hard disks. For this
reason, with a very old BIOS, the accessible area is limited to cylinders 0 to 1023 of the first two
hard disks. For a later BIOS, LILO can use 32-bit "logical block addressing" (LBA) to access
practically the entire storage of all the hard disks that the BIOS allows access to. LILO was the
default boot loader for most Linux distributions in the years after the popularity of loadlin.
Today, most distributions use GRUB as the default boot loader.

-4-
Figure- 1.3

GRUB:
GNU GRUB ("GRUB" for short) is a boot loader package from the GNU Project. GRUB is the
reference implementation of the Multiboot Specification, which allows a user to have several
different operating systems on their computer at once, and to choose which one to run when
the computer starts. GRUB can be used to select from different kernel images available on a
particular operating system's partitions, as well as to pass boot-time parameters to such
kernels.

GNU GRUB developed from a previous package called the Grand Unified Bootloader (a play on
grand unified theory). It is predominantly used on Unix-like systems; the GNU operating
system uses GNU GRUB as its boot loader, as do most general-purpose Linux distributions.
Solaris has used GRUB as its bootloader on x86 systems since the Solaris 10 1/06 release.

1.1, 1.2 Check your progress:

1 In computer science, the _______ is the central component of most computer operating
systems.
2 _______ does not depend on a specific file system, and can boot an operating system from
floppy disks and hard disks.
3 _______ is the reference implementation of the Multiboot Specification, which allows a user to
have several different operating systems on their computer at once.

1.3 Users & Groups Administration


1.3.1 Creating a User Account
To create a user account, you use the adduser command, which has the form:
adduser userid
Where userid specifies the name of the user account that you want to create. The command
prompts you for the information needed to create the account.
Here's a typical example of using the command, which creates a user account named newbie:
linux:~# adduser newbie
Adding user newbie...
Adding new group newbie (1001).
Adding new user newbie (1001) with group newbie.
Creating home directory /home/newbie.
Copying files from /etc/skel
Changing password for newbie
Enter the new password (minimum of 5, maximum of 8 characters)
Please use a combination of upper and lower case letters and numbers.
Re-enter new password:
Password changed.
Changing the user information for newbie
Enter the new value, or press return for the default
Full Name []:
Newbie Dewbie
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [y/n] y
linux:~#
 Notice that the lines where the password was typed were overwritten by the subsequent
lines. Moreover, for security, passwords are not echoed to the console as they are typed.
 Notice also that several of the information fields were omitted - for example, Room
Number.

You can specify such information if you think it may be useful, but the system makes no use of
the information and doesn't require you to provide it.

The similarly named useradd command also creates a user account, but does not prompt you
for the password or other information.

When the command establishes a user account, it creates a home directory for the user. In the
previous example, the command would have created the directory /home/newbie. It also
places several configuration files in the home directory, copying them from the directory
/etc/skel. These files generally have names beginning with the dot (.) character, so they are
hidden from an ordinary ls command. Use the -a argument of ls to list the names of the files.
The files are generally ordinary text files, which you can view with a text editor, such as ae. By
modifying the contents of such files, you can control the operation of the associated
application.
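The hiding of dot files described above is easy to see in a scratch directory (the file names here are invented for illustration):

```shell
# Create a scratch directory holding one hidden and one visible file
dir=$(mktemp -d)
touch "$dir/.bashrc" "$dir/notes.txt"
ls "$dir"      # lists only notes.txt
ls -a "$dir"   # also lists .bashrc (plus the . and .. entries)
```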

1.3.2 Changing a User's Name

You can change the name associated with a user account by using the chfn command:
chfn -f name userid
Where name specifies the new name and userid specifies the account to be modified. If the
name contains spaces or other special characters, it should be enclosed in double quotes (").
For example, to change the name associated with the account newbie to Dewbie Newbie, you
would enter the following command:
chfn -f "Dewbie Newbie" newbie

1.3.3 Changing a User Account Password


From time to time, you should change your password, making it more difficult for others to
break into your system. As system administrator, you may sometimes need to change the
password associated with a user's account. For instance, some users have a bad habit of
forgetting their password. They'll come to you, the system administrator, seeking help in
accessing their account.

To change a password, you use the passwd command. To change your own password, enter a
command like this one:
passwd

This command changes the password associated with the current user account. You don't
have to be logged in as root to change a password. Because of this, users can change their own
passwords without the help of the system administrator. The root user, however, can change
the password associated with any user account, as you'll see shortly. Of course, only root can
do so - other users can change only their own password.
As the root user, you can change the password associated with any user account. The system
doesn't ask you for the current password; it immediately prompts for the new password:
linux:~# passwd newbie
Changing password for newbie
Enter the new password (minimum of 5, maximum of 8 characters)
Please use a combination of upper and lower case letters and numbers.
New password:
Re-enter new password:
Password changed.
Information on users is stored in the file /etc/passwd, which you can view using a text editor.
Any user can read this file, though only the root user can modify it. If you selected shadow
passwords, passwords are encrypted and stored in the file /etc/shadow, which can be read
only by the root user.
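Each line of /etc/passwd consists of seven colon-separated fields (name, password, UID, GID, comment, home directory, shell); the root entry, which every system has, makes a handy illustration:

```shell
# Fields: name:password:UID:GID:comment:home:shell
# Print the root entry; with shadow passwords the password field holds just 'x'
grep '^root:' /etc/passwd
# Extract root's numeric user ID, which is always 0
awk -F: '$1 == "root" { print $3 }' /etc/passwd
```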

1.3.4 Configuring Group Definitions

Linux uses groups to define a set of related user accounts that can share access to a file or
directory. You probably won't often find it necessary to configure group definitions,
particularly if you use your system as a desktop system rather than a server. However, when
you wish, you create and delete groups and modify their membership lists.

Creating a group
To create a new group, use the groupadd command:
groupadd group
Where, group specifies the name of the group to be added. Groups are stored in the file
/etc/group, which can be read by any user but modified only by root.
For example, to add a group named newbies, you would enter the following command:
groupadd newbies

Deleting a group
To delete a group, use the groupdel command:
groupdel group
Where group specifies the name of the group to be deleted. For example, to delete the group
named newbies, you would enter the following command:
groupdel newbies

Adding a member to a group


To add a member to a group, you use a special form of the adduser command:
adduser user group
Where user specifies the member and group specifies the group to which the member is
added. For example, to add the user newbie01 to the group newbies, you would enter the
following command:
adduser newbie01 newbies

Removing a member from a group


Unfortunately, no command removes a user from a specified group. The easiest way to
remove a member from a group is by editing the /etc/group file. Here's an excerpt from a
typical /etc/group file:
users:x:100:
nogroup:x:65534:
bmccarty:x:1000:
newbies:x:1002:newbie01,newbie02,newbie03
Each line in the file describes a single group and has the same form as the other lines, consisting
of a series of fields separated by colons (:). The fields are:
Group name: The name of the group.
Password: The encrypted password associated with the group. This field is not generally used,
containing an x instead.
Group ID: The unique numeric ID associated with the group.
Member list: A list of user accounts, with a comma (,) separating each user account from the
next.
To remove a member from a group, first create a backup copy of the /etc/group file:
cp /etc/group /etc/group.SAVE
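Hand-editing can also be scripted. The following is a sketch using GNU sed that removes newbie02 from the newbies line of the example above; run it as root, and only after making the backup copy:

```shell
# Delete newbie02 (and one adjoining comma) from the newbies line only,
# then strip any trailing comma left at the end of the member list
sed -i '/^newbies:/{s/\<newbie02\>,\?//;s/,$//}' /etc/group
```

Many distributions also provide the command gpasswd -d user group, which removes a member from a group in one step.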

Deleting a User Account


To delete a user account, use the userdel command:
userdel user
Where user specifies the account to be deleted. If you want to delete the user's home
directory, its files, and subdirectories, use this form of the command:
userdel -r user

1.4 Task Scheduling


The cron daemon on Linux runs tasks in the background at specific times; it’s like the Task
Scheduler on Windows. Add tasks to your system’s crontab files using the appropriate syntax
and cron will automatically run them for you.

Crontab files can be used to automate backups, system maintenance and other repetitive
tasks. The syntax is powerful and flexible, so you can have a task run every fifteen minutes or
at a specific minute on a specific day every year.

Open terminal:
Use the crontab -e command to open your user account's crontab file. Commands in this file
run with your user account's permissions. If you want a command to run with system
permissions, use the sudo crontab -e command to open the root account's crontab file. Use the
su -c "crontab -e" command instead if your Linux distribution doesn't use sudo.

Adding New Tasks


Use the arrow keys or the Page Down key to scroll to the bottom of the crontab file in Nano.
The lines starting with # are comment lines, which means that cron ignores them. Comments
just provide information to people editing the file.

Lines in the crontab file are written in the following sequence, with the following acceptable
values:
minute(0-59) hour(0-23) day(1-31) month(1-12) weekday(0-6) command
You can use an asterisk (*) character to match any value. For example, using an asterisk for the
month would cause the command to run every month.
For example, let's say we want to run the command /usr/bin/example at 12:30 a.m. every day.
We'd type:
30 0 * * * /usr/bin/example
We use 30 for the minute and 0 for the hour, because 12 a.m. is hour 0 (the hour and weekday
values start at 0). Note that the day and month values start at 1 instead of 0.
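A few more example lines in the same minute-hour-day-month-weekday order illustrate the flexibility mentioned earlier (the command paths are placeholders):

```
# min hour day month weekday command
*/15 *    *   *     *       /usr/bin/example        # every fifteen minutes
30   2    *   *     0       /usr/local/bin/backup   # 2:30 a.m. every Sunday
0    0    1   1     *       /usr/local/bin/report   # midnight every January 1st
```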

1.3, 1.4 Check your progress:

1 To create a user account, _______ command is used.


2 You can change the name associated with a user account, by using the _______ command.
3 To add a member to a group, _______ command is used.
4 The _______ daemon on Linux runs tasks in the background at specific times.

1.5 RAID Implementation


RAID originally stood for Redundant Array of Inexpensive Disks; it is now usually read as
Redundant Array of Independent Disks. RAID is a collection of disks pooled together to form a
logical volume. A minimum of two disks can be connected to a RAID controller to make a
logical volume, or more drives can be in a group. Only one RAID level can be applied to a group
of disks.

Hardware RAID
Hardware RAID is a physical storage device built from multiple hard disks. When connected to
a system, all of its disks appear as a single SCSI disk. From the system's point of view there is no
difference between a regular SCSI disk and a hardware RAID device; the system can use a
hardware RAID device as a single SCSI disk.

Hardware RAID has its own independent disk subsystem and resources. It does not use
resources from the system such as power, RAM, and CPU, so hardware RAID does not put any
extra load on the system. Since it has its own dedicated resources, it provides high performance.

Software RAID
Software RAID is a logical storage device built from disks attached to the system. It uses the
system's resources, so it provides slower performance but costs nothing. In this section we will
learn how to create and manage software RAID in detail.

1.5.1 Basic concepts of RAID

A RAID device can be configured in multiple ways. Depending on configuration it can be
categorized in ten different levels. Before we discuss RAID levels in more detail, let’s have a
quick look on some important terminology used in RAID configuration.

Chunk: This is the size of the data block used in a RAID configuration. If the chunk size is 64KB,
then there would be 16 chunks in 1MB (1024KB/64KB) of RAID array space.
Hot spare: This is an additional disk in the RAID array. If any disk fails, data from the faulty disk
is migrated to this spare disk automatically.
Mirroring: If this feature is enabled, a copy of the same data is saved on another disk as well. It
is just like making an additional copy of the data for backup purposes.
Striping: If this feature is enabled, data is split into chunks and written across all available
disks. It is just like sharing data between all the disks, so they all fill equally.
Parity: This is a method of regenerating lost data from saved parity information.

1.5.2 Different RAID Levels:


Different RAID levels are defined based on how mirroring and striping are required. Among
these levels, only Level 0, Level 1, and Level 5 are commonly used in Red Hat Linux.

RAID Level 0
This level provides striping without parity. Since it does not store any parity data and performs
read and write operations on the disks simultaneously, its speed is much faster than the other
levels. This level requires at least two hard disks. All hard disks in this level are filled equally.
You should use this level only if read and write speed is the main concern. If you decide to use
this level, always deploy an alternative data backup plan, as any single disk failure in the array
will result in total data loss.

RAID Level 1
This level provides mirroring without striping or parity. It writes all data to two disks: if one
disk fails or is removed, we still have all the data on the other disk. This level requires twice the
number of hard disks; for the capacity of two hard disks you have to deploy four, and for the
capacity of one hard disk you have to deploy two. The first hard disk stores the original data
while the other disk stores an exact copy of the first. Since data is written twice, performance
is reduced. You should use this level when data safety matters at any cost.

RAID Level 5
This level provides both parity and striping. It requires at least three disks. It writes parity data
equally across all disks. If one disk fails, data can be reconstructed from the parity data available
on the remaining disks. This provides a combination of integrity and performance. Wherever
possible, you should use this level.

If a RAID device is properly configured, there will be no difference between software RAID and
hardware RAID from the operating system's point of view. The operating system will access the
RAID device as a regular hard disk, no matter whether it is software RAID or hardware RAID.

Linux provides the md kernel module for software RAID configuration. In order to use software
RAID we have to configure an md RAID device, which is a composite of two or more storage
devices.
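As a sketch, an md device is typically created with the mdadm tool (the disk names here are hypothetical; the commands need root and will overwrite those disks):

```shell
# Build a RAID 5 array named /dev/md0 from three whole disks
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# Watch the array assemble
cat /proc/mdstat
# Record the array so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf
```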

1.6 Logical Volume Management (LVM)

Logical Volume Management (LVM) makes it easier to manage disk space. If a file system
needs more space, space can be added to its logical volume from the free space in its volume
group, and the file system can be resized as we wish. If a disk starts to fail, a replacement disk
can be registered as a physical volume with the volume group, and the logical volume's extents
can be migrated to the new disk without data loss.

In the modern world every server needs more space day by day, so we need to expand storage
depending on our needs. Logical volumes can be used with RAID and SAN storage. Physical
disks are grouped to create a volume group; inside the volume group, we slice the space to
create logical volumes. Using logical volumes we can extend them across multiple disks, and
grow or reduce logical volumes in size with a few commands, without reformatting and re-
partitioning the current disk. Logical volumes can also stripe data across multiple disks, which
can increase I/O throughput.

LVM Features
 It is flexible to expand the space at any time.
 Any file system can be installed and handled.
 Migration can be used to recover from a faulty disk.
 Snapshots can be used to restore the file system to an earlier state.
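The workflow described above can be sketched with the standard LVM commands (the disk names, the volume-group name vg_data, and the sizes are all hypothetical; these commands need root and real disks):

```shell
# Register two disks as physical volumes
pvcreate /dev/sdb /dev/sdc
# Pool them into a volume group named vg_data
vgcreate vg_data /dev/sdb /dev/sdc
# Carve out a 10 GB logical volume and put a file system on it
lvcreate -L 10G -n lv_home vg_data
mkfs.ext4 /dev/vg_data/lv_home
# Later: grow the volume and the file system without repartitioning
lvextend -L +5G /dev/vg_data/lv_home
resize2fs /dev/vg_data/lv_home
```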

1.7 Installing & Managing Packages in Linux


1.7.1 What is yum?
YUM (Yellowdog Updater, Modified) is an open source command-line and graphical package
management tool for RPM (Red Hat Package Manager) based Linux systems. It allows users and
system administrators to easily install, update, remove, or search for software packages on a
system. It was developed and released by Seth Vidal under the GPL (General Public License) as
open source, meaning anyone is allowed to download and access the code to fix bugs and
develop customized packages. YUM uses numerous third-party repositories to install packages
automatically, resolving their dependency issues.

1. Install a Package with YUM


To install a package called firefox, just run the command below; it will automatically find
and install all required dependencies for firefox.
# yum install firefox
The above command will ask for confirmation before installing the package on your system. If
you want to install packages automatically without any confirmation prompt, use the -y option
as shown in the example below.
# yum -y install firefox

2. Removing a Package with YUM


To remove a package completely, along with all of its dependencies, just run the following
command.
# yum remove firefox
As before, the above command will ask for confirmation before removing the package. To
disable the confirmation prompt just add the -y option as shown below.
# yum -y remove firefox

3. Updating a Package using YUM


Let's say you have an outdated version of the MySQL package and you want to update it to the
latest stable version. Just run the following command; it will automatically resolve all
dependency issues and install the update.
# yum update mysql

4. List a Package using YUM


Use the list function to search for a specific package by name. For example, to search for a
package called openssh, use the command:
# yum list openssh
To make your search more accurate, specify the package name with its version, in case you
know it. For example, to search for the specific version openssh-4.3p2 of the package, use the
command:
# yum list openssh-4.3p2

5. Search for a Package using YUM


If you don't remember the exact name of a package, use the search function to search all the
available packages that match the name you specify. For example, to search for all packages
that match the word vsftpd:
# yum search vsftpd

6. Get Information of a Package using YUM


Say you would like to know information about a package before installing it. To get information
about a package, just issue the command below.
# yum info firefox

1.7.2 What is rpm?


RPM (Red Hat Package Manager) is the default and most popular open source package
management utility for Red Hat based systems like RHEL, CentOS, and Fedora. The tool allows
system administrators and users to install, update, uninstall, query, verify, and manage system
software packages in Unix/Linux operating systems. An RPM package is distributed as a .rpm
file, which includes the compiled software programs and libraries needed by the package. This
utility only works with packages built in the .rpm format.

Some Facts about RPM (RedHat Package Manager)


 RPM is free and released under the GPL (General Public License).
 RPM keeps the information of all the installed packages in the /var/lib/rpm database.
 RPM only manages packages installed through rpm itself; if you've installed software from
source code, rpm won't manage it.
 RPM deals with .rpm files, which contain the actual information about the packages, such
as: what it is, where it comes from, dependency info, version info, etc.
There are five basic modes for the RPM command:
 Install: used to install any RPM package.
 Remove: used to erase, remove, or uninstall any RPM package.
 Upgrade: used to update an existing RPM package.
 Verify: used to verify an RPM package.
 Query: used to query any RPM package.

1. How to Check an RPM Signature Package


Always check the PGP signature of packages before installing them on your Linux systems to
make sure their integrity and origin are OK. Use the following command with the --checksig
(check signature) option to check the signature of a package called pidgin.

[root@tecmint]# rpm --checksig pidgin-2.7.9-5.el6.2.i686.rpm


pidgin-2.7.9-5.el6.2.i686.rpm: rsa sha1 (md5) pgp md5 OK

2. How to Install an RPM Package


To install an rpm software package, use the following command with the -i option. For
example, to install an rpm package called pidgin-2.7.9-5.el6.2.i686.rpm:
[root@tecmint]# rpm -ivh pidgin-2.7.9-5.el6.2.i686.rpm
Preparing... ########################################### [100%]
1:pidgin ########################################### [100%]
RPM command and options:
-i : install a package
-v : verbose, for a nicer display
-h : print hash marks as the package archive is unpacked

3. How to check dependencies of RPM Package before Installing


Let's say you would like to do a dependency check before installing or upgrading a package.
For example, use the following command to check the dependencies of the BitTorrent-5.2.2-1-
Python2.4.noarch.rpm package. It will display the list of dependencies of the package.
[root@tecmint]# rpm -qpR BitTorrent-5.2.2-1-Python2.4.noarch.rpm
/usr/bin/python2.4
python>= 2.3
python(abi) = 2.4
python-crypto>= 2.0
python-psyco
python-twisted>= 2.0
python-zopeinterface
rpmlib(CompressedFileNames) = 2.6
RPM command and options:
-q : query a package
-p : query an (uninstalled) package file
-R : list capabilities on which this package depends

4. How to Install a RPM Package without Dependencies


If you know that all needed packages are already installed and RPM's dependency check is
simply wrong, you can skip the check by using the --nodeps (no dependency check) option
when installing the package.
[root@tecmint]# rpm -ivh --nodeps BitTorrent-5.2.2-1-Python2.4.noarch.rpm
Preparing... ########################################### [100%]
1:BitTorrent ########################################### [100%]
The above command forcefully installs the rpm package, ignoring dependency errors; but if
those dependency files are actually missing, the program will not work at all until you install
them.

5. How to check an Installed RPM Package


Using the -q option with a package name will show whether the rpm is installed or not.
[root@tecmint]# rpm -q BitTorrent
BitTorrent-5.2.2-1.noarch

1.5, 1.6 & 1.7 Check your progress:

1 RAID _______ provides striping without parity.


2 RAID _______ provides both parity and striping.
3 _______ makes it easier to manage disk space.

1.8 Configuring DHCP
DHCP, or Dynamic Host Configuration Protocol, allows an administrator to configure network
settings for all clients on a central server.

The DHCP clients request an IP address and other network settings from the DHCP server on
the network. The DHCP server in turn leases the client an IP address within a given range, or
leases the client an IP address based on the MAC address of the client's network interface card
(NIC). The information includes the IP address, along with the network's name server, gateway,
and proxy addresses, as well as the netmask.

Nothing has to be configured manually on the local system, except specifying the DHCP server
from which it should get its network configuration. If an IP address is assigned according to the
MAC address of the client's NIC, the same IP address can be leased to the client every time the
client requests one. DHCP makes network administration easier and less prone to error.

Configure dhcp server


The dhcp rpm package is required to configure a DHCP server. Check for it, and install it if it is not found.

Now check the dhcpd service in the system services; it should be on:

#setup
Select System services
and enable [*]dhcpd in the list.
To assign an IP address to the DHCP server:
The DHCP server must have a static IP address. First configure the IP address 192.168.0.254
with a netmask of 255.255.255.0 on the server.
Run setup command form root user
#setup
This will launch a new window; select Network configuration.

A new window will show you all available LAN cards; select your LAN card.

Assign the IP in this box and click OK.

Click on OK, then Quit, and Quit again to come back to the root prompt.
Restart the network service so the new IP address can take effect on the LAN card:
#service network restart
The main configuration file of the DHCP server is dhcpd.conf. This file is located in the /etc
directory. If this file is not present there, or you have corrupted it, copy a fresh file first; if
asked to overwrite, press y.

Now open /etc/dhcpd.conf

The default entries in this file look like this:

Make these changes in this file to configure the DHCP server:
remove the line # - - - default gateway
set option routers to 192.168.0.254
set option subnet-mask to 255.255.255.0
set option nis-domain to example.com
set option domain-name to example.com
set option domain-name-servers to 192.168.0.254
set range dynamic-bootp to 192.168.0.10 192.168.0.50;
After these changes, the file should look like this:
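Based on the settings listed above, the edited /etc/dhcpd.conf would look roughly like the following sketch. The ddns lines and the lease-time values are assumed defaults taken from the stock sample file and may differ on your system:

```
ddns-update-style interim;
ignore client-updates;

subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers              192.168.0.254;
        option subnet-mask          255.255.255.0;
        option nis-domain           "example.com";
        option domain-name          "example.com";
        option domain-name-servers  192.168.0.254;
        range dynamic-bootp         192.168.0.10 192.168.0.50;
        default-lease-time          21600;
        max-lease-time              43200;
}
```

After saving the file, restart the dhcpd service so the new configuration is read.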

1.9 Installing Server components


1.9.1 Installing Apache on rpm based systems

You can install Apache via the default Package Manager available on all Red Hat based
distributions like CentOs, Red Hat and Fedora.
[root@amsterdam ~]# yum install httpd
The Apache source tarball can be converted into an rpm file using the following command.
[root@amsterdam ~]# rpmbuild -tb httpd-2.4.x.tar.bz2
It is mandatory to have the corresponding -devel packages installed on your server for creating
an .rpm file from source.
Once you convert the source file into an rpm installer, you can use the following command
to install Apache.
[root@amsterdam ~]# rpm -ivh httpd-2.4.4-3.1.x86_64.rpm
After the installation, the server does not start automatically; in order to start the service, you
have to use any of the following commands on Fedora, CentOS or Red Hat.
[root@amsterdam ~]# /usr/sbin/apachectl start
[root@amsterdam ~]# service httpd start
[root@amsterdam ~]# /etc/init.d/httpd start
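To have Apache start automatically at boot, the service can also be enabled with chkconfig. This is a sketch assuming a SysV-init based release such as the ones above:

```
[root@amsterdam ~]# chkconfig httpd on
[root@amsterdam ~]# chkconfig --list httpd
```

The second command lists the runlevels in which httpd is now enabled.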

1.9.2 FTP Server


FTP (File Transfer Protocol) is a relatively old and widely used standard network protocol for
uploading and downloading files between two computers over a network.

Configure FTP Server on RHEL


The vsftpd package is required for the FTP server. Check whether the package is installed; if the
package is missing, install it first.
rpm -q vsftpd
Configure the vsftpd service to start at boot.

The current status of the vsftpd service must be running; start it if it is stopped. Restart the
vsftpd service whenever you make any change in the configuration file.
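On RHEL, the checks described above can be sketched as the following commands, assuming the yum and SysV service tools used elsewhere in this chapter:

```
# Install vsftpd if the query shows it is missing
rpm -q vsftpd || yum -y install vsftpd

# Configure the service to start at boot
chkconfig vsftpd on

# Restart after any change to the configuration file, then check the status
service vsftpd restart
service vsftpd status
```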

1.9.3 NFS Server


NFS (Network File System) was originally developed by Sun Microsystems in the 1980s for
sharing files and folders between Linux/Unix systems. It allows you to mount your local file
systems over a network, and remote hosts can interact with them as if they were mounted
locally on the same system. With the help of NFS, we can set up file sharing between Unix and
Linux systems in either direction.

Benefits of NFS
 NFS allows local access to remote files.
 It uses standard client/server architecture for file sharing between all *nix based machines.
 With NFS it is not necessary that both machines run on the same OS.
 With the help of NFS we can configure centralized storage solutions.
 Users get their data irrespective of physical location.
 No manual refresh needed for new files.
 Newer versions of NFS also support ACLs and pseudo-root mounts.
 Can be secured with Firewalls and Kerberos.

Setup and Configure NFS Mounts on Linux Server


To set up NFS mounts, we need at least two Linux/Unix machines. In this tutorial, we
will use two servers.
 NFS Server: nfsserver.example.com with IP 192.168.0.100
 NFS Client: nfsclient.example.com with IP 192.168.0.101

Installing NFS Server and NFS Client


We need to install NFS packages on our NFS Server as well as on NFS Client machine. We can
install it via “yum” (Red Hat Linux) and “apt-get” (Debian and Ubuntu) package installers.
[root@nfsserver ~]# yum install nfs-utils nfs-utils-lib
[root@nfsserver ~]# yum install portmap (not required with NFSv4)
[root@nfsserver ~]# apt-get install nfs-kernel-server nfs-common (on Debian/Ubuntu)
Now start the services on both machines.
[root@nfsserver ~]# /etc/init.d/portmap start
[root@nfsserver ~]# /etc/init.d/nfs start
[root@nfsserver ~]# chkconfig --level 35 portmap on
[root@nfsserver ~]# chkconfig --level 35 nfs on
After installing packages and starting services on both the machines, we need to configure
both the machines for file sharing.

Setting Up the NFS Server


First we will be configuring the NFS server.
Configure Export directory
For sharing a directory with NFS, we need to make an entry in the “/etc/exports” configuration
file. Here we will create a new directory named “nfsshare” in the “/” partition to share with the
client; you can also share an already existing directory with NFS.
[root@nfsserver ~]# mkdir /nfsshare
Now we need to make an entry in “/etc/exports” and restart the services to make our
directory shareable in the network.
[root@nfsserver ~]# vi /etc/exports
/nfsshare 192.168.0.101(rw,sync,no_root_squash)
In the above example, the directory named “nfsshare” in the / partition is being shared with
the client IP “192.168.0.101” with read and write (rw) privileges; you can also use the
hostname of the client in place of the IP in the above example.
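To make the export active and use it from the client, the remaining steps can be sketched as follows. The client-side mount point /mnt/nfsshare is an assumed name used only for illustration:

```
# On the NFS server: (re-)export everything listed in /etc/exports
[root@nfsserver ~]# exportfs -a

# On the NFS client: create a mount point and mount the shared directory
[root@nfsclient ~]# mkdir -p /mnt/nfsshare
[root@nfsclient ~]# mount -t nfs 192.168.0.100:/nfsshare /mnt/nfsshare

# Verify that the share is mounted
[root@nfsclient ~]# mount | grep nfsshare
```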

1.9.4 CIFS Server


If you have files on a Linux or UNIX server that you want to share with Windows clients, you
need to set up a Common Internet File System (CIFS) server. CIFS is a version of Microsoft's
Server Message Block (SMB) protocol. Samba is the de facto standard in the Linux and UNIX
world for setting up a CIFS/SMB server. Ubuntu, a commonly used Linux distribution, makes a
good platform for Samba.
1. Type the following command at the command line prompt to install Samba on your Linux
server: sudo apt-get install samba
2. Open the configuration file /etc/samba/smb.conf in a text editor and edit it as follows:
   workgroup =        # set to your network's Windows workgroup name
   ...
   security = user    # uncomment this line, which is further down in the file
3. Insert these lines in the configuration file to set up a shared directory:
   [share]
   comment = Linux Server Shared Directory   # description of the share
   path = /srv/shares/shared-dir             # shared directory path
   browsable = yes                           # allow browsing with Windows
   guest ok = yes                            # connect without a password
   read only = no                            # allow read and write privileges
   create mask = 0755                        # permissions for new files
4. Save the file and exit the text editor. Type the following at the command line prompt to
   create the shared directory on the server and give it the permissions needed for sharing:
   sudo mkdir -p /srv/shares/shared-dir
   sudo chown nobody.nogroup /srv/shares/shared-dir
5. Type the following to restart Samba so that it loads and activates the new configuration
   parameters:
   sudo restart smbd
   sudo restart nmbd
   You should now be able to use a Windows client to see the Linux file server and the shared
   folder.

1.9.5 DNS Server


Domain Name Service (DNS) is an internet service that maps fully qualified domain names
(FQDN) to IP addresses and vice versa.
BIND stands for Berkeley Internet Name Domain.
BIND is the most common program used for maintaining a name server on Linux.
Install Bind
Install the bind9 package using the appropriate package management utilities for your Linux
distributions.
On Debian/Ubuntu flavors, do the following:
$ sudo apt-get install bind9

Configure Cache NameServer
The job of a DNS caching server is to query other DNS servers and cache the response. Next
time when the same query is given, it will provide the response from the cache. The cache will
be updated periodically.

Please note that even though you can configure bind to work as a Primary and as a Caching
server, it is not advised to do so for security reasons. Having a separate caching server is
advisable.

All we have to do to configure a caching nameserver is to add your ISP's (Internet Service
Provider's) DNS server or any open DNS server to the file /etc/bind/named.conf.options. For
example, we will use Google's public DNS servers, 8.8.8.8 and 8.8.4.4.

Uncomment and edit the following line as shown below in /etc/bind/named.conf.options file.

forwarders {
8.8.8.8;
8.8.4.4;
};
After the above change, restart the DNS server.
$ sudo service bind9 restart
Configure Primary/Master Nameserver
Next, we will configure bind9 to be the Primary/Master for the domain/zone
“thegeekstuff.net”.
As a first step in configuring our Primary/Master Nameserver, we should add Forward and
Reverse resolution to bind9.
To add DNS forward and reverse resolution to bind9, edit /etc/bind/named.conf.local.
zone "thegeekstuff.net" {
type master;
file "/etc/bind/db.thegeekstuff.net";
};
zone "0.42.10.in-addr.arpa" {
type master;
notify no;
file "/etc/bind/db.10";
};
Now the file /etc/bind/db.thegeekstuff.net will have the details for resolving hostname to IP
address for this domain/zone, and the file /etc/bind/db.10 will have the details for resolving IP
address to hostname.
Build the Forward Resolution for Primary/Master NameServer

Now we will add the details which are necessary for forward resolution into
/etc/bind/db.thegeekstuff.net.
First, copy /etc/bind/db.local to /etc/bind/db.thegeekstuff.net:
$ sudo cp /etc/bind/db.local /etc/bind/db.thegeekstuff.net
Next, edit the /etc/bind/db.thegeekstuff.net and replace the following.
 In the line which has SOA: localhost. – This is the FQDN of the server in charge of this
domain. We have installed bind9 on 10.42.0.83, whose hostname is “ns”. So replace
“localhost.” with “ns.thegeekstuff.net.”. Make sure it ends with a dot (.).
 In the line which has SOA: root.localhost. – This is the e-mail address of the person who is
responsible for this server. Use a dot (.) instead of @. Here it is replaced with lak.localhost.
 In the line which has NS: localhost. – This is defining the Name server for the domain (NS).
We have to change this to the fully qualified domain name of the name server. Change it
to “ns.thegeekstuff.net.”. Make sure you have a “.” at the end.
 Next, define the A record and MX record for the domain. A record is the one which maps
hostname to IP address, and MX record will tell the mailserver to use for this domain.
Once the changes are done, the /etc/bind/db.thegeekstuff.net file will look like the following:
$TTL 604800
@ IN SOA ns.thegeekstuff.net. lak.localhost. (
1024 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.thegeekstuff.net.
thegeekstuff.net. IN MX 10 mail.thegeekstuff.net.
ns IN A 10.42.0.83
web IN A 10.42.0.80
mail IN A 10.42.0.70
Build the Reverse Resolution for Primary/Master NameServer
We will add the details which are necessary for reverse resolution to the file /etc/bind/db.10.
Copy the file /etc/bind/db.127 to /etc/bind/db.10:
$ sudo cp /etc/bind/db.127 /etc/bind/db.10
Next, edit the /etc/bind/db.10 file, changing basically the same options as in
/etc/bind/db.thegeekstuff.net.

$TTL 604800
@ IN SOA ns.thegeekstuff.net. root.localhost. (
20 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.
Next, for each A record in /etc/bind/db.thegeekstuff.net, add a PTR record.
$TTL 604800
@ IN SOA ns.thegeekstuff.net. root.thegeekstuff.net. (
20 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.
83 IN PTR ns.thegeekstuff.net.
70 IN PTR mail.thegeekstuff.net.
80 IN PTR web.thegeekstuff.net.
Whenever you modify the files db.thegeekstuff.net and db.10, you need to increment
the “Serial” number as well. Typically, admins use DDMMYYSS for serial numbers; when
they modify a zone file, they change the serial number appropriately.
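As a quick sketch of the DDMMYYSS convention, a serial for the first change of the day can be generated with date. The trailing 01 is the per-day change counter, an assumption for illustration:

```shell
# Build a DDMMYYSS-style zone serial: day, month, two-digit year, change counter
serial="$(date +%d%m%y)01"
echo "$serial"
```

If the zone is changed a second time on the same day, the counter would be bumped to 02, and so on.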
Finally, restart the bind9 service:
$ sudo service bind9 restart
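After the restart, the new records can be checked against the server with the dig utility. This is a sketch; dig is typically provided by the dnsutils package, and the addresses are the ones used above:

```
# Forward lookup: ask our server for the A record of web.thegeekstuff.net
$ dig @10.42.0.83 web.thegeekstuff.net A +short

# Reverse lookup: ask for the PTR record of 10.42.0.70
$ dig @10.42.0.83 -x 10.42.0.70 +short
```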

1.8 & 1.9 Check your progress:

1 ________ allows an administrator to configure network settings for all clients on a central
server.
2 ________ is used for uploading/downloading files between two computers over a
network.
3 ________ allows you to mount your local file systems over a network and remote hosts
to interact with them as they are mounted locally on the same system.
4 ________ is an internet service that maps IP addresses to fully qualified domain names
(FQDN) and vice versa.

1.10 Summary
An operating system (commonly abbreviated OS and O/S) is the infrastructure software
component of a computer system; it is responsible for the management and coordination of
activities and the sharing of the limited resources of the computer. The kernel is the central
component of most computer operating systems. LILO does not depend on a specific file
system, and can boot an operating system (e.g., Linux kernel images) from floppy disks and
hard disks. GRUB is the reference implementation of the Multiboot Specification, which allows

a user to have several different operating systems on their computer at once, and to choose
which one to run when the computer starts.
To create a user account, you use the adduser command. Linux uses groups to define a set of
related user accounts that can share access to a file or directory. You probably won't often find
it necessary to configure group definitions, particularly if you use your system as a desktop
system rather than a server. The cron daemon on Linux runs tasks in the background at
specific times; it’s like the Task Scheduler on Windows.
Hardware RAID is a physical storage device built from multiple hard disks, while software
RAID is a logical storage device built from disks attached to the system. Logical Volume
Management (LVM) makes it easier to manage disk space. If a file
system needs more space, it can be added to its logical volumes from the free spaces in its
volume group and the file system can be re-sized as we wish.
YUM (Yellowdog Updater Modified) is an open source command-line as well as graphical
based package management tool for RPM (RedHat Package Manager) based Linux systems. It
allows users and system administrator to easily install, update, remove or search software
packages on a system. RPM (Red Hat Package Manager) is the default open source and most
popular package management utility for Red Hat based systems like (RHEL, CentOS and
Fedora). The tool allows system administrators and users to install, update, uninstall, query,
verify and manage system software packages in Unix/Linux operating systems. DHCP, or
Dynamic Host Configuration Protocol, allows an administrator to configure network settings
for all clients on a central server.

1.11 Check Your Progress Answers


1.1, 1.2 Check your progress:

1. kernel
2. LILO
3. GRUB

1.3, 1.4 Check your progress:

1. adduser
2. chfn
3. adduser
4. cron

1.5, 1.6 & 1.7 Check your progress:

1. Level 0
2. Level 5
3. LVM

1.8 & 1.9 Check your progress:

1. DHCP

2. FTP
3. NFS
4. DNS

1.12 Questions for self-study:


1. Write a short note on Linux boot loaders.
2. What is LVM?
3. Write steps to configure DHCP server.
4. What are the basic components of RAID?

Chapter II

Introduction to Virtualization
2.0 Objective
2.1 Introduction
2.2 Types of Virtualization
2.3 Virtualization- Advantages and Disadvantages
2.4 Relationship between Virtualization & Cloud Computing
2.5 Summary
2.6 Check Your Progress Answers

2.0 Objective
After studying this chapter you will be able to understand,
 Virtualization technology,
 Different types of virtualization,
 Advantage and disadvantages of virtualization &
 Relationship between virtualization and cloud computing.

2.1 Introduction:
Virtualization is a technology that lets us install different operating systems on the same
hardware; they are completely separated and independent of each other. It is defined as – “In
computing, virtualization is a broad term that refers to the abstraction of computer resources.

Virtualization hides the physical characteristics of computing resources from their users, their
applications or end users. This includes making a single physical resource (such as a server, an
operating system, an application or a storage device) appear to function as multiple virtual
resources. It can also include making multiple physical resources (such as storage devices or
servers) appear as a single virtual resource.”

Virtualization is often:
 The creation of many virtual resources from one physical resource.
 The creation of one virtual resource from one or more physical resources

2.2 Types of Virtualization


Today the term virtualization is widely applied to a number of concepts, some of which are
described below:
 Server Virtualization
 Client & Desktop Virtualization
 Services and Applications Virtualization
 Network Virtualization
 Storage Virtualization

Let us now discuss each of these in detail.

Server Virtualization
It means virtualizing your server infrastructure, so that you no longer need a separate
physical server for every purpose.

Client & Desktop Virtualization


This is similar to server virtualization, but this time it is on the user's side, where you
virtualize their desktops. We replace their desktops with thin clients while utilizing the
datacenter resources.

Services and Applications Virtualization


The virtualization technology isolates applications from the underlying operating system and
from other applications, in order to increase compatibility and manageability. For example –
Docker can be used for that purpose.

Network Virtualization
It is a part of the virtualization infrastructure, which is used especially if you are going to
virtualize your servers. It helps you in creating multiple switches, VLANs, NAT, etc.

The following illustration shows the VMware schema:

Storage Virtualization
This is widely used in datacenters where you have big storage, and it helps you to create,
delete, and allocate storage to different hardware. This allocation is done through a network
connection. The leading technology in storage is the SAN. A schematic illustration is given below:

2.1, 2.2 Check your progress:
1 ________ Virtualization is a part of virtualization infrastructure, which is used especially if
you are going to virtualize your servers.
2 ________ Virtualization is widely used in datacenters where you have big storage, and it
helps you to create, delete, and allocate storage to different hardware.

2.3 Virtualization- Advantages and Disadvantages


We will discuss some of the most common advantages and disadvantages of Virtualization.
Advantages of Virtualization
Following are some of the most recognized advantages of Virtualization, which are explained in
detail.
 Using Virtualization for Efficient Hardware Utilization
Virtualization decreases costs by reducing the need for physical hardware systems.
Virtual machines use hardware efficiently, which lowers the quantity of hardware, the
associated maintenance costs, and the power and cooling demand.
You can allocate memory, space and CPU in just a second, making you more
independent of hardware vendors.
 Using Virtualization to Increase Availability
Virtualization platforms offer a number of advanced features that are not found on
physical servers, which increase uptime and availability. Although the vendor feature
names may be different, they usually offer capabilities such as live migration, storage
migration, fault tolerance, high availability and distributed resource scheduling. These
technologies keep virtual machines chugging along or give them the ability to recover
from unplanned outages.
The ability to move a virtual machine from one server to another is perhaps one of the
greatest single benefits of virtualization, with far-reaching uses. The technology
continues to mature to the point where it can do long-distance migrations, such as
moving a virtual machine from one data center to another regardless of the network
latency involved.
 Disaster Recovery
Disaster recovery is very easy when your servers are virtualized. With up-to-date
snapshots of your virtual machines, you can quickly get back up and running. An
organization can more easily create an affordable replication site. If a disaster strikes
in the data center or server room itself, you can always move those virtual machines
elsewhere into a cloud provider. Having that level of flexibility means your disaster
recovery plan will be easier to enact and far more likely to succeed.
 Save Energy
Moving physical servers to virtual machines and consolidating them onto far fewer
physical servers means lowering monthly power and cooling costs in the data center. It
reduces carbon footprint and helps to clean up the air we breathe. Consumers want to
see companies reducing their output of pollution and taking responsibility.
 Deploying Servers Fast
You can quickly clone an image, master template or existing virtual machine to get a
server up and running within minutes. You do not have to fill out purchase orders, wait
for shipping and receiving and then rack, stack, and cable a physical machine only to
spend additional hours waiting for the operating system and applications to complete
their installations. With virtual backup tools like Veeam, redeploying images will be so
fast that your end users will hardly notice there was an issue.
 Save Space in your Server Room or Datacenter
Imagine a simple example: you have two racks with 30 physical servers and 4 switches.
By virtualizing your servers, you can cut the space used by the physical servers in half.
The result can be two physical servers in a rack with one switch,
where each physical server holds 15 virtualized servers.
 Testing and setting up Lab Environment
While you are testing or installing something on your servers and it crashes, do not
panic, as there is no data loss. Just revert to a previous snapshot and you can move
forward as if the mistake did not even happen. You can also isolate these testing
environments from end users while still keeping them online. When you have
completely finished your work, deploy it live.
 Shifting all your Local Infrastructure to Cloud in a day
If you decide to shift your entire virtualized infrastructure into a cloud provider, you
can do it in a day. All the hypervisors offer you tools to export your virtual servers.
 Possibility to Divide Services
If you have a single server holding different applications, this increases the possibility
of the services crashing into each other and raises the failure rate of the server. If you
virtualize this server, you can put the applications in environments separated from
each other, as we have discussed previously.

Disadvantages of Virtualization
Although you cannot find many disadvantages for virtualization, we will discuss a few
prominent one as follows:

 Extra Costs
You may have to invest in the virtualization software, and possibly additional
hardware might be required to make the virtualization possible. This depends on your
existing network. Many businesses have sufficient capacity to accommodate the
virtualization without requiring much cash. If you have an infrastructure that is more
than five years old, you have to consider an initial renewal budget.
 Software Licensing

This is becoming less of a problem as more software vendors adapt to the increased
adoption of virtualization. However, it is important to check with your vendors to
understand how they view software use in a virtualized environment.
 Learn the new Infrastructure
Implementing and managing a virtualized environment will require IT staff with
expertise in virtualization. On the user side, a typical virtual environment will operate
similarly to the non-virtual environment. There are some applications that do not
adapt well to the virtualized environment.

2.4 Relationship between Virtualization & Cloud Computing


The technology behind virtualization is known as a virtual machine monitor (VMM) or virtual
manager, which separates compute environments from the actual physical infrastructure.
Virtualization makes servers, workstations, storage, and other systems independent of the
physical hardware layer. This is done by installing a hypervisor on top of the hardware layer,
where the systems are then installed.

Essentially, virtualization differs from cloud computing because virtualization is software that
manipulates hardware, while cloud computing refers to a service that results from that
manipulation.

"Virtualization is a foundational element of cloud computing and helps deliver on the value of
cloud computing," Adams said. "Cloud computing is the delivery of shared computing
resources, software or data — as a service and on-demand through the Internet”.

Most of the confusion occurs because virtualization and cloud computing work together to
provide different types of services, as is the case with private clouds.

The cloud can, and most often does, include virtualization products to deliver the compute
service, said Rick Philips, vice president of compute solutions at IT firm Weidenhammer. "The
difference is that a true cloud provides self-service capability, elasticity, automated
management, scalability, and pay-as-you-go service that is not inherent in virtualization."

A practical comparison
Virtualization can make 1 resource act like many, while cloud computing lets different
departments (through private cloud) or companies (through a public cloud) access a single
pool of automatically provisioned resources.

Virtualization
Virtualization is technology that allows you to create multiple simulated environments or
dedicated resources from a single, physical hardware system. Software called a hypervisor
connects directly to that hardware and allows you to split 1 system into separate, distinct, and
secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s
ability to separate the machine’s resources from the hardware and distribute them
appropriately.

Cloud Computing

Cloud computing is a set of principles and approaches to deliver compute, network, and
storage infrastructure resources, services, platforms, and applications to users on-demand
across any network. These infrastructure resources, services, and applications are sourced
from clouds, which are pools of virtual resources orchestrated by management and
automation software so they can be accessed by users on-demand through self-service portals
supported by automatic scaling and dynamic resource allocation.

2.3, 2.4 Check your progress:


1 ________ is a foundational element of cloud computing and helps deliver on the value of cloud
computing.
2 ________ is technology that allows you to create multiple simulated environments or dedicated
resources from a single, physical hardware system.
3 ________ computing is a set of principles and approaches to deliver compute, network, and
storage infrastructure resources, services, platforms, and applications to users on-demand across
any network.

2.5 Summary
Virtualization is a technology that lets us install different operating systems on the same
hardware, completely separated and independent of each other. Server virtualization means
virtualizing your server infrastructure so you do not have to use more physical servers for
different purposes. Storage virtualization is done through a network connection; the leader in
storage is the SAN. Virtualization decreases costs by reducing the need for physical hardware
storage is SAN. Virtualization decreases costs by reducing the need for physical hardware
systems. Disaster recovery is very easy when your servers are virtualized. Virtualization is a
foundational element of cloud computing and helps deliver on the value of cloud computing.
Virtualization is technology that allows you to create multiple simulated environments or
dedicated resources from a single, physical hardware system. Cloud computing is a set of
principles and approaches to deliver compute, network, and storage infrastructure resources,
services, platforms, and applications to users on-demand across any network.

2.6 Check your progress answers:


2.1, 2.2 Check your progress:

1. Network
2. Storage

2.3, 2.4 Check your progress:

1. Virtualization
2. Virtualization
3. Cloud

2.7 Questions for self-study:


1. Explain the different types of virtualization.
2. State the advantages of virtualization.
3. Explain the relationship between Virtualization & Cloud Computing.

Chapter III

Virtualization for Enterprise

3.0 Objective
3.1 Introduction
3.2 Understanding Different Types of Hypervisors
3.3 Installing Hyper-V in a Windows 10 Workstation
3.4 Creating a VM with VMware Workstation
3.5 iSCSI Intro & Setup
3.6 Network-attached Storage
3.7 Storage Area Network
3.8 VLAN
3.9 Summary
3.10 Check Your Progress Answers

3.0 Objective
After studying this chapter you will be able to understand,
 Different types of Hypervisor and their use.
 Installation and working of Bare metal hypervisor
 Creating virtual machine using VMware and Virtual box
 Network attached storage and Storage area network
 VLAN

3.1 Introduction:
A hypervisor is a function which abstracts and isolates operating systems and applications
from the underlying computer hardware. This process of abstraction allows the underlying
host machine hardware to independently operate one or more virtual machines as guests. This
feature allows multiple guest VMs to effectively share the system's physical compute resources,
such as processor cycles, memory space, and network bandwidth. A hypervisor is sometimes also
called a virtual machine monitor (VMM).

3.2 Understanding Different Types of Hypervisors

A hypervisor is a thin software layer that intercepts operating system calls to the hardware. It
is also called the Virtual Machine Monitor (VMM). It creates a virtual platform on the host
computer, on top of which multiple guest operating systems are executed and monitored.

Hypervisors are of two types:


 Native or Bare Metal Hypervisor
 Hosted Hypervisor

Native or Bare Metal Hypervisor


Native hypervisors are software systems that run directly on the host's hardware to control
the hardware and to monitor the Guest Operating Systems. The guest operating system runs
on a separate level above the hypervisor. All of them have a Virtual Machine Manager.

Examples of this virtual machine architecture are Oracle VM, Microsoft Hyper-V, VMWare ESX
and Xen.

Hosted Hypervisor
Hosted hypervisors are designed to run within a traditional operating system. In other words,
a hosted hypervisor adds a distinct software layer on top of the host operating system, while
the guest operating system becomes a third software level above the hardware.

A well-known example of a hosted hypervisor is Oracle VM VirtualBox. Others include
VMWare Server and Workstation, Microsoft Virtual PC, KVM, QEMU and Parallels.

3.3 Installing Hyper-V in a Windows 10 Workstation

To install Hyper-V, you first have to check if your computer supports virtualization. Following
are the basic requirements:
• Windows 10 Pro or Enterprise 64-bit Operating System.
• A 64-bit processor with Second Level Address Translation (SLAT).
• 4GB system RAM at minimum.
• BIOS-level Hardware Virtualization support.

In our case, we have an HP ProBook 450 G3 laptop, which supports it. Before continuing with
the installation, follow the steps given below.

Step 1: Ensure that hardware virtualization support is turned on in the BIOS settings as shown
below:

Step 2: Type in the search bar “turn windows features on or off” and click on that feature as
shown below.

Step 3: Select and enable Hyper-V.
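The same feature can alternatively be enabled from an elevated command prompt. The command below is a sketch using the built-in DISM tool, assuming an administrator shell on a supported Windows edition:

```
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
```

A reboot is typically required before Hyper-V becomes available.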

Creating a Virtual Machine with Hyper-V
In this section, we will learn how to create a virtual machine. To begin with, we have to open
the Hyper-V manager and then follow the steps given below.
Step 1: Go to “Server Manager” -- Click on “Hyper-V Manager”.

Step 2: Click “New” on the left Panel or on the “Actions” button.

Step 3: Double-click on “Virtual Machine…”

Step 4: A new dialog will open -- type the name of your new machine -- click “Next”.

Specify the generation of the VM (Generation 1 supports 32-bit guests; Generation 2 requires a 64-bit guest OS).

Step 5: A new dialog will open where you have to allocate the memory. Keep in mind that you
cannot choose more memory than you have physically.

Step 6: In the “Connection” drop down box, choose your physical network adapter -- click on
“Next”.

Step 7: Now it is time to create a virtual hard disk; if you already have one, choose the second
option.

Step 8: Select the ISO image that has to be installed -- click on “Finish”.

Step 9: After clicking on finish, you would get the following message as shown in the
screenshot below.

Step 10: To connect to the virtual machine, right-click on the created machine -- click on
“Connect…”

Step 11: After that, installation of your ISO will continue.

Setting up Networking with Hyper-V

The Hyper-V vSwitch is a software, layer-2 Ethernet network-traffic switch. It allows
administrators to connect VMs to either physical or virtual networks. It is available by
default within the Hyper-V Manager installation and contains extended capabilities for
security and resource tracking.

If you attempt to create a VM right after the set-up process, you will not be able to connect
it to a network.

To set up a network environment, you will need to select the Virtual Switch Manager in the
right hand side panel of Hyper-V Manager as shown in the screenshot below.

The Virtual Switch Manager helps configure the vSwitch and the Global Network Settings,
which simply lets you change the default ‘MAC Address Range’, if you see any reason for
that.

Creation of the virtual switch is easy and there are three vSwitch types available, which are
described below:

• External vSwitch will link a physical NIC of the Hyper-V host with a virtual one and then
give your VMs access outside of the host. This means access to your physical network and
to the internet (if your physical network is connected to the internet).
• Internal vSwitch should be used for building an independent virtual network, when you
need to connect VMs to each other and to a hypervisor as well.
• Private vSwitch will create a virtual network where all connected VMs will see each
other, but not the Hyper-V host. This will completely isolate the VMs in that sandbox.

Here, we have selected “External” and then “Create Virtual Switch”. A dialog with the
settings of the vSwitch will open, where we fill in the fields as shown below:

• Name: the name we give to identify the vSwitch.
• Notes: a description for our own use; generally, a friendly description that is easy to
understand.
• Connection Type: External, as explained earlier; it selects a physical network card on
the server.

Once all this is entered, click on “OK”.

Allocating Processors & Memory to a VM using Hyper-V

In this section, we will see the task of allocating CPU, Memory and Disk Resources to the
virtual machines that are running on a server. The key to allocating CPU or any other type of
resource in Hyper-V is to remember that everything is relative.

For example, Microsoft has released some guidelines for Virtualizing Exchange Server. One
of the things that was listed was that the overall system requirements for Exchange Server
are identical whether Exchange is being run on a virtual machine or on a dedicated server.

To allocate one of the features mentioned above, we need to click on the “Settings…” tab in
the right hand side panel.

To allocate more memory to the selected virtual machine, click on the “Memory” tab on the
left hand side of the screen. You will also have “Startup RAM”, where you can allocate as
much RAM as you have physically to a VM -- click on “Ok”.

To allocate more processors, click on the “Processor” tab on the left hand side of the panel.
Then you can enter the number of virtual processors for your machine.

If you need to expand or compress the capacity of the virtual hard disk, click on “IDE
Controller 0” on the left hand side panel -- click on “Edit”.

Once all the above changes are done, click on “Next”.

Select one of the options based on your need (all of them have their respective
descriptions) and then click on “Next”.

Click on “Finish” and wait for the process to finish.

Check your progress 3.1, 3.2 & 3.3


1. A ________ is a function which abstracts and isolates operating systems and applications from
the underlying computer hardware.
2. ________hypervisors are designed to run within a traditional operating system.
3. ________hypervisors are software systems that run directly on the host's hardware to control
the hardware and to monitor the Guest Operating Systems.

3.4 Creating a VM with VMware Workstation


To create a virtual machine, we have to follow the steps given below.
Step 1: Click on “Player” -- File -- New Virtual Machine.

Step 2: A dialog will pop up requesting you to find a “Boot disk” or “Boot image”, or to install
the OS at a later stage.
We will choose the second option and click on “Browse”. Then we have to select the ISO
image which we want to install. Once all this is done, click on “Next”.

Step 3: As we are installing Windows Server 2012, a dialog will pop up requesting the serial
key; click directly on “Next” if you want to activate the non-commercial version of Windows.

Step 4: After the above step is complete, a dialogue box opens. Click “Yes”.

Step 5: Click “Next”.

Step 6: In the “Maximum disk size” box, enter the value for your virtual hard disk, which in
our case is 60 GB. Then click on “Next”.

Step 7: Click on “Finish”.

Setting up Networking with VMware Workstation
To set up the networking modes of a virtual machine in a VMware Workstation, we have to
click on the “Edit virtual machine settings”.

A dialog will open with the networking settings; on the left hand side panel of this dialog,
click on “Network Adapter”.

There you can see the networking modes as shown in the following screenshots.

VMware ESXi: The Purpose-Built Bare Metal Hypervisor

VMware ESXi is a purpose-built bare-metal hypervisor that installs directly onto a physical
server. With direct access to and control of underlying resources, ESXi is more efficient than
hosted architectures and can effectively partition hardware to increase consolidation ratios
and cut costs for our customers.

VMware ESXi (formerly ESX) is an enterprise-class, type-1 hypervisor developed by VMware
for deploying and serving virtual computers. As a type-1 hypervisor, ESXi is not a software
application that is installed on an operating system (OS); instead, it includes and integrates
vital OS components, such as a kernel.

ESX runs on bare metal (without running an operating system) unlike other VMware
products. It includes its own kernel: A Linux kernel is started first, and is then used to load a
variety of specialized virtualization components, including ESX, which is otherwise known as
the vmkernel component. The Linux kernel is the primary virtual machine; it is invoked by
the service console. At normal run-time, the vmkernel is running on the bare computer, and
the Linux-based service console runs as the first virtual machine. VMware dropped
development of ESX at version 4.1, and now uses ESXi, which does not include a Linux
kernel.

What ESXi Delivers

Today’s IT teams are under unprecedented pressure to meet fluctuating market trends and
heightened customer demands. At the same time, these teams must stretch IT resources to
accommodate increasingly complex projects. Fortunately, ESXi can help balance the need
for better business outcomes and IT savings. Here’s how:

• Consolidates hardware for higher capacity utilization.


• Increases performance for a competitive edge.

• Streamlines IT administration through centralized management.
• Reduces costs through CapEx and OpEx savings.
• Minimizes hardware resources needed to run hypervisor for cost savings and more
efficient utilization.

3.5 iSCSI

iSCSI is a protocol for transporting SCSI commands and data over TCP/IP Ethernet
connections. The iSCSI protocol is an open standard, which was developed under the
auspices of the Internet Engineering Task Force (IETF), the body responsible for most
internet and networking standards. The standard was ratified in 2003 and is documented in
a number of standards documents, the prime one being RFC 3720. In fact, iSCSI embraces a
family of protocols; it uses Ethernet and TCP/IP as its underlying transport mechanism, it
makes use of standard authentication protocols such as CHAP, and uses other protocols
such as iSNS for discovery.

Fortunately, virtually all these protocols and their details are hidden from the end user, and
setting up iSCSI storage is very easy. However, some knowledge of a few iSCSI fundamentals
and terms is useful.

Initiators and targets

There are two types of iSCSI network entity: initiator and target.

An initiator initiates a connection to a target. An initiator normally runs on a host computer,
and may be either a software driver or a hardware plug-in card, often called a host bus
adapter (HBA). A software initiator uses one of the computer’s Ethernet ports for its
physical connection, whereas an HBA has its own dedicated port. vSphere supports both
hardware and software initiators, and includes a built-in software initiator.

Hardware initiators are less widely used, although may be useful in very high performance
applications, or if 10-gigabit Ethernet support is required.

iSCSI targets are embedded in iSCSI storage controllers. They are the software that makes
the storage available to host computers, making it appear just like any other sort of disk
drive.

Initiators and targets have an IP address, just like any other network entity. They are also
identified using an iSCSI name, called the iSCSI Qualified Name (IQN). The IQN must be
unique world-wide, and is made up of a number of components, identifying the vendor, and
then uniquely identifying the initiator or target. An example of an IQN is:

iqn.2001-04.com.example:storage:diskarray-sn-123456789

Since these names are rather unwieldy, initiators and targets also use short, user-friendly
names, sometimes called aliases.
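The IQN's structure (the `iqn` prefix, the year-month in which the naming authority registered its domain, the reversed domain name, and an optional vendor-specific string after the first colon) can be pulled apart with a few lines of Python. This parser only illustrates the format described above; it is not part of any iSCSI toolkit:

```python
def parse_iqn(iqn: str) -> dict:
    """Split an iSCSI Qualified Name into its components.

    Format: iqn.<yyyy-mm>.<reversed-domain>[:<vendor-specific-string>]
    """
    head, _, vendor_part = iqn.partition(":")
    fields = head.split(".")
    if fields[0] != "iqn":
        raise ValueError("not an iqn-format name")
    return {
        "type": fields[0],
        "date": fields[1],                         # year-month of domain registration
        "naming_authority": ".".join(fields[2:]),  # reversed domain name
        "vendor_specific": vendor_part,            # everything after the first ':'
    }

parts = parse_iqn("iqn.2001-04.com.example:storage:diskarray-sn-123456789")
print(parts["date"])              # 2001-04
print(parts["naming_authority"])  # com.example
print(parts["vendor_specific"])   # storage:diskarray-sn-123456789
```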

Sessions and connections

When an initiator wants to establish a connection with a target, it establishes an iSCSI
session. A session consists of a TCP/IP connection between an initiator and a target.

Sessions are normally established (or re-established) automatically when the host computer
starts up, although they also can be manually established and broken.

CHAP authentication

If there are security concerns, it is possible to set up target and initiator authentication,
using the CHAP authentication protocol. With CHAP authentication, an initiator can only
connect to a target if it knows the target’s password, or secret. To set up CHAP, the same
secret needs to be given to both the initiator and target.

It is also possible to use mutual CHAP, where there is a second authentication phase. After
the target has authenticated the initiator, the initiator then authenticates the target, using
an initiator secret.

Put simply, with CHAP, the target can ensure that the initiator attempting to connect is who
it claims to be; with mutual CHAP, the initiator can also ensure the target is who it claims to
be.
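The challenge-response exchange works roughly as sketched below, following the classic CHAP scheme (RFC 1994), in which the response is an MD5 digest over the message identifier, the shared secret and the challenge. This is a simplified illustration of the mechanism, not iSCSI-ready code:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute a CHAP response: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target side: issue a random challenge for the initiator to answer.
secret = b"shared-chap-secret"
challenge = os.urandom(16)
identifier = 1

# Initiator side: it knows the same secret, so it can compute the response.
response = chap_response(identifier, secret, challenge)

# Target side: recompute and compare; only a peer holding the secret matches.
assert response == chap_response(identifier, secret, challenge)
print("CHAP authentication succeeded")
```

Mutual CHAP simply runs this exchange a second time in the opposite direction, using a separate initiator secret.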

iSCSI discovery

iSCSI discovery is the process by which an iSCSI initiator ‘discovers’ an iSCSI target. Discovery
is done using a special type of session, called a discovery session, where an initiator
connects to a storage controller and asks for a list of the targets present on the controller.
The target responds with a list of all the targets to which the initiator has access.

The iSNS protocol can also be used for discovery, but it is not widely used and vSphere does
not support it. SvSAN includes limited support for iSNS, in that it can be used for
non-mirrored targets but not mirrored ones.

3.6 Network-attached Storage

NAS is "Any server that shares its own storage with others on the network and acts as a file
server in the simplest form". Network Attached Storage shares files over the network.
Some of the most significant protocols used are SMB, NFS, CIFS, and TCP/IP. When you
access files on a file server from your Windows system, that is NAS.
NAS uses an Ethernet connection for sharing files over the network. The NAS device has an
IP address and is then accessible over the network through that IP address. Big providers of
NAS include QNAP and Lenovo.

The following illustration shows how NAS works.

Check your progress 3.4, 3.5 & 3.6


1. VMware ESXi is a purpose-built ______ hypervisor that installs directly onto a physical server.
2. ______ is a protocol for transporting SCSI commands and data over TCP/IP Ethernet
connections.
3. ______ is any server that shares its own storage with others on the network and acts as a file
server in the simplest form.

3.7 Storage-area Network


SANs allow multiple servers to share a pool of storage, making it appear to the server as if it
were local or directly attached storage. A dedicated networking standard, Fibre Channel, has
been developed to allow blocks to be moved between servers and storage at high speed. It
uses dedicated switches and a fibre-based cabling system, which separates it from the
day-to-day traffic traversing the busy enterprise network, while the well-established SCSI
protocol enables communication between the servers’ host bus adapters and the disk
system.

The following illustration shows how a SAN switch operates.

What Is a Storage Area Network?

A Storage Area Network (SAN) is a specialized, high-speed network that provides block-level
network access to storage. SANs are typically composed of hosts, switches, storage
elements, and storage devices that are interconnected using a variety of technologies,
topologies, and protocols. SANs may also span multiple sites.

SANs are often used to:

• Improve application availability (e.g., multiple data paths)


• Enhance application performance (e.g., off-load storage functions, segregate networks,
etc.)
• Increase storage utilization and effectiveness (e.g., consolidate storage resources,
provide tiered storage, etc.), and improve data protection and security.

SANs also typically play an important role in an organization's Business Continuity
Management (BCM) activities.

A SAN presents storage devices to a host such that the storage appears to be locally
attached. This simplified presentation of storage to a host is accomplished through the use
of different types of virtualization.


SANs are commonly based on Fibre Channel (FC) technology that utilizes the Fibre Channel
Protocol (FCP) for open systems and proprietary variants for mainframes. In addition, the
use of Fibre Channel over Ethernet (FCoE) makes it possible to move FC traffic across
existing high-speed Ethernet infrastructures and converge storage and IP protocols onto a
single cable. Other technologies, like Internet Small Computer System Interface (iSCSI),
commonly used in small and medium-sized organizations as a less expensive alternative to
FC, and InfiniBand, commonly used in high-performance computing environments, can also
be used. In addition, it is possible to use gateways to move data between different SAN
technologies.

3.8 VLAN

A virtual local area network (VLAN) is a logical group of workstations, servers and network
devices that appear to be on the same LAN despite their geographical distribution. A VLAN
allows a network of computers and users to communicate in a simulated environment as if
they exist in a single LAN and are sharing a single broadcast and multicast domain. VLANs
are implemented to achieve scalability, security and ease of network management and can
quickly adapt to changes in network requirements and relocation of workstations and server
nodes.

Higher-end switches allow the functionality and implementation of VLANs. The purpose of
implementing a VLAN is to improve the performance of a network or apply appropriate
security features.

Computer networks can be segmented into local area networks (LANs) and wide area
networks (WANs). Network devices such as switches, hubs, bridges, workstations and
servers connected to each other in the same network at a specific location are generally
known as LANs. A LAN is also considered a broadcast domain.

A VLAN allows several networks to work virtually as one LAN. One of the most beneficial
elements of a VLAN is that it removes latency in the network, which saves network
resources and increases network efficiency. In addition, VLANs are created to provide
segmentation and assist in issues like security, network management and scalability. Traffic
patterns can also easily be controlled by using VLANs.

The key benefits of implementing VLANs include:

• Allowing network administrators to apply additional security to network communication


• Making expansion and relocation of a network or a network device easier
• Providing flexibility because administrators are able to configure in a centralized
environment while the devices might be located in different geographical locations
• Decreasing the latency and traffic load on the network and the network devices, offering
increased performance

VLANs also have some disadvantages and limitations as listed below:

• High risk of virus issues because one infected system may spread a virus through the
whole logical network
• Equipment limitations in very large networks because additional routers might be
needed to control the workload
• More effective at controlling latency than a WAN, but less efficient than a LAN.
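Under the hood, a VLAN membership is carried as a 12-bit VLAN ID inside the 802.1Q tag of each Ethernet frame, so the tag's 16-bit Tag Control Information (TCI) field can be built with simple bit arithmetic. The sketch below packs priority (PCP, 3 bits), Drop Eligible Indicator (DEI, 1 bit) and VLAN ID (12 bits) into a TCI value; it is a standalone illustration of the frame format, not tied to any particular switch:

```python
def vlan_tci(vid: int, pcp: int = 0, dei: int = 0) -> int:
    """Pack an 802.1Q Tag Control Information field.

    Layout (16 bits): PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
    Valid VLAN IDs are 1-4094 (0 and 4095 are reserved).
    """
    if not 1 <= vid <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    return (pcp & 0x7) << 13 | (dei & 0x1) << 12 | (vid & 0xFFF)

print(hex(vlan_tci(100)))          # 0x64 -> VLAN 100, default priority
print(hex(vlan_tci(100, pcp=5)))   # 0xa064 -> VLAN 100, priority 5
```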

Check your progress 3.7 & 3.8


1. ______ allow multiple servers to share a pool of storage; making it appear to the server as if it
were local or directly attached storage.
2. ______ is a logical group of workstations, servers and network devices that appear to be on the
same LAN despite their geographical distribution.

3. ______ is a specialized, high-speed network that provides block-level network access to storage.

3.9 Summary
A hypervisor is a function which abstracts and isolates operating systems and applications
from the underlying computer hardware. It creates a virtual platform on the host computer,
on top of which multiple guest operating systems are executed and monitored. Hypervisors
are two types:
• Native or Bare Metal Hypervisor
• Hosted Hypervisor
Native hypervisors are software systems that run directly on the host's hardware to control
the hardware and to monitor the Guest Operating Systems. Hosted hypervisors are
designed to run within a traditional operating system.
iSCSI is a protocol for transporting SCSI commands and data over TCP/IP Ethernet
connections. The iSCSI protocol is an open standard, which was developed under the
auspices of the Internet Engineering Task Force (IETF), the body responsible for most
internet and networking standards.
NAS is "Any server that shares its own storage with others on the network and acts as a file
server in the simplest form". Network Attached Storage shares files over the network. A
Storage Area Network (SAN) is a specialized, high-speed network that provides block-level
network access to storage. A SAN presents storage devices to a host such that the storage
appears to be locally attached.
A virtual local area network (VLAN) is a logical group of workstations, servers and network
devices that appear to be on the same LAN despite their geographical distribution. VLANs
are implemented to achieve scalability, security and ease of network management and can
quickly adapt to changes in network requirements and relocation of workstations and server
nodes.
3.10 Check Your Progress Answers

Check your progress 3.1, 3.2 & 3.3

1. hypervisor
2. Hosted
3. Native

Check your progress 3.4, 3.5 & 3.6

1. bare-metal
2. iSCSI
3. NAS

Check your progress 3.7 & 3.8

1. SANs
2. VLAN
3. SAN

3.11 Questions for self-study:
1. What is hypervisor?
2. Explain the different types of hypervisors?
3. Write a short note on VMware ESXi.
4. What Is a Storage Area Network?
5. What are the key benefits of VLAN?

Chapter IV

Cloud Computing Fundamentals

4.0 Objective
4.1 Introduction
4.2 Deployment models of cloud computing:
4.3 The three delivery models are:
4.4 Risks of Cloud Computing
4.5 Disaster Recovery in Cloud Computing
4.6 Summary
4.7 Check Your Progress Answers

4.0 Objective
After studying this chapter you will be able to understand,
 Cloud computing and its characteristics
 Deployment models of cloud computing
 Cloud computing delivery models
 Risk factors involved in cloud computing
 Disaster recovery and planning

4.1 Introduction:
“Cloud computing is a model for enabling convenient, on-demand network access to a
shared pool of configurable resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.”

Cloud computing intends to realize the concept of computing as a utility, just like water, gas,
electricity and telephony. It also embodies the desire of computing resources as true
services. Software and computing platform and computing infrastructure may all be
regarded as services with no concern as to how or from where they are actually provided.
The potential of cloud computing has been recognized by major industry players such that
the top five software companies by sales revenue all have major cloud offerings.

Characteristics of Cloud Computing


There are four key characteristics of cloud computing. They are described below:
 On Demand Self-Service
Cloud Computing allows the users to use web services and resources on demand.
One can logon to a website at any time and use them.
 Broad Network Access

Since Cloud Computing is completely web based, it can be accessed from anywhere
and at any time.
 Resource Pooling
Cloud Computing allows multiple tenants to share a pool of resources. One can share
single physical instance of hardware, database and basic infrastructure.
 Rapid Elasticity
It is very easy to scale the resources up or down at any time. Resources used by the
customers or currently assigned to customers are automatically monitored and
reported.

4.2 Deployment models of cloud computing:


 Private cloud: The services of a private cloud are used only by a single organization
and are not exposed to the public. A private cloud is hosted inside the organization
and is behind a firewall, so the organization has full control of who has access to the
cloud infrastructure. The virtual machines are then still assigned to a limited number
of users.
 Public cloud: The services of a public cloud are exposed to the public and can be
used by anyone. Usually the cloud provider offers a virtualized server with an
assigned IP address to the customer. An example of a public cloud is Amazon Web
Services (AWS).
 Community cloud: The services of a community cloud are used by several
organizations to lower the costs, as compared to a private cloud.
 Hybrid cloud: The services of a hybrid cloud can be distributed in multiple cloud
types. An example of such a deployment is when sensitive information is kept in
private cloud services by an internal application. That application is then connected
to the application on a public cloud to extend the application functionality.
 Distributed cloud: The services of a distributed cloud are distributed among several
machines at different locations but connected to the same network.

4.3 The three delivery models are:


 Software as a Service (SaaS): use of provider’s applications over a network. SaaS
provides access to the service, but you don’t have to manage it because it’s done by
the service provider. When using SaaS, we’re basically renting the right to use an
application over the Internet.
 Cloud Platform as a Service (PaaS): deployment of customer-created applications to
a cloud. PaaS provides a platform such as operating system, database, web server,
etc. We’re renting a platform or an operating system from the cloud provider.
 Cloud Infrastructure as a Service (IaaS): rental of processing, storage, network
capacity, and other fundamental computing resources. IaaS (infrastructure as a
service) provides the entire infrastructure, including physical/virtual machines,
firewalls, load balancers, hypervisors, etc. When using IaaS, we’re basically
outsourcing a complete traditional IT environment where we’re renting a complete
computer infrastructure that can be used as a service over the Internet.

Service Model for Cloud Computing
 Desktop as a service: We’re connecting to a desktop operating system over the
Internet, which enables us to use it from anywhere. The desktop is also not affected if
our own physical laptop gets stolen, because we can still access it.
 Storage as a service: We’re using storage that physically exists on the Internet as if it
were present locally. This is very often used in cloud computing and is the primary basis of
a NAS (network attached storage) system.
 Database as a service: Here we’re using a database service installed in the cloud as if
it was installed locally. One great benefit of using database as a service is that we can
use highly configurable and scalable databases with ease.
 Information as a service: We can access any data in the cloud by using the defined
API as if it was present locally.
 Security as a service: This enables the use of security services as if they were
implemented locally.

There are other services that exist in the cloud, but we’ve presented just the most
widespread ones that are used on a daily basis.

If we want to start using the cloud, we need to determine which service model we want to
use. The decision largely depends on what we want to deploy to the cloud. If we would like
to deploy a simple web application, we might want to choose a PaaS solution, where the
underlying infrastructure is managed by the service provider and we only have to worry
about writing the application code. An example of this is writing an application that can run
on Heroku.

We can think of the service models in the term of layers, where the IaaS is the bottom layer,
which gives us the most access to customize most of the needed infrastructure. The PaaS is
the middle layer, which automates certain things, but is less configurable. The top layer is
SaaS, which offers the least configuration, but automates a large part of the infrastructure
that we need when deploying an application.
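The layered view above can be written down as a small table in code. The split of responsibilities below is a common illustration of the three models; the layer names are our own, not from any specific provider:

```python
# Who manages which layer under each delivery model ("provider" vs "customer").
# The stack runs bottom-up: IaaS exposes the most to the customer, SaaS the least.
STACK = ["hardware", "virtualization", "operating system", "runtime", "application"]

CUSTOMER_MANAGED = {
    "IaaS": ["operating system", "runtime", "application"],
    "PaaS": ["application"],
    "SaaS": [],
}

def managed_by(model: str, layer: str) -> str:
    """Return who is responsible for a layer under a given delivery model."""
    return "customer" if layer in CUSTOMER_MANAGED[model] else "provider"

print(managed_by("IaaS", "operating system"))  # customer
print(managed_by("PaaS", "operating system"))  # provider
print(managed_by("SaaS", "application"))       # provider
```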

Cloud Computing Examples
Two popular cloud computing facilities are Amazon Elastic Compute Cloud (EC2) and Google
App Engine. Amazon EC2 is part of a set of standalone services which include S3 for storage,
EC2 for hosting and the SimpleDB database. Google App Engine is an end-to-end service,
which combines everything into one package to provide a PaaS facility. With Amazon EC2,
users may rent virtual machine instances to run their own software and users can monitor
and increase/decrease the number of VMs as demand changes.
To use Amazon EC2 users would:
 Create an Amazon Machine Image (AMI): incorporate applications, libraries, data
and associated settings
 Upload AMI to Amazon S3
 Use Amazon EC2 web service to configure security and network access
 Choose OS, start AMI instances
 Monitor & control via web interface or APIs.

Google’s App Engine allows developers to run their web applications on Google’s
infrastructure.
To do so a user would:
 Download App Engine SDK
 Develop the application locally as a set of python programs
 Register for an application ID
 Submit the application to Google.

Having provided an overview of cloud computing we may now consider computer forensics
before we then proceed to consider the two together.

Benefits of Cloud Computing


Cloud Computing has numerous advantages. Some of them are listed below:
 One can access applications as utilities, over the Internet.
 Manipulate and configure the application online at any time.
 It does not require installing a specific piece of software to access or manipulating
cloud application.
 Cloud Computing offers online development and deployment tools, programming
runtime environment through Platform as a Service model.
 Cloud resources are available over the network in a manner that provides platform
independent access to any type of clients.
 Cloud Computing offers on-demand self-service. The resources can be used without
interaction with cloud service provider.
 Cloud Computing is highly cost effective because it operates at higher efficiencies
with greater utilization. It just requires an Internet connection.
 Cloud Computing offers load balancing that makes it more reliable.

Check your progress 4.1, 4.2 & 4.3

1 A _______ cloud is hosted inside the organization and is behind a firewall, so the
organization has full control of who has access to the cloud infrastructure.
2 The services of a _______ cloud are used by several organizations to lower the costs, as
compared to a private cloud.
3 _______ provides a platform such as operating system, database, web server, etc.

4.4 Risks of Cloud Computing


Although Cloud Computing is a great innovation in the world of computing, there also exist
downsides of cloud computing. Some of them are discussed below:

 Security & Privacy


It is the biggest concern about cloud computing. Since data management and
infrastructure management in the cloud is provided by a third party, it is always a risk
to hand over sensitive information to such providers. Although the cloud computing
vendors ensure more secure password protected accounts, any sign of security
breach would result in loss of clients and businesses.
 Lock-In
It is very difficult for the customers to switch from one Cloud Service Provider (CSP)
to another. It results in dependency on a particular CSP for service.
 Isolation Failure
This risk involves the failure of isolation mechanism that separates storage, memory,
routing between the different tenants.
 Management Interface Compromise
In case of public cloud provider, the customer management interfaces are accessible
through the Internet.
 Insecure or Incomplete Data Deletion
It is possible that data requested for deletion may not actually get deleted. This happens
either because extra copies of the data are stored but are not available, or because the
disk to be destroyed also stores data from other tenants.

4.5 Disaster Recovery in Cloud Computing


For any business, securing its data is of prime importance. With the recent rise in
cyber-attacks, decision makers are looking for ways to strengthen data security and
eliminate such threats. Cloud computing is the most recent and popular technology that
enables a business to host its data on the internet and servers instead of local storage
elements. This, in turn, saves the cost of setting up an independent infrastructure and
allows an organization to access the data anytime, anywhere.

While cloud technology is becoming more prevalent among both public and private
sectors, the concern for businesses is migrating data to the cloud. Modern data backup
and recovery solutions have shifted to cloud-based data backup and disaster recovery
solutions which are less hardware-dependent. They provide businesses an agile recovery
approach by saving crucial data at the time of disruption.

But the most important thing while evaluating the cloud data backup and recovery platform
is to think of your IT goals. The all-around budget is also a factor to consider. As cloud plays
an important role in moving capital expenditures to operational expenditures, IT managers
need to ensure integrating cloud disaster recovery solution that fits within the budget and
helps accomplish performance goals. In order to make this happen, you need to consider
the following factors for moving to an advanced cloud-based disaster recovery solution.

Five Things to Consider For Cloud Backup and Disaster Recovery


 Analyze cloud cost: In an ideal world, where all the data may be stored in the cloud,
the budget decisions often compel a judgment call on the most crucial data. While
seeking integrated cloud recovery, decide which technology fits the budget. It should
also include the backup and recovery of files, server images for physical and virtual
servers that do not limit the number of servers and endpoints, databases, auditing
and 24/7 engineer-level support. When it comes to operational expenditure, you
should inevitably consider scalability as an important factor.
 Determine the backup speed: As the size of datasets increases continuously, the
most effective solution is one that manages the capacity and provides the required
data backup transfer. Speed is highly important to meet the backup window and
quickly recover the data. Additionally, a high-speed data transfer rate provides
organizations a better shot that ensures the applications and systems are backed up
within a specified window having minimal disruption.
 Transition from hardware-focused approach: According to the analyst firm IDC, the
average cost of downtime is approximately $100,000 per hour. What’s more surprising
is the fact that most businesses experience 10 to 20 hours of unplanned downtime
every year, even without the occurrence of any natural disaster. Legacy backup and
recovery systems have depended on tape backup and hardware, which is neither
cost-efficient nor able to effectively withstand the data onslaught prevalent in
organizations today. These legacy systems are also ill-equipped to deliver quick
recovery from natural disasters. The hardware approach is expensive, as the
organization needs to wait long to get an appliance replaced, while a
direct-to-cloud approach accelerates data recovery by eliminating the wait for a
hardware appliance.
 Set-up recovery time objective: It is important to know how long your business can
go without accessing the data. If you set a recovery time objective, it will provide the
parameters the IT managers require to work with for providing backup and restore.
This can take a whole day or just an hour.
 Provide an efficient user experience: An IT expert should be able to manage the
appliance-free cloud recovery solution from any business location. The managers
should have the ability to log in through the web and start the restore. In fact, the
highly advanced solutions allow you to download the files without having to recover
the entire server image first. Even so, ease of use is not the only standard for
evaluating cloud recovery options.
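Two of the factors above, backup speed and the recovery time objective, lend themselves to a quick feasibility check: divide the dataset size by the sustained transfer rate and compare the result against the backup window. The sketch below illustrates this; the dataset size, transfer rate and window lengths are hypothetical numbers, not vendor figures.

```python
# Check whether a dataset can be backed up within the backup window.
def hours_to_transfer(dataset_gb, rate_mbps):
    """Transfer time in hours at a sustained rate in megabits per second."""
    seconds = (dataset_gb * 8 * 1000) / rate_mbps  # GB -> megabits
    return seconds / 3600

# Hypothetical example: 2 TB of data over a sustained 500 Mbps link.
backup_hours = hours_to_transfer(dataset_gb=2000, rate_mbps=500)
print(f"Backup takes {backup_hours:.1f} h")  # Backup takes 8.9 h

# An 8-hour nightly window would be missed; a 12-hour window would not.
print(backup_hours <= 8)   # False
print(backup_hours <= 12)  # True
```

The same arithmetic, run against the restore path, tells you whether a given recovery time objective is realistic for your link speed.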

Check your progress 4.4 & 4.5

1 What are the risk factors involved in cloud computing?
2 Which points need to be considered while configuring cloud backup and disaster
recovery?

4.6 Summary
Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable resources that can be rapidly provisioned and released with minimal
management effort or service provider interaction.
Deployment models of cloud computing
 Private cloud
 Public cloud
 Community cloud
 Hybrid cloud
 Distributed cloud
SaaS provides access to the service, but you don’t have to manage it because it’s done by
the service provider. PaaS provides a platform such as operating system, database, web
server, etc.
IaaS (infrastructure as a service) provides the entire infrastructure, including physical/virtual
machines, firewalls, load balancers, hypervisors, etc.

4.7 Check Your Progress Answers


Check your progress 4.1, 4.2 & 4.3
1) private
2) PaaS
3) community

Check your progress 4.4 & 4.5


1) Security & Privacy, Lock-In, Isolation Failure, Management Interface Compromise,
Insecure or Incomplete Data Deletion
2) Analyze cloud cost, determine the backup speed, Transition from hardware-focused
approach, Set-up recovery time objective and provide an efficient user experience.

4.8 Questions for self-study:


1) Write a short note on cloud computing.
2) What are the Deployment models of cloud computing?
3) Explain different Service Model used in Cloud Computing.
4) Explain the risks of Cloud Computing.
5) Write a short note on Disaster Recovery in Cloud Computing.

Chapter V

Cloud Applications& Services

5.0 Objective
5.1 Introduction
5.2 Technologies and the processes required when deploying web services
5.3 Advantages & Disadvantages of Cloud Application
5.4 Cloud Services
5.5 Cloud Economics
5.6 Cloud infrastructure components
5.7 Cloud infrastructure as a service
5.8 Summary
5.9 Check Your Progress Answers

5.0 Objective
After studying this chapter you will be able to understand,
 Virtualization cloud model
 Web services used in cloud computing
 Advantages and disadvantages of cloud application
 Cloud services like IaaS, PaaS &SaaS
 Cloud infrastructure components.

5.1 Introduction
Cloud computing services typically provide information technology as a service over the
Internet or a dedicated network. These services are delivered on demand, and payment is
based on usage. They range from full applications (Software as a Service) and
development platforms (Platform as a Service) to servers, storage, and virtual desktops
(Infrastructure as a Service).

Corporate and government entities utilize cloud computing services to address a variety of
application and infrastructure needs such as CRM, database, compute, and data storage.
The main advantages of cloud computing over a traditional IT environment are that cloud
services deliver IT resources in minutes to hours, with costs paid per usage. This helps
organizations manage expenses more efficiently. In the same way, consumers utilize cloud
computing services to simplify application utilization; to store, share, and protect
content; and to enable access from any web-connected device.

5.2 Technologies and the processes required when deploying web services:
There are certain technologies that are working behind the cloud computing platforms
making cloud computing flexible, reliable and usable. These technologies are listed below:
 Virtualization
 Service-Oriented Architecture (SOA)
 Grid Computing
 Utility Computing

Virtualization
Virtualization is a technique which allows sharing a single physical instance of an
application or resource among multiple organizations or tenants (customers). It does so
by assigning a logical name to a physical resource and providing a pointer to that
physical resource when demanded.

The multitenant architecture offers virtual isolation among the multiple tenants;
therefore, the organizations can use and customize the application as though they each
have their own instance running.
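The core idea, mapping a logical name to a physical resource and handing tenants a pointer on demand, can be sketched in a few lines of Python. The class and resource names here are invented purely for illustration:

```python
# Minimal sketch: a resolver that hands out logical names for shared
# physical resources, so tenants never reference hardware directly.
class ResourceResolver:
    def __init__(self):
        self._map = {}  # logical name -> physical resource identifier

    def register(self, logical_name, physical_resource):
        self._map[logical_name] = physical_resource

    def resolve(self, logical_name):
        # A tenant asks for its logical name; the resolver returns a
        # pointer (here, just an identifier) to the physical resource.
        return self._map[logical_name]

resolver = ResourceResolver()
resolver.register("tenant-a-db", "physical-server-17:/dev/sdb")
resolver.register("tenant-b-db", "physical-server-17:/dev/sdb")

# Both tenants share one physical instance behind distinct logical names.
print(resolver.resolve("tenant-a-db"))  # physical-server-17:/dev/sdb
```

The indirection is what gives each tenant the illusion of a private instance while the provider consolidates them on shared hardware.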

Service-Oriented Architecture
Service-Oriented Architecture helps to use applications as a service for other
applications regardless of the type of vendor, product or technology. Therefore, it is
possible to exchange data between applications of different vendors without additional
programming or changes to the services.

Grid Computing
Grid Computing refers to distributed computing in which a group of computers from
multiple locations are connected with each other to achieve a common objective. These
computer resources are heterogeneous and geographically dispersed. Grid Computing
breaks a complex task into smaller pieces, which are distributed to CPUs that reside
within the grid.

Utility Computing
Utility computing is based on Pay per Use model. It offers computational resources on
demand as a metered service. Cloud computing, grid computing, and managed IT services
are based on the concept of utility computing.
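The Pay-per-Use model described above amounts to metering consumption of each resource and billing it at a unit rate. A minimal sketch follows; the rates are invented for illustration, not taken from any provider:

```python
# Hypothetical metered rates, in dollars per unit of consumption.
RATES = {
    "cpu_hours": 0.05,         # per CPU-hour
    "storage_gb_month": 0.03,  # per GB stored per month
    "bandwidth_gb": 0.09,      # per GB transferred
}

def metered_bill(usage):
    """Bill only for what was actually consumed, at the metered rate."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A tenant that ran 200 CPU-hours, stored 50 GB, and transferred 10 GB:
usage = {"cpu_hours": 200, "storage_gb_month": 50, "bandwidth_gb": 10}
print(f"${metered_bill(usage):.2f}")  # $12.40
```

Nothing is charged for idle capacity: a tenant that consumes nothing in a period owes nothing, which is the defining property of utility pricing.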

5.3 Advantages & Disadvantages of Cloud Application


Advantages of Cloud Application

 Usability: All cloud storage services reviewed in this topic have desktop folders for
Macs and PCs. This allows users to drag and drop files between the cloud storage
and their local storage.
 Bandwidth: You can avoid emailing files to individuals and instead send a web link to
recipients through your email.
 Accessibility: Stored files can be accessed from anywhere via Internet connection.
 Disaster Recovery: It is highly recommended that businesses have an emergency
backup plan ready in the case of an emergency. Cloud storage can be used as a
back-up plan by businesses by providing a second copy of important files. These files
are stored at a remote location and can be accessed through an internet connection.
 Cost Savings: Businesses and organizations can often reduce annual operating costs
by using cloud storage; cloud storage costs about 3 cents per gigabyte to store data
internally. Users can see additional cost savings because it does not require internal
power to store information remotely.

Disadvantages of Cloud Application


 Usability: Be careful when using drag/drop to move a document into the cloud
storage folder. This will permanently move your document from its original folder to
the cloud storage location. Remember to copy and paste instead of drag/drop if you
want to retain the document in its original location in addition to placing a copy in
the cloud storage folder.
 Bandwidth: Several cloud storage services have a specific bandwidth allowance. If an
organization surpasses the given allowance, the additional charges could be
significant. However, some providers allow unlimited bandwidth. This is a factor that
companies should consider when looking at a cloud storage provider.
 Accessibility: If you have no internet connection, you have no access to your data.
 Data Security: There are concerns with the safety and privacy of important data
stored remotely. The possibility of private data commingling with data from other
organizations makes some businesses uneasy.
 Software: If you want to be able to manipulate your files locally through multiple
devices, you’ll need to download the service on all devices.

Check your progress 5.1, 5.2 & 5.3


1. ________ Computing refers to distributed computing in which a group of computers from
multiple locations are connected with each other to achieve a common objective.
2. ________ computing offers computational resources on demand as a metered service.
3. ________-Oriented Architecture helps to use applications as a service for other
applications regardless of the type of vendor, product or technology.

5.4 Cloud Services


There are different cloud services, such as:
1) Infrastructure-as-a-Service
IaaS provides access to fundamental resources such as physical machines, virtual machines,
virtual storage, etc. Apart from these resources, IaaS also offers:

 Virtual machine disk storage
 Virtual local area network (VLANs)
 Load balancers
 IP addresses
 Software bundles

All of the above resources are made available to the end user via server virtualization.
Moreover, these resources are accessed by the customers as if they own them.

Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the Internet in a
cost-effective manner. Some of the key benefits of IaaS are listed below:
 Full Control of the computing resources through Administrative Access to VMs.
 Flexible and Efficient renting of Computer Hardware.
 Portability, Interoperability with Legacy Applications.

Full Control over Computing Resources through Administrative Access to VMs


IaaS allows the consumer to access computing resources through administrative access to
virtual machines in the following manner:
 The consumer issues administrative commands to the cloud provider to run the
virtual machine or to save data on the cloud's server.
 The consumer issues administrative commands to the virtual machines they own,
such as to start a web server or install new applications.

Flexible and Efficient Renting Of Computer Hardware


 IaaS resources such as virtual machines, storage, bandwidth, IP addresses,
monitoring services, firewalls, etc., are all made available to the consumers on rent.
The consumer has to pay based on the length of time a resource is retained. Also,
with administrative access to virtual machines, the consumer can run any software,
even a custom operating system.

Portability, Interoperability with Legacy Applications


 It is possible to maintain portability and interoperability of legacy applications and
workloads between IaaS clouds. For example, network applications such as a web
server or e-mail server that normally run on consumer-owned server hardware can
also be run from VMs in an IaaS cloud.

Issues
IaaS shares issues with PaaS and SaaS, such as network dependence and browser-based
risks. It also has some specific issues associated with it, discussed below:

Compatibility with Legacy Security Vulnerabilities


Because IaaS allows the consumer to run legacy software in the provider's infrastructure,
it exposes consumers to all of the security vulnerabilities of such legacy software.

Characteristics
Here are the characteristics of IaaS service model:
 Virtual machines with pre-installed software.
 Virtual machines with pre-installed Operating Systems such as Windows, Linux, and
Solaris.
 On-demand availability of resources.
 Allows storing copies of particular data in different locations.
 The computing resources can be easily scaled up and down.
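The last characteristic, scaling resources up and down, is often driven by a simple threshold rule over a measured load metric. A minimal sketch of such a rule follows; the thresholds and pool limits are invented for illustration:

```python
# Minimal threshold-based autoscaling rule: add or remove VMs based on
# average CPU utilization across the current pool.
def desired_vm_count(current_vms, avg_cpu, low=0.30, high=0.70,
                     min_vms=1, max_vms=10):
    if avg_cpu > high:       # overloaded: scale up by one VM
        return min(current_vms + 1, max_vms)
    if avg_cpu < low:        # underused: scale down by one VM
        return max(current_vms - 1, min_vms)
    return current_vms       # within the target band: hold steady

print(desired_vm_count(3, 0.85))  # 4 (scale up)
print(desired_vm_count(3, 0.10))  # 2 (scale down)
print(desired_vm_count(3, 0.50))  # 3 (no change)
```

Real providers layer cooldown periods and step sizes on top of this, but the core decision is the same comparison against utilization thresholds.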

2) Platform-as-a-Service
PaaS offers the runtime environment for applications. It also offers the development and
deployment tools required to develop applications. PaaS has point-and-click tools that
enable non-developers to create web applications. Google's App Engine and Force.com are
examples of PaaS vendors.

Developers may log on to these websites and use the built-in APIs to create web-based
applications. But the disadvantage of using PaaS is that the developer is locked in with
a particular vendor. For example, an application written in Python against Google's API
using Google's App Engine is likely to work only in that environment.

Therefore, vendor lock-in is the biggest problem in PaaS. PaaS offers an API and
development tools to the developers and helps the end user to access business
applications.

Benefits
 Lower Administrative Overhead
Consumers need not bother much about administration because it is the
responsibility of the cloud provider.
 Lower Total Cost of Ownership
Consumers need not purchase expensive hardware, servers, power and data storage.
 Scalable Solutions
It is very easy to scale resources up or down automatically based on application
demands.

Issues
Like SaaS, PaaS also places significant burdens on consumers' browsers to maintain
reliable and secure connections to the provider's systems. Therefore, PaaS shares many
of the issues of SaaS, along with some issues specific to PaaS.

Characteristics
Here are the characteristics of PaaS service model:
 PaaS offers a browser-based development environment. It allows the developer to
create databases and edit the application code either via an Application Programming
Interface or point-and-click tools.

 PaaS provides built-in security, scalability, and web service interfaces.


 PaaS provides built-in tools for defining workflow and approval processes and
defining business rules.
 It is easy to integrate with other applications on the same platform.

 PaaS also provides web service interfaces that allow us to connect to applications
outside the platform.

3) Software-as-a-Service
The Software as a Service (SaaS) model allows providing a software application as a
service to end users. It refers to software that is deployed on a hosted service and is
accessible via the Internet. There are several SaaS applications; some of them are
listed below:
 Billing and Invoicing System
 Customer Relationship Management (CRM) applications
 Help Desk Applications
 Human Resource (HR) Solutions

Some SaaS applications are not customizable, such as an office suite. But SaaS provides
an Application Programming Interface (API), which allows the developer to develop a
customized application.

Characteristics
Here are the characteristics of SaaS service model:
 SaaS makes the software available over the Internet.
 The software is maintained by the vendor rather than at the site where it runs.
 The license to the software may be subscription based or usage based. And it is
billed on recurring basis.
 SaaS applications are cost effective since they do not require any maintenance at
end user side.
 They are available on demand.
 They can be scaled up or down on demand.
 They are automatically upgraded and updated.
 SaaS offers a shared data model. Therefore, multiple users can share a single
instance of the infrastructure. It is not required to hard-code functionality for
individual users.
 All users are running same version of the software.

Benefits
Using SaaS has proved to be beneficial in terms of scalability, efficiency, performance
and much more. Some of the benefits are listed below:
 Modest Software Tools
 Efficient use of Software Licenses
 Centralized Management & Data
 Platform responsibilities managed by provider
 Multitenant solutions

5.5 Cloud Economics


Cloud economics is a branch of knowledge concerned with the principles, costs and benefits
of cloud computing.

Because company management is constantly challenged to deliver information technology
services with the greatest value for the business, they must determine specifically how
cloud services will affect the IT budget and staffing needs. In assessing cloud
economics, CIOs and IT leaders weigh the costs pertaining to infrastructure, management,
research and development, security and support to determine whether moving to the cloud
makes sense given their organization's specific circumstances.

Although the cloud can facilitate resource provisioning and flexible pricing, there are
several cloud computing costs beyond instance price lists to consider. Pricing usually
includes storage, networking, load balancing, security, redundancy, backup, software
services and operating system licenses, but some cloud computing considerations
affecting resource contention, bandwidth and salaries can come as a surprise.

IT leaders within an organization must closely examine the economics of moving to the
cloud before deciding whether to invest in the expertise and time that is required to
maximize cloud investments.

Each cloud service provider has a unique bundle of services and pricing models.
Different providers have unique price advantages for different products. Typically,
pricing variables are based on the period of usage, with some providers allowing
by-the-minute usage as well as discounts for longer commitments.

The most common model for SaaS-based products is a per-user, per-month basis, though
there may be different tiers based on storage requirements, contractual commitments or
access to advanced features. PaaS and IaaS pricing models are more granular, with costs
for consumption of specific resources or ‘resource sets’. Aside from financial
competitiveness, look for flexibility in terms of resource variables but also in terms
of speed to provision and de-provision.

Application Architecture that allows you to scale different workload elements independently
means you can use cloud resources more efficiently. You may find that your ability to fine
tune scalability is affected by the way your cloud service provider packages its services and
you'll want to find a provider that matches your requirements in this regard.

Cloud computing infrastructures available for implementing cloud-based services

Cloud infrastructure refers to the hardware and software components -- such as servers,
storage, a network and virtualization software -- that are needed to support the
computing requirements of a cloud computing model.

Cloud infrastructure also includes an abstraction layer that virtualizes resources and logically
presents them to users through application program interfaces and API-enabled command-
line or graphical interfaces.

In cloud computing, these virtualized resources are hosted by a service provider or IT
department and are delivered to users over a network or the internet. These resources
include virtual machines and components, such as servers, memory, network switches,
firewalls, load balancers and storage.

Check your progress 5.4 & 5.5
1. _______ provides access to fundamental resources such as physical machines, virtual
machines, virtual storage.
2. _______ offers development & deployment tools, required to develop applications.
3. _______model allows providing software application as a service to the end users.

5.6 Cloud infrastructure components


In a cloud computing architecture, cloud infrastructure refers to the back-end
components -- the hardware elements found within most enterprise data centers. These
include multi-socket, multicore servers, persistent storage and local area network
equipment, such as switches and routers, but on a much greater scale.

Major public cloud providers, such as Amazon Web Services (AWS) or Google Cloud
Platform, offer services based on shared, multi-tenant servers. This model requires massive
compute capacity to handle both unpredictable changes in user demand and to optimally
balance demand across fewer servers. As a result, cloud infrastructure typically consists of
high-density systems with shared power.

Additionally, unlike most traditional data center infrastructures, cloud infrastructure
typically uses locally attached storage, both solid-state drives (SSDs) and hard disk
drives (HDDs), instead of shared disk arrays on a storage area network.

The disks in each system are aggregated using a distributed file system designed for a
particular storage scenario, such as object, big data or block. Decoupling the storage control
and management from the physical implementation via a distributed file system simplifies
scaling.

It also helps cloud providers match capacity to users' workloads by incrementally adding
compute nodes with the requisite number and type of local disks, rather than in large
amounts via a large storage chassis.

5.7 Cloud infrastructure as a service


Cloud infrastructure is the hardware and software components required for cloud
computing. Infrastructure as a service (IaaS) is a cloud model that gives organizations
the ability to rent those IT infrastructure components -- including compute, storage and
networking -- over the internet from a public cloud provider. This public cloud service
model is often referred to as IaaS.

IaaS eliminates the upfront capital costs associated with on-premises infrastructure, and
instead follows a usage-based consumption model. In this pay-per-usage model, users only
pay for the infrastructure services consumed, generally on an hourly, weekly or monthly
basis.

Cloud providers typically price IaaS on a metered basis, with rates corresponding to usage at
a given level of performance. For virtual servers, this means different prices for various
server sizes, typically measured as an increment of a standard virtual CPU size and
corresponding memory. For storage, pricing is typically based on the type of storage
service, such as object or block, the performance level (SSD or HDD) and availability --
a single storage location or replication across multiple geographic regions. Capacity is
measured by usage per unit of time, typically per month.
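The pricing dimensions just described -- server size as a multiple of a standard vCPU, storage type and performance level, and replication across regions -- can be combined into a rough monthly estimate. The sketch below uses invented unit rates and multipliers purely for illustration; real provider price lists differ:

```python
# Hypothetical unit rates for a monthly IaaS cost estimate.
VCPU_HOUR = 0.04                                # per standard vCPU per hour
STORAGE_GB_MONTH = {"ssd": 0.10, "hdd": 0.03}   # per GB-month, by storage type
REPLICATION_FACTOR = {1: 1.0, 2: 1.8, 3: 2.5}   # multiplier per region count

def monthly_estimate(vcpus, storage_gb, storage_type="ssd", regions=1,
                     hours=730):
    """Estimate one month's IaaS cost (730 hours is roughly one month)."""
    compute = vcpus * VCPU_HOUR * hours
    storage = storage_gb * STORAGE_GB_MONTH[storage_type]
    storage *= REPLICATION_FACTOR[regions]
    return compute + storage

# A 4-vCPU server with 100 GB of SSD storage replicated to two regions:
print(f"${monthly_estimate(4, 100, 'ssd', regions=2):.2f}")  # $134.80
```

Running the same estimate with `hdd` storage or a single region shows how each pricing dimension moves the total, which is exactly the comparison an IT manager performs when evaluating providers.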

Check your progress 5.6 & 5.7


1. Cloud providers typically price ________ on a metered basis, with rates corresponding
to usage at a given level of performance.
2. ________cloud service to rent IT infrastructure components is often referred to as IaaS.

5.8 Summary
Virtualization is a technique that allows sharing a single physical instance of an
application or resource among multiple organizations or tenants (customers). The
multitenant architecture offers virtual isolation among the multiple tenants;
therefore, the organizations can use and customize the application as though they each
have their own instance running.
Service-Oriented Architecture helps to use applications as a service for other
applications regardless of the type of vendor, product or technology. Grid Computing
refers to distributed computing in which a group of computers from multiple locations
are connected with each other to achieve a common objective. Utility computing offers
computational resources on demand as a metered service. Cloud computing, grid computing,
and managed IT services are based on the concept of utility computing.
IaaS provides access to fundamental resources such as physical machines, virtual
machines, virtual storage, etc. PaaS offers the development and deployment tools
required to develop applications; its point-and-click tools enable non-developers to
create web applications. SaaS refers to software that is deployed on a hosted service
and is accessible via the Internet.
Cloud infrastructure refers to the hardware and software components -- such as servers,
storage, a network and virtualization software -- that are needed to support the
computing requirements of a cloud computing model. Major public cloud providers, such as
Amazon Web Services (AWS) or Google Cloud Platform, offer services based on shared,
multi-tenant servers. Infrastructure as a service (IaaS) is a cloud model that gives
organizations the ability to rent those IT infrastructure components -- including
compute, storage and networking -- over the internet from a public cloud provider.

5.9 Check your progress answers


Check your progress 5.1, 5.2 & 5.3
1. Grid
2. Utility
3. Service

Check your progress 5.4 & 5.5


1. IaaS
2. PaaS
3. SaaS

Check your progress 5.6 & 5.7


1. IaaS
2. Public

5.10 Questions for Self-Study


1. Explain the virtualized cloud model.
2. Write a short note on Service-Oriented Architecture.
3. Explain the advantages of Cloud Application.
4. Explain the different cloud services.
5. Write a short note on cloud economics.

Chapter 6

Selecting Cloud Platform

6.0 Objective
6.1 Introduction
6.2 Selecting Cloud Platform
6.3 Strategy Planning Phase
6.4 Cloud Computing Tactics Planning Phase
6.5 Cloud Computing Deployment Phase
6.6 Case Study
6.7 Summary
6.8 Check Your Progress Answers

6.0 Objective
After studying this chapter you will be able to understand,
 To consider your business requirements before selecting cloud platform,
 Strategy required in selecting cloud platform,
 How to analyze problems and risks in the cloud application,
 Cloud Computing Deployment Phase,
 How to select an economic cloud deployment model

6.1 Introduction
To select the proper cloud platform, you need to consider the requirements of the
applications or data to be hosted on the cloud platform. The first point to be
considered is the type of data stored. The next elements to be considered are the size
of the business, the scalability of the services used, and the number of devices to be
connected to the cloud server. The last point to be considered is budget; the user has
to choose between free and paid cloud services according to usage.

6.2 Selecting Cloud Platform


Before deploying applications to the cloud, it is necessary to consider your business
requirements. Following are the issues one must think about:
 Data Security and Privacy Requirement
 Budget Requirements
 Type of cloud - public, private or hybrid
 Data backup requirements
 Training requirements
 Dashboard and reporting requirements
 Client access requirements
 Data export requirements

To meet all of these requirements, it is necessary to have well-compiled planning. In
this chapter, we will discuss the various planning phases that must be practiced by an
enterprise before migrating the entire business to the cloud. Each of these planning
phases is described below.

6.3 Strategy Planning Phase


Total cloud adoption will impact many different business processes and involves the
coordination of many different parts of the business. A successful cloud implementation
cannot happen without integrated, business-driven strategic planning.

Businesses expect a lot from their cloud investments, and even though the term "cloud" has
been around for several years, the industry is still, relatively speaking, in its infancy. Very
few companies have clear cloud computing strategies in place, and even fewer have a
strategic plan laid out, which can be detrimental to the success of the cloud computing
model.

As businesses start to plan strategically for cloud adoption, they need to carefully consider
the scope of their planning activities. Cloud computing is a big change, but it is also only a
single part of a journey that began over a decade ago, that saw the industry transitioning
from old, legacy systems and monolithic technology, to a service oriented approach to ICTs.

In this phase, we analyze the strategy problems that the customer might face. There are
two steps to perform this analysis:
 Cloud Computing Value Proposition
 Cloud Computing Strategy Planning
I. Cloud Computing Value Proposition
In this step, we analyze the factors influencing the customers when applying the cloud
computing model and target the key problems they wish to solve.
These key factors are:
 IT management simplification.
 Operation and maintenance cost reduction.
 Business mode innovation.
 Low cost outsourcing hosting.
 High service quality outsourcing hosting.

All of the above analysis helps in decision making for future development.

II. Cloud Computing Strategy Planning


The strategy establishment is based on the analysis result of the above step. In this
step, a strategy document is prepared according to the conditions a customer might face
when applying the cloud computing model.

Strategic planning for cloud must also fit in with the basic business strategies, particularly
around services. This doesn't necessarily mean a massive plan encompassing every level of
the organization, but rather one that takes into account the value chain and makes sure that
innovation isn't stifled and business opportunities are not cut short by current application
boundaries. A "mud against the wall" approach will not work; rather formulate a strong and
logical framework in which several options can work.

Cloud computing has grown and changed a lot since its early days. In its early stages,
cloud computing was more about cost restructuring, particularly where automation and
standardization were concerned. Platform-as-a-Service (PaaS) was used mainly for
non-critical or niche business applications, and the Software-as-a-Service (SaaS) layer
was used for critical web applications such as ERP, CRM and other critical applications
such as email.

Cloud computing's next stage will focus more on business services as well as the
architecture of those services. Cloud services will need to align with all the client facing
business services, to create a truly service oriented business. Because of this, cloud
computing needs to be fully integrated with strategic planning, particularly for businesses
that are after more than just rationalizing their infrastructure investments.

Cloud computing will continue to evolve and grow over the next few years. This fast
morphing environment requires planning that can prepare and plan for growth and change.
Companies must plan for ongoing and fast maturing of all the variables - technology,
suppliers, standards, regulations, products and services, as well as internal capabilities such
as skills and learning.

Check your progress 6.1, 6.2 & 6.3


1. In Cloud Computing, ________ is used to analyze the factors influencing the customers
when applying cloud computing mode and target the key problems they wish to solve.
2. ________ planning for cloud must also fit in with the basic business strategies,
particularly around services.
3. In Cloud Computing ________ Planning, strategy document is prepared according to the
conditions a customer might face when applying cloud computing mode.

6.4 Cloud Computing Tactics Planning Phase


This phase analyzes the problems and risks in applying cloud computing, to assure customers
that cloud computing will successfully meet their business goals.

This phase involves the following tactics planning steps:


 Business Architecture Development
 IT Architecture Development
 Requirements on Quality of Service Development
 Transformation Plan Development

I. Business Architecture Development


In this step, we recognize, from a business perspective, the risks that might arise from
applying cloud computing.
II. IT Architecture Development
In this step, we identify the applications that support the business processes and the
technologies required to support enterprise applications and data systems.
III. Requirements on Quality of Service Development
Quality of Service refers to the non-functional requirements such as reliability, security,
disaster recovery, etc. The success of applying cloud computing mode depends on these
non-functional factors.
IV. Transformation Plan Development
In this step, we formulate all kinds of plans that are required to transform current business
to cloud computing modes.

6.5 Cloud Computing Deployment Phase


The Cloud Computing Deployment phase builds on the results of the two phases above. It
involves the following two steps:
 Cloud Computing Provider
 Maintenance and Technical Service.

I. Cloud Computing Provider
This step includes selecting a cloud provider on the basis of a Service Level Agreement (SLA),
which defines the level of service the provider will meet.
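When comparing providers on the availability figure an SLA promises, it helps to translate the percentage into the downtime it actually permits. A minimal sketch; the percentages are illustrative, not any particular provider's actual SLA:

```python
# Convert an SLA availability percentage into the maximum downtime it
# allows per month. Illustrative figures only.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def max_downtime_minutes(availability_pct):
    """Downtime per month permitted by a given availability percentage."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows about {max_downtime_minutes(sla):.1f} min/month of downtime")
```

The gap is striking: 99% uptime still allows over seven hours of downtime a month, while 99.99% allows only a few minutes, which is why the SLA's exact figure (and its remedies) matters when selecting a provider.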
II. Maintenance and Technical Service
Maintenance and technical services are provided by the cloud provider, who must ensure the
quality of the services.

6.6 Case Study


I Case Study: “Selecting Economic Cloud Deployment Model”
In a public cloud, the cloud infrastructure, resources, and services are made available to the
general public, various users, a large industry group, or an organization. The public cloud
deployment model represents a true cloud services model, and it is the most commonly
described, most popular, and most widely used model for deploying cloud services. In this
model, all of the physical resources are owned and operated by a third-party cloud
computing service provider, which serves multiple clients who utilize these resources
through the public Internet. Cloud services can be dynamically provisioned and billed based
on usage. This model provides the highest degree of cost savings while requiring the least
amount of overhead.

The public cloud offers the greatest potential benefits along with the greatest potential risks.
The public cloud model helps to reduce capital expenditure and bring down operational IT
costs, since applications and computing resources are managed by a third-party service
provider. Moreover, customers only pay for what they use on a usage subscription basis, and
they can terminate their service at any time. Thus, public clouds accelerate deployments and
reduce costs.
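The usage-subscription idea above can be sketched as a simple billing calculation. The rates below are hypothetical illustrations, not a real provider's price list:

```python
# Sketch of usage-based billing: the customer pays only for the compute
# hours and storage actually consumed. Rates are assumed, not real prices.

RATE_PER_INSTANCE_HOUR = 0.05  # assumed $/instance-hour, illustrative only
RATE_PER_GB_MONTH = 0.02       # assumed $/GB-month, illustrative only

def monthly_bill(instance_hours, storage_gb):
    """Total monthly charge for the compute and storage consumed."""
    return instance_hours * RATE_PER_INSTANCE_HOUR + storage_gb * RATE_PER_GB_MONTH

# e.g. two instances running a full month (~720 h each) plus 500 GB of storage
print(f"${monthly_bill(2 * 720, 500):.2f}")  # -> $82.00
```

There is no up-front capital cost in this model; when usage drops to zero, so does the bill, which is what makes terminating the service at any time practical.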

In addition, a public cloud also obviates the need for customers to maintain and upgrade
application code and infrastructure. Many public cloud customers are astonished to see new
software features automatically appear in their software without notice. Also, the public
cloud frees up IT departments to focus on more value-added activities rather than hardware
and software upgrades and maintenance. Google is a good example of a public cloud, as its
services are provided by the vendor free of charge or on the basis of a pay-per-user license policy.

Risks
The public cloud also comes with many risks. Among these, security and privacy are the
biggest: moving data and processing beyond the organization's own boundary raises the fear
that sensitive corporate data will get into the wrong hands, although in fact most corporate
resources are more secure in the public cloud than in a corporate data center. Public cloud
providers specialize in data center operations and management, and must meet the most
stringent requirements for security and privacy. However, there are compliance regulations
that legally require some establishments to keep data within corporate firewalls and to know
the exact location of their data, which is generally impossible in a public cloud that
virtualizes data and processing across a grid of national or international computers.

Challenges
The public cloud poses many challenges such as reliability, cost, blank slate and technology
viability.

Reliability concerns the dependability of public cloud resources: the Amazon EC2 public
cloud, for example, has had high-profile outages that left companies stranded without much
visibility into the nature of the outage. Cost can be extremely difficult to estimate because
pricing is complex and companies often cannot accurately estimate their usage. Blank slate
refers to having to redefine corporate policies and application workflows from scratch on a
platform that generally provides plain vanilla services. Vendor and technology viability is
the difficulty of knowing which vendors and technologies will still be around in the future.

Applications
The public cloud is best suited for business requirements where organizations need to
manage load spikes, host applications, utilize interim infrastructure, and manage
applications that are consumed by many users.

II Case Study: “Selecting Cloud Platform”


Considering the expense and complexity of maintaining a traditional data center, it’s no
wonder that companies are turning to cloud computing as a way to reduce costs, increase
efficiencies, and build their business. With cloud computing, companies have access to a
scalable platform; low-cost storage; database technologies; and management, deployment,
and development tools on which to build enterprise-level solutions. Cloud computing helps
businesses in the following ways:
 Reduces costs and complexity
 Adjusts capacity on demand
 Reduces time to market
 Increases opportunities for innovation
 Enhances security

Amazon Web Services (AWS) gives customers access to cloud services at competitive prices,
with the flexibility to meet their business needs. Whether it’s a small startup or a large
enterprise, all companies can leverage the features and functionality of AWS to improve
performance and increase productivity.

Weighing the financial considerations of operating a data center versus using cloud
infrastructure is not as simple as comparing hardware, storage, and compute costs.

Whether you own your own data center or rent space at a colocation facility, you have to
manage investments, whether directly or indirectly, including but not limited to:
 Capital expenditures
 Operational expenditures
 Staffing
 Opportunity costs
 Licensing
 Facilities overhead

Typical Data Center Costs

If you’re considering an expansion of your data center or colocation footprint, here are
some questions to ask:

Capacity planning
 How many servers will be added this year? What are the forecasts for the next year
and beyond?
 Can hardware be turned on and off when it’s not being used?
 How does the pricing model work?

Utilization
 What is the average server utilization?
 How much needs to be provisioned for peak load?

Operations
 Are facilities adequate for expansion?
 Is the organization ready for international expansion?
 Can utilities (electricity, cooling) be measured accurately and does budget cover both
average and peak requirements?

Optimization
 Can we provide automatic scaling of our current infrastructure, or the ability to
“reserve” capacity?
 What if we need to quickly expand the infrastructure? What costs come into play?
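The capacity and utilization questions above often come down to a break-even comparison: owned capacity sized for peak load versus cloud capacity billed at average utilization. A simplified sketch, with all figures assumed for illustration:

```python
# Simplified monthly cost comparison: a data center must provision for
# peak demand, while cloud capacity can scale with average utilization.
# All rates and demand figures below are illustrative assumptions.

def datacenter_cost(peak_servers, cost_per_server):
    """On-premises pays for peak capacity whether or not it is used."""
    return peak_servers * cost_per_server

def cloud_cost(avg_servers, hourly_rate, hours=720):
    """Cloud bills only average utilization, hour by hour (~720 h/month)."""
    return avg_servers * hourly_rate * hours

peak, avg = 100, 30  # assumed peak vs. average server demand
dc = datacenter_cost(peak, cost_per_server=250)  # assumed $/server/month
cl = cloud_cost(avg, hourly_rate=0.50)           # assumed $/server-hour
print(dc, cl)  # on-premises pays for peak; cloud pays for average
```

Under these assumptions the cloud is cheaper because average demand is far below peak; with flat, predictable demand the comparison can go the other way, which is why the utilization questions matter.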

Check your progress 6.4, 6.5 & 6.6


1. The risks that might be caused by cloud computing application from a business
perspective are recognized in _______Architecture Development.
2. We formulate all kinds of plans that are required to transform current business to cloud
computing modes in _______Plan Development.
3. We identify the applications that support the business processes and the technologies
required to support enterprise applications and data systems in _______ Architecture
Development.

6.7 Summary
Before deploying applications to the cloud, it is necessary to consider your business
requirements. Following are the issues one must think about:
 Data Security and Privacy Requirement
 Budget Requirements
 Type of cloud - public, private or hybrid
 Data backup requirements
 Training requirements
 Dashboard and reporting requirements
 Client access requirements
 Data export requirements
Cloud Computing Value Proposition will help us to analyze the factors influencing the
customers when applying cloud computing mode and target the key problems they wish to
solve. Cloud Computing Strategy Planning is based on the analysis result of the above step.
In this step, a strategy document is prepared according to the conditions a customer might
face when applying cloud computing mode.
The tactics planning phase involves the following steps:
 Business Architecture Development
 IT Architecture Development
 Requirements on Quality of Service Development
 Transformation Plan Development
The Cloud Computing Deployment phase builds on the results of the two phases above. It involves
the following two steps:
 Cloud Computing Provider
 Maintenance and Technical Service.

6.8 Check your progress answers
Check your progress 6.1, 6.2 & 6.3

1. Value Proposition
2. Strategic
3. Strategy

Check your progress 6.4, 6.5& 6.6

1. Business
2. Transformation
3. IT

6.9 Questions for Self-Study


1. State the factors to be considered while selecting cloud platform.
2. Explain the phase involves the Tactics planning.
3. How to select Economic Cloud Deployment Model?

QUESTION BANK
1. Write a procedure to create a new user and adding it to group.
2. Explain the different RAID levels.
3. What is YUM?
4. Write a short note of Network File System.
5. How to configure cache NameServer.
6. Differentiate between advantages and disadvantages of virtualization.
7. Write a short note on Virtualization.
8. Write a short note of cloud computing.
9. What is Bare Metal Hypervisor?
10. What is Hosted Hypervisor?
11. Write a short note on iSCSI.
12. Explain Network-attached Storage.
13. Write a short note on VLAN.
14. Explain in detail Characteristics of Cloud Computing.
15. Explain the three delivery models in cloud computing.
16. What are the benefits of cloud computing?
17. Explain the five Things to Consider for Cloud Backup and Disaster Recovery.
18. Write a short note on Grid Computing.
19. Write a short note on Utility Computing.
20. Write a short note on Virtualization.
21. Explain Advantages &dis-advantages of Cloud Application.
22. Write a short note on IaaS.
23. Write a short note on SaaS.
24. Write a short note on PaaS.
25. Write a short note in Cloud infrastructure components.
26. Explain the technologies that are working behind the cloud computing platforms
making cloud computing flexible, reliable and usable?
27. Explain in detail Cloud computing Strategy Planning Phase.
28. Explain in detail Cloud Computing Deployment Phase.
29. Explain with example factors to be considered while Selecting Cloud Platform.

Reference Books:

1. Cloud Computing For Dummies, by Fern Halper, Judith Hurwitz, Marcia Kaufman, and
Robin Bloor [ISBN: 1511404582, 9781511404587]
2. Cloud Computing: Principles and Paradigms, by Rajkumar Buyya, James Broberg,
Andrzej Goscinski [ISBN-13: 978-8126541256]
3. Cloud Computing - Theory and Practice, by Marinescu [ISBN-13: 978-9351070948]
4. Cloud Computing: Concepts, Technology & Architecture, 1e, by Erl [ISBN-13: 978-
9332535923]
5. Amazon Web Services For Admins For Dummies, by John Paul Mueller [ISBN-13: 978-
8126565634]
6. Distributed and Cloud Computing, 1st edition, Morgan Kaufmann, 2011.[ISBN-13: 978-
0123858801]
7. Enterprise Cloud Computing Technology Architecture Applications, by Gautam Shroff
[ISBN: 978-0521137355]
8. Cloud Computing, A Practical Approach, by Toby Velte, Anthony Velte, Robert
Elsenpeter [ISBN: 0071626948]
9. Cloud Computing Strategies, by Dimitris N. Chorafas [ISBN: 1439834539]

