CC EBook
1.0 Objective
1.1 Introduction
1.2 What is an OS?
1.3 Users & Groups Administration
1.4 Task Scheduling
1.5 RAID Implementation
1.6 Logical Volume Management (LVM)
1.7 Installing & Managing Packages in Linux
1.8 Configuring DHCP
1.9 Installing Server components
1.10 Summary
1.11 Check Your Progress Answers
1.0 Objective
After studying this chapter you will be able to understand:
The architecture of an operating system and the different boot loaders used in Linux,
Creating and managing users, groups and their rights,
Implementing the different RAID configurations,
Managing disk space,
Using YUM and RPM to install packages on Linux,
Configuring the network and installing different server components.
1.1 Introduction
Linux is a generic term that commonly refers to Unix-like computer operating systems that
use the Linux kernel. Linux is one of the most prominent examples of free software and open
source development; typically, all of the underlying source code can be used, freely modified,
and redistributed by anyone.
Figure 1.1
What is Linux?
The name "Linux" comes from the Linux kernel, originally written in 1991 by Linus Torvalds.
The system's utilities and libraries usually come from the GNU operating system, announced in
1983 by Richard Stallman. The GNU contribution is the basis for the alternative name
GNU/Linux.
Figure: the standard Linux directory tree – /, /bin, /boot, /dev, /etc, /home, /initrd, /lib, /lost+found, /media, /mnt, /opt, /proc, /root, /sbin, /usr, /var, /srv, /tmp
An operating system's program code and data are stored on nonvolatile, persistent local or remote peripheral
memories or mass-storage devices. Typical examples of such persistent storage devices are
hard disks, CDs, DVDs, USB flash drives and floppy drives. When a computer is first powered on, it
must initially rely only on the code and data stored in the nonvolatile portion of the system's
memory map, such as ROM, NVRAM or CMOS RAM. The persistent code and data residing in the
system's memory map represent the bare minimum needed to access peripheral persistent
devices and load into system memory all of the missing parts of the operating system.
Strictly speaking, at power-on time the computing system does not have an operating system in
memory. The computer's hardware alone (processor and system memory) cannot perform
many complex system actions, of which loading program files from disk-based file systems is
one of the most important.
The program that starts the "chain reaction" which ends with the entire operating system
being loaded is known as the bootstrap loader. Early computer designers imagined that
before being ready to "run" a computer program, a small initiating program called a bootstrap
loader, bootstrap or boot loader had to run first. This program's only job is to load other
software so that the operating system can start. Often, multiple-stage boot loaders are used, in
which several small programs of increasing complexity are summoned one after the
other in a process of chain loading, until the last of them loads the operating system.
In the case of Linux there are two common boot loaders:
1. LILO
2. GRUB
LILO:
LILO was originally developed by Werner Almesberger, while its current developer is John
Coffman. LILO does not depend on a specific file system, and can boot an operating system
(e.g., Linux kernel images) from floppy disks and hard disks. One of up to sixteen different
images can be selected at boot time. Various parameters, such as the root device, can be set
independently for each kernel. LILO can be placed either in the master boot record (MBR) or
the boot sector of a partition. In the latter case something else must be placed in the MBR to
load LILO.
At system start, only the BIOS drivers are available for LILO to access hard disks. For this
reason, with very old BIOS, the accessible area is limited to cylinders 0 to 1023 of the first two
hard disks. For later BIOS, LILO can use 32-bit "logical block addressing" (LBA) to access
practically the entire storage of all the hard disks that the BIOS allows access to. LILO was the
default boot loader for most Linux distributions in the years after the popularity of loadlin.
Today, most distributions use GRUB as the default boot loader.
Figure 1.3
GRUB:
GNU GRUB ("GRUB" for short) is a boot loader package from the GNU Project. GRUB is the
reference implementation of the Multiboot Specification, which allows a user to have several
different operating systems on their computer at once, and to choose which one to run when
the computer starts. GRUB can be used to select from different kernel images available on a
particular operating system's partitions, as well as to pass boot-time parameters to such
kernels.
GNU GRUB developed from a previous package called the Grand Unified Bootloader (a play on
grand unified theory). It is predominantly used on Unix-like systems; the GNU operating
system uses GNU GRUB as its boot loader, as do most general-purpose Linux distributions.
Solaris has used GRUB as its bootloader on x86 systems since the Solaris 10 1/06 release.
1 In computer science, the _______ is the central component of most computer operating
systems.
2 _______ does not depend on a specific file system, and can boot an operating system from
floppy disks and hard disks.
3 _______ is the reference implementation of the Multiboot Specification, which allows a user to
have several different operating systems on their computer at once.
linux:~# adduser newbie
Adding user newbie...
Adding new group newbie (1001).
Adding new user newbie (1001) with group newbie.
Creating home directory /home/newbie.
Copying files from /etc/skel
Changing password for newbie
Enter the new password (minimum of 5, maximum of 8 characters)
Please use a combination of upper and lower case letters and numbers.
Re-enter new password:
Password changed.
Changing the user information for newbie
Enter the new value, or press return for the default
Full Name []:
Newbie Dewbie
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [y/n] y
linux:~#
Notice that the lines where the password was typed were overwritten by the subsequent
lines. Moreover, for security, passwords are not echoed to the console as they are typed.
Notice also that several of the information fields were omitted - for example, Room
Number.
You can specify such information if you think it may be useful, but the system makes no use of
the information and doesn't require you to provide it.
The similarly named useradd command also creates a user account, but does not prompt you
for the password or other information.
When the command establishes a user account, it creates a home directory for the user. In the
previous example, the command would have created the directory /home/newbie. It also
places several configuration files in the home directory, copying them from the directory
/etc/skel. These files generally have names beginning with the dot (.) character, so they are
hidden from an ordinary ls command. Use the -a argument of ls to list the names of the files.
The files are generally ordinary text files, which you can view with a text editor, such as ae. By
modifying the contents of such files, you can control the operation of the associated
application.
You can change the name associated with a user account by using the chfn command:
chfn -f name userid
Where name specifies the new name and userid specifies the account to be modified. If the
name contains spaces or other special characters, it should be enclosed in double quotes (").
For example, to change the name associated with the account newbie to Dewbie Newbie, you
would enter the following command:
chfn -f "Dewbie Newbie" newbie
To change a password, you use the passwd command. To change your own password, enter a
command like this one:
passwd
This command changes the password associated with the current user account. You don't
have to be logged in as root to change a password. Because of this, users can change their own
passwords without the help of the system administrator. The root user, however, can change
the password associated with any user account, as you'll see shortly. Of course, only root can
do so - other users can change only their own password.
As the root user, you can change the password associated with any user account. The system
doesn't ask you for the current password; it immediately prompts for the new password:
linux:~# passwd newbie
Changing password for newbie
Enter the new password (minimum of 5, maximum of 8 characters)
Please use a combination of upper and lower case letters and numbers.
New password:
Re-enter new password:
Password changed.
Information on users is stored in the file /etc/passwd, which you can view using a text editor.
Any user can read this file, though only the root user can modify it. If you selected shadow
passwords, passwords are encrypted and stored in the file /etc/shadow, which can be read
only by the root user.
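For reference, each line of /etc/passwd describes one account as seven colon-separated fields: username, password placeholder, UID, GID, full name (comment), home directory and login shell. For the newbie account created above, the entry would look roughly like this (the login shell shown is an assumption; it depends on your distribution's default):
newbie:x:1001:1001:Newbie Dewbie:/home/newbie:/bin/bash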
Linux uses groups to define a set of related user accounts that can share access to a file or
directory. You probably won't often find it necessary to configure group definitions,
particularly if you use your system as a desktop system rather than a server. However, when
you wish, you can create and delete groups and modify their membership lists.
Creating a group
To create a new group, use the groupadd command:
groupadd group
Where group specifies the name of the group to be added. Groups are stored in the file
/etc/group, which can be read by any user but modified only by root.
For example, to add a group named newbies, you would enter the following command:
groupadd newbies
Deleting a group
To delete a group, use the groupdel command:
groupdel group
Where group specifies the name of the group to be deleted. For example, to delete the group
named newbies, you would enter the following command:
groupdel newbies
Password: The encrypted password associated with the group. This field is not generally used,
containing an x instead.
Group ID: The unique numeric ID associated with the group.
Member list: A list of user accounts, with a comma (,) separating each user account from the next.
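For example, an entry in /etc/group for the newbies group with two members might look like this (the GID 1002 and the second member dewbie are purely illustrative):
newbies:x:1002:newbie,dewbie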
To remove a member from a group, first create a backup copy of the /etc/group file:
cp /etc/group /etc/group.SAVE
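The backup lets you recover if the edit goes wrong. As an alternative to hand-editing /etc/group (a sketch using the standard gpasswd tool; the user and group names are the examples used earlier), a member can also be removed with:
gpasswd -d newbie newbies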
Crontab files can be used to automate backups, system maintenance and other repetitive
tasks. The syntax is powerful and flexible, so you can have a task run every fifteen minutes or
at a specific minute on a specific day every year.
Open terminal:
Use the crontab -e command to open your user account's crontab file. Commands in this file
run with your user account's permissions. If you want a command to run with system
permissions, use the sudo crontab -e command to open the root account's crontab file. Use the
su -c "crontab -e" command instead if your Linux distribution doesn't use sudo.
Lines in the crontab file are written in the following sequence, with the following acceptable
values:
minute(0-59) hour(0-23) day(1-31) month(1-12) weekday(0-6) command
You can use an asterisk (*) character to match any value. For example, using an asterisk for the
month would cause the command to run every month.
For example, let's say we want to run the command /usr/bin/example at 12:30 a.m. every day.
We'd type:
30 0 * * * /usr/bin/example
We use 30 for the minute and 0 for the hour because the hour values start at 0, so 12 a.m.
(midnight) is hour 0. Note that the day and month values start at 1 instead of 0.
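For instance, here is a minimal crontab sketch illustrating the "every fifteen minutes" and "specific minute on a specific day every year" cases mentioned earlier (the script paths are hypothetical; most cron implementations also accept step values such as */15):
*/15 * * * * /usr/local/bin/backup.sh    # every fifteen minutes
30 2 1 1 * /usr/local/bin/report.sh      # at 2:30 a.m. on 1 January every year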
Hardware RAID
Hardware RAID is a physical storage device built from multiple hard disks. When connected to
a system, all of the disks appear as a single SCSI disk. From the system's point of view there is
no difference between a regular SCSI disk and a hardware RAID device; the system can use a
hardware RAID device as a single SCSI disk.
Hardware RAID has its own independent disk subsystem and resources. It does not use any
resources from the system, such as power, RAM and CPU, so it does not put any extra load on
the system. Since it has its own dedicated resources, it provides high performance.
Software RAID
Software RAID is a logical storage device built from disks attached to the system. It uses the
system's resources, so it is slower than hardware RAID, but it costs nothing. In this section we
will learn how to create and manage software RAID in detail.
A RAID device can be configured in multiple ways. Depending on the configuration, it can be
categorized into several different levels. Before we discuss RAID levels in more detail, let's have a
quick look at some important terminology used in RAID configuration.
Chunk: The size of the data block used in the RAID configuration. If the chunk size is 64KB, there
would be 16 chunks in 1MB (1024KB/64KB) of the RAID array.
Hot spare: An additional disk in the RAID array. If any disk fails, the data from the faulty disk is
migrated to this spare disk automatically.
Mirroring: If this feature is enabled, a copy of the same data is also saved on another disk. It is
like making an additional copy of the data for backup purposes.
Striping: If this feature is enabled, data is written across all available disks. It is like sharing data
between all disks, so all of them fill equally.
Parity: A method of regenerating lost data from saved parity information.
RAID Level 0
This level provides striping without parity. Since it does not store any parity data and performs
read and write operations across the disks simultaneously, it is much faster than the other
levels. This level requires at least two hard disks, and all of the disks are filled equally. You should
use this level only if read and write speed is the main concern, and if you do, always deploy an
alternative data backup plan, because the failure of any single disk in the array results in
total data loss.
RAID Level 1
This level provides mirroring without striping or parity. It writes all data to two disks: if one disk fails or is
removed, we still have all of the data on the other disk. This level requires twice the number of
hard disks; if you want the usable capacity of two disks you have to deploy four, and for the
capacity of one disk you have to deploy two. The first disk stores the original data while the other
disk stores an exact copy of the first. Since data is written twice, performance is
reduced. You should use this level if data safety matters more than anything else.
RAID Level 5
This level provides both parity and striping. It requires at least three disks and writes parity data
equally across all of them. If one disk fails, the data can be reconstructed from the parity data
available on the remaining disks. This provides a combination of integrity and performance.
Wherever possible, you should use this level.
If a RAID device is properly configured, there is no difference between software RAID and
hardware RAID from the operating system's point of view. The operating system accesses the RAID
device as a regular hard disk, no matter whether it is software RAID or hardware RAID.
Linux provides the md kernel module for software RAID configuration. In order to use software
RAID we have to configure a RAID md device, which is a composite of two or more storage
devices.
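For illustration only, here is a minimal sketch of creating a RAID 5 md device with the mdadm tool, assuming three spare disks named /dev/sdb, /dev/sdc and /dev/sdd (device names will differ on your system):
# create a RAID 5 array from three disks
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# watch the array being built and check its status
cat /proc/mdstat
mdadm --detail /dev/md0
# put a file system on the array and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid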
Logical Volume Management (LVM) makes it easier to manage disk space. If a file system
needs more space, space can be added to its logical volume from the free space in its volume
group, and the file system can be resized as we wish. If a disk starts to fail, a replacement disk
can be registered as a physical volume with the volume group, and the logical volume's extents
can be migrated to the new disk without data loss.
In the modern world every server needs more space day by day, so storage must be expanded
as needs grow. Logical volumes can be used with RAID and SANs. Physical disks are
grouped to create a volume group, and inside the volume group the space is sliced up to create
logical volumes. Logical volumes can be extended across multiple disks, or reduced in size, with
a few commands and without reformatting and re-partitioning the current disk. Volumes can also
stripe data across multiple disks, which can increase I/O performance. (A short command sketch
is given after the feature list below.)
LVM Features
It is flexible, so space can be expanded at any time.
Any file system can be placed on it and managed.
Migration can be used to recover from a faulty disk.
A file system can be restored to an earlier state using the snapshot feature.
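As a rough command sketch (the spare partition /dev/sdb1 and the names vg_data and lv_data are illustrative assumptions):
pvcreate /dev/sdb1                     # register the partition as a physical volume
vgcreate vg_data /dev/sdb1             # create a volume group from it
lvcreate -L 10G -n lv_data vg_data     # slice out a 10 GB logical volume
mkfs.ext4 /dev/vg_data/lv_data         # create a file system on the volume
lvextend -L +5G /dev/vg_data/lv_data   # later, grow the volume by 5 GB
resize2fs /dev/vg_data/lv_data         # and grow the ext4 file system to match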
1.8 Configuring DHCP
DHCP, or Dynamic Host Configuration Protocol, allows an administrator to configure network
settings for all clients on a central server.
The DHCP clients request an IP address and other network settings from the DHCP server on
the network. The DHCP server in turn leases the client an IP address within a given range or
leases the client an IP address based on the MAC address of the client's network interface card
(NIC). The information includes the client's IP address, along with the network's name server,
gateway and proxy addresses, and the netmask.
Nothing has to be configured manually on the local system, except to specify the DHCP server
it should get its network configuration from. If an IP address is assigned according to the MAC
address of the client's NIC, the same IP address can be leased to the client every time the
client requests one. DHCP makes network administration easier and less prone to error.
Now a new window will show all available LAN cards; select your LAN card.
Assign the IP address in this box and click OK.
Click OK, then Quit, and Quit again to come back to the root prompt.
Restart the network service so the new IP address takes effect on the LAN card:
#service network restart
The main configuration file of the DHCP server is dhcpd.conf; it is located in the /etc directory. If
this file is not present there, or you have corrupted it, copy a fresh copy first; if asked whether to
overwrite, press y.
Make these changes in the file to configure the DHCP server:
remove the comment line # - - - default gateway
set option routers to 192.168.0.254
set option subnet-mask to 255.255.255.0
set option nis-domain to example.com
set option domain-name to example.com
set option domain-name-servers to 192.168.0.254
set range dynamic-bootp to 192.168.0.10 192.168.0.50;
After these changes the file should look like this:
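Since the screenshot of the finished file is not reproduced here, the following is a minimal sketch of what the relevant part of /etc/dhcpd.conf might look like with the values above (the subnet declaration and the lease times are assumptions based on the sample file shipped with the dhcp package):
subnet 192.168.0.0 netmask 255.255.255.0 {
option routers 192.168.0.254;
option subnet-mask 255.255.255.0;
option nis-domain "example.com";
option domain-name "example.com";
option domain-name-servers 192.168.0.254;
range dynamic-bootp 192.168.0.10 192.168.0.50;
default-lease-time 21600;
max-lease-time 43200;
}
After saving the file, restarting the service (typically service dhcpd restart on Red Hat style systems) makes the new settings take effect.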
You can install Apache via the default package manager available on all Red Hat based
distributions such as CentOS, Red Hat and Fedora.
[root@amsterdam ~]# yum install httpd
The Apache source tarball could be converted into an rpm file using the following command.
[root@amsterdam ~]# rpmbuild -tb httpd-2.4.x.tar.bz2
It is mandatory to have the -devel packages installed on your server to create an .rpm file from
source.
Once you convert the source file into an rpm installer, you could use the following command
to install Apache.
[root@amsterdam ~]# rpm -ivh httpd-2.4.4-3.1.x86_64.rpm
After the installation the server does not start automatically; in order to start the service, you
have to use one of the following commands on Fedora, CentOS or Red Hat.
[root@amsterdam ~]# /usr/sbin/apachectl start
[root@amsterdam ~]# service httpd start
[root@amsterdam ~]# /etc/init.d/httpd start
The vsftpd service must be running; start it if it is stopped. Restart the vsftpd service
whenever you make any change to its configuration file.
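On the SysV-style Red Hat systems used elsewhere in this chapter, that would typically look like the following sketch (not the only way to manage the service):
service vsftpd status    # check whether the service is running
service vsftpd start     # start it if it is stopped
service vsftpd restart   # restart after any change to the configuration file
chkconfig vsftpd on      # make the service start automatically at boot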
Benefits of NFS
NFS allows local access to remote files.
It uses a standard client/server architecture for file sharing between all *nix based machines.
With NFS it is not necessary that both machines run the same OS.
With the help of NFS we can configure centralized storage solutions (a configuration sketch follows this list).
Users get their data irrespective of physical location.
No manual refresh is needed for new files.
Newer versions of NFS also support ACLs and pseudo-root mounts.
It can be secured with firewalls and Kerberos.
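As the configuration sketch mentioned above (the export path /data, the client network 192.168.0.0/24 and the server address 192.168.0.254 are illustrative assumptions), an NFS share is declared on the server in /etc/exports and then mounted on a client:
# on the server, in /etc/exports
/data 192.168.0.0/24(rw,sync,no_subtree_check)
# reload the export table after editing the file
exportfs -ra
# on a client, mount the share
mount -t nfs 192.168.0.254:/data /mnt/data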
Configure Cache NameServer
The job of a DNS caching server is to query other DNS servers and cache the response. Next
time when the same query is given, it will provide the response from the cache. The cache will
be updated periodically.
Please note that even though you can configure bind to work as a Primary and as a Caching
server, it is not advised to do so for security reasons. Having a separate caching server is
advisable.
All we have to do to configure a cache nameserver is to add your ISP (Internet Service
Provider)'s DNS server or any open DNS server to the file /etc/bind/named.conf.options. For this
example, we will use Google's public DNS servers, 8.8.8.8 and 8.8.4.4.
Uncomment and edit the following line as shown below in /etc/bind/named.conf.options file.
forwarders {
8.8.8.8;
8.8.4.4;
};
After the above change, restart the DNS server.
$ sudo service bind9 restart
Configure Primary/Master Nameserver
Next, we will configure bind9 to be the Primary/Master for the domain/zone
“thegeekstuff.net”.
As a first step in configuring our Primary/Master Nameserver, we should add Forward and
Reverse resolution to bind9.
To add DNS forward and reverse resolution to bind9, edit /etc/bind/named.conf.local.
zone "thegeekstuff.net" {
type master;
file "/etc/bind/db.thegeekstuff.net";
};
zone "0.42.10.in-addr.arpa" {
type master;
notify no;
file "/etc/bind/db.10";
};
Now the file /etc/bind/db.thegeekstuff.net will have the details for resolving hostname to IP
address for this domain/zone, and the file /etc/bind/db.10 will have the details for resolving IP
address to hostname.
Build the Forward Resolution for Primary/Master NameServer
Now we will add the details which are necessary for forward resolution into
/etc/bind/db.thegeekstuff.net.
First, copy /etc/bind/db.local to /etc/bind/db.thegeekstuff.net:
$ sudo cp /etc/bind/db.local /etc/bind/db.thegeekstuff.net
Next, edit the /etc/bind/db.thegeekstuff.net and replace the following.
In the line which has SOA: localhost. – This is the FQDN of the server in charge of this
domain. I've installed bind9 on 10.42.0.83, whose hostname is "ns", so replace the
"localhost." with "ns.thegeekstuff.net.". Make sure it ends with a dot (.).
In the line which has SOA: root.localhost. – This is the e-mail address of the person who is
responsible for this server, written with a dot (.) instead of @. I've replaced it with lak.localhost.
In the line which has NS: localhost. – This defines the name server for the domain (NS).
Change it to the fully qualified domain name of the name server, "ns.thegeekstuff.net.".
Make sure you have a "." at the end.
Next, define the A record and MX record for the domain. A record is the one which maps
hostname to IP address, and MX record will tell the mailserver to use for this domain.
Once the changes are done, the /etc/bind/db.thegeekstuff.net file will look like the following:
$TTL 604800
@ IN SOA ns.thegeekstuff.net. lak.localhost. (
1024 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.thegeekstuff.net.
thegeekstuff.net. IN MX 10 mail.thegeekstuff.net.
ns IN A 10.42.0.83
web IN A 10.42.0.80
mail IN A 10.42.0.70
Build the Reverse Resolution for Primary/Master NameServer
We will add the details which are necessary for reverse resolution to the file /etc/bind/db.10.
Copy the file /etc/bind/db.127 to /etc/bind/db.10
$ sudo cp /etc/bind/db.127 /etc/bind/db.10
Next, edit the /etc/bind/db.10 file, changing the same options as in
/etc/bind/db.thegeekstuff.net.
$TTL 604800
@ IN SOA ns.thegeekstuff.net. root.localhost. (
20 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.
Next, for each A record in /etc/bind/db.thegeekstuff.net, add a PTR record.
$TTL 604800
@ IN SOA ns.thegeekstuff.net. root.thegeekstuff.net. (
20 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.
83 IN PTR ns.thegeekstuff.net.
70 IN PTR mail.thegeekstuff.net.
80 IN PTR web.thegeekstuff.net.
Whenever you modify the files db.thegeekstuff.net and db.10, you need to increment
the "Serial" number as well. Typically admins use a DDMMYYSS-style serial number and
change it appropriately whenever they modify the zone.
Finally, restart the bind9 service:
$ sudo service bind9 restart
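To verify that the zones are being served (a sketch assuming the server address 10.42.0.83 and the records shown above), query the server directly with dig:
$ dig @10.42.0.83 web.thegeekstuff.net A
$ dig @10.42.0.83 -x 10.42.0.80
The first query should return 10.42.0.80 from the forward zone, and the second should return web.thegeekstuff.net. from the reverse zone.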
1 ________ allows an administrator to configure network settings for all clients on a central
server.
2 ________ is used for uploading/downloading files between two computers over a
network.
3 ________ allows you to mount your local file systems over a network and remote hosts
to interact with them as they are mounted locally on the same system.
4 ________ is an internet service that maps IP addresses to fully qualified domain names
(FQDN) and vice versa.
1.10 Summary
An operating system (commonly abbreviated OS or O/S) is the infrastructure software
component of a computer system; it is responsible for the management and coordination of
activities and the sharing of the limited resources of the computer. The kernel is the central
component of most computer operating systems. LILO does not depend on a specific file
system, and can boot an operating system (e.g., Linux kernel images) from floppy disks and
hard disks. GRUB is the reference implementation of the Multiboot Specification, which allows
a user to have several different operating systems on their computer at once, and to choose
which one to run when the computer starts.
To create a user account, you use the adduser command. Linux uses groups to define a set of
related user accounts that can share access to a file or directory. You probably won't often find
it necessary to configure group definitions, particularly if you use your system as a desktop
system rather than a server. The cron daemon on Linux runs tasks in the background at
specific times; it’s like the Task Scheduler on Windows.
Hardware RAID is a physical storage device which is built from
multiple hard disks. Software RAID is a logical storage device which is built from disks attached
to the system. Logical Volume Management (LVM) makes it easier to manage disk space. If a file
system needs more space, it can be added to its logical volumes from the free spaces in its
volume group and the file system can be re-sized as we wish.
YUM (Yellowdog Updater Modified) is an open source command-line and graphical
package management tool for RPM (Red Hat Package Manager) based Linux systems. It
allows users and system administrators to easily install, update, remove or search software
packages on a system. RPM (Red Hat Package Manager) is the default and most popular
open source package management utility for Red Hat based systems such as RHEL, CentOS and
Fedora. The tool allows system administrators and users to install, update, uninstall, query,
verify and manage system software packages on Unix/Linux operating systems. DHCP, or
Dynamic Host Configuration Protocol, allows an administrator to configure network settings
for all clients on a central server.
1. kernel
2. LILO
3. GRUB
1. adduser
2. chfn
3. adduser
4. cron
1. Level 0
2. Level 5
3. LVM
1. DHCP
2. FTP
3. NFS
4. DNS
Chapter II
Introduction to Virtualization
2.0 Objective
2.1 Introduction
2.2 Types of Virtualization
2.3 Virtualization- Advantages and Disadvantages
2.4 Relationship between Virtualization & Cloud Computing
2.5 Summary
2.6 Check Your Progress Answers
2.0 Objective
After studying this chapter you will be able to understand,
Virtualization technology,
Different types of virtualization,
Advantage and disadvantages of virtualization &
Relationship between virtualization and cloud computing.
2.1 Introduction:
Virtualization is a technology that lets us install different operating systems on the same hardware;
they are completely separated and independent from each other. It is defined as – "In
computing, virtualization is a broad term that refers to the abstraction of computer resources.
Virtualization hides the physical characteristics of computing resources from their users, their
applications or end users. This includes making a single physical resource (such as a server, an
operating system, an application or a storage device) appear to function as multiple virtual
resources. It can also include making multiple physical resources (such as storage devices or
servers) appear as a single virtual resource.”
Virtualization is often:
The creation of many virtual resources from one physical resource.
The creation of one virtual resource from one or more physical resources.
Server Virtualization
It means virtualizing your server infrastructure so that you no longer need separate physical
servers for different purposes.
Network Virtualization
It is a part of the virtualization infrastructure, used especially if you are going to virtualize
your servers. It helps you create virtual switches, VLANs, NAT, and so on.
Storage Virtualization
This is widely used in datacenters where you have large storage pools; it helps you create,
delete and allocate storage to different hardware. This allocation is done over a network
connection. The leading storage technology here is the SAN. A schematic illustration is given below:
2.1, 2.2 Check your progress:
1 ________ virtualization is a part of the virtualization infrastructure, used especially if
you are going to virtualize your servers.
2 ________ virtualization is widely used in datacenters where you have large storage pools and
helps you create, delete and allocate storage to different hardware.
Disadvantages of Virtualization
Although you cannot find many disadvantages of virtualization, we will discuss a few
prominent ones as follows:
Extra Costs
You may have to invest in the virtualization software, and additional
hardware might be required to make virtualization possible. This depends on your
existing network. Many businesses have sufficient capacity to accommodate
virtualization without requiring much cash. If you have an infrastructure that is more
than five years old, you have to consider an initial renewal budget.
Software Licensing
This is becoming less of a problem as more software vendors adapt to the increased
adoption of virtualization. However, it is important to check with your vendors to
understand how they view software use in a virtualized environment.
Learn the new Infrastructure
Implementing and managing a virtualized environment will require IT staff with
expertise in virtualization. On the user side, a typical virtual environment will operate
similarly to the non-virtual environment. There are some applications that do not
adapt well to the virtualized environment.
Essentially, virtualization differs from cloud computing because virtualization is software that
manipulates hardware, while cloud computing refers to a service that results from that
manipulation.
"Virtualization is a foundational element of cloud computing and helps deliver on the value of
cloud computing," Adams said. "Cloud computing is the delivery of shared computing
resources, software or data — as a service and on-demand through the Internet”.
Most of the confusion occurs because virtualization and cloud computing work together to
provide different types of services, as is the case with private clouds.
The cloud can, and most often does, include virtualization products to deliver the compute
service, said Rick Philips, vice president of compute solutions at IT firm Weidenhammer. "The
difference is that a true cloud provides self-service capability, elasticity, automated
management, scalability, and pay-as-you-go service that is not inherent in virtualization."
A practical comparison
Virtualization can make 1 resource act like many, while cloud computing lets different
departments (through private cloud) or companies (through a public cloud) access a single
pool of automatically provisioned resources.
Virtualization
Virtualization is technology that allows you to create multiple simulated environments or
dedicated resources from a single, physical hardware system. Software called a hypervisor
connects directly to that hardware and allows you to split 1 system into separate, distinct, and
secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s
ability to separate the machine’s resources from the hardware and distribute them
appropriately.
Cloud Computing
Cloud computing is a set of principles and approaches to deliver compute, network, and
storage infrastructure resources, services, platforms, and applications to users on-demand
across any network. These infrastructure resources, services, and applications are sourced
from clouds, which are pools of virtual resources orchestrated by management and
automation software so they can be accessed by users on-demand through self-service portals
supported by automatic scaling and dynamic resource allocation.
2.5 Summary
Virtualization is a technology that lets us install different operating systems on the same hardware;
they are completely separated and independent from each other. Server virtualization means
virtualizing your server infrastructure so that you no longer need separate physical servers for
different purposes. Storage virtualization allocation is done over a network connection, and the
leading technology is the SAN. Virtualization decreases costs by reducing the need for physical hardware
systems. Disaster recovery is very easy when your servers are virtualized. Virtualization is a
foundational element of cloud computing and helps deliver on the value of cloud computing.
Virtualization is technology that allows you to create multiple simulated environments or
dedicated resources from a single, physical hardware system. Cloud computing is a set of
principles and approaches to deliver compute, network, and storage infrastructure resources,
services, platforms, and applications to users on-demand across any network.
1. Network
2. Storage
1. Virtualization
2. Virtualization
3. Cloud
Chapter III
3.0 Objective
3.1 Introduction
3.2 Understanding Different Types of Hypervisors
3.3 Installing Hyper-V in a Windows 10 workstation
3.4 Creating a VM with VMware Workstation
3.5 iSCSI Intro & Setup
3.6 Network-attached Storage
3.7 Storage Area Network
3.8 VLAN
3.9 Summary
3.10 Check Your Progress Answers
3.0 Objective
After studying this chapter you will be able to understand,
Different types of Hypervisor and their use.
Installation and working of Bare metal hypervisor
Creating virtual machines using VMware and VirtualBox
Network attached storage and Storage area network
VLAN
3.1 Introduction:
A hypervisor is a function which abstracts and isolates operating systems and applications
from the underlying computer hardware. This process of abstraction allows the underlying
host machine hardware to independently operate one or more virtual machines as guests. This
feature allows multiple guest VMs to effectively share the system's physical compute resources,
such as processor cycles, memory space and network bandwidth. A hypervisor is sometimes also
called a virtual machine monitor (VMM).
A hypervisor is a thin software layer that intercepts operating system calls to the hardware. It
is also called the Virtual Machine Monitor (VMM). It creates a virtual platform on the host
computer, on top of which multiple guest operating systems are executed and monitored.
Examples of this virtual machine architecture are Oracle VM, Microsoft Hyper-V, VMware ESX
and Xen.
Hosted Hypervisor
Hosted hypervisors are designed to run within a traditional operating system. In other words,
a hosted hypervisor adds a distinct software layer on top of the host operating system, and
the guest operating system becomes a third software level above the hardware.
In our case, we have an HP ProBook 450 G3 laptop, which supports it. Before continuing with
the installation, follow the steps given below.
Step 1: Ensure that hardware virtualization support is turned on in the BIOS settings as shown
below:
Step 2: Type in the search bar “turn windows features on or off” and click on that feature as
shown below.
Creating a Virtual Machine with Hyper-V
In this section, we will learn how to create a virtual machine. To begin with, we have to open
the Hyper-V manager and then follow the steps given below.
Step 1: Go to “Server Manager” -- Click on “Hyper-V Manager”.
Step 3: Double-click on “Virtual Machine…”
Step 4: A new table will open -- Type Name of your new machine -- click “Next”.
Step 5: A new table will be opened where you have to allocate the memory. Keep in mind you
cannot choose more memory than you have physically.
Step 6: In the "Connection" drop-down box, choose your physical network adaptor and click on
"Next".
Step 7: Now it is time to create a Virtual Hard disk, if you already have one, choose the second
option.
Step 8: Select the ISO image that has to be installed -- click on "Finish".
Step 9: After clicking on finish, you would get the following message as shown in the
screenshot below.
Step 10: To connect to the virtual machine, right-click on the created machine and click on
"Connect…".
The Hyper-V vSwitch is a software, layer-2 Ethernet network-traffic switch. It allows
administrators to connect VMs to either physical or virtual networks. It is available by
default within the Hyper-V Manager installation and contains extended capabilities for
security and resource tracking.
If you attempt to create a VM right after the set-up process, you will not be able to connect
it to a network.
To set up a network environment, you will need to select the Virtual Switch Manager in the
right hand side panel of Hyper-V Manager as shown in the screenshot below.
The Virtual Switch Manager helps configure the vSwitch and the Global Network Settings,
which simply lets you change the default ‘MAC Address Range’, if you see any reason for
that.
Creation of the virtual switch is easy and there are three vSwitch types available, which are
described below:
• External vSwitch will link a physical NIC of the Hyper-V host with a virtual one and then
give your VMs access outside of the host, that is, to your physical network and the
internet (if your physical network is connected to the internet).
• Internal vSwitch should be used for building an independent virtual network, when you
need to connect VMs to each other and to a hypervisor as well.
• Private vSwitch will create a virtual network where all connected VMs will see each
other, but not the Hyper-V host. This will completely isolate the VMs in that sandbox.
Here, we have selected "External" and then "Create Virtual Switch". A table with the
settings of the vSwitch will open, where we fill in the fields as shown below:
Allocating Processors & Memory to a VM using Hyper-V
In this section, we will see the task of allocating CPU, Memory and Disk Resources to the
virtual machines that are running on a server. The key to allocating CPU or any other type of
resource in Hyper-V is to remember that everything is relative.
For example, Microsoft has released some guidelines for Virtualizing Exchange Server. One
of the things that was listed was that the overall system requirements for Exchange Server
are identical whether Exchange is being run on a virtual machine or on a dedicated server.
To allocate one of the features mentioned above, we need to click on the “Settings…” tab in
the right hand side panel.
To allocate more memory to the selected virtual machine, click on the "Memory" tab on the
left hand side of the screen. Under "Startup RAM" you can allocate as much RAM as you
physically have to a VM -- click on "OK".
To allocate more processors, click on the “Processor” tab on the left hand side of the panel.
Then you can enter the number of virtual processors for your machine.
If you need to expand or compress the capacity of the virtual hard disk, click on the "IDE
Controller 0" entry in the left hand side panel -- click on "Edit".
Select one of the options based on your need (all of them have their respective
descriptions) and then click on “Next”.
Click on “Finish” and wait for the process to finish.
Step 2: A table will pop up asking you to select a "Boot disk" or "Boot Image", or to install the OS
at a later stage.
We will choose the second option and click on “Browse”. Then we have to click on the ISO
image, which we want to install. Once all this is done, click on “Next”.
Step 3: As we are installing Windows Server 2012, a table will pop up requesting the serial key;
click directly on "Next" if you want to activate the non-commercial version of Windows.
Step 4: After the above step is complete, a dialogue box opens. Click “Yes”.
Step 5: Click “Next”.
Step 6: In the “Maximum size disk” box, enter the value of your virtual Hard disk, which in
our case is 60GB. Then click on “Next”.
Setting up Networking with VMware Workstation
To set up the networking modes of a virtual machine in a VMware Workstation, we have to
click on the “Edit virtual machine settings”.
A table will be opened with the settings of networking and on the left hand side panel of
this table click on “Network Adaptor”.
On the left of this table, you can see the networking modes as shown in the following
screenshots.
VMware ESXi: The Purpose-Built Bare Metal Hypervisor
VMware ESXi is a purpose-built bare-metal hypervisor that installs directly onto a physical
server. With direct access to and control of underlying resources, ESXi is more efficient than
hosted architectures and can effectively partition hardware to increase consolidation ratios
and cut costs for our customers.
ESX runs on bare metal (without running an operating system) unlike other VMware
products. It includes its own kernel: A Linux kernel is started first, and is then used to load a
variety of specialized virtualization components, including ESX, which is otherwise known as
the vmkernel component. The Linux kernel is the primary virtual machine; it is invoked by
the service console. At normal run-time, the vmkernel is running on the bare computer, and
the Linux-based service console runs as the first virtual machine. VMware dropped
development of ESX at version 4.1, and now uses ESXi, which does not include a Linux
kernel.
Today’s IT teams are under unprecedented pressure to meet fluctuating market trends and
heightened customer demands. At the same time, these teams must stretch IT resources to
accommodate increasingly complex projects. Fortunately, ESXi can help balance the need
for better business outcomes and IT savings. Here’s how:
• Streamlines IT administration through centralized management.
• Reduces costs, such as CapEx and OpEx savings.
• Minimizes hardware resources needed to run hypervisor for cost savings and more
efficient utilization.
3.5 iSCSI
iSCSI is a protocol for transporting SCSI commands and data over TCP/IP Ethernet
connections. The iSCSI protocol is an open standard, which was developed under the
auspices of the Internet Engineering Task Force (IETF), the body responsible for most
internet and networking standards. The standard was ratified in 2003 and is documented in
a number of standards documents, the prime one being RFC3720. In fact iSCSI embraces a
family of protocols; it uses Ethernet and TCP/IP as its underlying transport mechanism, it
makes use of standard authentication protocols such as CHAP, and uses other protocols
such as iSNS for discovery.
Fortunately, virtually all these protocols and their details are hidden from the end user, and
setting up iSCSI storage is very easy. However, some knowledge of a few iSCSI fundamentals
and terms is useful.
There are two types of iSCSI network entity: initiator and target.
Hardware initiators are less widely used, although they may be useful in very high performance
applications, or if 10-gigabit Ethernet support is required.
iSCSI targets are embedded in iSCSI storage controllers. They are the software that makes
the storage available to host computers, making it appear just like any other sort of disk
drive.
Initiators and targets have an IP address, just like any other network entity. They are also
identified using an iSCSI name, called the iSCSI Qualified Name (IQN). The IQN must be
unique world-wide, and is made up of a number of components, identifying the vendor, and
then uniquely identifying the initiator or target. An example of an IQN is:
iqn.2001-04.com.example:storage:diskarray-sn-123456789
Since these names are rather unwieldy, initiators and targets also use short, user-friendly
names, sometimes called aliases.
Sessions are normally established (or re-established) automatically when the host computer
starts up, although they also can be manually established and broken.
CHAP authentication
If there are security concerns, it is possible to set up target and initiator authentication,
using the CHAP authentication protocol. With CHAP authentication, an initiator can only
connect to a target if it knows the target’s password, or secret. To set up CHAP, the same
secret needs to be given to both the initiator and target.
It is also possible to use mutual CHAP, where there is a second authentication phase. After
the target has authenticated the initiator, the initiator then authenticates the target, using
an initiator secret.
Put simply, with CHAP, the target can ensure that the initiator attempting to connect is who
it claims to be; with mutual CHAP, the initiator can also ensure the target is who it claims to
be.
iSCSI discovery
iSCSI discovery is the process by which an iSCSI initiator ‘discovers’ an iSCSI target. Discovery
is done using a special type of session, called a discovery session, where an initiator
connects to a storage controller and asks for a list of the targets present on the controller.
The target responds with a list of all the targets to which the initiator has access.
The iSNS protocol can also be used for discovery, but is not widely used and vSphere does
not support it. SvSAN includes limited support for iSNS, in that it can be used for non-
mirrored targets but not mirrored ones.
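On a Linux host using the Open-iSCSI initiator, discovery and login look roughly like the following sketch (the portal address 192.168.0.50 is illustrative, and the IQN is the example target name given above):
iscsiadm -m discovery -t sendtargets -p 192.168.0.50
iscsiadm -m node -T iqn.2001-04.com.example:storage:diskarray-sn-123456789 -p 192.168.0.50 --login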
NAS is "Any server that shares its own storage with others on the network and acts as a file
server in the simplest form". Network Attached Storage shares files over the network.
Some of the most significant protocols used are SMB, NFS, CIFS, and TCP/IP. When you
access files on a file server from your Windows system, that is NAS.
NAS uses an Ethernet connection for sharing files over the network. The NAS
device has an IP address and is then accessible over the network through that IP
address. Among the biggest providers of NAS devices are QNAP and Lenovo.
The following illustration shows how NAS works.
The following illustration shows how a SAN switch operates.
A Storage Area Network (SAN) is a specialized, high-speed network that provides block-level
network access to storage. SANs are typically composed of hosts, switches, storage
elements, and storage devices that are interconnected using a variety of technologies,
topologies, and protocols. SANs may also span multiple sites.
A SAN presents storage devices to a host such that the storage appears to be locally
attached. This simplified presentation of storage to a host is accomplished through the use
of different types of virtualization.
SANs are commonly based on Fibre Channel (FC) technology that utilizes the Fibre Channel
Protocol (FCP) for open systems and proprietary variants for mainframes. In addition, the
use of Fibre Channel over Ethernet (FCoE) makes it possible to move FC traffic across
existing high-speed Ethernet infrastructures and converge storage and IP protocols onto a
single cable. Other technologies like Internet Small Computing System Interface (iSCSI),
commonly used in small and medium sized organizations as a less expensive alternative to
FC, and InfiniBand, commonly used in high performance computing environments, can also
be used. In addition, it is possible to use gateways to move data between different SAN
technologies.
3.8 VLAN
A virtual local area network (VLAN) is a logical group of workstations, servers and network
devices that appear to be on the same LAN despite their geographical distribution. A VLAN
allows a network of computers and users to communicate in a simulated environment as if
they exist in a single LAN and are sharing a single broadcast and multicast domain. VLANs
are implemented to achieve scalability, security and ease of network management and can
quickly adapt to changes in network requirements and relocation of workstations and server
nodes.
Higher-end switches allow the functionality and implementation of VLANs. The purpose of
implementing a VLAN is to improve the performance of a network or apply appropriate
security features.
Computer networks can be segmented into local area networks (LANs) and wide area
networks (WANs). Network devices such as switches, hubs, bridges, workstations and
servers connected to each other in the same network at a specific location are generally
known as LANs. A LAN is also considered a broadcast domain.
A VLAN allows several networks to work virtually as one LAN. One of the most beneficial
elements of a VLAN is that it reduces latency in the network, which saves network
resources and increases network efficiency. In addition, VLANs are created to provide
segmentation and assist in issues like security, network management and scalability. Traffic
patterns can also easily be controlled by using VLANs.
• High risk of virus issues because one infected system may spread a virus through the
whole logical network
• Equipment limitations in very large networks because additional routers might be
needed to control the workload
• More effective at controlling latency than a WAN, but less efficient than a LAN.
3. ______ is a specialized, high-speed network that provides block-level network access to storage.
3.9 Summary
A hypervisor is a function which abstracts and isolates operating systems and applications
from the underlying computer hardware. It creates a virtual platform on the host computer,
on top of which multiple guest operating systems are executed and monitored. Hypervisors
are of two types:
• Native or Bare-Metal Hypervisor
• Hosted Hypervisor
Native hypervisors are software systems that run directly on the host's hardware to control
the hardware and to monitor the Guest Operating Systems. Hosted hypervisors are
designed to run within a traditional operating system.
iSCSI is a protocol for transporting SCSI commands and data over TCP/IP Ethernet
connections. The iSCSI protocol is an open standard, which was developed under the
auspices of the Internet Engineering Task Force (IETF), the body responsible for most
internet and networking standards.
NAS is "Any server that shares its own storage with others on the network and acts as a file
server in the simplest form". Network Attached Storage shares files over the network. A
Storage Area Network (SAN) is a specialized, high-speed network that provides block-level
network access to storage. A SAN presents storage devices to a host such that the storage
appears to be locally attached.
A virtual local area network (VLAN) is a logical group of workstations, servers and network
devices that appear to be on the same LAN despite their geographical distribution. VLANs
are implemented to achieve scalability, security and ease of network management and can
quickly adapt to changes in network requirements and relocation of workstations and server
nodes.
3.10 Check Your Progress Answers
1. hypervisor
2. Hosted
3. Native
1. NAS
2. iSCSI
3. bare-metal
1. SANs
2. VLAN
3. SAN
3.11 Questions for self-study:
1. What is a hypervisor?
2. Explain the different types of hypervisors?
3. Write a short note on VMware ESXi.
4. What Is a Storage Area Network?
5. What are the key benefits of a VLAN?
Chapter IV
4.0 Objective
4.1 Introduction
4.2 Deployment models of cloud computing
4.3 The three delivery models
4.4 Risks of Cloud Computing
4.5 Disaster Recovery in Cloud Computing
4.6 Summary
4.7 Check Your Progress Answers
4.0 Objective
After studying this chapter you will be able to understand,
Cloud computing and its characteristics
Deployment models of cloud computing
Cloud computing delivery models
Risk factors involved in cloud computing
Disaster recovery and planning
4.1 Introduction:
“Cloud computing is a model for enabling convenient, on-demand network access to a
shared pool of configurable resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.”
Cloud computing intends to realize the concept of computing as a utility, just like water, gas,
electricity and telephony. It also embodies the desire to offer computing resources as true
services: software, computing platforms and computing infrastructure may all be
regarded as services, with no concern as to how or from where they are actually provided.
The potential of cloud computing has been recognized by major industry players such that
the top five software companies by sales revenue all have major cloud offerings.
Since Cloud Computing is completely web based, it can be accessed from anywhere
and at any time.
Resource Pooling
Cloud Computing allows multiple tenants to share a pool of resources. One can share a
single physical instance of hardware, database and basic infrastructure.
Rapid Elasticity
It is very easy to scale the resources up or down at any time. The resources used by
customers or currently assigned to customers are automatically monitored.
Service Model for Cloud Computing
Desktop as a service: We’re connecting to a desktop operating system over the
Internet, which enables us to use it from anywhere. It’s also not affected if our own
physical laptop gets stolen, because we can still use it.
Storage as a service: We're using storage that physically exists on the Internet as if it
were present locally. This is very often used in cloud computing and is the primary basis of
a NAS (network attached storage) system.
Database as a service: Here we’re using a database service installed in the cloud as if
it was installed locally. One great benefit of using database as a service is that we can
use highly configurable and scalable databases with ease.
Information as a service: We can access any data in the cloud by using the defined
API as if it was present locally.
Security as a service: This enables the use of security services as if they were
implemented locally.
There are other services that exist in the cloud, but we’ve presented just the most
widespread ones that are used on a daily basis.
If we want to start using the cloud, we need to determine which service model we want to
use. The decision largely depends on what we want to deploy to the cloud. If we would like
to deploy a simple web application, we might want to choose an SaaS solution, where
everything will be managed by the service provider and we only have to worry about writing
the application code. An example of this is writing an application that can run on Heroku.
We can think of the service models in the term of layers, where the IaaS is the bottom layer,
which gives us the most access to customize most of the needed infrastructure. The PaaS is
the middle layer, which automates certain things, but is less configurable. The top layer is
SaaS, which offers the least configuration, but automates a large part of the infrastructure
that we need when deploying an application.
Cloud Computing Examples
Two popular cloud computing facilities are Amazon Elastic Compute Cloud (EC2) and Google
App Engine. Amazon EC2 is part of a set of standalone services which include S3 for storage,
EC2 for hosting and the SimpleDB database. Google App Engine is an end-to-end service,
which combines everything into one package to provide a PaaS facility. With Amazon EC2,
users may rent virtual machine instances to run their own software and users can monitor
and increase/decrease the number of VMs as demand changes.
To use Amazon EC2 users would:
Create an Amazon Machine Image (AMI): incorporate applications, libraries, data
and associated settings
Upload AMI to Amazon S3
Use Amazon EC2 web service to configure security and network access
Choose OS, start AMI instances
Monitor & control via web interface or APIs.
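The same EC2 workflow can be scripted. Below is a minimal sketch using the boto3 Python
library; the region, AMI ID and instance type are placeholders to be replaced with values from
your own AWS account.

# Minimal EC2 launch-and-monitor sketch (pip install boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start one small instance from an existing AMI ("start AMI instances").
response = ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Monitor the instance through the API ("monitor & control via APIs").
status = ec2.describe_instance_status(InstanceIds=[instance_id], IncludeAllInstances=True)
print(status["InstanceStatuses"])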
Google’s App Engine allows developers to run their web applications on Google’s
infrastructure.
To do so a user would:
Download App Engine SDK
Develop the application locally as a set of python programs
Register for an application ID
Submit the application to Google.
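The application itself is ordinary Python code. The sketch below assumes the current App Engine
standard environment, where a small Flask program plus an app.yaml file (for example,
runtime: python39) is deployed with the gcloud tool; the names and port are illustrative only.

# main.py -- a minimal web application of the kind deployed to App Engine.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # Handler invoked for requests to the application's root URL.
    return "Hello from App Engine"

if __name__ == "__main__":
    # Local development server; App Engine supplies its own server in production.
    app.run(host="127.0.0.1", port=8080)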
Having provided an overview of cloud computing we may now consider computer forensics
before we then proceed to consider the two together.
1 A _______ cloud is hosted inside the organization and is behind a firewall, so the
organization has full control of who has access to the cloud infrastructure.
2 The services of a _______ cloud are used by several organizations to lower the costs, as
compared to a private cloud.
3 _______ provides a platform such as operating system, database, web server, etc.
While cloud technology is becoming more prevalent in both the public and private sectors,
the concern for businesses is migrating data to the cloud. Modern data backup and recovery
solutions have shifted to cloud-based data backup and disaster recovery solutions, which are
less hardware-dependent. They provide businesses an agile recovery approach by saving
crucial data at the time of disruption.
But the most important thing while evaluating a cloud data backup and recovery platform
is to think of your IT goals. The overall budget is also a factor to consider. As the cloud plays
an important role in moving capital expenditures to operational expenditures, IT managers
need to ensure they integrate a cloud disaster recovery solution that fits within the budget and
helps accomplish performance goals. To make this happen, you need to consider the
following factors for moving to an advanced cloud-based disaster recovery solution.
1 What are the risk factors involved in cloud computing?
2 Which points need to be considered while configuring cloud backup and disaster
recovery?
4.6 Summary
Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable resources that can be rapidly provisioned and released with minimal
management effort or service provider interaction.
Deployment models of cloud computing
Private cloud
Public cloud
Community cloud
Hybrid cloud
Distributed cloud
SaaS provides access to the service, but you don’t have to manage it because it’s done by
the service provider. PaaS provides a platform such as operating system, database, web
server, etc.
IaaS (infrastructure as a service) provides the entire infrastructure, including physical/virtual
machines, firewalls, load balancers, hypervisors, etc.
Chapter V
5.0 Objective
5.1 Introduction
5.2 Technologies and the processes required when deploying web services
5.3 Advantages & Disadvantages of Cloud Application
5.4 Cloud Services
5.5 Cloud Economics
5.6 Cloud infrastructure components
5.7 Summary
5.8 Check Your Progress Answers
5.0 Objective
After studying this chapter you will be able to understand,
Virtualization cloud model
Web services used in cloud computing
Advantages and disadvantages of cloud application
Cloud services like IaaS, PaaS & SaaS
Cloud infrastructure components.
5.1 Introduction
Cloud computing services typically provide information technology as a service over the
Internet or a dedicated network. These services are delivered on demand, and payment is
based on usage. They range from full applications (Software as a Service) and development
platforms (Platform as a Service) to servers, storage and virtual desktops (Infrastructure as a
Service).
Corporate and government entities utilize cloud computing services to address a variety of
application and infrastructure needs such as CRM, database, compute, and data storage.
The main advantage of cloud computing over a traditional IT environment is that cloud
services deliver IT resources in minutes to hours, with costs paid per usage. This helps
organizations manage expenses more efficiently. In the same way, consumers utilize cloud
computing services to simplify application utilization; store, share and protect content; and
enable access from any web-connected device.
5.2 Technologies and the processes required when deploying web services:
There are certain technologies that are working behind the cloud computing platforms
making cloud computing flexible, reliable and usable. These technologies are listed below:
Virtualization
Service-Oriented Architecture (SOA)
Grid Computing
Utility Computing
Virtualization
Virtualization is a technique which allows sharing a single physical instance of an application
or resource among multiple organizations or tenants (customers). It does so by assigning a
logical name to a physical resource and providing a pointer to that physical resource when
demanded.
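As a toy illustration of this idea (not a real hypervisor), the Python sketch below maps logical
resource names requested by tenants onto whichever physical host in a shared pool is free;
all names and numbers are made up.

# Toy mapping of logical resource names to physical resources in a shared pool.
physical_pool = {
    "host-01": {"cpus": 16, "free": True},
    "host-02": {"cpus": 32, "free": True},
}
logical_to_physical = {}

def provision(logical_name):
    # Bind the tenant's logical name to any free physical host and return it.
    for host, info in physical_pool.items():
        if info["free"]:
            info["free"] = False
            logical_to_physical[logical_name] = host
            return host
    raise RuntimeError("no free physical resource")

# Two tenants each ask for "their" server and receive pointers into the same pool.
print(provision("tenant-a/db-server"))
print(provision("tenant-b/db-server"))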
Service-Oriented Architecture
Service-Oriented Architecture helps to use applications as a service for other applications
regardless of the type of vendor, product or technology. Therefore, it is possible to exchange
data between applications of different vendors without additional programming or making
changes to the services.
Grid Computing
Grid Computing refers to distributed computing in which a group of computers from
multiple locations are connected with each other to achieve a common objective. These
computer resources are heterogeneous and geographically dispersed. Grid Computing
breaks complex task into smaller pieces. These smaller pieces are distributed to CPUs that
reside within the grid.
Utility Computing
Utility computing is based on Pay per Use model. It offers computational resources on
demand as a metered service. Cloud computing, grid computing, and managed IT services
are based on the concept of utility computing.
Usability: All cloud storage services reviewed in this topic have desktop folders for
Macs and PCs. This allows users to drag and drop files between the cloud storage
and their local storage.
Bandwidth: You can avoid emailing files to individuals and instead send a web link to
recipients through your email (see the storage sketch after this list).
Accessibility: Stored files can be accessed from anywhere via an Internet connection.
Disaster Recovery: It is highly recommended that businesses have an emergency
backup plan ready in case of an emergency. Cloud storage can be used by businesses
as a backup plan by providing a second copy of important files. These files
are stored at a remote location and can be accessed through an internet connection.
Cost Savings: Businesses and organizations can often reduce annual operating costs
by using cloud storage; cloud storage costs about 3 cents per gigabyte to store data.
Users can see additional cost savings because storing information remotely does not
require internal power.
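As a concrete illustration of the bandwidth and disaster-recovery points above, the sketch
below copies a local file to cloud storage and generates a time-limited web link that can be
shared instead of emailing the file. It assumes an Amazon S3 bucket and uses the boto3
library; the bucket and file names are placeholders.

# Minimal cloud-storage sketch (pip install boto3); names are placeholders.
import boto3

s3 = boto3.client("s3")

# Keep a second copy of an important file at a remote location (backup/DR).
s3.upload_file("quarterly-report.xlsx", "example-backup-bucket",
               "reports/quarterly-report.xlsx")

# Generate a shareable link that expires after one hour (saves email bandwidth).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-backup-bucket",
            "Key": "reports/quarterly-report.xlsx"},
    ExpiresIn=3600,
)
print(url)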
5.4 Cloud Services
1) Infrastructure-as-a-Service
IaaS provides access to fundamental resources such as physical machines, virtual machines
and virtual storage. Apart from these resources, IaaS also offers:
Virtual machine disk storage
Virtual local area networks (VLANs)
Load balancers
IP addresses
Software bundles
All of the above resources are made available to the end user via server virtualization.
Moreover, these resources are accessed by the customers as if they owned them.
Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the Internet in a cost-
effective manner. Some of the key benefits of IaaS are listed below:
Full Control of the computing resources through Administrative Access to VMs.
Flexible and Efficient renting of Computer Hardware.
Portability, Interoperability with Legacy Applications.
The consumer has to pay based on the length of time a consumer retains a resource.
Also with administrative access to virtual machines, the consumer can also run any
software, even a custom operating system.
Issues
IaaS shares issues with PaaS and SaaS, such as Network dependence and browser based
risks. It also has some specific issues associated with it. These issues are mentioned in the
following diagram:
Characteristics
Here are the characteristics of IaaS service model:
Virtual machines with pre-installed software.
Virtual machines with pre-installed Operating Systems such as Windows, Linux, and
Solaris.
On-demand availability of resources.
Allows storing copies of particular data in different locations.
The computing resources can be easily scaled up and down.
2) Platform-as-a-Service
PaaS offers the runtime environment for applications. It also offers development &
deployment tools, required to develop applications. PaaS has a feature of point-and-click
tools that enable non-developers to create web applications. Google's App Engine and
Force.com are examples of PaaS vendors.
Developers may log on to these websites and use the built-in API to create web-based
applications. The disadvantage of using PaaS is that the developer may be locked in to a
particular vendor. For example, an application written in Python against Google's API using
Google's App Engine is likely to work only in that environment.
Therefore, the vendor lock-in is the biggest problem in PaaS. The following diagram shows
how PaaS offers an API and development tools to the developers and how it helps the end
user to access business applications.
Benefits
Lower Administrative Overhead
The consumer need not bother much about administration because it is the
responsibility of the cloud provider.
Lower Total Cost of Ownership
The consumer need not purchase expensive hardware, servers, power and data storage.
Scalable Solutions
It is very easy to scale resources up or down automatically based on application
resource demands.
Issues
Like SaaS, PaaS also places significant burdens on the consumer's browser to maintain reliable
and secure connections to the provider systems. Therefore, PaaS shares many of the issues
of SaaS. However, there are some specific issues associated with PaaS as shown in the
following diagram:
Characteristics
Here are the characteristics of PaaS service model:
PaaS offers a browser-based development environment. It allows the developer to
create a database and edit the application code either via an Application Programming
Interface or point-and-click tools.
PaaS also provides web services interfaces that allow us to connect to applications
outside the platform.
3) Software-as-a-Service
The Software as a Service (SaaS) model allows providing software applications as a service to
the end users. It refers to software that is deployed on a hosted service and is accessible via the
Internet. There are several SaaS applications; some of them are listed below:
Billing and Invoicing System
Customer Relationship Management (CRM) applications
Help Desk Applications
Human Resource (HR) Solutions
Some SaaS applications are not customizable, such as an office suite. But SaaS
provides an Application Programming Interface (API), which allows the developer to
develop a customized application.
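The sketch below shows what extending a SaaS product through its API can look like. The
endpoint, token and fields are hypothetical; a real CRM or help-desk product documents its
own URLs and authentication scheme.

# Hypothetical SaaS API call (pip install requests); URL and token are placeholders.
import requests

API_BASE = "https://api.example-crm.com/v1"          # hypothetical SaaS endpoint
headers = {"Authorization": "Bearer <your-api-token>"}

# Create a customer record through the API instead of the web interface.
resp = requests.post(f"{API_BASE}/customers",
                     json={"name": "Acme Ltd", "plan": "standard"},
                     headers=headers)
resp.raise_for_status()
print(resp.json())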
Characteristics
Here are the characteristics of SaaS service model:
SaaS makes the software available over the Internet.
The software is maintained by the vendor rather than by the end user.
The license to the software may be subscription based or usage based, and it is
billed on a recurring basis.
SaaS applications are cost effective since they do not require any maintenance at
end user side.
They are available on demand.
They can be scaled up or down on demand.
They are automatically upgraded and updated.
SaaS offers a shared data model. Therefore, multiple users can share a single instance of
the infrastructure. It is not required to hard-code functionality for individual users.
All users run the same version of the software.
Benefits
Using SaaS has proved to be beneficial in terms of scalability, efficiency, performance
and much more. Some of the benefits are listed below:
Modest Software Tools
Efficient use of Software Licenses
Centralized Management & Data
Platform responsibilities managed by provider
Multitenant solutions
5.5 Cloud Economics
Cloud economics is concerned with the costs and benefits of cloud computing, and
specifically how cloud services will affect an IT budget and staffing needs. In assessing cloud
economics, CIOs and IT leaders weigh the costs pertaining to infrastructure, management,
research and development, security and support to determine if moving to the cloud makes
sense given their organization's specific circumstances.
Although the cloud can facilitate resource provisioning and flexible pricing, there are several
cloud computing costs beyond instance price lists to consider. Pricing usually includes
storage, networking, load balancing, security, redundancy, backup, software services and
operating system licenses but some cloud computing considerations that affect resource
contention, bandwidth and salaries can come as a surprise.
IT leaders within an organization must closely examine the economics of moving to the
cloud before deciding whether to invest in the expertise and time that is required to
maximize cloud investments.
Each cloud service provider has a unique bundle of services and pricing models. Different
providers have unique price advantages for different products. Typically, pricing variables
are based on the period of usage, with some providers allowing for by-the-minute usage as
well as discounts for longer commitments.
The most common model for SaaS based products is on a per user, per month basis though
there may be different levels based on storage requirements, contractual commitments or
access to advanced features. PaaS and IaaS pricing models are more granular, with costs for
specific resources or ‘resource sets’ consumed. Aside from financial competitiveness, look
for flexibility not only in terms of resource variables but also in terms of speed to provision
and de-provision.
An application architecture that allows you to scale different workload elements independently
means you can use cloud resources more efficiently. You may find that your ability to fine-
tune scalability is affected by the way your cloud service provider packages its services, and
you'll want to find a provider that matches your requirements in this regard.
5.6 Cloud infrastructure components
Cloud infrastructure refers to the hardware and software components -- such as servers,
storage, networking and virtualization software -- that are needed to support the computing
requirements of a cloud computing model. Cloud infrastructure also includes an abstraction
layer that virtualizes resources and logically presents them to users through application
program interfaces and API-enabled command-line or graphical interfaces.
Check your progress 5.4 & 5.5
1. _______ provides access to fundamental resources such as physical machines, virtual
machines, virtual storage.
2. _______ offers development & deployment tools, required to develop applications.
3. _______ model allows providing software applications as a service to the end users.
Major public cloud providers, such as Amazon Web Services (AWS) or Google Cloud
Platform, offer services based on shared, multi-tenant servers. This model requires massive
compute capacity both to handle unpredictable changes in user demand and to optimally
balance demand across fewer servers. As a result, cloud infrastructure typically consists of
high-density systems with shared power.
The disks in each system are aggregated using a distributed file system designed for a
particular storage scenario, such as object, big data or block. Decoupling the storage control
and management from the physical implementation via a distributed file system simplifies
scaling.
It also helps cloud providers match capacity to users' workloads by incrementally adding
compute nodes with the requisite number and type of local disks, rather than in large
amounts via a large storage chassis.
Infrastructure as a service (IaaS) is a cloud model that gives organizations the
ability to rent IT infrastructure components -- including compute, storage and
networking -- over the internet from a public cloud provider. This public cloud service model
is often referred to as IaaS.
IaaS eliminates the upfront capital costs associated with on-premises infrastructure, and
instead follows a usage-based consumption model. In this pay-per-usage model, users only
pay for the infrastructure services consumed, generally on an hourly, weekly or monthly
basis.
Cloud providers typically price IaaS on a metered basis, with rates corresponding to usage at
a given level of performance. For virtual servers, this means different prices for various
server sizes, typically measured as an increment of a standard virtual CPU size and
corresponding memory. For storage, pricing is typically based on the type of storage service,
such as object or block, performance level (SSD or HDD) and availability a single storage
location or replication across multiple geographic regions. Capacity is measured by usage
per unit time typically per month.
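A back-of-the-envelope estimate makes this metered pricing concrete. The rates below are
made-up placeholders; real figures come from the provider's price list.

# Rough monthly cost estimate for a small IaaS deployment (all rates are placeholders).
HOURS_PER_MONTH = 730

vm_rate_per_hour = 0.046            # one small virtual server
storage_rate_per_gb_month = 0.023   # object storage
egress_rate_per_gb = 0.09           # data transferred out

vms = 4
storage_gb = 500
egress_gb = 200

monthly_cost = (vms * vm_rate_per_hour * HOURS_PER_MONTH
                + storage_gb * storage_rate_per_gb_month
                + egress_gb * egress_rate_per_gb)
print(f"Estimated monthly cost: ${monthly_cost:,.2f}")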
5.7 Summary
Virtualization is a technique, which allows sharing single physical instance of an application
or resource among multiple organizations or tenants (customers). The Multitenant
architecture offers virtual isolation among the multiple tenants and therefore the
organizations can use and customize the application as though they each have their own
instance running.
Service-Oriented Architecture helps to use applications as a service for other applications
regardless of the type of vendor, product or technology. Grid Computing refers to distributed
computing in which a group of computers from multiple locations are connected with each
other to achieve common objective. Utility computing offers computational resources on
demand as a metered service. Cloud computing, grid computing, and managed IT services
are based on the concept of utility computing.
IaaS provides access to fundamental resources such as physical machines, virtual machines,
virtual storage, etc. PaaS offers development & deployment tools, required to develop
applications. PaaS has a feature of point-and-click tools that enables non-developers to
create web applications. SaaS refers to software that is deployed on a hosted service and is
accessible via Internet.
Cloud infrastructure refers to the hardware and software components -- such as servers,
storage, networking and virtualization software -- that are needed to support the computing
requirements of a cloud computing model. Major public cloud providers, such as Amazon
Web Services (AWS) or Google Cloud Platform, offer services based on shared, multi-tenant
servers. Infrastructure as a service (IaaS) is a cloud model that gives organizations the ability
to rent those IT infrastructure components -- including compute, storage and networking --
over the internet from a public cloud provider.
Chapter VI
6.0 Objective
6.1 Introduction
6.2 Selecting Cloud Platform
6.3 Strategy Planning Phase
6.4 Cloud Computing Tactics Planning Phase
6.5 Cloud Computing Deployment Phase
6.6 Case Study
6.7 Summary
6.8 Check Your Progress Answers
6.0 Objective
After studying this chapter you will be able to understand,
Business requirements to consider before selecting a cloud platform,
Strategy required in selecting a cloud platform,
How to analyze problems and risks in the cloud application,
Cloud Computing Deployment Phase,
How to select an economic cloud deployment model.
6.1 Introduction
To select the proper cloud platform you need to consider the requirements of the applications
or data to be hosted on the cloud platform. The first point to be considered is the type of data
stored. The next elements to be considered are the size of the business, the scalability of the
services used, and the number of devices to be connected to the cloud server. The last point
to be considered is budget; the user has to select free or paid cloud services according to usage.
To meet all of these requirements, it is necessary to have a well-compiled plan. Here in
this chapter, we will discuss the various planning phases that must be practiced by an
enterprise before migrating the entire business to the cloud. Each of these planning phases is
described in the following diagram:
Businesses expect a lot from their cloud investments, and even though the term "cloud" has
been around for several years, the industry is still, relatively speaking, in its infancy. Very
few companies have clear cloud computing strategies in place, and even fewer have a
strategic plan laid out, which can be detrimental to the success of the cloud computing
model.
As businesses start to plan strategically for cloud adoption, they need to carefully consider
the scope of their planning activities. Cloud computing is a big change, but it is also only a
single part of a journey that began over a decade ago, that saw the industry transitioning
from old, legacy systems and monolithic technology, to a service oriented approach to ICTs.
In this phase, we analyze the strategy problems that a customer might face. There are two
steps to perform this analysis:
Cloud Computing Value Proposition
Cloud Computing Strategy Planning
I. Cloud Computing Value Proposition
In this step, we analyze the factors influencing customers when applying the cloud computing
mode and target the key problems they wish to solve.
These key factors are:
IT management simplification.
Operation and maintenance cost reduction.
Business mode innovation.
Low cost outsourcing hosting.
High service quality outsourcing hosting.
All of the above analysis helps in decision making for future development.
Strategic planning for cloud must also fit in with the basic business strategies, particularly
around services. This doesn't necessarily mean a massive plan encompassing every level of
the organization, but rather one that takes into account the value chain and makes sure that
innovation isn't stifled and business opportunities are not cut short by current application
boundaries. A "mud against the wall" approach will not work; rather formulate a strong and
logical framework in which several options can work.
Cloud computing has grown and changed a lot since its early days. In its early stages, cloud
computing was more about cost restructuring, particularly where automation and
standardization were concerned. Platform-as-a-Service (PaaS) was used mainly for non-
critical or niche business applications, and the Software-as-a-Service (SaaS) layer was used
for critical web applications such as ERP, CRM and other critical applications such as email.
Cloud computing's next stage will focus more on business services as well as the
architecture of those services. Cloud services will need to align with all the client facing
business services, to create a truly service oriented business. Because of this, cloud
computing needs to be fully integrated with strategic planning, particularly for businesses
that are after more than just rationalizing their infrastructure investments.
Cloud computing will continue to evolve and grow over the next few years. This fast
morphing environment requires planning that can prepare for growth and change.
Companies must plan for ongoing and fast maturing of all the variables - technology,
suppliers, standards, regulations, products and services, as well as internal capabilities such
as skills and learning.
I. Cloud Computing Provider
This step includes selecting a cloud provider on basis of Service Level Agreement (SLA),
which defines the level of service the provider will meet.
II. Maintenance and Technical Service
Maintenance and technical services are provided by the cloud provider, which must
ensure the quality of the services.
The public cloud offers the greatest potential benefits and the greatest potential risks. The
public cloud model helps to reduce capital expenditure and bring down operational IT costs, as
applications and computing resources are managed by a third-party service provider.
Moreover, customers only pay for what they use on a usage-subscription basis, and they can
terminate their service at any time. Thus, public clouds accelerate deployments and reduce
costs.
In addition, a public cloud also obviates the need for customers to maintain and upgrade
application code and infrastructure. Many public cloud customers are astonished to see new
software features automatically appear in their software without notice. Also, the public
cloud frees up IT departments to focus on more value-added activities rather than hardware
and software upgrades and maintenance. Google is a good example of a public cloud, as its
services are provided by the vendor free of charge or on the basis of a pay-per-user license policy.
Risks
The public cloud also comes with many risks. Among these, security and privacy are the
biggest: organizations fear that moving data and processing beyond their own boundary
exposes sensitive corporate data to the wrong hands. The actual fact is that most corporate
resources are more secure in the public cloud than in a corporate data center, because public
cloud providers specialize in data center operations and management and must meet the
most stringent requirements for security and privacy. However, there are compliance
regulations that legally require some establishments to keep data within corporate firewalls
or to know the exact location of their data, which is generally impossible in a public cloud
that virtualizes data and processing across a grid of national or international computers.
Challenges
The public cloud poses many challenges, such as reliability, cost, blank slate and technology
viability.
Reliability refers to the reliability of public cloud resources. For example, the Amazon EC2
public cloud has had high-profile outages, leaving companies stranded without much
visibility into the nature of the outage. The cost of the cloud can be extremely difficult to
estimate because pricing is complex and companies often cannot accurately estimate their
usage. Blank slate refers to having to redefine corporate policies and application workflows
from scratch on a cloud that generally provides plain-vanilla services. Vendor and technology
viability refers to the difficulty of knowing which vendors and technologies will be around in
the future.
Applications
The public cloud is best suited for business requirements where an organization needs to
manage load spikes, host applications, utilize interim infrastructure, and manage
applications that are consumed by many users.
Amazon Web Services (AWS) gives customers access to cloud services at competitive prices,
with the flexibility to meet their business needs. Whether it’s a small startup or a large
enterprise, all companies can leverage the features and functionality of AWS to improve
performance and increase productivity.
Weighing the financial considerations of operating a data center versus using cloud
infrastructure is not as simple as comparing hardware, storage, and compute costs.
Whether you own your own data center or rent space at a colocation facility, you have to
manage investments, whether directly or indirectly, including but not limited to:
Capital expenditures
Operational expenditures
Staffing
Opportunity costs
Licensing
Facilities overhead
If you’re considering an expansion of your data center or colocation footprint, here are
some questions to ask:
Capacity planning
How many servers will be added this year? What are the forecasts for the next year
and beyond?
Can hardware be turned on and off when it’s not being used?
How does the pricing model work?
Utilization
What is the average server utilization?
How much needs to be provisioned for peak load? (A rough sizing sketch follows the
question list below.)
Operations
Are facilities adequate for expansion?
Is the organization ready for international expansion?
Can utilities (electricity, cooling) be measured accurately and does budget cover both
average and peak requirements?
Optimization
Can we provide automatic scaling of our current infrastructure, or the ability to
“reserve” capacity?
What if we need to quickly expand the infrastructure? What costs come into play?
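To make the utilization questions above concrete, the small sketch below estimates how many
servers must be provisioned for peak load when average utilization is kept below a target;
every number is an illustrative assumption.

# Rough capacity-planning sketch; all figures are illustrative assumptions.
import math

avg_requests_per_sec = 1200
peak_to_average_ratio = 3.0        # peak load is assumed to be 3x the average
requests_per_server = 400          # measured capacity of one server
target_utilization = 0.7           # headroom so servers are not run flat out

peak_requests = avg_requests_per_sec * peak_to_average_ratio
servers_needed = math.ceil(peak_requests / (requests_per_server * target_utilization))
print("Servers to provision for peak load:", servers_needed)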
6.7 Summary:
Before deploying applications to the cloud, it is necessary to consider your business
requirements. Following are the issues one must think about:
Data Security and Privacy Requirement
Budget Requirements
Type of cloud - public, private or hybrid
Data backup requirements
Training requirements
Dashboard and reporting requirements
Client access requirements
Data export requirements
The Cloud Computing Value Proposition step helps us analyze the factors influencing
customers when applying the cloud computing mode and target the key problems they wish to
solve. Cloud Computing Strategy Planning is based on the analysis result of the above step.
In this step, a strategy document is prepared according to the conditions a customer might
face when applying the cloud computing mode.
This phase involves the following planning steps:
Business Architecture Development
IT Architecture development
Requirements on Quality of Service Development
Transformation Plan development
The Cloud Computing Deployment phase builds on the above two phases. It involves
the following two steps:
Cloud Computing Provider
Maintenance and Technical Service.
6.8 Check your progress answers
Check your progress 6.1, 6.2 & 6.3
1. Value Proposition
2. Strategic
3. Strategy
1. Business
2. Transformation
3. IT
QUESTION BANK
1. Write a procedure to create a new user and adding it to group.
2. Explain the different RAID levels.
3. What is YUM?
4. Write a short note on Network File System.
5. How do you configure a caching nameserver?
6. Differentiate between the advantages and disadvantages of virtualization.
7. Write a short note on Virtualization.
8. Write a short note on cloud computing.
9. What is Bare Metal Hypervisor?
10. What is Hosted Hypervisor?
11. Write a short note on iSCSI.
12. Explain Network-attached Storage.
13. Write a short note on VLAN.
14. Explain in detail Characteristics of Cloud Computing.
15. Explain the three delivery models in cloud computing.
16. What are the benefits of cloud computing?
17. Explain the five Things to Consider for Cloud Backup and Disaster Recovery.
18. Write a short note on Grid Computing.
19. Write a short note on Utility Computing.
20. Write a short note on Virtualization.
21. Explain advantages & disadvantages of cloud applications.
22. Write a short note on IaaS.
23. Write a short note on SaaS.
24. Write a short note on PaaS.
25. Write a short note on cloud infrastructure components.
26. Explain the technologies that work behind cloud computing platforms to make cloud
computing flexible, reliable and usable.
27. Explain in detail Cloud computing Strategy Planning Phase.
28. Explain in detail Cloud Computing Deployment Phase.
29. Explain with example factors to be considered while Selecting Cloud Platform.
Reference Books:
1. Cloud Computing For Dummies, by Fern Halper, Judith Hurwitz, Marcia Kaufman, and
Robin Bloor [ISBN: 1511404582, 9781511404587]
2. Cloud Computing: Principles and Paradigms, by Rajkumar Buyya, James Broberg,
Andrzej Goscinski [ISBN-13: 978-8126541256]
3. Cloud Computing - Theory and Practice, by Marinescu [ISBN-13: 978-9351070948]
4. Cloud Computing: Concepts, Technology & Architecture, 1e, by Erl [ISBN-13: 978-
9332535923]
5. Amazon Web Services For Admins For Dummies, by John Paul Mueller [ISBN-13: 978-
8126565634]
6. Distributed and Cloud Computing, 1st edition, Morgan Kaufmann, 2011.[ISBN-13: 978-
0123858801]
7. Enterprise Cloud Computing: Technology, Architecture, Applications, by Gautam Shroff
[ISBN: 978-0521137355]
8. Cloud Computing, A Practical Approach, by Toby Velte, Anthony Velte, Robert
Elsenpeter [ISBN: 0071626948]
9. Cloud Computing Strategies, by Dimitris N. Chorafas [ISBN: 1439834539]