Acuity Educare
LINUX SYSTEM ADMINISTRATION
SEM V: UNIT 5
In Linux, shells such as bash and ksh (the Korn shell) support programming constructs that can be saved as scripts. These scripts become shell commands, and many Linux commands are in fact scripts.
A system administrator should know at least a little scripting in order to understand how servers and applications are started, upgraded, maintained, or removed, and to understand how a user environment is built.
We can get the name of our current shell with the following command:
Syntax:
echo $SHELL
The $ sign introduces a shell variable; echo prints its arguments, so this command displays the value of the SHELL variable.
The sequence #! is called the she-bang and is written at the top of the script. It tells the system which program (for example /bin/sh) should interpret the script.
To run a script in a certain shell, start the script with #! followed by the path to that shell.
Any line starting with a hash (#) is a comment. A comment takes no part in script execution and does not show up in the output.
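As an illustration, here is a minimal script that combines the she-bang and a comment; the variable name and message are arbitrary:

```shell
#!/bin/bash
# This line is a comment: it is ignored during execution
greeting="Hello from bash"
echo "$greeting"
```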
A file can be sourced in two ways: either as source <filename> or as . <filename> on the command line. When a file is sourced, its lines are executed as if they were typed on the command line.
Page 1 of 27
YouTube - Abhay More | Telegram - abhay_more
607A, 6th floor, Ecstasy business park, city of joy, JSD road, mulund (W) | 8591065589/022-25600622
TRAINING -> CERTIFICATION -> PLACEMENT BSC IT : SEM - V : LINUX – U5
The difference between sourcing and executing a script is that an executed script runs in a new shell, whereas a sourced script is read and executed in the current shell. Consequently, variables set by a sourced script remain visible in the current shell, while variables set by an executed script are lost when its subshell exits.
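This difference can be observed with a small experiment; the file path /tmp/setvar.sh and the variable MYVAR are illustrative choices:

```shell
#!/bin/bash
# Create a small file that sets a variable
cat > /tmp/setvar.sh <<'EOF'
MYVAR=hello
EOF

bash /tmp/setvar.sh                 # executing: runs in a new shell
echo "after executing: '$MYVAR'"    # the child shell's variable is lost

. /tmp/setvar.sh                    # sourcing: runs in the current shell
echo "after sourcing: '$MYVAR'"     # now prints hello
```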
if then else
The if-then-else construct tests a condition: if the condition is met, the commands in the if part run; otherwise, the commands in the else part run.
Syntax:
Example
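A minimal sketch of the construct; the value 7 and the threshold 10 are arbitrary choices:

```shell
#!/bin/bash
num=7
if [ $num -gt 10 ]
then
    result="greater than 10"
else
    result="not greater than 10"
fi
echo "$num is $result"
```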
if then elif
Syntax:
Example:
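A sketch of if-then-elif; the mark and the grade thresholds are assumed for illustration:

```shell
#!/bin/bash
# Classify a mark into a grade (thresholds are illustrative)
mark=75
if [ $mark -ge 80 ]
then
    grade="A"
elif [ $mark -ge 60 ]
then
    grade="B"
else
    grade="C"
fi
echo "Mark $mark gets grade $grade"
```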
For loop
Syntax:
The syntax of the for loop using in and a list of values is shown below. The loop walks through the items in the list and executes once for each item.
For example, if there are 10 values in the list, the loop executes ten times, and on each pass the current value is stored in varname.
Example
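A sketch of the list form, using an arbitrary three-item list:

```shell
#!/bin/bash
# fruit takes each value from the list in turn
count=0
for fruit in apple banana cherry
do
    echo "Fruit: $fruit"
    count=`expr $count + 1`
done
echo "Loop ran $count times"
```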
2) Syntax:
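The second form is bash's C-style arithmetic for loop; a minimal sketch:

```shell
#!/bin/bash
# Arithmetic for loop (bash-specific form)
for (( i = 1; i <= 3; i++ ))
do
    echo "Iteration $i"
done
```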
While loop
The while loop in shell scripting resembles the while loop in the C language. There is a condition in while, and the commands in the body are executed as long as the condition holds. Once the condition becomes false, the loop terminates.
Syntax:
Example:
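A sketch of a while loop counting from 1 to 5:

```shell
#!/bin/bash
# The loop stops once the test returns false
i=1
while [ $i -le 5 ]
do
    echo "i = $i"
    i=`expr $i + 1`
done
```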
Until loop
The until loop is the counterpart of while: a while loop executes as long as its condition returns a zero (true) exit status, whereas an until loop executes as long as its condition returns a nonzero (false) exit status, that is, until the condition succeeds.
Syntax:
Example:
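A sketch of an until loop; note that the condition is the negation of the equivalent while test:

```shell
#!/bin/bash
# The body runs as long as the test FAILS (returns nonzero)
n=1
until [ $n -gt 5 ]
do
    echo "n = $n"
    n=`expr $n + 1`
done
```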
Syntax:
Example:
#!/bin/bash
echo "Enter three numbers:"
read first
read sec
read third
if [ $first -ge $sec ]
then
    if [ $first -ge $third ]
    then
        echo "$first is the largest"
    else
        echo "$third is the largest"
    fi
else
    if [ $sec -ge $third ]
    then
        echo "$sec is the largest"
    else
        echo "$third is the largest"
    fi
fi

echo ""
i=1
while [ $i -le 10 ]
do
    j=1
    while [ $j -le 10 ]
    do
        echo -n "`expr $i \* $j` "
        j=`expr $j + 1`
    done
    echo ""
    i=`expr $i + 1`
done
Q. Write a script that accepts a number from the user and prints its multiplication table.
#!/bin/bash
echo "Enter a number:"
read num
i=1
while [ $i -le 10 ]
do
    echo "$num x $i = `expr $num \* $i`"
    i=`expr $i + 1`
done
Q. Write a script that accepts a number from the user and checks whether the given number is prime or not.
#!/bin/bash
echo "Enter a number:"
read num
i=2
while [ $i -lt $num ]
do
    if [ `expr $num % $i` -eq 0 ]
    then
        echo "$num is not prime"
        exit
    fi
    i=`expr $i + 1`
done
echo "$num is prime"
• In a high-availability (HA) cluster, at least two servers are involved in providing high availability of important services. A typical HA cluster is dedicated to just that service.
• This means that other services, which are not involved in the HA setup, are not offered by the cluster.
• To make sure that the service stays available, the HA cluster continuously monitors the servers that are involved in the cluster (the nodes). If a node goes down, the cluster makes sure the service is started as soon as possible on another node.
• This means that in any HA cluster there will always be a small amount of time during which the service is not available after a failure, but the goal is to keep this amount of time as small as possible.
Q. List the required components of a High-Availability cluster and explain any two in detail.
High-Availability Requirements
1. Multiple Nodes
• A high-availability cluster uses at least two nodes. The maximum number of nodes in an HA cluster is 16.
• 16-node clusters are not very common, but they can be deployed to ensure the high availability of a large number of services.
• Administrators normally work with a minimum of three nodes. This is to ensure that quorum can be maintained at all times.
• Quorum is an important element in an HA cluster. It is the minimum number of nodes that must be available to continue offering services. Typically, the quorum consists of half the nodes plus one.
2. Fence Devices
• To maintain the integrity of the cluster, a failing node must be stopped at all times. Hence, fencing is used to prevent disaster.
• Fencing is also referred to as STONITH, which stands for Shoot The Other Node In The Head.
• If a node needs to be shut down to prevent corruption in the HA cluster, the fence device performs STONITH.
• A fencing device is typically a hardware device that can shut down another node, even if the node itself has failed.
• The problem with fencing is that you'll need specific hardware to shut down a failing node, and that hardware isn't always available.
• The good news is that we can create a cluster without fencing or have a cluster that uses a dummy fence device, such as fence_manual.
• The bad news is that by doing so, we risk corruption of the services in the cluster.
• Thus, when planning the purchase of cluster hardware, make sure to include fencing hardware as well.
• Before taking over a resource, the cluster instructs the fencing device to make sure the failing node is actually shut down.
• A power switch can do that by shutting down the physical port(s) to which the node is connected, and a management board can do that by halting the failing node.
• If you're using an integrated management card for fencing, you'll need to make sure that ACPI is disabled on the hosts.
• Example: Using Fencing to Prevent Disaster
• Assume IMAP mail server services are provided on node1. If communication in the cluster fails, node1 and node2 can no longer see each other. Node1 is providing the mail service and is not affected, but node2 will assume that node1 is down because it cannot see node1. So node2 will also start the mail server service.
• The result: there will be two mail servers on two different nodes, and they will write to the same file system at the same time. A normal file system such as Ext4 or XFS will then become corrupted.
• Hence, we need a solution that shuts down a failed node before taking over its resources. To accomplish this goal, you'll need a fencing device.
3. A Dedicated Cluster Interface
• To make sure that cluster traffic is not interrupted by anything, we create a dedicated cluster interface.
• For example, a spike in the number of packets on the user network would otherwise affect cluster packets. Hence, a dedicated cluster network is often created.
• This ensures that cluster packets will always get through, no matter what happens on the user network.
• To increase fault tolerance, the user network can serve as a backup for the cluster traffic, which is used in case something physically goes wrong on the cluster network interface.
4. Ethernet Bonding
• A network card can always fail. To minimize the impact of this and to get more bandwidth, Ethernet bonding is often used in HA cluster environments.
• An Ethernet bond is a logical device that groups at least two network cards.
• When configuring this, you choose a protocol that specifies how the bond will work. A round-robin algorithm is often used, which means that, under normal operation, frames are distributed across all network interfaces involved in the bond.
• If one of the interfaces goes down, the other interface is capable of handling all the traffic. Other algorithms are also available, such as active/passive or LACP, but these often depend on the features that are offered by the switch.
• Setting up bonding is rather essential because without a bond, if one interface in the cluster goes down, the service that the cluster is hosting may become unavailable.
5. Shared Storage
• Conga
• Conga is the management interface that is used to create and manage clusters.
• It consists of two parts: ricci and luci.
• Ricci is the agent that should be installed on all nodes in the cluster, and luci is the management application that talks to ricci.
• As an administrator, you'll log in to the luci web interface to create and manage the cluster.
• Apart from Conga, you can also manage your cluster from the command line. In RHEL 6 the cssadmin tool is used to manage the cluster.
• Corosync
• Corosync takes care of the lower layers of the cluster. It uses the Totem protocol to ensure that all nodes in the cluster are available.
• If something goes wrong on the Corosync layer, it notifies the upper layers of the cluster, where the placement of services and resources is addressed.
• Rgmanager
• Rgmanager refers to the upper layers of the cluster. It consists of different processes, which ensure that services are running.
• If Corosync detects a failure in the cluster, the Rgmanager layer moves the services to a node where they can continue to do their work.
• Pacemaker
• Rgmanager is the traditional Red Hat HA cluster stack. There is another Linux-based HA solution, which is known as Pacemaker.
• Rgmanager is the standard in RHEL 6, whereas Pacemaker is available with RHEL 7.
Q. Explain Configuring Cluster-Based Services.
What is Ethernet bonding? How will you setup bonding?
Setting Up Bonding:
a. Identify the physical network cards that you want to configure in the bonding interface.
• Assume the identified network cards are em1 and em2. These are going to be used in a bonded configuration.
b. Change the configuration of the physical network cards to make them slaves of the bonding interface.
• Make sure that the configuration files of em1 and em2 contain the following:
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
• The bonding interface devices use the names bond0, bond1, and so on, and the configuration file is /etc/sysconfig/network-scripts/ifcfg-bond0 for bond device bond0, and so on. For example:
DEVICE=bond0
IPADDR=192.168.1.100
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
• After performing all of these steps, your bond device is ready for use. Restart the network, and use the ip a command to verify that it works.
# vgs
# lvs
Step 1: Open the file /etc/tgt/targets.conf. Add the target configuration lines to the end of the file, and enable the target service:
<target iqn.2012-12.com.example.san:mytarget>
backing-store /dev/volg/iscsivol
</target>
# chkconfig tgtd on
Step 2: From the node, discover the targets that the SAN offers:
# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.1
Step 3: Using the name of the target that you just discovered, log in:
# iscsiadm --mode node --targetname iqn.2012-12.com.example.san:mytarget --portal 192.168.1.1 --login
Step 4: Install the lsscsi package, and enter lsscsi to verify that you are connected to the target.
Step 5: Reboot the nodes, and ensure that the iSCSI target connection comes up after the reboot.
1. The cluster cannot run if you have NetworkManager enabled or if it is running. Hence, stop network
manager. Perform this step on all nodes.
# service NetworkManager stop
# chkconfig NetworkManager off
2. On all nodes, install ricci, start its service, enable it at boot, and set a password for the user ricci:
# yum install -y ricci
# service ricci start
# chkconfig ricci on
# passwd ricci
3. Use the SAN node to host luci, and then start its service:
# yum install luci
# service luci start
• While starting luci for the first time, a public/private key pair is generated and installed.
• The name of this certificate is /var/lib/luci/certs/hosts.pem, and it is referenced in the configuration file /var/lib/luci/etc/cacert.config.
• If you want to replace these certificates with certificates that are signed by an external CA, you can just
copy the host certificate to the /var/lib/luci/certs directory. After generating the certificates, luci will
start and offer its services on HTTPS port 8084.
4. Launch a browser, and connect to https://yourserver:8084 to get access to the luci management
interface.
5. After logging into Luci, you need to create a cluster. Click Manage Cluster, and then click the Create
button. This opens the Create New Cluster interface
• Pick a name for your new cluster in the Create New Cluster dialog. The name cannot be longer than 15 characters, and it cannot be "cluster".
• Next, you need to specify the nodes and Password.
• If your cluster is configured to use a dedicated network for cluster traffic, use the name of the host on that specific network as the Ricci hostname.
• For each node you want to add, click Add Another Node, and specify the name and Ricci password of that node.
• If you want the cluster nodes to download all updates for cluster packages, select Download Packages.
If you don’t, you can just leave the default Use Locally Installed Packages option selected. In both cases,
required software will be installed.
• The option Reboot Nodes Before Joining Cluster isn't strictly necessary. However, by selecting it, you'll ensure that the nodes are capable of booting with cluster support.
• For a cluster-aware file system like GFS, select Enable Shared Storage Support to make sure the packages you'll need for it are installed.
• After entering all parameters, click Create Cluster to start the cluster creation.
• Quorum is an important mechanism in the cluster that helps nodes determine whether they are part of the majority of the cluster.
• By default, every node has one vote, and if a node sees at least half of the nodes plus one, there is quorum.
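The half-plus-one rule can be checked with shell arithmetic; the node counts below are illustrative:

```shell
#!/bin/bash
# Quorum = (votes / 2) + 1, using integer division
for nodes in 3 5 16
do
    quorum=`expr $nodes / 2 + 1`
    echo "$nodes-node cluster: quorum is $quorum"
done
```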
• An exception exists for two-node clusters, where the <cman two_node="1"> parameter is set in /etc/cluster/cluster.conf to indicate that it is a two-node cluster in which the quorum rules are different, because otherwise the cluster would never have quorum if one of the nodes were down.
• Particularly in a two-node cluster, but also in other clusters that have an even number of nodes, a split-brain situation can arise.
• That is a condition where two parts of the cluster, which hold an equal number of cluster votes, can no longer reach one another. This means that the services could not run anywhere. To prevent situations such as this, using a quorum disk can be useful.
1. On one cluster node, use fdisk to create a partition on the iSCSI device. It doesn't need to be big; 100MB is sufficient.
2. On the other cluster node, use the partx -a command to update the partition table. Now check
/proc/partitions on both nodes to verify that the partition on the iSCSI disk has been created.
3. On one of the nodes, use the following command to create the quorum disk:
mkqdisk -c /dev/sdb1 -l quorumdisk.
4. On the other node, use mkqdisk -L to show all quorum disks. You should see the quorum disk with the
label quorumdisk that you just created.
5. In Conga, open the Configuration QDisk tab. On this tab, select the option Use A Quorum Disk. Then specify the device you want to use. Next, you'll need to specify the heuristics. In the Path to Program field, enter ping -c 1 192.168.1.1.
6. After creating the quorum device, you can use the cman_tool status command to verify that it works as expected:
# cman_tool status
To define the fence device, you open the Fence Devices tab in the Conga management
interface. After clicking Add, you’ll see a list of all available fence devices.
• After selecting the fence device, you need to define its properties. These properties are different for
each fence device, but they commonly include a username, a password, and an IP address.
• After defining the fence devices, you need to connect them to the nodes. You can do this from the top of the Luci management interface: click Nodes, and then select the node to which you want to add the fence device.
1. Check the log files. The cluster writes many logs to /var/log/cluster, and one of them may contain a valuable hint about why the service isn't working. Make sure to check /var/log/cluster/rgmanager.log.
2. Use clustat on all nodes to check the status of the service, in addition to checking from the Conga interface.
3. From the Conga interface, disable the resource and try to activate everything manually.
4. If you have a problem with the file system resource, make sure to use /dev/disk naming instead of device names like /dev/sdb2, which can change if something changes in the storage topology.
5. If a service appears as disabled in both Conga and in clustat, use clusvcadm -e servicename to enable it. It may also help to relocate the service to another node.
6. Don’t use the service command on local nodes to verify whether services are running.
• If we use the Ext4 file system for a cluster service, it may become corrupted when multiple nodes try to write to it simultaneously.
• If multiple nodes in the cluster need access to the same file system at the same time, you'll need a cluster file system.
• Red Hat offers the Global File System 2 (GFS2) as the default cluster file system.
• Using GFS2 lets you write to the same file system from multiple nodes at the same time.
1. On one of the nodes, use fdisk to create a partition on the SAN device, and make sure to mark it as partition type 0x8e. Reboot both nodes to make sure the partitions are visible on both nodes, and verify that this is the case before continuing.
2. On both nodes, use yum install -y lvm2-cluster gfs2-utils to install cLVM and the GFS software.
3. On both nodes, use service clvmd start to start the cLVM service and chkconfig clvmd on to enable it.
4. On one node, use pvcreate /dev/sdb3 to mark the LVM partition on the SAN device as a physical volume.
7. On both nodes, use lvs to verify that the cluster-enabled LVM volume has been created.
10. Use mkdir /gfsvol to create a directory on which you can mount the GFS volume.
11. Make the mount persistent by adding the following line to /etc/fstab:
/dev/clusgroup/clusvol /gfsvol gfs2 _netdev 0 0
12. Use chkconfig gfs2 on to enable the GFS2 service, which is needed to mount GFS2 volumes from /etc/fstab.
13. Reboot both nodes to verify that the GFS file system is mounted automatically.
1. Insert the Red Hat Enterprise Linux installation DVD in the optical drive of your server.
2. Use mkdir /www/docs/server1.example.com/install to create a subdirectory in the Apache document root for server1.example.com.
3. Use cp -R * /www/docs/server1.example.com/install from the directory where the Red Hat Enterprise Linux installation DVD is mounted to copy all of the files on the DVD to the install directory in your web server document root.
4. Modify the configuration file for the server1 virtual host in /etc/httpd/conf.d/server1.example.com, and make sure that it includes the line Options Indexes. Without this line, the virtual host will show the contents of a directory only if it contains an index.html file.
• PXE boot allows you to boot a server from its network card.
• The PXE server then hands out a boot image, which the server you want to install uses to start the initial phase of the boot.
• Two steps are involved:
1. We need to install a TFTP server and have it provide a boot image to PXE clients.
2. We need to configure DHCP to talk to the TFTP server to provide the boot image to PXE clients.
Installing the TFTP Server
• First, you need to install the TFTP server package using yum -y install tftp-server.
• TFTP is managed by the xinetd service. To tell xinetd that it should allow access to TFTP, open the /etc/xinetd.d/tftp file and change the disabled parameter from yes to no. Then restart the xinetd service using service xinetd restart, and add it to your runlevels with chkconfig xinetd on.
Steps:
1. Use yum install -y tftp-server to install the TFTP server. Because TFTP is managed by xinetd, use chkconfig xinetd on to add xinetd to your runlevels.
2. Open the configuration file /etc/xinetd.d/tftp with an editor, and change the line disabled = yes to disabled = no.
3. If it is not yet installed, install a DHCP server. Open the configuration file /etc/dhcp/dhcpd.conf and write:
option space pxelinux;
class "pxeclients" {
match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
next-server 192.168.1.1;
filename "pxelinux/pxelinux.0";
}
Copy syslinux-<version>.rpm from the Packages directory on the RHEL installation disc to /tmp, and extract the file pxelinux.0 from it. This is an essential file for setting up the PXE boot environment. To extract the RPM file, use cd /tmp to go to the /tmp directory, and from there use rpm2cpio syslinux-<version>.rpm | cpio -idmv to extract the file.
timeout 10
display boot.msg
label Linux
kernel vmlinuz
append initrd=initrd.img
8. If you want to use a splash image file while doing the PXE boot, copy the /boot/grub/splash.xpm.gz file to /var/lib/tftpboot/pxelinux/.
9. You can find the files vmlinuz and initrd.img in the directory images/pxeboot on the Red Hat installation disc. Copy these to the directory /var/lib/tftpboot/pxelinux/.
10. Use service dhcpd restart and service xinetd restart to restart the required services.
11. Use tail -f /var/log/messages to trace what is happening on the server. Connect a computer directly to the server, and from that computer, choose PXE boot in the boot menu. The computer starts the PXE boot and loads the installation image that you have prepared for it.
If you want to continue the installation, when the installation program asks "What media contains the packages to be installed?", select URL. Next, enter the URL of the web server installation image you created: http://server1.example.com/install.
Q. Write steps for Configuring the DHCP Server for PXE Boot.
Configuring DHCP for PXE Boot
Next, modify the DHCP server configuration so that it can hand out a boot image to PXE clients. To do this, make sure to include the boot lines in the dhcpd.conf file, and restart the DHCP server.
class "pxeclients" {
match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
next-server 192.168.1.70;
filename "pxelinux/pxelinux.0";
}
• The class pxeclients ensures that all servers performing a PXE boot are recognized automatically. This is done to avoid problems and to have DHCP hand out the PXE boot image only to servers that truly want to do a PXE boot.
• The next-server statement refers to the IP address of the server that hands out the boot image. This is the server that runs the TFTP server.
• Finally, the filename statement specifies the boot file that is handed out.
1. On the installation server, copy the anaconda-ks.cfg file from the /root directory to the /www/docs/server1.example.com directory. Set its permissions to mode 644, or else the Apache user will not be able to read it.
2. Start Virtual Machine Manager, and click the Create Virtual Machine button. Enter a name for the virtual machine, and select Network Install.
3. On the second screen of the Create A New Virtual Machine wizard, enter the URL of the web server installation directory: http://server1.example.com/install.
Open the URL options, and enter this Kickstart URL:
http://server1.example.com/anaconda-ks.cfg.
4. Accept all the default options in the remaining windows of the Create A New Virtual Machine
Wizard, which will start the installation.
5. Stop the installation after the kickstart file has loaded. After stopping the installation, remove the kickstart file from the Virtual Machine Manager configuration.