HP-UX Student Guide - Part 2
H3065S D.00
2003 Hewlett-Packard Development Company, L.P.
OSF, OSF1, OSF/Motif, Motif, and Open Software Foundation are trademarks of the Open Software
Foundation in the U.S. and other countries.
Hewlett-Packard Company shall not be liable for technical or editorial errors or omissions contained
herein. The information is provided “as is” without warranty of any kind and is subject to change without
notice. The warranties for HP products are set forth in the express limited warranty statements
accompanying such products. Nothing herein should be construed as constituting an additional
warranty.
Contents
Overview
Course Description
Student Performance Objectives
Student Profile and Prerequisites
Curriculum Path
Solutions
Overview
Course Description
This course is targeted at the HP-UX system administrator who must configure and
administer HP-UX 10.x or 11.0 systems in an IEEE 802.3 local area network and who is
responsible for HP-UX network administration. The course was updated to include
HP-UX 11.0 material, but still applies to 10.x systems. Differences between the two operating
systems are noted in the student notes sections.
Student Performance Objectives
Upon completion of this course, you should be able to:
• Describe the role of host names, IPs, MACs, ports, and sockets in the OSI model.
• Describe the role of repeaters, hubs, bridges, switches, routers, gateways, and firewalls in
a local area network.
• Configure software and drivers to support a newly installed network interface card.
• Configure and view the system host name with the hostname command.
• Configure and view the system IP address and netmask with the ifconfig command.
• Configure IP multiplexing.
• Verify and troubleshoot network connectivity and configuration with the following commands:
− lanscan
− lanadmin
− linkloop
− arp/ndd
− ping
− netstat -i
− netstat -a
− netstat -r
− hostname
− nslookup
• Describe how run levels are used during system boot time.
• Create custom startup and shutdown scripts to start additional services during the boot
process.
• Compare and contrast the NFS PV2 and NFS PV3 protocols.
• Export file systems, and determine access privileges for those file systems.
• Describe the mechanisms available for resolving host names to IP addresses:
− /etc/hosts
− NIS
− DNS/BIND
• Configure a primary DNS server using the hosts_to_named command.
• Add or remove a host in the DNS database, using the hosts_to_named command.
• Describe the purpose of the DNS/BIND configuration files:
− /etc/rc.config.d/namesvrs
− /etc/named.conf
− /etc/resolv.conf
• Allow or prevent access to selected Internet services via the inetd.conf file.
• Allow or prevent access for selected clients via the inetd.sec file.
• Allow or prevent access for selected users via the passwd file.
• Compare and contrast the NTP time source configurations:
− NTP server
− NTP peer
− NTP broadcast client
− NTP polling client
• Configure an NTP server.
• Create a depot.
Curriculum Path
• Describe the role of host names, IPs, MACs, ports, and sockets in the OSI model.
What Is a Network?
Student Notes
The System and Network Administration I course that preceded this class dealt primarily
with administration issues on a single system. This course will concentrate on the
technologies and services used to share resources among multiple UNIX hosts on a computer
network. Perhaps we should start with some definitions.
• Few systems these days have a dedicated, locally attached printer. Oftentimes, multiple
systems share one or more network printers.
• Disk resources may be shared via a network, too. Many users access files, directories,
and even executables via network file servers.
• If your desktop computer does not have a tape drive, you may choose to write system
backups to a tape drive physically attached to a tape backup server host elsewhere on
your network.
• Even CPU resources may be shared via a network. Users may run a simple executable on
a desktop system that queries a database server across the network.
HP officially defines a local area network (LAN) as a network that transmits a large
amount of information at a relatively high speed over limited distances within a single facility
or site. For instance, devices within a branch office are oftentimes connected via a local area
network. In a larger organization, each department may have a separate, dedicated LAN.
A wide area network (WAN) is a network that covers a large geographic area, allowing
devices in different cities to communicate with one another, though often at a data
transmission rate that is much slower than a LAN. Oftentimes, multiple LANs are connected
together via a WAN. Types of well-known WANs include the ARPANET and the public X.25
network.
Student Notes
Because no single vendor can meet the needs of the entire networking marketplace,
companies have to draw on multiple vendors for their communications hardware and
software. The unique network architectures and proprietary protocols developed by each
vendor are frequently incompatible, precluding communication among them. The Open
Systems Interconnection (OSI) model was developed by the International Organization for
Standardization (ISO) to resolve these incompatibility issues and allow products from different
manufacturers to communicate with one another.
The layer concept, on which the OSI model is based, establishes a set of rules for data
transmission on a variety of levels. In the layered scheme, messages originate from the top
layer (layer 7) of a transmitting computer, move down to its lowest layer (layer 1), and travel
across the network media to the receiving computer. The message arrives at the lowest layer
of the receiving computer (layer 1), and moves up through its various layers to layer 7.
• Layer 5: The session layer allows the setup and termination of a communications path
and synchronizes the dialog between the two systems. It establishes connections between
systems in much the same way as an automatic dialer does between two telephone
systems.
"Terminal emulator"
• Layer 4: The transport layer provides a reliable flow of datagrams between sender and
receiver, and ensures that the data arrives at the correct destination. Protocols at this
layer also retain a copy of each datagram so that it can be retransmitted if it is lost in transit.
"Software error correction"
• Layer 3: The network layer decides which path will be taken through the network. It
provides the packet addressing that will tell computers on the network where to route the
user's data.
"Addressing scheme"
• Layer 2: The data link layer provides reliable, error-free media access for data
transmission. It produces the frame around the data.
"Hardware error correction"
• Layer 1: The physical layer establishes the actual physical connection (cable
connection) between the network and the computer equipment. Physical Layer standards
determine what type of signaling is used (what represents a bit 0, what represents a 1),
what cable types and lengths are supported, and what types of connectors may be used.
"Cable"
Table 1
Instructions
The remainder of this chapter provides an overview of the protocols and network address
types that are required to pass data across a network from one process to another. As new
protocols and network address types are introduced, record them in the appropriate layer of
this OSI chart.
(Slide: a host with MAC address 0x0060B07ef226 asks, "Which frames are for me?")
Student Notes
In order to pass data successfully from host to host on a local area network, there must be
some mechanism for determining which frames of data are destined for which hosts. Media
Access Control addresses solve this problem!
Every LAN card attached to a local area network must have a unique MAC address assigned
to it. Every frame of data passed across the network, then, includes both a source and
destination MAC address. If the destination MAC address on a passing frame matches a host's
own MAC address, the host knows that it should receive that frame of data. Frames destined
for other MAC addresses are ignored. While you may be accustomed to referencing hosts on
the network by "host name" or "IP address," those addresses must be mapped to MAC
addresses before a frame of data can be sent across the network wire. Host names and IP
addresses will be discussed in detail later in this chapter.
The MAC address is a 48-bit number that is set by the LAN card manufacturer. Typically,
HP-UX displays the MAC address as a 12-digit hexadecimal number, preceded by a 0x to
indicate that the value is in hex. The first six hexadecimal digits indicate which manufacturer
produced the card, while the last six digits uniquely distinguish each card produced by that
manufacturer from all others. Currently, HP LAN card MAC addresses begin with 0x080009 or
0x0060b0. The MAC address may be changed via the lanadmin command, but this is not
recommended.
# lanscan
Hardware  Station         Crd  Hdw    Net-Interface  NM  MAC    HP-DLPI  DLPI
Path      Address         In#  State  NamePPA        ID  Type   Support  Mjr#
2/0/2     0x0800094A7334  0    UP     lan0 snap0     1   ETHER  Yes      119
4/0/1     0x080009707AF2  1    UP     lan1 snap1     2   ETHER  Yes      119
NOTE: The MAC address is often referenced via a variety of different names. All of
these names refer to the same address:
• link-level address
• station address
• physical address
• hardware address
• Ethernet address
Student Notes
In addition to the MAC address assigned to each LAN card by the card manufacturer, each
LAN card on an HP-UX machine is also typically assigned an Internet Protocol (IP) Address.
Internet Protocol Addresses (or IP Addresses) make it possible to group nodes into
logical IP networks, and efficiently pass data between these networks. For instance, hosts
within your Chicago office may all be assigned IP addresses on one IP network, while hosts
in your San Francisco office may be assigned IP addresses on a different IP network. By
looking at a data packet's destination IP address, your network devices can intelligently
"route" data between networks.
IP Address Structure
IP addresses are usually represented by four 8-bit fields, separated by dots ("."). These fields
are called octets. Each 8-bit octet is represented by a decimal number in the range from 0 to
255.
The table below demonstrates the conversion of several 8-bit binary numbers to their
corresponding decimal values:
Using this conversion mechanism, IP addresses may be displayed in either binary or decimal.
Consider the following examples:
10000000.00000001.00000001.00000001 = 128.1.1.1
10001010.10000001.00000001.00000010 = 138.129.1.2
10011100.10011011.11000010.10101010 = 156.153.194.170
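If you prefer not to do the binary-to-decimal conversion by hand, the standard bc calculator
(present on HP-UX and most other UNIX systems) can convert one octet at a time; a small
sketch using octets from the examples above:

$ echo "ibase=2; 10000000" | bc      # ibase=2 tells bc the input is binary
128
$ echo "ibase=2; 10011100" | bc
156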
The remaining host bits in the IP address uniquely identify each host within the logical
network.
# lanscan
Hardware  Station         Crd  Hdw    Net-Interface  NM  MAC    HP-DLPI  DLPI
Path      Address         In#  State  NamePPA        ID  Type   Support  Mjr#
2/0/2     0x0800094A7334  0    UP     lan0 snap0     1   ETHER  Yes      119
4/0/1     0x080009707AF2  1    UP     lan1 snap1     2   ETHER  Yes      119
Next, use the ifconfig command to view each LAN card's IP address:
# ifconfig lan0
lan0: flags=843<UP,BROADCAST,RUNNING,MULTICAST>
inet 128.1.1.1 netmask ffff0000 broadcast 128.1.255.255
# netstat -in
Name  Mtu   Network    Address    Ipkts  Opkts
lan0  1500  128.1.0.0  128.1.1.1  55670  23469
lo0   4136  127.0.0.0  127.0.0.1  3068   3068
CAUTION: Do not assign the same IP address to different hosts. If two hosts on the same
network use the same IP address, errors will occur when communicating with
these hosts.
IP Network Classes
/8 Network   8 Network Bits | 8 Host Bits    | 8 Host Bits    | 8 Host Bits
/16 Network  8 Network Bits | 8 Network Bits | 8 Host Bits    | 8 Host Bits
/24 Network  8 Network Bits | 8 Network Bits | 8 Network Bits | 8 Host Bits
Student Notes
The previous slide noted that IP addresses have two components: a network component and
a host component. The original designers of the Internet realized that some networks would
be very large, while others would be much smaller. Large networks would require more host
bits to provide a unique host address for each node, while smaller networks would require
fewer host bits to provide a unique host address for each node.
Varying the IP address network/host boundary makes it possible to allocate just enough IP
addresses for any size network. Thus, although every IP address is 32 bits, the boundary
between the network and host portions of an IP address varies from network to network.
When your ISP or IT department assigns you an IP address, the IP will often have a /xx
appended to the end. The /xx identifies the number of network bits in the IP address.
The following table demonstrates the effect of shifting the network boundary. The table only
shows /8, /16, and /24 networks; many others are possible, too.
** Note: Not all of the host addresses are actually usable. One of the addresses in each
network is used as the network address, another is used as the broadcast address. Thus,
there can only be 254 hosts on a /24 network. These special addresses will be discussed later.
Furthermore, the addresses were structured such that network devices could determine an
IP address's class (and network/host boundary!) by simply looking at the first few bits:
Class    Net Bits  Host Bits  Number of Networks  Hosts per Network  First Octet Range
Class A  8         24         127                 16,777,216         1–127
Class B  16        16         16,383              65,536             128–191
Class C  24        8          2,097,151           256                192–223
Unfortunately, the Class A/B/C IP allocation scheme led to inefficient use of the IP address
space, since many organizations were given much larger IP address blocks than they actually
needed. HP, for instance, was assigned Class A address 15.0.0.0/8. This address space
includes over 16 million IP addresses! This largesse was not considered a problem at the
time, since there seemed to be far more addresses than would ever be used. No one
anticipated the tremendous growth in the Internet that has occurred over the last decade.
In the 1990s, the Internet Engineering Task Force (IETF) committee decided to move to the
more flexible scheme known as Classless Internet Domain Routing (CIDR) that is used today.
Now you may be assigned a /13, /14, /15, /16, /23 — or almost any other network type —
depending on the number of hosts on your network.
Furthermore, using the new "Classless" IP addressing scheme, you may find that your IP
address is 192.1.1.1/20. Using the older "Classful" IP addressing scheme, any IP beginning
with 192 had to be a Class C with 24 network bits. The new scheme is more flexible, but also
somewhat more complicated.
IPv6 Addressing
CIDR addressing and other creative solutions have made it possible to use the existing
32-bit IP address space more efficiently. However, a 32-bit address can represent
at most 2^32 (about 4 billion) addresses, and as more and more devices attach to the Internet,
this address space is being rapidly depleted.
As far back as 1991, the Internet Engineering Task Force began considering a successor to
the current 32-bit, 4-octet "IPv4" addressing method. After nearly a decade of study and
debate, the IETF has settled on a new standard which has been dubbed "IPv6". The new IPv6
standard uses a 128-bit addressing scheme to exponentially increase the pool of IP addresses.
Unfortunately, IPv6 addresses are also much more cumbersome than our current IPv4
addresses; they are typically represented as a series of eight four-digit hexadecimal numbers.
Here's a typical IPv6 address:
CDCD:910A:2222:5498:8475:1111:3900:2020
Fortunately, the transition to IPv6 needn't occur overnight. As long as all the hosts on your
local area network continue to use IPv4, there is no need to upgrade your servers and
workstations to IPv6. The overall transition from IPv4 to IPv6 is expected to proceed
gradually over the course of several years.
HP currently offers an IPv6 developers' toolkit, but full support for IPv6 on HP-UX won't be
available until a future release of the OS.
For more information on IPv6, take a look at Pete Loshin's IPv6 Clearly Explained (ISBN
0124558380), or Christian Huitema's more technical IPv6: the New Internet Protocol (ISBN
0138505055).
The IP Netmask
IP Address:
10000000 00000001 00000001 00000001
128.1.1.1/16
Netmask:
11111111 11111111 00000000 00000000
255.255.0.0 or
0x ff ff 00 00
Netmask 1's identify network bits.
Netmask 0's identify host bits.
Student Notes
When you configure your system's IP address, your system must be told which bits in your IP
address are network bits, and which bits are host bits. These days, the network/host
boundary is usually communicated via the "/" notation introduced on the previous page.
However, UNIX uses a different mechanism to identify the network/host boundary: the IP
netmask.
The netmask, like an IP address, has 32 bits. However, the netmask is formulated somewhat
differently than a standard IP address. To determine your netmask, write a "1" in each
network bit, and a "0" in each of the remaining bits. The resulting value may be written in
binary, dotted-decimal (like an IP address), or even in hexadecimal. The chart below shows
some common netmasks in all three forms:
Network  Binary Netmask                       Dotted Decimal  Hexadecimal
/8       11111111.00000000.00000000.00000000  255.0.0.0       0xff000000
/16      11111111.11111111.00000000.00000000  255.255.0.0     0xffff0000
/24      11111111.11111111.11111111.00000000  255.255.255.0   0xffffff00
For other conversions, either consult the binary/hex/decimal conversion chart at the end of
this book, or use the /usr/dt/bin/dtcalc calculator utility.
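The bc calculator can also convert decimal octets to hexadecimal; a minimal sketch using the
octets of the 255.255.0.0 netmask (obase=16 makes bc print its results in hex):

$ echo "obase=16; 255" | bc
FF
$ echo "obase=16; 0" | bc
0

Concatenating the four results octet by octet gives the 0xffff0000 form reported by ifconfig below.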
# lanscan
Hardware  Station         Crd  Hdw    Net-Interface  NM  MAC    HP-DLPI  DLPI
Path      Address         In#  State  NamePPA        ID  Type   Support  Mjr#
2/0/2     0x0800094A7334  0    UP     lan0 snap0     1   ETHER  Yes      119
4/0/1     0x080009707AF2  1    UP     lan1 snap1     2   ETHER  Yes      119
Next, use the ifconfig command to view each LAN card's netmask:
# ifconfig lan0
lan0: flags=843<UP,BROADCAST,RUNNING,MULTICAST>
inet 128.1.1.1 netmask ffff0000 broadcast 128.1.255.255
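The same ifconfig command also assigns the address in the first place. A minimal sketch, using
this chapter's example values; on a real HP-UX system the permanent settings normally belong
in /etc/rc.config.d/netconf so that they survive a reboot:

# ifconfig lan0 inet 128.1.1.1 netmask 255.255.0.0 up
# ifconfig lan0                     # verify the new address, netmask, and flags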
Student Notes
The last few slides have covered the basic concepts required to formulate and understand IP
addresses. The next few slides discuss several special IP addresses that you will likely
encounter. The first of these is the IP Network Address.
An IP Network Address is a special address used by routers and other network devices to
reference an entire network of hosts. The network address is formulated by setting all of the
host bits in an IP address to "0."
Consider the examples on the slide. In the 128.1.x.x/16 IP addresses, the last 16 bits (that is,
the bits in the last two octets) define the host portion of the addresses. Setting these 16 bits
to "0" yields the following network address:
10000000.00000001.00000000.00000000 = 128.1.0.0/16
In the 192.1.1.x/24 IP addresses, the last 8 bits (that is, the bits in the last octet) define the
host portion of the addresses. Setting these bits to "0" yields the following network address:
11000000.00000001.00000001.00000000 = 192.1.1.0/24
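In other words, the network address is the bitwise AND of the IP address and the netmask. As
an illustration only (not an HP-UX administration command), a POSIX shell can do the
arithmetic octet by octet for the 128.1.1.1/16 example:

$ printf "%d.%d.%d.%d\n" $((128 & 255)) $((1 & 255)) $((1 & 0)) $((1 & 0))
128.1.0.0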
# netstat -in
Name  Mtu   Network    Address    Ipkts  Opkts
lan0  1500  128.1.0.0  128.1.1.1  55670  23469
lo0   4136  127.0.0.0  127.0.0.1  3068   3068
Packets sent to the network broadcast address are received by ALL hosts on the network
(128.1.1.1, 128.1.1.2, 128.1.1.3, and so on). Formulate the broadcast address by setting all
host bits to "1".
# ping 128.1.255.255
Student Notes
The network broadcast address may be used to send a packet to all of the nodes on a host's
network. Some network services take advantage of this broadcast functionality to enable
clients to identify an available server. X-terminals, for instance, may use the broadcast
mechanism to identify all available login servers on the terminal's network. Network
Information Service clients use the broadcast address to identify an NIS domain server
during system startup. These are just a few of the many network services that use an IP
broadcast to send a packet to all hosts on a network.
To formulate the broadcast address, simply set all IP host bits to "1". Consider the example
on the slide. The 128.1.0.0/16 network has 16 host bits in the last two octets. Placing a "1" in
all 16 host bits yields the following broadcast:
10000000.00000001.11111111.11111111 = 128.1.255.255
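Equivalently, the broadcast address is the bitwise OR of the IP address with the inverted
netmask (the host bits). Again as an illustration only, in a POSIX shell for the 128.1.1.1/16
example:

$ printf "%d.%d.%d.%d\n" $((128 | 0)) $((1 | 0)) $((1 | 255)) $((1 | 255))
128.1.255.255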
# lanscan
Hardware  Station         Crd  Hdw    Net-Interface  NM  MAC    HP-DLPI  DLPI
Path      Address         In#  State  NamePPA        ID  Type   Support  Mjr#
2/0/2     0x0800094A7334  0    UP     lan0 snap0     1   ETHER  Yes      119
4/0/1     0x080009707AF2  1    UP     lan1 snap1     2   ETHER  Yes      119
Next, use the ifconfig command to view each LAN card's broadcast address:
# ifconfig lan0
lan0: flags=843<UP,BROADCAST,RUNNING,MULTICAST>
inet 128.1.1.1 netmask ffff0000 broadcast 128.1.255.255
# ping 127.0.0.1
Student Notes
The IP loopback (or localhost) address is a special IP address that may be used to
reference your local host, without actually sending a packet out on the local network.
Applications sometimes use the loopback address to send network traffic to other
processes on the same machine. The loopback address may be used for troubleshooting
purposes as well. For instance, if a client claims to be having difficulty establishing a telnet
connection to your host, telnet your loopback address. If your telnet attempt to the
loopback address succeeds, there is probably a network connectivity problem between
your host and the client, rather than a problem with the telnet service.
Attempts to access the loopback address should succeed even if your LAN card is down,
disconnected, or misconfigured.
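A quick sketch of this troubleshooting technique; both commands use only the loopback
address, so they never leave the local machine and should succeed even with the LAN cable
unplugged:

# ping 127.0.0.1
# telnet 127.0.0.1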
Obtaining an IP Address
(Slide: a private Intranet separated from the public Internet by a firewall)
Student Notes
Every host on an IP network must have an IP address. The procedure required to obtain an IP
address depends on the network you wish to connect to.
If you, or your organization, wish to have a direct Internet connection, you must obtain a
unique IP address, used by no one else anywhere on the Internet. The Internet Corporation
for Assigned Names and Numbers (ICANN) is the organization that is currently
responsible for determining how IP addresses are allocated and used. ICANN's website is
accessible at http://www.icann.org. ICANN has delegated responsibility for allocating
IP addresses to several regional authorities, including ARIN (the Americas), RIPE NCC
(Europe), and APNIC (Asia-Pacific).
Not every host needs a unique public Internet address, however. Many organizations instead
choose to configure a private Intranet that is insulated from the dangers of the public
Internet by some sort of network firewall. Firewalls can be used to control the type of
traffic that passes both in and out of your organization's private Intranet.
There are two ways to obtain and allocate IP addresses in this situation. One approach is to
request a public Internet IP address for each host, then shield those hosts behind your
firewall. If you choose to go this route, you will have to apply for a block of unique, public
Internet addresses from your ISP or the websites listed in the previous section.
The second approach is to assign addresses from the ranges that have been reserved
specifically for private networks:
10.*.*.*
172.16-31.*.*
192.168.*.*
These addresses are designated specifically for use on private Intranets. Hosts with
addresses within these ranges may not be connected directly to the public Internet, nor are
packets destined for these addresses allowed to pass on or through the public Internet. Since
these addresses are not allowed directly on the public Internet, any organization may use
these addresses without fear of conflicting with other organizations' addresses.
Question: If packets destined for these addresses are not allowed on the public Internet, how
can these hosts send email or access web sites outside their private networks?
Intranet hosts that need web access to the outside world may access the Internet via a proxy
server. These hosts can be configured to relay all external web access requests through a
specially configured server with connections both to the private Intranet, and the public
Internet. The proxy server forwards internal clients' access requests to external sites via its
IP address on the public Internet, then relays the responses back to the requesting clients.
Email service may be provided using similar functionality. Hosts on the private Intranet send
and receive email via a specially configured Mail Gateway that straddles both the private
Intranet, and the public Internet.
For even more flexibility, many firewall packages can be configured to provide Network
Address Translation service. Using this functionality, clients on the private Intranet can
relay requests for many different network services through the corporate firewall. HP's
Praesidium product is one of many products designed to provide this type of functionality.
IP Address Examples
192.66.123.4/24
148.10.12.14/16
9.12.36.1/8
163.128.19.9/16
123.45.65.23/8
199.66.55.4/24
Student Notes
The slide above lists six IP addresses in dotted decimal, "/" notation. Using the information
given, compute the netmask, network, and broadcast address associated with each IP
address.
Host Names
/etc/hosts
128.1.1.1  sanfran
128.1.1.2  oakland
128.1.1.3  la
128.1.1.4  sandiego
"I can reference nodes by host name and let HP-UX automatically determine the IP
addresses for me!"
(Slide: sanfran asks "What is oakland's IP?"; the answer, from /etc/hosts, is 128.1.1.2.)
# telnet oakland
Telnet request sent to: 128.1.1.2 (oakland)
Student Notes
Although HP-UX systems and other network devices identify hosts by IP address, users and
applications find IP addresses to be a cumbersome method for identifying network hosts:
• IP addresses are not very memorable. Users that access dozens of network hosts on a
regular basis may have trouble remembering those hosts' IP addresses.
• Anytime you change your network topology, IP addresses are likely to change. Updating
all the scripts and application configuration files that reference the old IP addresses could
quickly become a support nightmare!
For both of these reasons, many users and applications prefer to reference network hosts by
host name rather than IP address. A host name is nothing more than a user-friendly, easily
remembered, "nickname" assigned to each host on a network.
• Host names must only contain letters, numbers, and underscores. Punctuation marks and
other special characters are not allowed.
• Choose meaningful host names. A system's host name may be based on the primary user
(the workstation on Tom's desk might have host name "tom"), function ("mailsvr" or
"filesvr"), geography ("chicago", "tokyo"), or any other scheme that your users find
memorable.
The /etc/hosts file   Each system maintains its own file which lists the names and
                      IP addresses of other nodes on the network. This is used
                      primarily on small networks.
NIS                   One system (the NIS server) maintains a list of all the nodes
                      and IP addresses on the network. When resolving host names
                      to IP addresses, all systems reference the NIS server. This is
                      used on medium-size networks.
DNS/BIND              A distributed hierarchy of name servers maintains host name
                      and IP address information for one or more DNS domains.
                      This is used on large networks and on the Internet.
# hostname
sanfran
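A brief sketch of viewing and setting the host name; "sanfran" is simply the example name
used throughout this chapter, and on HP-UX the permanent name is normally recorded in
/etc/rc.config.d/netconf (or set via set_parms) rather than on the command line:

# hostname                  # view the current host name
sanfran
# hostname sanfran          # (superuser) set the name; lasts only until the next reboot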
Converting IP Addresses to MAC Addresses
Outbound Frame:
  Source:       128.1.1.1 (sanfran)   MAC 080009-000001
  Destination:  128.1.1.2 (oakland)   MAC 080009-000002
Student Notes
As you may recall from an earlier discussion of MAC addresses, every frame of data passed
across a network must include both source and destination MAC addresses.
To allow the system to quickly determine a remote node's MAC address, each local kernel
maintains a real-time lookup table known as the ARP cache. The ARP cache maps IP
addresses of remote nodes to their corresponding MAC addresses.
The Address Resolution Protocol (ARP) cache is a memory resident data structure whose
content is maintained and managed by the local system's kernel. By default, the ARP cache
contains the IP addresses and corresponding MAC addresses of nodes that the local system
has communicated with in the last five minutes.
To send a frame to oakland, sanfran first resolves oakland's host name to its IP address,
then checks the ARP cache to find the MAC address that corresponds to oakland's IP
address.
Finally, sanfran can send the outbound frame on the network using oakland's MAC address
as the destination.
# arp -a
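Besides listing the cache, arp can remove individual entries; a hedged sketch using this
chapter's example host, where deleting an entry simply forces a fresh ARP broadcast the next
time the host is contacted:

# arp -a                    # list every entry currently in the ARP cache
# arp -d sandiego           # (superuser) delete sandiego's entry from the cache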
(Slide: sanfran (128.1.1.1) runs "$ ping sandiego". sanfran's ARP cache already holds entries
for 128.1.1.2 and 128.1.1.3, but the entry for 128.1.1.4 is initially incomplete. After an ARP
broadcast on the local network and sandiego's reply, the cache gains the entry
128.1.1.4 = 080009-23EF45.)
Example: sanfran pings sandiego
1. sanfran pings sandiego. sanfran resolves sandiego's IP address via /etc/hosts.
2. sanfran searches for sandiego's IP address in its ARP cache; the address is not found.
3. Send ARP broadcast on the local network to find the MAC address for 128.1.1.4.
4. System with the specified IP address responds with a packet containing its MAC.
5. The MAC address and corresponding IP address are added to sanfran's ARP cache.
6. The frame specifically addressed to sandiego's MAC address is sent.
Student Notes
Resolving a destination node's IP address to its corresponding MAC address is fairly
straightforward as long as the destination node's MAC address is in the local node's ARP
cache. There are many situations however, when a destination node's MAC address may not
be in the local ARP cache. What happens then?
• One and only one node should respond to the ARP broadcast by sending a reply packet
indicating that it has the requested IP address. The reply packet sent by the remote node
will contain the remote node's MAC address.
• Upon receiving the reply packet, the local node records the remote node's IP/MAC
address information in the local ARP cache.
# ping sandiego
3. Once sanfran determines sandiego's IP address, sanfran checks the ARP cache for
sandiego's IP address. In this example, sandiego's IP address is not present in sanfran's
ARP cache.
4. In order to determine sandiego's MAC address, sanfran sends an ARP broadcast onto the
network requesting a response from the host with IP address 128.1.1.4 (sandiego's IP).
6. After receiving sandiego's response, sanfran adds sandiego's MAC address to the local
ARP cache for future reference.
7. sanfran can now ping sandiego, addressing the packets specifically to sandiego's MAC
address.
# ping sandiego
PING sandiego: 64 byte packets
64 bytes from 128.1.1.4: icmp_seq=0. time=18. ms
64 bytes from 128.1.1.4: icmp_seq=1. time=2. ms
64 bytes from 128.1.1.4: icmp_seq=2. time=2. ms
64 bytes from 128.1.1.4: icmp_seq=3. time=2. ms
64 bytes from 128.1.1.4: icmp_seq=4. time=2. ms
64 bytes from 128.1.1.4: icmp_seq=5. time=2. ms
64 bytes from 128.1.1.4: icmp_seq=6. time=2. ms
64 bytes from 128.1.1.4: icmp_seq=7. time=2. ms
(Flowchart: Is the destination a hostname or an IP address?)
Student Notes
The flow chart above summarizes the actions that have to occur every time hosts
communicate across a local area network.
The flowchart notes that packets sent to hosts outside of the local network must be
forwarded to a router, before being passed to their eventual destination. Routing will be
discussed in detail later in the course.
(Slide: TCP data transfer between sanfran (128.1.1.1) and oakland (128.1.1.2): open a
connection, segment the data, send the data packets, receive acknowledgements, retransmit
any unacknowledged packets, reassemble the data, and close the connection.)
Student Notes
Up to this point, we have discussed how MAC addresses, IP addresses, and host names are
used to move packets between hosts on a network. Several questions remain, though:
• What happens when a packet arrives at the destination host? How is the packet passed to
the destination application on that host?
• What happens if a packet is lost? Who is responsible for re-sending the lost packet or
otherwise handling this situation?
The remaining slides in the chapter discuss two protocols that govern how packets are sent
and acknowledged, and the port and socket addresses that ensure that data sent across a
network is passed to the appropriate process or application on the destination host.
TCP is a Reliable protocol. For every datagram sent, an acknowledgment is returned by the
receiver. If an acknowledgment is not received, the transmitting node resends the packet.
2. Before sending the data, the sending node segments the data into smaller datagram
packets.
4. Upon receiving the datagram packets, the destination node sends acknowledgment
packets back to the source node. The sending node automatically retransmits
unacknowledged datagrams.
5. Upon successfully transferring all datagrams to the destination node, the connection
between the two nodes is terminated and closed.
6. Once the destination node has received all datagrams, they are reassembled in their
proper sequence.
NOTE: In some cases, steps 5 and 6 may occur in reverse order.
(Slide: UDP datagrams sent from sanfran (128.1.1.1) to oakland (128.1.1.2); no connection is
established and no acknowledgements are returned.)
Student Notes
The second common protocol used between two nodes on a network is the User Datagram
Protocol (UDP). UDP requires less network overhead than TCP, but it does not provide an
acknowledgement mechanism. It is therefore considered unreliable. Characteristics of the
UDP protocol are below.
UDP is an Unreliable protocol. The receiving node does not send acknowledgment packets
back to the source node. The source node never knows whether the data packet arrived at
the destination node. For this reason, the protocol is considered unreliable.
2. No connection is established with the destination node. The datagram is simply sent to
the destination address.
Analogy: Sending data via TCP is similar to making a phone call. Before any communication
takes place, a connection is established between the sender and receiver, and there is a verbal
acknowledgment that information is being received. Sending data via UDP, by contrast, is
more like mailing a letter: the sender simply drops the datagram in the mail and assumes that
it arrives, but receives no acknowledgment.
Network Subsystem
Client                Command            Daemon on sanfran (128.1.1.1)
128.1.1.2 (oakland)   $ telnet sanfran   telnetd, port 23
128.1.1.3 (la)        $ ftp sanfran      ftpd, port 21
128.1.1.4 (sandiego)  $ rlogin sanfran   rlogind, port 513
Student Notes
MAC addresses, IP addresses, TCP and UDP are all used to get packets from node to node on
a network. Each node, though, may have dozens, if not hundreds, of network services and
applications running simultaneously. When a data packet arrives on a system's LAN interface,
how does HP-UX determine which application should receive that packet?
Port Numbers
Every network application is assigned a unique port number that distinguishes that
application from all others. Network hosts specify which application should receive a packet
by including a destination port number in outgoing packets.
oakland's telnet request is destined for sanfran's telnetd process on port number 23. la's
ftp request is destined for sanfran's ftpd process on port number 21. sandiego's rlogin
request is destined for sanfran's rlogind daemon on port number 513.
As the flood of incoming packets arrives, sanfran ensures that each packet gets to the right
application or service by checking the destination port numbers.
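The well-known port numbers for these standard services are recorded in /etc/services on
HP-UX (as on most UNIX systems); a quick way to confirm the numbers mentioned above
(output trimmed and illustrative):

# grep -e '^ftp' -e '^telnet' -e '^login' /etc/services
ftp           21/tcp
telnet        23/tcp
login        513/tcp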
Network Subsystem
(Slide: oakland (128.1.1.2) and la (128.1.1.3) run several simultaneous telnet sessions and an
ftp session to sanfran (128.1.1.1), which must therefore run multiple telnetd instances plus
an ftpd at the same time.)
Problem: Which network application gets the data when multiple instances are present?
Multiple clients can be executing the same network application.
Multiple instances of the network application can be running on the same client.
Solution: Create a unique socket for each process which runs a network application.
A socket is a port number combined with a node’s IP address.
A socket connection is the coupling of a client socket address with a server socket address.
Student Notes
A packet's destination application can be identified by the packet's destination port number.
What happens, though, if:
• Clients oakland and la both choose to access the telnet service on server sanfran
simultaneously? Both nodes address their packets using port number 23, yet each packet
must be handled by a separate instance of the telnetd daemon.
How does sanfran distinguish between telnet packets from one node versus telnet
packets from another node?
• User1 and user2 on oakland initiate simultaneous telnet sessions to sanfran. Both
telnetd processes on sanfran use the well-known telnet port number, 23.
How do sanfran and oakland determine which telnet packets belong to user1, and which
belong to user2?
Sockets
Sockets provide the solution to both of the problems mentioned above. A socket is simply an
address that identifies a specific network application running on a specific host. A socket
address is formed by appending a destination port number to a destination IP address.
The sockets used by the applications on the slide are listed below:
Socket Connection
A socket connection is defined by the pairing of two sockets together. The first socket
identifies a network program on a client node (128.1.1.2.50001), and the second socket
identifies a network daemon (usually) on the server node (128.1.1.1.23). The socket
connection would then be 128.1.1.2.50001–128.1.1.1.23.
Network Subsystem
(Slide: two simultaneous telnet sessions from oakland (128.1.1.2) to sanfran (128.1.1.1).
Connection 1: client socket 128.1.1.2.50001 <-> server socket 128.1.1.1.23
Connection 2: client socket 128.1.1.2.50002 <-> server socket 128.1.1.1.23)
Student Notes
The slide shows how sockets and socket connections can be used to uniquely identify two
telnet service connections between client oakland and server sanfran.
When the first telnet instance is started on oakland, HP-UX assigns a port number for the
telnet client process. Since there is no pre-defined port number for the client side telnet
program, the first available port number is chosen (port number 50001 in the example on the
slide). Thus, the socket created for the first telnet instance on oakland is 128.1.1.2.50001.
Oakland initiates a connection request to sanfran's well-known telnetd port, 23. Sanfran
spawns a telnetd daemon to service the telnet request from oakland. This telnetd
daemon uses port number 23. Therefore, the socket created to represent the telnetd
daemon is 128.1.1.1.23.
The second telnet session shown on the slide is using socket addresses 128.1.1.2.50002-
128.1.1.1.23.
Thus, each of these connections may be uniquely identified by the pairing of the server and
client processes' socket addresses.
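You can observe these socket connections on a live system with netstat. A hedged sketch:
netstat -an prints local and foreign addresses in the same IP.port notation used above, so an
established telnet session appears as the server socket 128.1.1.1.23 paired with a client
socket such as 128.1.1.2.50001:

# netstat -an | grep '\.23 '        # show connections involving the telnet port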
Layer 4 (Transport): TCP requires that a socket connection be established; UDP does not.
TCP requires that packets be acknowledged; UDP does not.
TCP is stream-based; UDP is message-based.
Student Notes
In this module, we have learned how
• Host names are resolved to IP addresses.
• TCP and UDP protocols are used to allow nodes to communicate on the network.
Directions
Answer the following questions.
1. If a host has two LAN interface cards, will the MAC addresses of the two cards be the
same, or different?
2. Is it possible to determine which network a host is on just by looking at the host's MAC
address?
4. Which of the networks listed in question 3 would allow the fewest hosts?
What is the maximum number of hosts allowed on that network?
5. How many different networks are represented by the list of IP addresses below?
132.1.1.3/16
132.2.1.1/16
132.1.1.2/16
132.1.1.1/16
132.1.2.1/16
132.1.2.2/16
7. What is the difference between a destination port number and a destination IP address?
9. HP-UX provides three different methods for mapping host names to IP addresses. Name
two.
• Describe the role of repeaters, hubs, bridges, switches, routers, gateways, and firewalls in
a local area network.
Transmission Media
Interface Cards
Repeaters
Hubs
Bridges
Switches
Routers
Gateways
Firewalls
(Slide: two offices, Chicago and London, connected by routers; the LANs include hubs for the
sales and research groups, a bridge, a switch, a gateway to a mainframe, and a firewall.)
Student Notes
Most LANs today are comprised of a variety of hardware components. Weeklong courses
have been written about firewalls, routers, switches, and LAN topologies. Our goal in this
chapter is simply to present an overview of the purpose and function of the most common
hardware components you are likely to encounter as an HP-UX system administrator.
Every LAN usually has a combination of workstation and server nodes, each with one or
more network interface cards (NICs). These nodes may be connected together via a variety
of cable types in a variety of topologies. Different networking standards have different
mechanisms for determining when hosts on the LAN are given the opportunity to transmit
data. Most networks also include a variety of network devices. Some of the more common
network devices include:
• repeaters
• hubs
• bridges
• switches
• routers
• gateways
• firewalls
Each of these hardware components, devices, and topologies will be discussed in detail later
in the chapter.
Table 1
Instructions
During the lecture, a number of additional protocols and LAN hardware components will be
discussed. Remove this sheet of paper from the workbook, and as your instructor introduces
each new protocol and LAN hardware component, record it in the appropriate layer of the
OSI chart.
Twisted Pair
Coaxial Cable (a woven metal shield surrounding a central copper conductor)
Fiber Optic (a glass or plastic fiber cable)
Student Notes
Transmission media connects the devices in a local area network and provides the means by
which data signals travel from device to device. Many different types of transmission media
are used on today's networks. When choosing a transmission medium for your network, you
must consider several issues:
• How much data must your network be able to handle? 10 Megabits per second (Mbps)?
100 Mbps? 1000 Mbps?
• Is electrical interference an issue in your environment? Some cable types are susceptible
to data loss because of electrical interference from telephone lines, power cables, heavy
electrical machinery, and fluorescent lights. This tends to be a more critical issue in
manufacturing environments.
• What is the maximum distance between nodes on your network? Signals weaken as they
travel along a cable. As the signals weaken, the effect of external electrical interference
increases, and errors may occur. This signal loss is technically termed attenuation.
Some transmission media types are more susceptible to attenuation than others.
• How much can you afford to spend? Some transmission media types are relatively cheap
to purchase and install, while others are much more expensive.
The notes below describe some of the more common transmission media types used in
today's networks.
Twisted-Pair Cable
Twisted-pair cable consists of two single wires, each encased in color-coded plastic
insulation, and then twisted together to form a pair. Each pair of wires is then bundled with
one to three other pairs, yielding a grand total of four or eight wires per cable. The cabling
used to connect telephones is twisted-pair.
There are several variations on twisted-pair cable. Shielded Twisted-pair (STP) includes a foil
or copper jacket to shield the wires inside the cable from electrical interference.
Unshielded Twisted-pair (UTP), which lacks shielding, is cheaper and much more common
than STP in most networks today. Unshielded twisted-pair cable was originally designed for
wiring telephones, but can be used for data as well. Since unshielded twisted-pair cable is
already required in many buildings to support telephones, using this cable for your data
needs as well can significantly reduce installation costs. UTP cable is available in several
different grades:
Category 1 UTP: Cat 1 UTP is used for doorbells, alarms, and other trivial applications;
it is not appropriate for network applications.
Category 2 UTP: Cat 2 UTP is primarily used for digital and analog phones; it is not
appropriate for network applications.
Category 3 UTP: Cat 3 UTP is used for 4 Mbps Token Ring, 10BaseT Ethernet, and
analog and digital phone systems.
Category 4 UTP: Cat 4 UTP is rare but sometimes used for 16 Mbps Token Ring
networks.
Category 5 UTP: Cat 5 UTP is used for 16 Mbps Token Ring, and 10BaseT, 100BaseT,
and 1000BaseT Ethernet networks.
Category 5e UTP: Enhanced Cat 5e UTP is a slightly higher-grade cable than standard Cat
5. Like Cat 5, Cat 5e can be used for Token Ring, 10BaseT, 100BaseT,
and 1000BaseT Ethernet networks. Future network standards may
require Cat 5e rather than Cat 5.
Standards are currently being developed for Cat 6 and Cat 7 cable grades that will support
even higher data transmission rates in the future.
Cat 5 cable has been the cable of choice for most recent network installations. Cat 5e is an
even better choice to ensure compatibility with future technologies. Twisted-pair cable is
inexpensive, easy to install, and currently supports Token Ring and 10 Mbps through
1000 Mbps Ethernet networks.
Many purchased cables have "Cat 3," "Cat 5," or "Cat 5e" labels printed on the cables
themselves so you can determine which type of cabling your shop uses. Cat 3, Cat 5 and Cat
5e twisted-pair cables all use standard 8-pin RJ-45 connectors that look very similar to
standard telephone cables.
Coaxial Cable
Coaxial cable consists of a single, central conductive wire surrounded by a shield of either
fine copper mesh or extruded aluminum. Between the shield and the center conductor is a
dielectric (non-conducting) material. Cable TV boxes and cable modems both use variations
on coaxial cable.
Two types of coaxial cable have been commonly used for LANs in the past:
Thicknet: (or ThickLAN) — Used a thick, inflexible coaxial cable. Adding a new node on
a thicknet segment required the use of a "vampire tap." Tightening the vampire
tap connector pierced the cable shielding and tapped into the cable's core.
Because thicknet is so difficult to work with, it is very rarely used today.
Thinnet: (or ThinLAN) — Used a thinner, more flexible coaxial cable. Each thinnet
cable has a "Bayonet-Neill-Concelman" (BNC) connector on each end. Nodes
connect to a thinnet cable via a "T" shaped connector on the back of each
node's network interface card. Every thinnet cable must be attached to a T-
connector on both ends, and every open T-connector port must have a "BNC
Terminator" to prevent loss of data. In order to add a node to a thinnet
network, simply run a thinnet cable from an existing node's T-connector to the
new node's T-connector, and connect a terminator if necessary.
Though thinnet coaxial cable is easy to install, it is more expensive than twisted-pair and
does not support the newer 100BaseT and 1000BaseT network technologies. As a result, most
new LAN installations use twisted-pair rather than coaxial cable.
Fiber-Optic Cable
Fiber-optic cable is made of glass or plastic fibers that transmit signals via light pulses. Fiber-
optic cables can support extremely high data rates through a physically small cable. They are
immune to electrical noise and are therefore able to provide a low error rate at a great
transmission distance. The cable itself is relatively inexpensive, but it is not easily tapped
and is therefore more difficult and costly to install. Fiber-optic cable supports transmission
rates of 100 Mbps to 1000 Mbps.
Fiber is often used for network backbones connecting multiple smaller department or
workgroup LANs, since these applications may exceed the 100m segment limit imposed by
twisted-pair. Fiber-optic is also commonly used in heavy industrial environments where
interference poses problems for twisted-pair and for military applications where security is of
paramount importance. There are two major categories of fiber-optic cable:
Multi-mode: Multi-mode fiber-optic cable typically has a 50 or 62.5-micron fiber-optic core
surrounded by a 125-micron protective cladding (this is typically labeled
62.5/125 micron fiber-optic cable). Since multi-mode cable is relatively large, it
is relatively easy to couple a light source to the cable. However, the larger
core diameter allows the light to bounce off the sides of the cable, which leads
to dispersion and signal degradation over distances greater than 2 km. LEDs
are often used as the signal source on interface cards using multi-mode cable.
Single-mode: Single-mode fiber typically has a much smaller 10-micron core. This smaller
core size minimizes dispersion and allows for much longer segment lengths —
100 km or more in some cases! The downside, however, is that single-mode
fiber typically requires a relatively expensive laser, rather than an LED, as a
signal source.
Most HP fiber-optic interface cards require 62.5/125 multi-mode cable with Straight Tip (ST),
Subscriber Connect (SC), or Duplex SC type connectors. ST connectors are round in shape,
while SC connectors are square; a Duplex SC connector is simply a pair of SC connectors in a
single enclosure. Check your documentation to determine the specific cable/connector
combination required for your environment.
Comparison of LAN Transmission Media
Cable Type            UTP Twisted-pair       Coaxial          Fiber-optic
Connector Type        RJ-45 or 50-pin        BNC              Fiber-optic SC
Transmission Rate     10 Mbps to 1000 Mbps   10 Mbps          100 Mbps to 1000 Mbps
Maximum Segment       100 m                  185 m to 500 m   220 m to 1000 m+
Flexibility           Flexible               Stiff            Flexible
Noise Immunity        Good                   Good             Excellent
Security              Moderate               Moderate         Excellent
Ease of Installation  Excellent              Good             Good
Cost per Connection   Very Low               Moderate         Expensive
Reliability           Good                   Good             Excellent
LAN Topologies
(Slide: ring, bus, and star topologies)
Student Notes
Your LAN's topology determines the arrangement of the devices on your network. Three
different topologies are commonly used today:
Bus Topology
Devices connected via a bus topology connect to a single, common, shared cable. Devices
attach to the cable at regular intervals. Nodes attached to a network configured using a bus
topology typically broadcast messages in both directions on the cable simultaneously.
Ethernet standard networks usually use a bus topology when cabled via coaxial cable.
Ring Topology
Ring topology networks are cabled in a ring. Data is passed from node to node around the
ring until it arrives at its destination. Some FDDI networks use a ring topology.
Star Topology
Star topology networks are the most common LAN type today. In a star topology network,
cables radiate outward from a central device (typically called a hub) to each node on the
network. Any time a host wishes to contact another host, it must send the signal to the hub,
which then propagates the signal to the desired destination. Ethernet networks using
twisted-pair cable are cabled in a star topology.
A distinction should be drawn between the terms logical topology and physical topology.
A network's physical topology determines how devices on the network are physically
cabled. A network's logical topology, on the other hand, defines the logical pathway a signal
follows from host to host.
In some cases, the physical topology may be identical to the logical topology, but in some
cases, they may be different. For example, twisted-pair Ethernet networks use a physical star
topology, but use a logical bus topology. Although cables radiate from a central Ethernet hub,
the circuitry within the hub approximates the signal path of a bus topology network. Ethernet
networks are not unique in this respect; Token ring networks are cabled using a star
topology, but use a logical ring topology.
CSMA/CD Method
Student Notes
After you have physically attached two or more nodes to your network, your network
interface cards must determine which node is given an opportunity to transmit data and
when. Several different LAN access methods have been used over the years to control
access to local area networks. The two most common access methods are described below:
CSMA/CD CSMA/CD stands for Carrier Sense Multiple Access with Collision
Detection. Hosts on a CSMA/CD network monitor the network before
transmitting. If a host has data to transmit, and the network is not already
in use, the node transmits its signal on the wire. On a busy network, two
nodes could potentially choose to transmit at the same time, resulting in a
collision. If a collision occurs, the nodes responsible for the collision wait
a random period, then retransmit. The random wait period makes it highly
unlikely that the two nodes will retransmit at the same time again and
create another collision. Ethernet networks use the CSMA/CD access
method.
Token Passing Hosts on LANs that use a token passing access method pass a "token"
from node to node in a circular fashion. Only the node that currently
possesses the token is permitted to access the network. If the node
receiving the token does not have data to transmit, it simply passes the
token along to the next node. Token passing provides guaranteed access
to every node on the network and is efficient under heavy traffic loads.
FDDI and Token Ring networks both use the token passing access
method to manage network access.
Student Notes
HP supports a variety of Network Interface Card (NIC) types for the HP 9000 server and
workstation families. The next few slides present an overview of the most common NIC card
types found in HP boxes today. Each of the standards described here defines a cable type,
a transmission speed, a maximum segment length, and a physical topology:
Ethernet Standards
The network standards shown on the slide above are all variations on the Ethernet/IEEE
802.3 LAN standard. The first Ethernet network was developed at the Xerox PARC research
lab in the early 1970s. This was among the first networks ever to use the CSMA/CD access
method. In 1980, DEC, Intel, and Xerox banded together to publish what became known as
the "DIX Ethernet Standard,” which was followed by the official IEEE (Institute of Electrical
and Electronic Engineers) 802.3 Standard in 1985; both standards were based on the
CSMA/CD research done at PARC. In the years since 1985, Ethernet has become the most
widely used LAN technology.
The original Ethernet IEEE 802.3 standard was based on ThickLAN, or 10base5 coaxial cable,
and offered a 10 Mbps transmission speed. Since then, as networking technology has
progressed, IEEE has supplemented the original 802.3 standard. The table on the slide lists
the most common Ethernet interface card types that HP supports today.
Note that although the various Ethernet specifications support different cable types,
transmission speeds, segment lengths, and physical topologies, they all share several features
in common. All support the traditional Ethernet frame structure, the CSMA/CD access
method, and a logical bus topology.
10Base5 10 Mbps Ethernet specification using thicknet coaxial cable, with a 500-meter
maximum segment length. HP stopped supporting 10Base5 for HP 9000s in
1998.
10Base2 10 Mbps Ethernet specification using thinnet coaxial cable, with a 185-meter
maximum segment length. 10Base2 networks typically use a physical bus
topology. Since twisted-pair has become the preferred cable type in most
shops, few interface cards today include a built-in 10Base2 port. Instead, you
must attach a 10Base2 LAN "transceiver" to the 15-pin AUI (Attachment Unit
Interface) port on the back of the interface card. Then attach a BNC
T-connector to the transceiver, which then connects to the thinnet cable run.
Be sure to install a thinnet "terminator" on any unused T-connector ports.
100BaseTX 100 Mbps Ethernet specification using Cat 5 twisted-pair cable with a
100-meter maximum segment length. "100BaseTX" is oftentimes used
interchangeably with the abbreviation "100BaseT.” 100BaseTX is physically
cabled in a star topology, with Cat 5 twisted-pair cable radiating out from a
central 100BaseTX hub or switch. The cables attach directly to an RJ45 port
on the back of your LAN interface card.
100BaseFX 100 Mbps Ethernet specification using fiber-optic cable with a maximum
segment length of 412 meters or more, depending on the type of cable and
transceiver (Consult your card's documentation for details). 100BaseFX is
physically cabled in a star topology with fiber-optic cable radiating out from a
central 100BaseFX fiber-optic hub or switch. The cables attach directly to the
LAN interface card via a Subscriber Connector (SC) duplex connector.
1000BaseT 1000 Mbps Ethernet specification using Cat 5 twisted-pair cable with a
maximum segment length of 100 meters. "1000BaseT" is oftentimes used
interchangeably with the term "Gigabit Ethernet.” 1000BaseT is physically
cabled in a star topology with Cat 5 twisted-pair radiating out from a central
switch. Each cable attaches directly to a server's or workstation's LAN card
via an RJ45 jack.
1000BaseSX 1000 Mbps Ethernet specification using fiber-optic cable with a maximum
segment length of 220 meters or more, depending on the type of cable and
transceiver. 1000BaseSX is physically cabled in a star topology with fiber-optic
cable radiating out from a central 1000BaseSX fiber-optic switch. The cables
attach directly to the LAN interface card via an SC duplex connector.
NOTE: When you purchase a new interface card, make sure that the card type you
buy matches the type of network to which you plan to connect your server or
workstation!
Software Requirements
In order to use any of the interface card types listed above, you must install HP's LAN/9000
Link product. You may verify that this product is installed on your system with the swlist
command:
# swlist LAN*
For the 100 Mbps and 1000 Mbps interfaces listed on the slide, other software bundles are
required as well.
NOTE: For the latest list of interface card types supported on your HP 9000, consult
HP's web site: http://www.hp.com. For detailed instructions on installing
all types of LAN interface cards, follow the "Networking & Communications"
link on the http://docs.hp.com website.
Traditional shared-media Ethernet operates in half-duplex mode: at any given moment a host
may either transmit or receive, but not both. The advent of twisted-pair cable and Ethernet
switches, however, made it possible to offer
"Full-Duplex" functionality in an Ethernet environment. Hosts could transmit data over two
of the eight wires in a twisted-pair cable, while simultaneously receiving data over two of the
remaining six wires. Thus, full-duplex mode operation essentially doubles the available
bandwidth. Consider 100BaseTX as an example. When operating in half-duplex mode, a
100BaseTX interface card operates at up to 100 Mbps; when operating in full-duplex mode,
the very same card may operate at up to 200 Mbps!
In order to be included in the 802.3 standard, a cabling scheme must include some provision
for half-duplex, bus-based, CSMA/CD operation. All of the 802.3 standards on the slide except
10Base5 and 10Base2 allow full-duplex operation in addition to the required half-duplex
functionality.
• 100BaseTX interface cards use two wires in the twisted-pair cable to transmit and two to
receive when operating in full-duplex mode.
• 1000BaseT cards use four wires to transmit, and four to receive when operating in
full-duplex mode.
• 10BaseFL, 100BaseFX, and 1000BaseSX all use two parallel fiber-optic cables when
operating in full-duplex mode.
In order for full-duplex mode to work properly, both your interface card and the switch to
which your host connects must support full-duplex operation!
Auto Negotiation
In order to simplify connectivity between older 10BaseT devices and newer interface cards,
all HP 100BaseTX interface cards can operate at either 10 Mbps or 100 Mbps. 1000BaseT
interface cards can operate at 10 Mbps, 100 Mbps, or 1000 Mbps. Both card types are capable
of operating in either half- or full-duplex mode.
If you wish, you can allow your interface card to "Auto Negotiate" with the switch to which
you are attached in order to determine a mutually acceptable speed and duplex setting. If
your switch does not support auto-negotiation, HP-UX will automatically sense the link speed
and adjust accordingly. It will default to half-duplex operation — even if your switch
supports full-duplex functionality!
You can ensure that your link is always configured properly by explicitly setting the card's
speed and duplex settings via the lanadmin command. This procedure will be discussed in
detail in the next chapter.
Token Ring
Data Rate:           4 or 16 Mbps
Topology (Logical):  Ring
Topology (Physical): Star (via a MultiStation Access Unit)
Access Method:       Token
Cable Types:         Cat 3/5
Max. Segment:        100 m
Student Notes
The HP Token Ring/9000 product provides a complete link connection to a token ring
network. It is fully compliant with IEEE 802.5.
Token Ring networks can be cabled using IBM Type 1 Shielded Twisted-pair (STP) cable with
special IBM data connectors, or, more commonly, with standard Cat 3 or 5 Unshielded
Twisted-pair (UTP) cabling with RJ45 connectors. HP's Token Ring interface cards provide
ports for both cable types, and auto sense which port is currently connected. In either case,
the network is connected in a physical star configuration, with cables radiating outward from
a central Multi Station Access Unit (MAU or MsAU).
Software Requirements
In order to use a Token Ring interface card on your HP 9000, you must install the Token
Ring/9000 software product on your system and include the appropriate driver in your kernel.
Check your interface card documentation. Some Token Ring cards require you to configure
the ring speed and duplex settings manually; some cards require you to configure these
settings via switches on the card itself, while others allow you to make the changes via SAM
or the lanadmin command. See your interface card documentation for details!
NOTE: For the latest list of interface card types supported on your HP 9000, consult
HP's web site: http://www.hp.com. For detailed instructions on installing
all types of LAN interface cards, follow the "Networking & Communications"
link on the http://docs.hp.com website.
FDDI Ring
Data Rate:           100 Mbps
Topology (Logical):  Ring
Topology (Physical): Dual Ring / Star (Single Attachment Stations connect via a concentrator)
Access Method:       Token
Cable Type:          Fiber
Max. Segment:        2000 m
Student Notes
The ANSI FDDI standard was developed back in 1986 to provide 100 Mbps, reliable network
technology using fiber-optic cable. Even with the advent of fast Ethernet over twisted-pair
and fiber, FDDI remains a popular choice for network backbones.
The FDDI network consists of two independent 100 Mbps rings: the primary and the
secondary. The dual-ring approach provides redundancy and the ability to reconfigure the
network under fault conditions.
HP supports two different types of FDDI interface cards. Dual-attach (Class A) FDDI
interface cards connect to both rings. Single-attach (Class B) FDDI cards attach to a hub-like
FDDI concentrator, which then attaches to both FDDI rings. The concentrator maintains the
fault tolerant capability if one ring becomes unusable.
Software Requirements
After physically installing an FDDI card on your system, you must install the FDDI/9000
software product to support it.
NOTE: For the latest list of interface card types supported on your HP 9000, consult
HP's web site: http://www.hp.com. For detailed instructions on installing
FDDI interface cards, follow the "Networking & Communications" link on the
http://docs.hp.com website.
Repeaters
Repeaters extend the maximum allowed distance between nodes.
Repeaters
• Repeaters repeat a signal from one port to another.
• Repeaters pass all traffic through without error checking or filtering.
• Repeaters pass collisions, too.
• Repeaters are used primarily to overcome maximum segment length restrictions.
Student Notes
As an electrical signal travels further and further from the signal source, the signal strength is
gradually degraded, which may lead to data corruption. Repeaters provide a mechanism for
boosting signal strength and extending the maximum distance between nodes on a network.
Consider the following example: the maximum distance allowed between any two nodes on
an Ethernet thinnet segment is 185 meters. A repeater makes it possible to connect two 185m
segments to create a single, larger, physical network. The repeater automatically propagates
signals from one segment to the other, and vice versa.
Note that repeaters do nothing to mitigate collisions or errors; they simply propagate signals
from port to port.
Question
At which layer of the OSI model does a repeater function?
Hubs
Hubs make it very easy to add and remove hosts on a network.
Hubs...
• Hubs propagate a signal received on one port to all other ports.
• Hubs propagate errors and collisions across ports, too.
• Hubs simplify the addition and removal of nodes on a LAN.
• Hubs are also used to connect network segments cabled with different media types.
Student Notes
A hub is simply a multi-port repeater that provides a central connection point for nodes on a
network. When a signal is received on one hub port, the hub immediately propagates that
signal to the other hub ports. Like repeaters, hubs do nothing to manage collisions. However,
they do offer two very important benefits:
• Hosts can be added and removed without disrupting service to other hosts. To add a host,
simply run a cable from an available port to the new node. Nodes can also be
disconnected from the hub without affecting other hosts on the segment.
• Hubs are also used to connect hosts cabled using different media types. For instance, a
hub may have several thinnet cable ports and several twisted-pair ports. Signals arriving
on the twisted-pair ports are automatically propagated to the thinnet ports and vice versa.
Question
At which layer of the OSI model does a hub function?
Bridges
Bridges
• Bridges provide all the functionality of a hub, PLUS ...
• Bridges filter frames by destination MAC, and segment a LAN into multiple collision domains.
• Bridges filter signal and timing errors.
• Bridges can be used to connect segments operating at different speeds.
Student Notes
Bridges, like hubs, can be used to simplify the addition and removal of nodes and pass data
between segments that have been cabled using different media types. However, bridges offer
several advantages over repeaters and hubs:
• Bridges filter frames by destination MAC and segment a LAN into multiple collision
domains.
On an Ethernet network connected exclusively with hubs and repeaters, no two hosts can
transmit simultaneously without causing a collision. All the hosts on the network are
members of a single "collision domain.” As the number of hosts in a collision domain
increases, collisions will likely increase, and performance will be degraded.
Bridges maintain "bridge forwarding tables" that record which MAC addresses are on
each network segment. When a bridge receives a frame, it examines the frame's
destination MAC and forwards only that frame to the segment that the destination host is
on. This filtering mechanism prevents traffic between hosts on one segment from
impacting hosts on other segments and effectively separates a network into two or more
collision domains.
Many Ethernet networks today include a heterogeneous mix of older hosts with 10 Mbps
interface cards and newer servers with 100 Mbps or even 1000 Mbps interface cards.
Bridges use a "store and forward" mechanism to pass data between segments operating at
different speeds.
In the past, bridges were typically used to segment departments within a company into
separate collision domains to reduce collisions and improve performance. Today, bridges are
gradually being replaced by switches, which are described on the next slide.
Question
At which layer of the OSI model does a bridge function?
Switches
Switches are similar to bridges, but offer multiple parallel communication channels across ports for improved performance.
Switches
• Switches provide all the functionality of a bridge PLUS ...
• Switches typically offer more ports than bridges.
• Switches allow for multiple, parallel channels of communication between ports.
• Switches sometimes offer “full-duplex” functionality.
• Switches are replacing both bridges and hubs in many modern networks.
Student Notes
A switch offers many of the same benefits that a bridge offers. Like a bridge, a switch can be
used to connect different types of LANs and can filter frames by MAC address in order to
divide a busy network into separate collision domains. However, switches offer several
important advantages over traditional bridges:
• Switches typically offer more ports than bridges. Traditional bridges only had two ports
and were designed to split a network into two separate collision domains. Switches
generally offer multiple ports, each of which functions as a separate collision domain.
• Switches allow for multiple, parallel channels of communication between ports. This can
dramatically improve performance on many networks.
• Switches are replacing both bridges and hubs in many modern networks. The price-per-
switch-port has dropped in recent years to the point that it is now reasonably economical
to provide a dedicated, full-duplex, 100 Mbps switch port for every node on a network.
This eliminates collisions and provides a dedicated 100 Mbps link for every workstation
and server.
Question
At which layer of the OSI model does a switch function?
Routers and Gateways
Student Notes
Routers serve the following functions:
• Routers use IP addresses to route data between networks.
Whereas repeaters, hubs, bridges, and switches are primarily designed to move data
within a network, routers are designed to pass data between networks. For instance, in
order for a packet of data to travel from a host in your Chicago office to a host in your
San Francisco office, the packet must pass through multiple networks. Routers on the
Internet determine which route the packet should take to get to the final destination.
Any HP 9000 system with two LAN cards can serve as a router, but most networks use
dedicated rack-mounted routers instead.
Some switches these days are also able to filter broadcast traffic.
• Gateways are used to connect dissimilar networks over all 7 OSI layers.
Gateways are required when you wish to share data across two very different networks
that are incompatible at all of the OSI layers. For instance, a gateway would be required
in order for HP-UX hosts running TCP/IP over Ethernet to communicate with IBM
mainframes on an SNA-based network. An HP 9000 system can operate as an SNA
gateway with the SNAplus Link product.
Since more and more platforms these days use Ethernet and TCP/IP in OSI layers 1
through 3, today's gateways often function in only the top layers of the OSI model. For
instance, UNIX hosts use the SMTP protocol over TCP/IP to deliver email, while
Microsoft Windows clients use a different email protocol. Since the two platforms use
different email protocols, they must communicate with one another through a mail
gateway. An HP 9000 system can operate as a UNIX/Microsoft mail gateway using HP's
OpenMail product.
NOTE: The terms router and gateway are often used interchangeably. Technically,
however, routers operate only at the lower layers of the OSI model, while
gateways operate in the upper layers of the OSI model.
Questions
At which layer of the OSI model does a router function?
At which layer of the OSI model does a gateway function?
Firewalls
Firewalls make it possible to control access to and from your local area network.
Firewalls
• Firewalls determine what traffic is allowed in and out of your network.
• Firewalls may filter packets by IP or port number.
• Firewalls may log what packets are sent to and from whom.
• Firewalls use these and many other features to improve network security.
Student Notes
Almost every network today includes some sort of firewall to control who has access to
specific hosts and when this access can occur. Most firewalls allow the administrator to filter
incoming and outgoing packets based on source and destination IP addresses.
For even more flexibility, most firewalls allow the administrator to control access based on
source and destination port numbers. An administrator can choose to allow incoming traffic
to reach port number 25 (the port that sendmail uses to receive incoming email) but can
prevent incoming traffic from using telnet to reach port number 23.
Some firewalls provide even more sophisticated filtering functionality. For example, they
look at the contents of incoming email to search for dangerous attachments that might
contain viruses.
Most firewalls provide some sort of logging mechanism to track which hosts are initiating
outbound connections, and which hosts are attempting to get into the internal network.
Question
At which layer of the OSI model does a firewall function?
(Slide graphic: a sample network in which a firewall connects the company to the Internet, a bridge and a switch link the Chicago and London offices, hubs serve the sales and research departments, and a mainframe is attached via a gateway.)
Student Notes
The slide shows how hubs, bridges, switches, routers, gateways, and firewalls might be used
together in a work environment.
The protocols and devices that were discussed in this chapter are summarized in the
following OSI chart:
• Configure and view the system host name with the hostname command.
• Configure and view the system IP address and netmask with the ifconfig command.
• Configure IP multiplexing.
Student Notes
Several steps are required to configure an HP-UX host to communicate with a local area
network.
First, you must request a valid IP address and host name from your ISP or IT department.
Your organization should maintain an up-to-date network map and information table to
record which IP addresses and host names have been assigned to which hosts. This
minimizes the possibility of duplicate IP addresses, and greatly simplifies network
troubleshooting. In your information table, you should record the following information
about each host and network device:
• Manufacturer
• Model number
• OS type and version
• LAN card type
• Host name
• IP Address
• MAC Address
• Administrator name
After obtaining an IP and host name, you are ready to install and configure your interface
card! The slide above overviews the required steps, and the remaining slides in the chapter
will explain the details.
LAN/9000 Subsystem
LANIC Drivers
Student Notes
The first step in configuring a connection to a local area network is to physically install a
LAN interface card. For the latest list of LAN interface cards supported on your HP 9000,
check the HP web site at http://www.hp.com.
If the Networking product is missing, insert the CoreOS CD that came with your system and
run the swinstall graphical user interface to install the product.
The Networking product includes all of the software necessary to configure and use a
standard Ethernet interface card. If, however, you are using FDDI, Token Ring, 100VG, or
other types of LAN cards, it may be necessary to load additional products on your system.
Consult your LAN card documentation for more information.
SAM provides the easiest method for configuring kernel drivers and subsystems.
Student Notes
Assuming the proper drivers are configured in your kernel, HP-UX should automatically
recognize new LAN interface cards, and auto-configure hardware paths and device files
during the system boot process. You can check the auto-configuration via the
/usr/sbin/ioscan –funC lan command.
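The abbreviated output below is illustrative only (the driver and description columns vary by card type); the important thing to verify is that the S/W State column reads CLAIMED:
# /usr/sbin/ioscan -funC lan
Class  I  H/W Path   Driver  S/W State  H/W Type    Description
================================================================
lan    0  8/16/6     btlan   CLAIMED    INTERFACE   Built-in LAN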
• Does the card appear to be CLAIMED? If not, the card’s kernel driver is probably missing.
Return to the previous slide to learn how to configure drivers in the kernel.
• Does the card have the necessary device files? Most LAN cards will not function without
device files. Assuming the LAN card’s driver is configured in the kernel, you can create
device files for your LAN card via /usr/sbin/insf –eC lan. Note that some EISA
LAN cards, such as the 100BT LAN card shown on the slide, do not require device files.
# ll /dev/dlpi*
crw-rw-rw- 1 bin bin 72 0x000077 May 11 15:32 /dev/dlpi
crw-rw-rw- 1 bin bin 119 0x000000 May 11 15:32 /dev/dlpi0
crw-rw-rw- 1 bin bin 119 0x000001 May 11 15:32 /dev/dlpi1
crw-rw-rw- 1 bin bin 119 0x000002 May 11 15:32 /dev/dlpi2
crw-rw-rw- 1 bin bin 119 0x000003 May 11 15:32 /dev/dlpi3
crw-rw-rw- 1 bin bin 119 0x000004 May 11 15:32 /dev/dlpi4
# cd /dev
# insf -d dlpi -e
insf: Installing special files for pseudo driver dlpi
Link Layer Configuration
/sbin/init.d script    Configuration file
hpbase100              /etc/rc.config.d/hpbase100conf
hpbaset                /etc/rc.config.d/hpbasetconf
hpeisabt               /etc/rc.config.d/hpeisabtconf
hpether                /etc/rc.config.d/hpetherconf
hpgsc100               /etc/rc.config.d/hpgsc100conf
hpvgal                 /etc/rc.config.d/hpvgalconf
hptoken                /etc/rc.config.d/hptokenconf
Student Notes
During the system startup process, the /sbin/rc program executes several scripts in the
/sbin/init.d directory. These /sbin/init.d scripts read configuration parameters
from a collection of configuration files in the /etc/rc.config.d directory, and initialize
your network connection. The remaining slides in this chapter will describe the parameters in
each of these configuration files in detail.
/etc/rc.config.d/hpbase100conf
HP_BASE100_INTERFACE_NAME[0]=lan0
HP_BASE100_STATION_ADDRESS[0]=0x080009000001
HP_BASE100_SPEED[0]=100FD
/sbin/init.d/hpbase100 start
lanadmin -A 0x080009000001 0
lanadmin -X 100FD 0
Student Notes
The /sbin/init.d directory contains several scripts that initialize data link layer
parameters associated with your LAN interface cards. Since different interface cards support
different configurable parameters, there are separate scripts for each supported interface
card type. The sample script and configuration file shown on the slide are used to configure
HP 100BaseT PCI interface cards. Check your documentation to determine which
configuration file your LAN card uses.
INTERFACE_NAME Identifies the name of the LAN card defined by the current block of
variables (lan0, lan1, etc.). Use the lanscan command to list the
recognized LAN interfaces on your system.
STATION_ADDRESS Sets the LAN card’s MAC address. If left blank (recommended!), the
card will use the preset MAC address coded on the interface card by
the manufacturer. If you choose to override the preset MAC address,
specify the new address as a hexadecimal value (for example,
0x080009000001, as shown on the slide).
DUPLEX Many LAN cards can operate in either “full-duplex” mode, which
permits the host to transmit and receive simultaneously, or “half-
duplex” mode, which prevents the host from transmitting and
receiving simultaneously. Check with your IT department to determine
the appropriate setting for your environment and change the DUPLEX
value accordingly. Most cards recognize two values: “FULL” or
“HALF.”
SPEED Some LAN cards may operate at 10 Mbps (if connected to a 10BaseT
network), 100 Mbps (if connected to 100BaseT network), or even 1000
Mbps (if connected to a 1000BaseT network). In most cases, the card
will “auto-sense” and set the appropriate speed setting automatically.
On some cards, however, you may override the default speed via the
SPEED variable and the –X option on lanadmin.
SPEED[0]=100FD
SPEED[0]=100HD
SPEED[0]=10FD
SPEED[0]=10HD
SPEED[0]=auto_on # “autosense”
Here again, you should ask your IT department which setting to use in
your environment.
If you have multiple interface cards on your system, you may replicate the block of variable
definitions in this file, one block for each interface card. Change the index following each
variable in the second block of lines to [1]s, in the third block of lines to [2]s, and so on.
Then fill in the variable values as appropriate.
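For example, a second card's block might look like this (the values shown are placeholders):
HP_BASE100_INTERFACE_NAME[1]=lan1
HP_BASE100_STATION_ADDRESS[1]=
HP_BASE100_SPEED[1]=100FD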
The list of parameters that may be configured via lanadmin varies from card to card.
Consult your documentation for more information. The general syntax for lanadmin is
consistent. The first option/argument pair determines which parameter you wish to
configure, and the last argument identifies the card you wish to configure. At HP-UX 10.20,
the card is identified by the "Network Management ID (NMID) Number", while HP-UX 11.x
requires you to specify the card to configure by "Physical Point of Attachment (PPA)
Number". Both of these values may be obtained via the lanscan command. Note that the
/etc/rc.config.d/hpbase100conf configuration file simply takes the interface name
as an argument and automatically determines the PPA/NMID numbers as needed. Consider
the following examples. The first example below shows the procedure required at 11.x, while
the second block of lines shows the procedure required at 10.20:
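At 11.x, a sketch (the last argument is the card's PPA, assumed here to be 0):
# lanadmin -X 100FD 0
At 10.20, a sketch (the last argument is the card's NMID, assumed here to be 1):
# lanadmin -X 100FD 1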
lanadmin may also be used to check the currently defined parameters for one of your
interface cards. Again, lanadmin requires a PPA number at 11.x, or an NMID number at
10.20:
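A sketch, again assuming PPA 0 (at 10.20, substitute the card's NMID for the PPA):
# lanadmin -x 0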
The /sbin/init.d/hptoken startup script uses these variable values as arguments to the
lanadmin command to configure your system’s token ring interface cards fully during the
system boot process.
Other interface cards use other configuration files with different variable parameters. Consult
your documentation for more information.
Configuring IP Connectivity
/etc/rc.config.d/netconf
HOSTNAME=sanfran
INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=128.1.1.1
SUBNET_MASK[0]=255.255.0.0
BROADCAST_ADDRESS[0]=""
INTERFACE_STATE[0]=""
DHCP_ENABLE[0]="0"
/sbin/init.d/hostname start
uname -S sanfran
hostname sanfran
/sbin/init.d/net start
ifconfig lan0 128.1.1.1 netmask 255.255.0.0 up
Student Notes
/etc/rc.config.d/netconf file is the primary TCP/IP configuration file in HP-UX. This
file is read by several different startup scripts that configure everything from the system host
name to the gated dynamic routing protocol daemon. For now, we will concentrate on the
first half of the file which defines the system host name and IP address.
Modifying /etc/rc.config.d/netconf
The first block of lines in the netconf file defines some general system parameters. Change
the HOSTNAME variable if you wish to change the system host name. The other two
parameters, OPERATING_SYSTEM and LOOPBACK_ADDRESS, should never be changed.
HOSTNAME="sanfran"
OPERATING_SYSTEM=HP-UX
LOOPBACK_ADDRESS=127.0.0.1
Further down in the file, look for the following block of lines:
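INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=128.1.1.1
SUBNET_MASK[0]=255.255.0.0
BROADCAST_ADDRESS[0]=""
INTERFACE_STATE[0]=""
DHCP_ENABLE[0]="0"
These are the same variables shown on the slide earlier in this section, with the slide's sample values; substitute values appropriate for your own host.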
If you have multiple LAN cards, copy this block of lines and change the variable indices. Then
change the variable values as appropriate. Appending the sample block of lines below to the
netconf file would assign IP address 192.1.1.1 to the lan1 interface card:
INTERFACE_NAME[1]=lan1
IP_ADDRESS[1]=192.1.1.1
SUBNET_MASK[1]=255.255.255.0
BROADCAST_ADDRESS[1]=""
DHCP_ENABLE[1]="0"
Technically, UNIX systems may be identified by two different host names. The “UNIX-to-
UNIX copy” (UUCP) service identifies hosts by UUCP host name. The UUCP host name may
be both set and verified via the uname command:
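For example, using the sanfran host name from the slide:
# uname -S sanfran        (set the UUCP host name)
# uname -n                (display the UUCP host name)
sanfran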
Most other network services identify hosts by their internet host names. You may set and
view the Internet host name via the hostname command:
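For example:
# hostname sanfran        (set the Internet host name)
# hostname                (display the Internet host name)
sanfran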
Theoretically the uucp host name may be different from the Internet host name. However, HP
strongly recommends that the two host names be identical. The /sbin/init.d/hostname
startup script guarantees this by using the HOSTNAME variable as an argument to both uname
–S and hostname during the system startup process.
If you specify ifconfig interface with no other parameters, ifconfig displays the
name of the enabled network interface, the IP address, subnet mask, broadcast address, and
other flags.
# ifconfig lan0
lan0: flags=863<UP,BROADCAST,RUNNING,MULTICAST>
inet 128.1.1.1 netmask ffff0000 broadcast 128.1.255.255
Watch particularly for the UP flag in the ifconfig output. If ifconfig doesn’t explicitly
state that a card is UP, the card will neither send nor receive any IP traffic!
CAUTION: Many applications (including CDE!) are dependent on the IP address and the
host name. Ideally, you should shut down all applications before changing
your IP address or host name. Perhaps the simplest approach is to make the
desired changes in /etc/rc.config.d/netconf, then reboot to restart all
of your applications.
Configuring IP Multiplexing
/etc/rc.config.d/netconf
INTERFACE_NAME[0]=lan0:0
IP_ADDRESS[0]=129.1.1.1
SUBNET_MASK[0]=255.255.0.0
INTERFACE_NAME[1]=lan0:1
IP_ADDRESS[1]=129.2.1.1
SUBNET_MASK[1]=255.255.0.0
INTERFACE_NAME[2]=lan0:2
IP_ADDRESS[2]=129.3.1.1
SUBNET_MASK[2]=255.255.0.0
(Slide graphic: a web server with a single LAN card connected to the Internet, hosting 129.1.1.1 ijunk.com, 129.2.1.1 bigcorp.com, and 129.3.1.1 estuff.com.)
/sbin/init.d/net start
ifconfig lan0:0 129.1.1.1 netmask 255.255.0.0 up
ifconfig lan0:1 129.2.1.1 netmask 255.255.0.0 up
ifconfig lan0:2 129.3.1.1 netmask 255.255.0.0 up
Student Notes
HP-UX version 11.00 introduced “IP Multiplexing” to its TCP/IP protocol stack. This new
functionality makes it possible to assign multiple IP addresses to a single physical interface
card.
The example on the slide shows one application of this feature. The web server shown in the
graphic has a single physical interface card connected to the Internet. However, this single
physical interface card has three different “logical” interfaces. Each logical interface has a
different IP address, each associated with a different host name and a different instance of
the WWW server software. This makes it possible for a server with a single LAN card to host
multiple web sites with different IP addresses and host names.
In a multiplexed environment, a single physical interface may have several logical interfaces.
Each logical interface is identified by an index number appended to the physical LAN
interface name.
The first index assigned to an interface card is always “0”, resulting in logical interface name
lan0:0 (or simply lan0). Once you have configured lan0:0, subsequent index numbers
may be assigned in any order desired. The physical interface card shown on the slide has
three logical interfaces configured: lan0:0, lan0:1, and lan0:2. Each logical instance
may be assigned a different IP address, and a different host name.
INTERFACE_NAME[3]=snap0:0
IP_ADDRESS[3]=128.4.1.1
SUBNET_MASK[3]=255.255.0.0
The following ifconfig command would execute automatically at boot time as a result of
the lines shown above:
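Based on the sample variables above, that command would be:
ifconfig snap0:0 128.4.1.1 netmask 255.255.0.0 up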
NOTE: Each logical interface must have a unique IP address. Logical interfaces that
use the same encapsulation method may have IPs on the same subnet. Logical
interfaces that use different encapsulation methods, however, must be on
different subnets.
Configuring /etc/hosts
# vi /etc/hosts
127.0.0.1 localhost loopback
# other servers
129.1.1.1 mailsvr
130.1.1.1 filesvr
Student Notes
The /etc/hosts file is one of several mechanisms HP-UX hosts use to resolve host names
into IP addresses. Each /etc/hosts file entry must have an IP address and an associated
host name. Each entry may also contain one or more optional host name aliases, and an
optional comment preceded by a "#" sign.
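For instance, an entry with an alias and a comment might look like this (the alias and comment text are illustrative):
129.1.1.1    mailsvr    mail    # corporate mail server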
NOTE: The /etc/hosts file should be owned by bin and should have 0444
(-r--r--r--) access permission.
Directions
This lab will configure a new host name and IP address for each system in your classroom.
Preliminary Steps
1. Just in case something goes wrong during this lab, make a backup copy of all of your
network configuration files. There is a shell script in your labs directory designed
specifically for this purpose. The shell script will save a tar archive backup of your
network configuration files in the file you specify. Add the –l option to verify your
backup.
# /labs/netfiles.sh -s ORIGINAL
# /labs/netfiles.sh –l
# /labs/netfiles.sh –l ORIGINAL
2. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
3. Changing your host name and IP on a running system can wreak havoc on CDE and other
applications. Kill CDE before going any further:
# /sbin/init.d/dtlogin.rc stop
1. How many LAN cards does your system have, and what are their Hardware paths?
Answer
2. Verify that the "Networking" product is installed on your machine. Is any additional
networking software installed on your machine to support your LAN interface cards?
Answer
3. Does your kernel contain the drivers necessary to support your LAN cards? Which
command will tell you if a driver has CLAIMED your LAN cards? If your LAN card is
UNCLAIMED, install the necessary drivers.
Answer
Answer
5. List the current MAC address, IP address, netmask, and broadcast address for each of
your LAN cards.
Answer
The first two octets in the IP addresses will vary from classroom to classroom, but should be
consistent across all hosts within your classroom. Ask your instructor what the first two
octets should be set to. The last two octets must be set in accordance with the table below.
1. There should be a script in the /labs directory called netsetup.sh. This script will
ask you for your instructor-assigned hostname, and the first two IP octets that your
instructor should also provide. After you enter the requested information, the script will
display your assigned IP address and a variety of other network settings that you will use
later in the class. The script will also create a new hosts file in /tmp/hosts. Run the
script, then review the /tmp/hosts file. By default, the script doesn’t actually change
your network configuration.
# /labs/netsetup.sh
# cat /tmp/hosts
2. From the command line, change your IP to the address suggested in /tmp/hosts. Be
sure to change your netmask, too!
Answer
3. Is your new IP address set properly? How can you find out?
Answer
4. Modify the appropriate startup file to make your IP address change permanent. Allow the
system to default the broadcast address. Also, permanently change your host name in this
startup file. If a default route is currently defined, delete it. You will have a chance to
configure a new default route in the next chapter.
Answer
5. Copy the /tmp/hosts file into place as the default /etc/hosts file.
# cp /tmp/hosts /etc/hosts
6. Define a host name alias for each of the host names in your row. Use the first name of the
user sitting at each station as the alias.
Answer
Answer
Answer
2. The hostname command will display your system host name. Check to ensure that your
host name is set properly.
Answer
3. Based on your Answers to questions 1 and 2 above, what commands did the
/sbin/init.d/net script appear to execute on your behalf during the boot process?
Answer
Answer
Answer
6. Try to ping a neighboring machine using an alias you defined in your /etc/hosts
file. Does this seem to work?
Answer
Routing Concepts
Router
Student Notes
The Internet is composed of many physical networks. Network devices known as routers and
gateways interconnect these networks. A network router is a device that is physically
connected to two or more networks, and is capable of passing packets between these
networks. Any HP 9000 host may be configured as a router, though companies these days
more typically use dedicated, specially configured, rack-mounted routers instead.
The example on the slide shows several networks interconnected by routers. The host at the
top left of the picture wishes to send a packet to the host at bottom right. Since the two hosts
are on different networks, the packet must pass through several routers en route to its
destination.
The sending host starts by sending the packet to a router on its local network. When the
packet reaches the first router, it checks the packet's destination IP to select the next router
along the path toward the destination. Packets pass from router to router until they reach a
router that can ultimately deliver them directly to the destination host.
IP routing is considered "address-only" routing. This means that packets traveling across the
Internet contain only source and destination IP addresses. Along the way, the packet is "told
where to turn" by routers.
Routing Tables
RouterA RouterB
Net 128.1.0.0 Net 129.1.0.0 Net 130.1.0.0
Student Notes
Routers check routing tables maintained in memory to determine where packets should be
sent. Each routing table entry contains a pair of addresses.
The first element in each entry identifies a destination network address. When a router
receives a packet, it compares the packet's destination IP address to the destination network
and addresses in the routing table until a matching entry is identified.
Each routing table entry also identifies the next "hop" required to get to the associated
destination network. If the router has a direct connection to the destination network, the
"hop" field specifies the IP address of the router LAN card connected to that network. If the
router does not have a direct connection to the destination network, the "hop" field identifies
the IP address of the next router along the way to that destination.
In either case, the "hop" field must identify an IP address that the router can access directly.
Host-Specific Routes
Although routes are usually defined to entire networks, it is possible to define a route to a
specific host. The ability to specify a route for an individual machine is especially useful in
troubleshooting.
Examples
The slide shows the routing tables for RouterA and RouterB. However, individual hosts
maintain routing tables, too. Complete the routing tables below:
# netstat -rn
Dest Gateway Flags Refs Interface Pmtu
127.0.0.1 127.0.0.1 UH 0 lo0 4136
128.1.1.1 128.1.1.1 UH 0 lan0 4136
127.0.0.0 127.0.0.1 U 0 lo0 0
128.1.0.0 128.1.1.1 U 2 lan0 1500
129.1.0.0 128.1.0.1 UG 0 lan0 1500
130.1.0.0 128.1.0.1 UG 0 lan0 1500
Flags:
H = Route is for a single host
U = Route is "Up"
G = Route requires a hop across a gateway
Student Notes
You can view your system's routing table via the netstat command. Each entry in the
resulting table includes a "Destination" network or host address, the "Gateway" used to
access that destination, and several fields identifying the route usage.
The “Flags” field identifies the following: the route is up (U), the route uses a gateway (G),
the destination is a host or network (with or without H), the route was created dynamically
(D) by a redirect or by Path MTU Discovery, and a gateway route has been modified (M).
The “Refs” field shows the current number of active uses of the route. Connection-oriented
protocols normally use a single route for the duration of a connection, while connectionless
protocols obtain a route only while sending a particular message.
The “Interface” field displays the name of the network interface used by the route.
The "Pmtu" field displays the maximum transmission unit size allowed on the interface card
used by the route.
# netstat -rn
Dest/Netmask Gateway Flags Refs Interface Pmtu
127.0.0.1 127.0.0.1 UH 0 lo0 4136
128.1.1.1 128.1.1.1 UH 0 lan0 1500
127.0.0.0 127.0.0.1 U 0 lo0 4136
128.1.0.0 128.1.1.1 U 2 lan0 1500
129.1.0.0 128.1.0.1 UG 0 lan0 1500
130.1.0.0 128.1.0.1 UG 0 lan0 1500
The –n option causes netstat to display IP addresses rather than host names. If you prefer
to view host names in your routing table, leave off the –n.
When executed with the –v option, netstat also displays the netmask associated with each
destination in the routing table.
Use the route command to dynamically add and remove route table entries.
Student Notes
You can add and remove entries in your routing table via the route command. Consider a
few examples.
# route -f
The -f option flushes all gateway entries from the routing table. The four routes that HP-UX creates automatically (the loopback host and network routes, plus the routes to your own IP address and local network) must be present in order for your system to function properly!
Add a default route:
# route add default 128.1.0.1 1
Delete the default route:
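# route delete default 128.1.0.1
(A sketch: route delete takes the same destination and gateway arguments that were used when the route was added.)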
Student Notes
Individual hosts on a network generally maintain routing tables with very few entries. Every
host, of course, can directly deliver frames to other hosts on the same network. To reach
other networks, most hosts define the nearest dedicated router as the default route in the
routing table. The default route is used whenever there is no specified route in the routing
table to a destination.
At HP-UX 11.0, it became possible to define multiple default routes on a single host. Defining
multiple default routes offers two advantages. First, HP-UX provides some load balancing by
sending some packets via the first default router, and others via the second in a round-robin-
like fashion. Defining multiple default routes also offers improved reliability. HP-UX monitors
the status of the routers; if a router fails to respond, HP-UX uses the alternate default route
defined in the routing table.
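A minimal sketch of defining two default routes, assuming routers at 128.1.0.1 and 128.1.0.2 (the second address is hypothetical):
# route add default 128.1.0.1 1
# route add default 128.1.0.2 1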
The example below configures a proxy ARP default route for host 128.1.1.1. Note that the hop
count variable should be null, or set to 0.
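A sketch of such a command, assuming 128.1.1.1 is the host's own IP address:
# route add default 128.1.1.1 0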
Configuring Routes in
/etc/rc.config.d/netconf
/etc/rc.config.d/netconf
ROUTE_DESTINATION[0]="net 129.1.0.0"
ROUTE_MASK[0]="255.255.0.0"
ROUTE_GATEWAY[0]="128.1.0.1"
ROUTE_COUNT[0]="1"
ROUTE_ARGS[0]=""
ROUTE_DESTINATION[1]="default"
ROUTE_MASK[1]=""
ROUTE_GATEWAY[1]="128.1.0.1"
ROUTE_COUNT[1]="1"
ROUTE_ARGS[1]=""
/sbin/init.d/net start
route add net 129.1.0.0 netmask 255.255.0.0 128.1.0.1 1
route add default 128.1.0.1 1
Student Notes
During the system boot process, the /sbin/init.d/net script consults the
/etc/rc.config.d/netconf file to determine which routes need to be configured. To
permanently configure multiple routes, simply replicate the block of ROUTE variables in the
netconf file, increment the index for each block of lines, and set the variable values
accordingly. The slide shows some sample netconf route entries, and the route
commands that execute as a result of those entries.
You may notice that some of the routes listed in your routing table don’t appear in the
/etc/rc.config.d/netconf file. Each time you set or change your IP address, HP-UX
automatically creates a route to your own IP and your local network. Similarly, when you
remove an IP address, HP-UX automatically removes the route entries associated with that IP
address.
The routes to the loopback address (127.0.0.1) and the loopback network (127.0.0.0) are also
created automatically.
Directions
Record the commands you use to perform the tasks suggested below.
Your instructor has configured host corp as a router with two LAN interfaces. Record corp’s
IP and network addresses here. The first IP should be a /16 address whose first two octets
match your first two octets. The second IP address should be a /24 address that is entirely
different from your system’s IP address.
corp's first interface's IP: ___ . ___ . 0 . 1 /16 (should be on your net)
corp's second interface's IP: ___ . ___ . ___ . _1_ /24 (should be on another net)
Verify that your instructor has configured corp’s second interface before proceeding.
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. Modifying IP connectivity on a running system can wreak havoc on CDE and other
applications. Kill CDE before going any further:
# /sbin/init.d/dtlogin.rc stop
Answer
Answer
3. From the command line, add a route to the second network via corp’s first LAN interface.
Then check your routing table again to verify that you were successful.
Answer
Answer
5. Delete the route that you just added. Then check the routing table to verify that you were
successful.
Answer
6. Now, define corp’s first IP as your default route. Then check your routing table again to
be sure this worked.
Answer
7. Can you ping the second IP now, even though you do not have an explicit route to the
second network?
Answer
8. How can you ensure that your default route is defined after every system boot? Make it
so.
Answer
9. Reboot your machine. When your machine comes back up again, check the routing table
to verify that the default route is defined.
Answer
2. If you ping corp, which of corp's IP addresses does your system appear to choose?
Watch your ping output carefully.
Answer
Answer
Answer
# /labs/netfiles.sh –s NEW
# /labs/netfiles.sh –l
# /labs/netfiles.sh –l NEW
Student Notes
Although a /8 network address allows for 16 million host addresses, in reality, it is impractical
to have that many hosts sharing a single physical network.
Topological Limitations Many LAN topologies don't allow 16 million nodes on a single
physical network.
Excessive Collisions If any two nodes on an ethernet network transmit at the same
instant, a collision results and both nodes must attempt to
retransmit. As the number of nodes on the network increases,
the likelihood of collisions increases as well.
Administrative Challenges Simply keeping track of who has which IP address in a 16-
million node network would be an administrative challenge for
even the best network administrator.
Poor Network Performance All of these issues result in degraded network performance as
more and more hosts compete for limited bandwidth on a
network.
One solution to all of these issues would be to simply leave many of the IP host addresses on
/8 networks unused. The rapid depletion of the IP address space, however, makes this
solution impractical. "Subnetting" provides a much better solution to these problems.
Subnetting Concept
(Slide graphic: the 128.1.0.0/16 network, which could hold 65,535 hosts, is divided by routers into subnets 128.1.1.0, 128.1.2.0, and 128.1.3.0, each with up to 254 hosts.)
Student Notes
Subnetting makes it possible to divide a large network IP address space into several smaller,
more manageable "subnets."
The example on the slide shows a subnetted /16 network. Without subnetting, the 128.1.0.0/16
network would have 65 thousand hosts on the same physical network, which could easily
lead to excessive collisions.
This network, however, has been subdivided into 254 subnets. Each of these subnets could
potentially have up to 254 hosts.
Subnet Addresses
----------------
128.1.1.0
128.1.2.0
...
128.1.253.0
128.1.254.0
Subnets are separated from one another by routers, which overcome both the collision and
topological issues discussed on the previous slide.
Subnetting also makes it easy for the network administrator to delegate authority for
portions of the IP network address space to other entities within the organization. Simply
assign each department a separate subnet. Each network administrator then becomes
responsible for a subnet within the larger corporate network.
128 . 1 . 0 . 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Network Host
128 . 1 . 1 . 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
Student Notes
In a non-subnetted network, each IP address has just two components. A portion of the IP’s
bits identifies the network to which a host is attached, and the remaining bits uniquely define
individual hosts on the network.
Subnetted IP addresses have a third component as well: a portion of the IP address’s host bits
is used to define the subnet to which the host belongs.
Returning to the 128.1.0.0/16 network example: Normally, a host on a /16 network has 16 host
bits. When implementing subnetting, 8 of those bits are used to define the host's subnet,
leaving 8 remaining bits to define the individual host address.
The number of subnet bits may vary. Increasing the number of subnet bits allows more
subnets, but fewer hosts on each subnet. Decreasing the number of subnet bits decreases the
number of addressable subnets, but allows more hosts on each subnet.
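For example, on a /16 network (16 host bits), using 8 of those bits for the subnet yields up to 254 usable subnets of 254 hosts each, while using only 4 bits yields 14 subnets of up to 4,094 hosts each (assuming all-0 and all-1 subnet and host addresses are excluded).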
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 = 255.255.0.0
Student Notes
The text on the previous page noted that the number of subnet bits can vary. So how do
routers and other network devices determine where the network/subnet portion of an IP
address ends, and where the host portion of an IP address begins on a subnetted network?
In printed form, the boundary between the network/subnet portion of the IP and the host
portion of an IP is typically indicated via the "/" suffix on the end of the IP. The number
following the "/" indicates the total number of network/subnet bits. All remaining bits are
assumed to be host bits. Consider the example on the bottom of the slide. The IP address in
the example has 16 network bits and 8 subnet bits. Since 16+8=24, IP addresses on these
subnets would be represented as x.x.x.x/24 addresses.
UNIX identifies the boundary between the network/subnet portion and the host portion of an IP address via the IP netmask. On a
non-subnetted network, the 1s in the netmask identify network bits. On a subnetted network,
the 1s in the netmask mask both network and subnet bits.
The example on the slide shows a netmask that consists of 24 "1" bits, followed by 8 "0" bits.
Thus, the network/subnet portion of the IP addresses on this network appears to span the
first three octets, while the final octet represents the host portion of each IP address.
Since the number of subnet bits varies from network to network, the netmask varies from
network to network as well. In a subnetted network, you must define the netmask for each
LAN interface card.
Subnet Addresses
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1st subnet
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 2nd subnet
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 3rd subnet
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 4th subnet
. . . .
. . . .
. . . .
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 254th subnet
Netmask = 255.255.255.0
Student Notes
A single network may contain multiple subnets. The network bits for all hosts on all of the
subnets within a network will be the same. However, each subnet is assigned a unique subnet
address. The subnet address is defined in the subnet bits specified by the netmask.
Continuing the example started in the previous slides, this slide shows the subnet addresses
for the 128.1.0.0/16 network. The 255.255.255.0 netmask tells us that the third octet defines
the subnet portion of the IP addresses on this network.
Although it is possible to represent 256 subnet addresses with 8 subnet bits, some devices
do not allow all-0 or all-1 subnets. Eliminating these addresses leaves the following
subnet addresses:
128.1.1.0/24
128.1.2.0/24
...
128.1.253.0/24
128.1.254.0/24
HP-UX controls this check via the ndd parameter ip_check_subnet_addr (shown in the nddconf example below). By default, this parameter is set to 0, and all-0 and all-1 subnet addresses are allowed.
Changes made via ndd are lost at reboot time, unless they are recorded in the
/etc/rc.config.d/nddconf file:
# vi /etc/rc.config.d/nddconf
TRANSPORT_NAME[1]=ip
NDD_NAME[1]=ip_check_subnet_addr
NDD_VALUE[1]=0
This is just one of many parameters that may be tuned via the ndd command. For a full list of
tunable ndd parameters, type ndd -h.
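For example, to display or change the current setting from the command line (using the parameter named in the nddconf example above):
# ndd -get /dev/ip ip_check_subnet_addr
# ndd -set /dev/ip ip_check_subnet_addr 1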
• The host address with all 0s represents the address for the entire subnet.
• The host address with all 1s represents the broadcast address for the subnet.
• All other addresses within the subnet may be used for hosts.
• Examples: IP addresses for subnet 128.1.1.0/24:
Netmask = 255.255.255.0
Student Notes
Each subnet may contain multiple hosts. Within a subnet, all network and subnet bits must
be identical for every host. However, each host must have a unique sequence of host bits to
distinguish it from all the other hosts on the subnet.
Consider the 128.1.1.0/24 subnet from the previous page. Each host on this subnet will have
an IP address that begins with 128.1.1. This leaves eight host bits.
00000000 = 0
00000001 = 1
00000010 = 2
00000011 = 3
...
11111101 = 253
11111110 = 254
11111111 = 255
The address formed by setting all the host bits to 0 is used to define routes to the subnet in
the network routing tables. This address should not be assigned to a specific node.
The address formed by setting all the host bits to 1 is a reserved address as well. It is the
subnet broadcast address.
All remaining addresses may be assigned to hosts in the subnet. Valid addresses for hosts on
the 128.1.1.0/24 subnet, then, include:
128.1.1.1/24
128.1.1.2/24
128.1.1.3/24
...
128.1.1.253/24
128.1.1.254/24
Student Notes
The example discussed thus far in the chapter used a simple netmask that placed the
subnet/host boundary on an octet boundary. Although this makes it easy to determine which
subnet a given IP address is on, subnetting on an octet boundary may not provide the
flexibility you need as you design your subnets.
Octet-boundary subnetting is not even an option in a /24 network. Since /24 addresses have
just one host octet, using that octet to define an IP's subnet would not leave any host bits!
Octet boundary subnetting may prove limiting on a /16 network, too. What happens if you
have a /16 network, and need exactly six subnets? Octet-boundary subnetting would break
your network into 254 subnets. This is many more than you actually need.
For these reasons, octet-boundary subnetting rarely offers the flexibility needed to subnet a
large network.
1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1st subnet
1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 2nd subnet
1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 0 0 3rd subnet
1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 4th subnet
1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 1 0 1 0 0 0 0 0 5th subnet
1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 0 0 0 0 0 6th subnet
Student Notes
Subnetting on a non-octet boundary simply means that the subnet/host boundary does not
fall on an octet boundary. The example on the slide shows the /24 network 192.6.12.0, subnetted using three bits of the final octet.
Recall that the subnet address is defined by setting all of the remaining host bits to 0. Thus,
the subnet addresses on this network are:
192.6.12.00100000 = 192.6.12.32
192.6.12.01000000 = 192.6.12.64
192.6.12.01100000 = 192.6.12.96
192.6.12.10000000 = 192.6.12.128
192.6.12.10100000 = 192.6.12.160
192.6.12.11000000 = 192.6.12.192
11111111.11111111.11111111.11100000 = 255.255.255.224
Recall that the broadcast address for a subnet is formulated by setting all the host bits to 1.
The subnet address is formulated by setting all the host bits to 0.
The chart below shows all of the IP addresses for the 192.6.12.0/24 network example from the
previous page:
Student Notes
Subnets on the network are separated by routers. In the example on the slide, the facilities
subnet is the network backbone. The other three subnets all connect to the facilities subnet
via routers.
Although each subnet has a different subnet address, all share the same netmask.
The next slide describes the steps required to configure subnetting of the hosts on the
"manufacturing" subnet.
Configuring Subnetting
192.6.12.129/27
192.6.12.33/27
Manufacturing subnet (192.6.12.32/27)
Student Notes
This slide shows the steps required to configure subnetting on each of the hosts on the
manufacturing subnet. When configuring the interface card on a host connected to a
subnetted network, you must specify the subnet mask as an argument to the ifconfig
command. All of the hosts on the subnet must have the same subnet mask.
To ensure that your host has access to other subnets and networks, define a default route to
your nearest router. If you wish to make your configuration permanent, modify
/etc/rc.config.d/netconf. For HostA, the netconf file should contain the following:
HOSTNAME=HostA
IP_ADDRESS[0]=192.6.12.34
SUBNET_MASK[0]=255.255.255.224
INTERFACE_NAME[0]=lan0
ROUTE_DESTINATION[0]=default
ROUTE_GATEWAY[0]=192.6.12.33
ROUTE_COUNT[0]=1
Allowing all-0 and all-1 subnet addresses changes the first formula slightly:
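usable subnets = 2^(subnet bits)            (all-0 and all-1 subnets allowed)
rather than
usable subnets = 2^(subnet bits) - 2        (all-0 and all-1 subnets excluded)
In either case, usable hosts per subnet = 2^(host bits) - 2, since the all-0 and all-1 host addresses are always reserved. (These are the standard subnet-sizing formulas, shown here as a reference.)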
The tables below show the number of subnets and hosts available for various netmasks on
/16 and /24 networks, excluding the all-0 or all-1 subnets.
Directions
Answer all of the questions below. Assume that your network contains some older devices
that don't support all-0 or all-1 subnet addresses.
Part 1
1. Your company's network address is 128.20.0.0/16, but your netmask is set to
255.255.255.0. Given this netmask, how many bits are in the subnet portion of your
IP address?
2. Given your answer to the previous question, how many host addresses may be configured
on each subnet?
4. What are the lowest and highest host addresses on the first subnet?
Part 2
Your company's network address is 192.30.40.0/24, and you need to create two subnets.
Part 3
Your company's network address is 132.40.0.0/16. You need to configure nine subnetworks.
5. What is the complete address for the first host on the first subnet?
6. What would be the complete address for the last host on the first subnet?
7. Fill in the variable values you would expect to see in the /etc/rc.config.d/netconf
file for the last host on the first subnet. Record the variable values below, but do not
actually modify the /etc/rc.config.d/netconf file on your system.
INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=
SUBNET_MASK[0]=
− lanscan
− lanadmin
− linkloop
− arp/ndd
− ping
− netstat -i
− netstat -a
− netstat -r
− hostname
− nslookup
Student Notes
Connectivity problems are not always clearly and directly shown by the tools. Often you get
only hints, which you have to interpret. You will have to use several tools in logical steps;
therefore, you must be knowledgeable about the networking concepts and the capabilities of
each networking tool.
Student Notes
• LAN terminators not connected properly.
Many times users do not terminate their LAN cables properly. You must have two
terminators on your network—one at each end.
The ifconfig command fails if the LAN interface is defective. You may inadvertently
introduce syntax errors into the configuration files if you modify these files with an editor
such as vi.
Someone may have made a mistake when configuring the IP_ADDRESS within the
/etc/rc.config.d/netconf file.
Someone may have made a mistake when configuring the SUBNET_MASK within the
/etc/rc.config.d/netconf file.
Sometimes someone connects his or her system to the network without asking the
network administrator for a unique IP address.
Someone may have made a mistake when configuring the ROUTE parameters within the
/etc/rc.config.d/netconf file.
Sometimes a system must be shut down. If you are shutting down a router, you should
announce the shutdown at least one day in advance.
If coaxial cables were installed a long time ago without using a cabling map, it is possible
that the cables have become too long. When a new system is added to the segment, if the
cable is extended beyond the segment length limitation, problems will eventually arise.
There are cable testers to measure cable lengths.
If your system cannot resolve a host name to the correct IP address, you probably have a
problem in your hosts table. When using /etc/hosts, the first match working down
from the top of the file is used. If two IP addresses are in /etc/hosts (for example, for
a gateway), gethostbyname() will always return the first IP address, which may not be
the desired one. You should check your hosts file regularly to make sure the entries for
your machines are correct.
Application 7
Presentation 6
Session 5
Transport 4
Networking 3
Data Link 2
Physical 1
# lanscan
Hardware Station Crd Hdw Net-Interface NM MAC HP-DLPI DLPI
Path Address In# State NamePPA ID Type Support Mjr#
8/16/6 0x0060B0A39825 0 UP lan0 snap0 1 ETHER Yes 119
8/20/5/1 0x0060B058A8C6 1 UP lan1 snap1 2 ETHER Yes 119
Student Notes
Any user can execute the lanscan command shown above. It is simple and quick, and it
provides the most efficient way to determine the link-level address of an interface card.
Hardware path HP-UX hardware address of the LAN interface, also displayed
by ioscan.
Crd IN# Card instance number, which is a logical number for the
hardware path (displayed by ioscan -f).
Net-Interface Name PPA The network interface Name and the PPA number are
concatenated together. A single hardware device may have
multiple NamePPA identifiers, which indicates multiple
encapsulation methods may be supported on the device.
MAC type Specifies the medium access control (MAC) standard of the
LAN link.
HP DLPI support Indicates whether or not the LAN device driver will work with
HP's Common Data Link Provider interface. It must be yes to
use diagnostics linkloop and lanadmin.
Syntax of lanscan
/usr/sbin/lanscan [-aimnpv]
in which
-v Provides verbose output. The output consists of additional lines per interface, and includes
the encapsulation method (IEEE and/or ETHER).
NOTE: Before HP-UX 10.30, lanscan displays the interface state of each networking
device. This will no longer be the case. LAN drivers no longer maintain the
interface state. The Network Interface State field has been removed from the
lanscan output. Instead, the netstat command can be used to determine
the state of the interface.
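For example, on HP-UX 10.30 and later you might check the interface state with netstat (a quick sketch; the -i and -n options are covered in more detail later in this module):

# netstat -in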
(Slide graphic: OSI layer stacks of the two communicating hosts.)
# linkloop 0x0060b007c179
Link connectivity to LAN station: 0x0060b007c179
-- OK
Student Notes
/usr/sbin/linkloop tests the physical and data link layers (layers 1 and 2) of the OSI
model.
linkloop uses IEEE 802.3 link test frames to check connectivity within a LAN. You must be
root to execute the linkloop command.
NOTE: linkloop requires the device file /dev/dlpi and the dlpi kernel driver.
The linkloop command is a quick way to test your own LAN interface. If you provide
linkloop with the link level address of the machine for which you want to test connectivity,
linkloop will report whether or not the connectivity is OK. The link level address can be
obtained with the commands lanscan and lanadmin.
Before HP-UX 10.30, LAN drivers maintained the interface state. Beginning with HP-UX 10.30,
the physical point of attachment (PPA) number for DLPI is no longer equivalent to the
network management identifier (NMID). The PPA number has been changed to be the same
as the card instance number.
The linkloop syntax, shown on the slide, has the following parameters:
-i PPA Specifies the PPA to use. If this option is omitted, linkloop uses the first
PPA it encounters in an internal data structure.
(For releases earlier than HP-UX 10.30, this option will refer to the nmid,
which refers to the network management ID as displayed by lanscan.)
-v Verbose option.
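As a sketch of combining these options, the following command (reusing the remote station address from the example above, and assuming the outbound card is the one with PPA 1) tests connectivity through the second interface with verbose output:

# linkloop -i 1 -v 0x0060b007c179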
(Slide graphic: OSI reference model layer stack.)
Student Notes
lanadmin allows you to do the following:
-e Echoes the input commands on the output device. This is useful if you want to
redirect your output to a file.
-t Suppresses the display of the command menu before each command prompt.
This is the same as the test selection mode terse command.
When executed in the most common way, without parameters, the following menu is
displayed:
# /usr/sbin/lanadmin
When you invoke lanadmin, you are in the test selection mode. From here, enter the LAN
interface diagnostic by typing lan, or just its first letter, l.
The LAN interface diagnostic allows you to test your LAN hardware (layers 1 and 2 of the OSI
model).
NOTE: lanadmin requires the device file /dev/dlpi and the kernel driver dlpi.
Example lanadmin
# lanadmin
LOCAL AREA NETWORK ONLINE ADMINISTRATION, Version 1.0
Wed, Aug 12,1998 23:03:30
Copyright 1994 Hewlett Packard Company.
All rights are reserved.
lan = LAN Interface Administration
menu = Display this menu
quit = Terminate the Administration
terse = Do not display command menu
verbose = Display command menu
Student Notes
To enter the LAN interface test mode, type lan while in the test selection mode. The LAN
interface test mode allows you to test the physical and data link layers (layers 1 and 2) of the
OSI model. Specifically, you can gather LAN interface statistics, reset the interface card, and
execute the interface self-test to check for hardware problems.
clear Clears the LAN interface card network statistics registers to zero. This
command requires superuser status to execute.
display Displays the local LAN interface card status and statistics registers. Allows
you to find out how busy the network is.
reset Resets the local LAN interface card, causing it to execute its self-test. Local
access to the network is interrupted. This command requires superuser
status to execute. Resetting the card may be necessary when the host has been
disconnected from the LAN cable for a long time.
NOTE: If you have a second LAN interface, you must create the proper device files for
the interface (for example, /dev/lan1) in order to use this diagnostic.
PPA Number = 0
Description = lan0 Hewlett-Packard LAN Interface Hardware Rev 0
Type (value) = ethernet-csmacd(6)
MTU Size = 1500
Speed = 10000000
Station Address = 0x80009707445
Administration Status (value) = up(1)
Operation Status (value) = up(1)
Last Change = 100
Inbound Octets = 2887895
Inbound Unicast Packets = 23560
Inbound Non-Unicast Packets = 6382
Inbound Discards = 0
Inbound Errors = 833
Inbound Unknown Protocols = 5813
Outbound Octets = 1673233
Outbound Unicast Packets = 20981
Outbound Non-Unicast Packets = 12
Outbound Discards = 0
Outbound Errors = 0
Outbound Queue Length = 0
Specific = 655367
The output of lanadmin is extensive. Detailed knowledge about the data link layer
protocols is necessary to understand all of the information offered by lanadmin. The
following are only a few tips on how to use and interpret the information that lanadmin
displays:
PPA The Physical Point of Attachment (PPA) number of the LAN interface.
To interpret all other values, look for lines with terms like Discards, Errors, Collision,
Deferred, and Too Long.
Lines with values that are not equal to 0 are not necessarily a problem. If you have a real
problem in OSI layer 1 or 2, lanadmin will show some lines with very high values.
Produce an output listing of lanadmin when you do not have any problems with your
network and keep this listing. Compare this listing with the lanadmin output you get when
problems occur. This information is very helpful when troubleshooting your network.
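One hedged way to capture such a baseline non-interactively (assuming, as the -e and -t descriptions above suggest, that lanadmin reads its commands from standard input; the output file name is just an example) is:

# lanadmin -t <<EOF > /var/adm/lanadmin.baseline
lan
display
quit
EOF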
(Slide graphic: OSI reference model layer stack.)
Student Notes
The /usr/sbin/arp command displays or modifies the entries in the ARP kernel table that
relate Internet (level 3) to Ethernet (level 2) addresses used by the ARP protocol. It has
several options, some of which can only be used by a superuser.
Syntax:
arp -a [system][core] Displays all current ARP entries by reading the table from
file core (default /dev/kmem) based on the kernel file
system (default /stand/vmunix).
arp -d hostname If an ARP entry exists for the host called hostname, then
delete it. This requires superuser privileges.
arp -s [parameter] Create an ARP entry for a host with a new Ethernet
address. This requires superuser privileges.
arp -f filename Read file filename and set multiple entries in the ARP
tables. Entries in the file should be of the form hostname
address [temp] [pub] [trail]. This requires
superuser privileges.
If a defective LAN interface is replaced by a new one, remember that the new unit will have a
new link level address. Any remote host that still has the old link level address in its ARP
table will not be able to communicate with this replacement interface. You must delete the
stale entry from the ARP tables on these remote hosts.
If you want to know the link level address of a remote host in your network, you can send a
ping to this host and then read your ARP table.
For more information, see the man pages for arp(1M) and arp(7).
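A minimal sketch of both techniques, assuming a remote host named otherhost and using the ping and arp syntax shown above:

# ping otherhost 64 -n 1      # populate the local ARP table with otherhost's entry
# arp -a                      # read the table to see otherhost's link level address
# arp -d otherhost            # delete a stale entry after otherhost's LAN card is replaced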
(Slide graphic: OSI layer stacks of the two communicating hosts.)
Student Notes
ping tests up through the network layer (layer 3) of the OSI model. Any user can execute
ping.
When you encounter a network problem, it is typically a good idea to execute the ping
command first. If ping is successful in transferring packets, you can typically rule out
problems below layer 3 (hardware problems such as bad cables or transceivers), and you can
run tests on the upper layers. If ping fails, you should use lanadmin or lanscan to
diagnose your LAN hardware.
Use ping
Syntax
ping hostname [packet_size] [-n [num_packets]]
in which
NOTE: If you use ping on your local host (loopback), you test just the network layer
(layer 3). The test could be successful even if the LAN hardware is down.
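A short example using the syntax above (corp is the lab host used elsewhere in this course; the packet size and count are arbitrary):

# ping corp 100 -n 5          # send five 100-byte packets to host corp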
(Slide graphic: OSI reference model layer stack.)
Student Notes
The netstat command reports network and protocol statistics regarding traffic and the
status of the local LAN interface. Any user can execute netstat.
There are many options to netstat. The most useful options are those that display
information that is not available through other commands (such as ping, lanscan, and
lanadmin). Within this module, we will discuss only the following options, which display
information about OSI layers 1, 2, and 3:
-i Shows the state of the network interfaces. This includes both primary and
logical interfaces.
-r Lists all routes in the local routing tables. When -v is used with the -r option,
netstat also displays the network masks in the route entries. The -r -s
combination is not supported in HP-UX 11.0.
The netstat -i command shows information about the status of all LAN interfaces as well
as a table of cumulative statistics regarding packets transferred. In version 10.20 and earlier,
there was information on collisions and errors as well. The cumulative statistics start when
the interface is powered up, and they can be reset with the reset function of the lanadmin
command.
• ni0 and ni1 are two built-in RS-232 interfaces. They are possible network
interfaces. You can configure them with the Serial Line Internet Protocol
(SLIP) to use the IP protocol in a point-to-point serial network. For more
information, see the man page pppd(1).
The asterisk (*) shows that the interface was not activated.
Mtu Maximum transmission unit shows the biggest possible size of a frame. With
IEEE 802.3 it is 1500 Bytes.
Network Shows the IP address or the name of the network to which this interface
belongs. If there is a name, the file /etc/networks is configured. none
indicates that the interface is not powered up.
Address Shows the IP address or the name of the interface. If there is a name, the IP
address was translated by the hosts file, NIS, or BIND. none indicates that
the interface is not powered up.
To determine the number of packets going over the network, use the netstat interval
option. Network traffic through the local network interface will be reported every interval
seconds. The first line and every 24th line thereafter show cumulative statistics since the
system was powered up or the statistics were reset with lanadmin. The slide shows the
number of packets transmitted and received, the number of packets with errors, and the
number of collisions.
Most of this information can also be gathered with lanadmin. The difference is that
lanadmin provides a snapshot view (a single sample), whereas netstat is continuously
sampling.
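For example (the interval and interface name are arbitrary; the -I option restricts the report to a single interface on most HP-UX releases):

# netstat 5                   # report packet counts every 5 seconds; interrupt with Ctrl-C
# netstat -I lan0 5           # the same, restricted to the lan0 interface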
(Slide graphic: OSI reference model layer stack.)
• The netstat -r command displays all routes defined in the route table.
• The netstat -rn command displays IP addresses instead of hostnames.
Example
# netstat -rn
Routing tables
Dest/Netmask Gateway Flags Refs Interface Pmtu
127.0.0.1 127.0.0.1 UH 0 lo0 4136
192.6.30.2 192.6.30.2 UH 0 lan0 4136
192.6.30.0 192.6.30.2 U 2 lan0 1500
127.0.0.0 127.0.0.1 U 0 lo0 4136
default 192.6.30.1 UG 0 lan0 1500
Student Notes
netstat -r shows your host's routing tables. By default, netstat resolves IP addresses to
hostnames. If you wish to view IP addresses in the routing table, use the -n option in
addition to -r.
• The Dest/Netmask field identifies the destination host or network for each table entry.
• The Gateway field identifies the next hop required to get to each of the destinations.
If you have only one LAN interface, you should have a minimum of four entries in your
routing table:
Each time you configure an additional logical interface via the ifconfig command, HP-UX
automatically adds that IP address to your routing table, as well as a route to the network to
which your new interface is attached.
Entries can be added to and removed manually from the routing table via the route
command.
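A hedged sketch of manual route maintenance, reusing the default gateway from the example above and a made-up destination network (192.6.40.0):

# route add net 192.6.40.0 netmask 255.255.255.0 192.6.30.1 1
# route delete net 192.6.40.0 netmask 255.255.255.0 192.6.30.1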
(Slide graphic: OSI reference model layer stack.)
Name: mickie
Address: 192.6.30.3
Student Notes
The nslookup command checks how the local system resolves host names to IP addresses:
$ nslookup
Default Name Server: chris.hp.com
Address: 192.6.21.2
> Ctrl + d
$ nslookup darren
Default Name Server: chris.hp.com
Address: 192.6.21.2
Name: darren.hp.com
Address: 192.6.21.4
> host server        Looks up information for host using name server
> ls -d domain > file Lists all information for domain and redirects it to file
> set all            Prints the current values of the various options that have been
set
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. Disabling the LAN card can cause problems for CDE, too. Before starting the lab, shut
down CDE:
# /sbin/init.d/dtlogin.rc stop
Answer
Answer
3. Given a host name, how can you determine that hostname’s corresponding IP address?
Which IP address is associated with corp’s first interface?
Answer
4. Can you determine the MAC address associated with corp’s first interface, too? Record
this MAC address for future reference.
Answer
Answer
2. Can you still ping other hosts if your LAN interface is "DOWN"? Change the IP
configuration state of your lan0 interface to "DOWN". Which field in the netstat -in
output indicates that the interface is down?
Answer
Answer
4. Now try linkloop'ing to corp's MAC address. Does this work? Explain.
Answer
5. Based on your answer to the previous question, when might linkloop be useful?
Answer
Answer
2. There should be a shell script in your /labs directory called /labs/corrupt.sh. Run
the script. When prompted, enter a number between 1 and 5. Based on your response, the
script will corrupt your LAN configuration in one of five different ways. When the script
terminates, your task is to fix your LAN configuration so the command ping corp
succeeds. Take advantage of all the tools we discussed in this chapter.
3. Once you successfully troubleshoot and fix your configuration, run the script again,
choose a different number, and again fix the resulting problem. If time permits, try each
of the five options provided by the script.
Good luck!
Part 4: Cleanup
Before moving on to the next chapter, restore your network configuration to the state it was
in before this lab.
# /labs/netfiles.sh -r NEW
• Create custom startup and shutdown scripts to start additional services during the boot
process.
NFS NTP
DNS
Student Notes
In earlier chapters, we walked through the process of configuring a LAN interface and
connecting an HP-UX system to a network. After configuring a LAN interface, numerous
services can be configured to use the system's LAN connection. The slide above lists just a
few examples:
• NFS: Makes it possible to access file systems across the network.
Each boot disk contains a boot area that includes an "Initial System Loader" executable. The
ISL calls the HP-UX kernel loader, which then loads the kernel in memory. The kernel does a
sanity check on the root file system, and then calls the init daemon. The init daemon is
responsible for bringing the system to a fully functional state. The init daemon performs
some of the system initialization tasks itself. It checks for corruption in the file systems listed
in /etc/fstab, initializes the system console, and performs several other tasks defined in
/etc/inittab.
init calls on the /sbin/rc program, however, to start most of the system services such as
NFS, DNS, and NTP that are required to bring the system to a fully functional state.
Run Levels
(Slide graphic: run levels traversed during startup and shutdown; run level 0, run level 1 with syncer, run level 2 with syncer and NFS.)
Student Notes
Numerous services must be started to bring an HP-UX system up to a fully functional state.
There may be some dependencies to consider as all of these services are starting. For
example, it would not make sense to start Networked File System functionality until the LAN
cards have been configured. So how does init guarantee that these dependencies are met?
Introduction to Run-Levels
The init daemon brings the system up to a fully functional state in stages known as "run
levels.” A run level is a system state in which a specific set of processes is allowed to run. The
run level your system is at determines what functionality and services are available.
• More services are available at higher run levels.
Run-level 1 Similar to single-user, but file systems are mounted and the syncer is
running. This run level can also be used to perform system
administrative tasks.
Run-level 2 Multiuser state. This run level allows all users to access the system.
Run-level 3 For HP CDE users, HP CDE is active at this run level. Beginning with
HP-UX release 10.20, CDE is the default user desktop environment.
Also, at run-level 3, NFS file systems are exported; this capability is
called Networked Multiuser state.
Run-level 4 For HP VUE users. In this mode, HP VUE is active, provided the
operating system release is 10.30 or below. As of HP-UX 11.00, HP VUE
is no longer supported.
At system shutdown, then, init brings the system down to run-level 0 one run-level at a
time. At each run-level, /sbin/rc has an opportunity to kill whatever services are no longer
needed.
Questions
1. Try the init command to change run-levels a few times. What happened when you
moved up to run-level 4? Did any additional services appear to start?
2. What happened when you moved from run-level 4 to run-level 2? Did any services
disappear?
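One possible command sequence for experimenting with these questions (who -r reports the current run level):

# who -r           # note the current run level
# init 4           # move up to run level 4, then check which daemons started
# who -r
# init 2           # move back down to run level 2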
/sbin/rc*.d Directories
(Slide graphic: the run-level directories rc0.d through rc3.d, with sample link names such as K100dtlogin.rc, K900nfs.server, S340net, S430nfs.client, S500inetd, and S660xntpd.)
Student Notes
At each run level, the init daemon calls /sbin/rc to start any necessary system and
network services. The /sbin/rc program determines which services to start and stop at the
new run level by consulting one of the /sbin/rc*.d directories.
There is one /sbin/rc*.d directory for each defined system run level:
/sbin/rc0.d
/sbin/rc1.d
/sbin/rc2.d
/sbin/rc3.d
The /sbin/rc*.d directories contain "S" and "K" scripts. "S" scripts start services, while
"K" scripts stop (kill) services. Most services started by /sbin/rc have both an "S" script
and a "K" script in the /sbin/rc*.d directories. You can use the ls command to see which
services are started at each run level:
# ls /sbin/rc*.d/*
Questions
1. Do an ls /sbin/rc*.d/*. At which run level are the majority of the system services
and daemons started? Which rc*.d directory contains the most kill scripts?
2. If a service's "S" script is in /sbin/rc2.d, where would you expect to find its "K" script?
Do an ls /sbin/rc*.d/* to see if your hypothesis is true.
/sbin/rc2.d/S730cron
(Slide graphic: the parts of an S/K script name: Run Level (rc2.d), Type (S), Sequence Number (730), Service Name (cron).)
Student Notes
There are several components to each S/K script name.
The first character in each script name simply indicates whether the script should be called
to start a service (S) or kill a service (K).
The second component of each script name is a "sequence number.” When init brings the
system to a higher run-level, /sbin/rc executes the "S" scripts in the appropriate
/sbin/rc*.d directory in ascending order by sequence number. When init brings the
system to a lower run-level, /sbin/rc executes the "K" scripts in the appropriate
/sbin/rc*.d directory in ascending order by sequence number. This allows /sbin/rc to
accommodate dependencies within a run level.
The final component of each script name simply identifies the service or daemon with which
the S/K script is associated.
For example, assume there are four services, W, X, Y, and Z. The S/K script names for these
services would likely be:
/sbin/rc3.d: /sbin/rc2.d:
------------ ------------
S200W K800W
S300X K700X
S400Y K600Y
S500Z K500Z
What appears to be the relationship between start and kill sequence numbers?
NOTE: S/K sequence numbers may range in value from 100 to 900. For custom S/K
startup scripts that you create, HP recommends that you use the generic start
and kill sequence numbers, S900 and K100.
Questions
Consider the following sample S/K scripts and answer the questions that follow:
/sbin/rc2.d/K900nfs.server
/sbin/rc2.d/S340net
/sbin/rc2.d/S430nfs.client
/sbin/rc2.d/S500inetd
/sbin/rc2.d/S660xntpd
1. When moving up to run-level 2, which services would be started, and in which order?
2. When moving down to run-level 2 from run-level 3, which services would be stopped, and
in which order?
3. Write the full path names for the "K" scripts that you would expect to be associated with
each of the "S" scripts shown above.
4. Write the full pathname of the S script that would correspond to the nfs.server kill
script shown above.
/sbin/init.d/* Scripts
• /sbin/init.d contains the scripts that actually start and stop services.
• /sbin/rc*.d/* scripts are just symbolic links to /sbin/init.d scripts!
Student Notes
If you do a long listing of the /sbin/rc*.d directories, you will note that the S/K scripts
aren't really scripts at all.
Each service started by /sbin/rc has a shell script in the /sbin/init.d directory. These
scripts contain the commands necessary to both start AND stop their associated services.
The files in the /sbin/rc*.d directories are actually nothing more than symbolic links to
scripts in the /sbin/init.d directory.
/sbin/init.d/cron:
case $1 in
    start_msg) echo "Start clock daemon"
               ;;
    stop_msg)  echo "Stop clock daemon"
               ;;
    start)     # Commands to start cron
               ;;
    stop)      # Commands to kill cron
               ;;
esac
Student Notes
All of the scripts in the /sbin/init.d directory have essentially the same structure. All are
built around a case statement that evaluates the first argument passed to the script ($1). The
scripts recognize four valid values for this first argument:
stop_msg The stop_msg has much the same purpose as the start_msg
argument. /sbin/rc calls the /sbin/init.d scripts with
stop_msg to generate the shutdown checklist that appears on the
console during system shutdown.
start When called with the start argument, the /sbin/init.d scripts
execute whatever commands are necessary to actually start the
associated service.
stop When called with the stop argument, the /sbin/init.d scripts
execute whatever commands are necessary to actually stop the
associated service.
# /sbin/init.d/cron start
# /sbin/init.d/cron stop
/etc/rc.config.d/* Files
• You may wish to disable a service that’s not needed, or enable a new
service.
• Services may be enabled or disabled via control variables.
• Control variables are defined in files under /etc/rc.config.d.
• /sbin/init.d/* scripts source /etc/rc.config.d/* files to determine control
variable values.
/etc/rc.config.d/cron
Student Notes
In addition to an /sbin/init.d script, most services also have an associated configuration
file in the /etc/rc.config.d directory. These configuration files allow the administrator
to:
• Disable unneeded daemons/services
The control variable usually takes the name of the service it controls.
• Control variable for /sbin/init.d/cron: CRON.
The values of these control variables are set in the configuration files under the
/etc/rc.config.d directory. Some /sbin/init.d scripts have their own, dedicated
configuration files in /etc/rc.config.d, but some services share a common configuration
file.
Examples
/sbin/init.d script /etc/rc.config.d file control variable
------------------- --------------------- ----------------
cron /etc/rc.config.d/cron CRON
nfs.client /etc/rc.config.d/nfsconf NFS_CLIENT
nfs.server /etc/rc.config.d/nfsconf NFS_SERVER
Many configuration files set other parameters used by the startup script, too. Recall that the
/etc/rc.config.d/netconf file, for example, defined the system host name, IP
address, and routing information.
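As a small illustration using the cron example from the table above, an administrator might verify and then change a control variable like this (editing the file by hand; setting the variable to 0 disables the service at the next boot):

# grep CRON /etc/rc.config.d/cron     # check the current setting
# vi /etc/rc.config.d/cron            # change CRON=1 to CRON=0 to disable cron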
(Slide graphic: the S and K links under /sbin/rc2.d and /sbin/rc3.d, such as S340net, S500inetd, K900nfs.server, and S100nfs.server, point to scripts like net, inetd, nfs.server, and nis.client in /sbin/init.d; those scripts read their settings from files such as netconf, netdaemons, nfsconf, and namesvrs under /etc/rc.config.d.)
Student Notes
The above slide summarizes all the files and directories involved in starting and shutting
down processes/daemons at startup and shutdown, and shows how the files and directories
interact.
The graphics recap the concepts presented on the five previous slides, including:
The /sbin/rc*.d directories These directories, also known as run-level directories, contain
the names of the scripts to execute when transitioning to the
various run levels.
The S/K naming convention Within the run-level directories, all scripts follow a
pre-defined naming convention that indicates whether to Start
or Kill a daemon, and the order in which the scripts execute.
The contents of the Each executable script contains instructions for starting and
init.d scripts stopping the processes/daemons associated with the
subsystem.
The /etc/rc.config.d This directory contains customization files for all the
directory executable scripts in /sbin/init.d. Because the
executables should NOT be modified directly, the
customizations for these scripts are kept in separate files
located under this directory.
Student Notes
During the transition from one run-level to another, a checklist of all the actions to be
performed during the transition will appear on the screen. The /sbin/rc program creates
the checklist by calling each execution script with an argument of start_msg (if
transitioning to a higher run level) or stop_msg (if transitioning to a lower run level).
Once the checklist is created, the /sbin/rc program calls each execution script again, this
time with an argument of start or stop. This invocation attempts to either start or stop the
subsystem. The outcome of this second invocation is indicated on the checklist screen (the
far right side) with one of the following statuses:
FAIL The execution script was unable to start (or stop) the subsystem. When an
execution script fails, a message will appear at the bottom of the screen,
stating:
* - An error has occurred!
* - Refer to the file /etc/rc.log for more information
N/A The execution script did not try to start (or stop) the subsystem because it
was disabled in the /etc/rc.config.d configuration file.
1. cp /sbin/init.d/template /sbin/init.d/myservice
2. vi /sbin/init.d/myservice
a. Edit start_msg statement
b. Edit stop_msg statement
c. Edit start statement
i. Change CONTROL_VARIABLE to MYSERVICE
ii. Add command to start your service
iii. Add command set_return
d. Edit stop statement
i. Change CONTROL_VARIABLE to MYSERVICE
ii. Add command to stop your service
iii. Add command set_return
3. vi /etc/rc.config.d/myservice
a. Add single line, MYSERVICE=1
4. ln -s /sbin/init.d/myservice /sbin/rc3.d/S900myservice
ln -s /sbin/init.d/myservice /sbin/rc2.d/K100myservice
Student Notes
Although most services and applications provide standard startup/shutdown scripts, it may
occasionally be necessary to create a custom /sbin/init.d script on your system. This
slide presents a cookbook approach for creating these scripts.
1. HP-UX includes a template /sbin/init.d startup script that you can copy, then modify
for your particular service. Make a copy of the template using your service name as the
new script name.
# cp /sbin/init.d/template /sbin/init.d/myservice
# vi /sbin/init.d/myservice
a. Scroll down to the case statement towards the middle of the script. Look for the
following:
'start_msg')
# Emit a _short_ message relating to running this script
# with the "start" argument; this message appears as part
# of the checklist.
echo "Starting the <specific> subsystem"
;;
b. Scroll down to the stop_msg portion of the case statement that looks like this:
'stop_msg')
# Emit a _short_ message relating to running this script
# with the "stop" argument; this message appears as part
# of the checklist.
echo "Stopping the <specific> subsystem"
;;
'stop_msg')
# Emit a _short_ message relating to running this script
# with the "stop" argument; this message appears as part
# of the checklist.
echo "Stopping the myservice subsystem"
;;
c. Scroll down to the start argument in the case statement that looks like this:
Customize the CONTROL_VARIABLE to match your service name, and add the
command necessary to start the service. If you are starting a daemon that should run
perpetually on your system, be sure to start it in the background. Also add a call to
the set_return function to notify /sbin/rc if the daemon successfully starts:
d. Next, scroll down to the stop argument in the case statement that looks like this:
Change the CONTROL_VARIABLE, and add the command necessary to kill the
service. Some applications include a script that should be used to kill their daemons.
Otherwise, just use the kill command. In either case, be sure to add a call to the
set_return function to notify /sbin/rc if the daemon successfully stops.
2. Create a configuration file and a control variable for your new service:
# vi /etc/rc.config.d/myservice
MYSERVICE=1
3. Create start and kill links for the new service. You may use any sequence number
you wish, but the “don’t care” sequence numbers (S900 and K100) are recommended.
# ln -s /sbin/init.d/myservice /sbin/rc3.d/S900myservice
# ln -s /sbin/init.d/myservice /sbin/rc2.d/K100myservice
4. Test your new startup script by executing both the start and kill links interactively. After
running each script, use ps to verify that it succeeded.
# /sbin/rc3.d/S900myservice start
# ps -ef | grep myservice
# /sbin/rc2.d/K100myservice stop
# ps -ef | grep myservice
5. Finally, try changing run levels a few times, and watch the checklist to verify that your
scripts succeed.
# init 2
# init 3
# init 2
Note that the first init 2 may fail. Can you explain why?
Directions
Work on your own to perform the following tasks.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
# ls /sbin/rc*.d/S*
Answer the questions below, using the output from the ls command above.
3. At which run level does your system set its host name?
4. At which run level does the net script set your IP address?
5. At which run level does the sendmail daemon begin delivering mail?
7. At which run level does the system enable access to ftp, telnet, and other Internet
services? HINT: Internet services are started by the inetd Internet daemon.
Setting a control variable to "1" enables that service at next boot, while setting the control
variable to "0" disables the service at next boot. Control variables are set in configuration files
in /etc/rc.config.d/*. Sometimes the configuration file matches the name of the
service. You can always use the grep command to find the proper configuration file for a
service. For instance, the output from the following grep command suggests that the
sendmail control variable is defined in /etc/rc.config.d/mailservs.
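The original command is not reproduced here, but one illustrative form that yields this kind of result is (the exact flags are an assumption):

# grep -li sendmail /etc/rc.config.d/*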
See if you can find the /etc/rc.config.d configuration files for each of the services
below, and determine which of those services are enabled on your system.
nfs.client
nis.server
nis.client
sendmail
named (DNS)
xntpd
b. Scroll down to the stop_msg portion of the case statement that looks like this:
'stop_msg')
# Emit a _short_ message relating to running this script
# with the "stop" argument; this message appears as part
# of the checklist.
echo "Stopping the <specific> subsystem"
;;
c. Scroll down to the start argument in the case statement that looks like this:
    # Check to see if this script is allowed to run...
    if [ "$CONTROL_VARIABLE" != 1 ]; then
        rval=2
    else
        # Execute the commands to start your subsystem
        :
    fi
    ;;
3. Create a configuration file and a control variable for your new startup script:
# vi /etc/rc.config.d/pfs_mountd
PFS_MOUNTD=1
4. Create a start link to start the new service at run level 3 using the “don’t care” 900
sequence number, and a kill link to kill the new service with sequence number 100 at run
level 2:
# ln -s /sbin/init.d/pfs_mountd /sbin/rc3.d/S900pfs_mountd
# ln -s /sbin/init.d/pfs_mountd /sbin/rc2.d/K100pfs_mountd
5. Test your new startup script by executing both the start and kill links.
# /sbin/rc3.d/S900pfs_mountd start
# ps -ef | grep pfs_mountd
# /sbin/rc2.d/K100pfs_mountd stop
# ps -e
6. Assuming the previous test succeeded, try changing run levels a few times to further test
your scripts.
# init 2
# init 3
# init 2
Note that the first init 2 may fail. Can you explain why?
• Compare and contrast the NFS PV2 and NFS PV3 protocols.
What Is NFS?
Student Notes
• NFS is a service for sharing files and directories across a LAN.
The first module in this course noted that the primary purpose of a LAN is to provide a
mechanism for sharing resources. Disk space is one of the most commonly shared
resources on LANs today. Although many file sharing solutions have been developed over
the years, Sun's Network File System (NFS) protocol is by far the most common in the
UNIX world today. Using NFS, administrators can share executables, data files, and even
home directories across multiple systems on Local- and Wide-Area Networks.
NFS was first released by Sun in the early 1980s and was ported to HP-UX in 1986. Today,
nearly every UNIX platform available supports NFS. In fact, the client portion of NFS has
even been ported to the Microsoft and Macintosh operating systems! File systems shared
from an HP-UX NFS server can be mounted on any one of these NFS clients.
• NFS allows transparent access to files from any node on the LAN.
NFS is virtually transparent to users and applications on the NFS clients. The same file
manipulation commands (cp, mv, ls, cat, and so on) and system calls (open(),
write(), read(), and so on) that are used to access files on a local HFS or VxFS file
system can also be used to access files on an NFS file system. When users cd to
/home/user1, they may be accessing a directory physically stored on a local logical
volume, or on a disk attached to an NFS server elsewhere on the network.
The remainder of this chapter introduces some key NFS concepts and terminology, while the
next two chapters discuss NFS configuration issues.
Student Notes
NFS can be used to share almost any file on an HP-UX system. However, some files and
directories are better candidates than others.
• Storing home directories on an NFS server offers many advantages. Users can log in on
any workstation on the LAN and have access to their home directory. Administrators are
saved the drudgery of scheduling backups on individual workstations if users store all
their files on a central server. Disk space management is simplified since users store files
on the server rather than their local disks. However, there are disadvantages to this
approach. If the server goes down, users will be able to login, but will be placed in the /
directory rather than their normal home directories. Storing home directories on an NFS
server may also dramatically increase network traffic. The root home directory should
always be stored in a local file system to ensure that it is available even when the network
is inaccessible.
• Application directories under /opt can be stored on the NFS server. Doing so provides a
central point of administration and saves disk space on users' desktop machines. If you
choose to share executables via NFS, make sure you do not mount a file system full of
Solaris executables on your HP-UX box, or vice-versa! Although NFS provides
transparent access to files across platforms, the code contained in those files may be
platform-specific!
• When disk space was more expensive, some administrators stored the /usr/lib,
/usr/share, /usr/local, and /usr/contrib directories on NFS servers. As disks have
become cheaper, most administrators have chosen to store these directories on users'
local disks to minimize network traffic.
• Data files shared by multiple nodes are ideal candidates for sharing via NFS, too.
• System-specific configuration files under /etc should not be shared via NFS.
• With the exception of the email directory, /var/mail, /var is rarely shared.
• /sbin contains executables used in the early stages of the boot process. Since these
programs run before network connectivity is established, /sbin should always be stored
on a local disk.
Student Notes
Hosts in an NFS environment can be configured as NFS servers, NFS clients, or both.
NFS Servers
A host on which a shared file system physically resides is known as an NFS server. The NFS
server administrator can choose which directories and files should be made available to
other hosts.
• The administrator can choose to share an entire file system, such as /home, or /opt.
• The administrator can choose to share only one or more subdirectories within a file
system. For instance, instead of sharing the entire /home file system, the administrator
can simply choose to share the /home/user1 and /home/user2 subdirectories.
• The administrator can even choose to share a single file, such as /home/user1/data!
File systems, directories, and files that have been made available to other hosts via NFS are
said to be "exported.”
NFS Clients
Hosts that access NFS file systems from an NFS server are called NFS clients. NFS file
systems must be mounted on a local mount point directory in much the same way that a local
logical volume is mounted on a mount point directory. After an NFS file system is mounted
on a mount point directory, all attempts to access files and directories below that mount
point are automatically forwarded to the NFS server.
The NFS client administrator may choose to mount all or part of an exported file system. For
instance, if the NFS server administrator exports /home, the client administrator may choose
to mount the entire /home file system via NFS, or a single subdirectory from within /home.
Student Notes
The NFS remote mount capability is implemented via "Remote Procedure Calls" (RPCs)
developed by Sun Microsystems.
The RPC mechanism makes it possible for a client system to execute a procedure remotely
on an NFS server. Most of the system calls that applications use to access local file systems
have closely related RPC calls. For instance, applications use the read() system call to read
from a file; NFS clients use a read() RPC to read from a file on an NFS server. Applications
use the write() system call to write data to a local file; NFS clients use a write() RPC to
write data to a file stored on an NFS server. These are just a couple of the RPCs recognized
by an NFS server.
When an application executes a file access system call, the kernel automatically determines if
the target file is on a local device that can be accessed directly, or an NFS file system that
may require an RPC call. If the target file is on an NFS file system, the client's kernel
automatically sends an appropriate RPC request to the NFS server. Thus, NFS is transparent
to your applications and processes.
• All data passed to and from RPC procedures is encoded using a platform-independent
format called the External Data Representation (XDR) standard. This makes it possible
for hosts using different byte ordering, size, and word alignments to pass data back and
forth successfully.
• Although NFS is the most common service that uses Sun's remote procedure calls, other
services, such as NIS, use RPCs, too.
Ports
(Slide graphic: an incoming RPC request addressed to Prog# 100003 (nfs) arrives at rpcbind on port 111. The portmap/rpcbind daemons are responsible for routing all incoming RPC requests to the appropriate RPC daemons on the NFS server, for example nfsd on port 2049 or rpc.mountd on a dynamically assigned port such as 4955.)
Student Notes
RPCs use sockets and the TCP/UDP transport protocols to pass data between NFS clients
and servers. At boot time, the NFS server launches several RPC programs to handle incoming
RPC requests from clients. Each RPC program listens for requests on a separate, randomly
chosen port number.
If the RPC programs listen for incoming requests on randomly chosen port numbers, how do
the clients know to which port number to address their requests? When the RPC programs
start up, the rpcbind daemon registers which RPC programs are running on which ports.
RPC clients simply send their RPC requests to the rpcbind daemon, which always runs on
port number 111. rpcbind then forwards the incoming RPC requests to the appropriate port
numbers.
Clients specify the RPC program they wish to contact by "Program Number.” The /etc/rpc
file associates RPC programs with their well-known program numbers. Although an RPC
program's port number may vary from system to system, and reboot to reboot, the RPC
program numbers are consistent across all platforms and hosts. This ensures that Solaris NFS
clients can successfully communicate with HP-UX NFS servers, and vice versa.
This mechanism for dynamically binding RPC programs to port numbers is desirable because
the range of reserved port numbers is very small, and the number of potential RPC programs
is very large.
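You can see the current program-number-to-port bindings that rpcbind has registered by using the rpcinfo command (server1 is a placeholder host name):

# rpcinfo -p server1          # list registered RPC programs, versions, protocols, and ports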
If rpcbind aborts or terminates on SIGINT or SIGTERM, it will write the current list of
registered services to /tmp/portmap and /tmp/rpcbind.file. Starting rpcbind with
the -w option instructs it to look for these files and start operation with the registrations
found in them. This allows rpcbind to resume operation without requiring all RPC services
to be restarted.
Example /etc/rpc
##
# file of rpc program name to number mappings
##
When my clients request access to a file, I just send back a “file handle”.
I don’t keep track of which files my clients are using.
lookup(/home/user1/data)
Implications
• Improved performance
• NFS servers can reboot with minimal impact on their clients
• NFS clients can reboot with minimal impact on their servers
• Stale file handle errors may occur if a client removes a file being used by other clients
• File locking and other "stateful" operations are more complicated
Student Notes
One key difference between NFS and local disk-based file systems is that NFS operates in a
"stateless" manner, while local file systems operate in a "stateful" manner.
When applications open files on a local disk-based file system, the kernel uses "file
descriptors" to track which processes are using which files. When a user removes a file from
a local file system, the file's data blocks are not actually de-allocated until the last user using
the file is finished. Similarly, if the administrator attempts to unmount a local file system that
is still being used by a user, the umount command fails with a "device busy" message. In
other words, local file systems are accessed in a "stateful" manner; the kernel tracks which
files and directories are being used by whom, and prevents one user's requests from
interfering with others' requests.
NFS, on the other hand, operates in a "stateless" manner. When a client opens a file on an
NFS server via the lookup() RPC, the server sends the client a "file handle" derived from
the requested file's inode number. The server does not record the fact that the file is in use,
nor does it create a file descriptor to record which portion of the file the client is currently
accessing. Since the server does not maintain state, a client may possibly remove a file that
another client still has open for reading. An NFS client can even remove another client's
present working directory! Both of these situations result in "stale file handles": file handles
that reference files or directories that no longer exist.
• Advantage: NFS servers can reboot with minimal impact on their clients. After a reboot,
NFS servers can immediately resume processing as if nothing had happened. Client file
handles should remain unchanged, and each client simply re-transmits any access
requests that went unanswered while the server was down. If NFS were a stateful
protocol, some sort of complicated recovery process would be required to determine
which clients had files open at the time of the reboot.
• Advantage: NFS clients can reboot with minimal impact on their servers. Since the server
does not attempt to track which clients have open files, a downed client requires no
action on the part of the server.
• Disadvantage: Stale file-handle errors may occur if a client removes a file being used by
other clients. Since the NFS server does not attempt to track which files are being used
by its NFS clients, NFS allows clients to remove files that are still in use by other clients.
• Disadvantage: File locking and other “stateful” operations are more complicated. Some
applications use file locks to ensure that only one process at a time may access critical
files. Since NFS does not track which files are in-use, file locking becomes more
complicated. File locking is, however, possible via two daemons that are included with
NFS: rpc.lockd and rpc.statd. Clients that wish to lock a region of a file may send a
request to the server's rpc.lockd daemon. rpc.lockd uses a "semaphore" to mark the
requested file region "locked.” The server's rpc.statd daemon begins polling the client
at regular intervals; if the client reboots unexpectedly, the server removes the lock so
other clients can access the file.
NFS only implements "advisory" locks. When an application attempts to access a file, the
onus is on the application to check for existing advisory locks on the file; NFS does not
forcefully prevent other processes from accessing a locked file region.
Student Notes
HP supports two different NFS protocol versions. HP-UX version 10.20 supported NFS
Protocol Version 2 (PV2). HP-UX version 11.00 introduced support for NFS Protocol Version
3 (PV3), but retained backward compatibility with PV2. Servers running PV3 still accept
mount requests from PV2 clients, and PV3 clients can still successfully mount file systems
from PV2 servers. Some PV3 features have been back-ported to HP-UX 10.20.
• Improved performance. The NFS caching algorithms were enhanced for PV3, which may
lead to significant performance gains in some environments.
• Large file support. One of the most beneficial features of NFS PV3 is its ability to support
large files. NFS Version 2 supported a 32-bit file size, while NFS Version 3 supports a
64-bit file size. The maximum file size on NFS PV2 is only 2 Gigabytes, while NFS PV3
supports a maximum file size of 128 Gigabytes.
• AutoFS support. NFS PV2 included a service called "automounter,” which automatically
mounted and unmounted NFS file systems on an as-needed basis. NFS PV3 includes a
more flexible, more robust version of automounter called AutoFS. Automounter and
AutoFS will be discussed in detail later in the course.
• NFS over TCP support. NFS PV2 and the initial release of NFS PV3 used the UDP
protocol to transmit RPC traffic between NFS servers and clients. UDP functions well on
local area networks, but often generates excessive timeouts and other performance
problems on wide area networks. In February 2000, HP released a patch for 11.0 NFS PV3
that supports NFS over TCP (see the text below for details). TCP is the default NFS
transport protocol at HP-UX 11i. The NFS over TCP functionality is not available for
HP-UX 10.20.
1. Look on the http://www.itrc.hp.com website for the latest 11.00 NFS over TCP
patch. Install the patch and all its dependencies according to the .text file included with
the patch.
# vi /etc/rc.config.d/nfsconf
NFS_TCP=1
# /sbin/init.d/nfs.server stop
# /sbin/init.d/nfs.client stop
# /sbin/init.d/nfs.client start
# /sbin/init.d/nfs.server start
After going through this procedure, your host will attempt to use TCP whenever possible.
If a server or client does not support NFS over TCP, your host will automatically revert to
NFS over UDP.
(Slide graphic: NFS links UNIX systems to UNIX systems, while CIFS links UNIX and Windows systems. CIFS/9000 provides an easier, more flexible mechanism for sharing files and directories between HP-UX and Windows PCs using Microsoft's CIFS protocol.)
Student Notes
NFS is the de facto standard for file sharing among UNIX systems, and NFS client
functionality has even been ported to Microsoft Windows. However, since NFS is not a
native Windows protocol, an NFS server does not provide all of the functionality provided by
a regular Windows NT file server.
Finally, NFS provides no functionality for exporting Windows file systems back to UNIX
clients.
CIFS/9000
Now there is an alternative for administrators who wish to share file and print services in a
heterogeneous environment. HP-UX 11.x supports a new product called CIFS/9000 that
provides a full implementation of Microsoft's "Common Internet File System" protocol, which
is used by Windows 95, Windows 98, Windows 2000, and NT for sharing file and printer
resources. Using CIFS/9000, HP-UX and Microsoft Windows systems can seamlessly and
transparently share resources.
• HP includes CIFS client software in the CIFS/9000 product. This software makes it
possible to mount file shares from any Samba or Microsoft server on an HP-UX client
using the /etc/fstab file and the standard UNIX mount command. File systems
mounted via the CIFS client software may be accessed using all the standard UNIX
utilities and system calls.
• Finally, the CIFS/9000 product includes a Pluggable Authentication Module (PAM) library
to allow users to log onto their HP-UX systems using their Windows domain usernames
and passwords.
CIFS/9000 is not available for HP-UX 10.x but is included for no additional charge on the
HP-UX 11.x Applications CD.
The remaining notes on this slide describe the steps required to configure a simple CIFS
server and client. For more information on Samba and CIFS, sign up for one of HP's UNIX/NT
integration courses, read HP's CIFS documentation on http://docs.hp.com, or purchase
O'Reilly and Associates, Using Samba (ISBN 1-56592-449-5).
1. Install the CIFS/9000 server bundle from the HP-UX 11.x Applications CD.
# mkdir /cdrom
# mount /dev/dsk/cxtxdx /cdrom #use your CDROM's device file
# swinstall -s /cdrom
2. Configure the SAMBA control variable to enable the Samba daemons after every reboot.
# vi /etc/rc.config.d/samba
RUN_SAMBA=1
3. Customize the Samba configuration file. Replace the hostname parameter with your
server's hostname. Replace the WORKGROUP parameter with your clients' workgroup
name or NT domain name. Replace the 128.1. parameter with a space-separated list of
subnets that need access to the shares on this server.
# vi /etc/opt/samba/smb.conf
[global]
netbios name = hostname
workgroup = WORKGROUP
server string = Samba Server
hosts allow = 128.1.
security = user
encrypt passwords = yes
[homes]
comment = Home Directories
writeable = yes
browseable = yes
[tmp]
comment = Temporary Directory
path = /tmp
writeable = yes
browseable = yes
4. Run the Samba testparm program to search for syntax errors in your configuration file.
This will also list all of the default parameters that will be set for you automatically.
# /opt/samba/bin/testparm
5. Create a Samba password file. This file determines which client users will be able to
access your CIFS shared directories.
# touch /var/opt/samba/private/smbpasswd
# chmod 500 /var/opt/samba/private
# chmod 600 /var/opt/samba/private/smbpasswd
6. Add a few of the users from your UNIX password file to the Samba password file. The
usernames specified must already exist in the /etc/passwd file.
# /opt/samba/bin/smbpasswd -a user1
# /sbin/init.d/samba start
8. Use the smbclient utility to verify that your Windows domain/workgroup and username
are set properly and to list the shares that have been made available to clients. You can
replace the "%" sign with a specific username if you wish to see which shares are
available for a specific Windows user.
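The exact smbclient invocation is not shown above; one commonly used form (the flags are an assumption) lists the shares offered by your server, with "%" standing in for an anonymous username:

# /opt/samba/bin/smbclient -L hostname -U%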
1. Install the CIFS/9000 Client bundle from the HP-UX 11.x Applications CD.
# mkdir /cdrom
# mount /dev/dsk/cxtxdx /cdrom #use your CDROM's device file
# swinstall -s /cdrom
# vi /etc/opt/cifsclient/cifsclient.cfg
domain = "WORKGROUP"
3. Configure the RUN_CIFSCLIENT variable to ensure that the client daemon starts after
every system boot, then run the startup script to start the daemon now.
# vi /etc/rc.config.d/cifsclient
RUN_CIFSCLIENT=1
# /sbin/init.d/cifsclient start
# mkdir /homes
5. Add the CIFS file system(s) to your /etc/fstab file. (Replace "server" with your Samba
server's hostname.)
# vi /etc/fstab
server:/homes /homes cifs defaults 0 0
6. Mount the new CIFS file systems. If you choose to use CIFS on a production box, you
would probably include this mount command in the same startup script that you use to
execute the cifsclient start command.
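A minimal sketch of that mount step, mirroring the umount command used later in this procedure:

# mount -aF cifs              # mount all cifs entries listed in /etc/fstab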
7. CIFS behaves somewhat differently than NFS. Once an NFS file system is mounted, any
user on the system can access that file system. In CIFS, access to file shares is granted on
a user-by-user basis. Thus, even though you have already mounted your CIFS file systems,
users cannot access those mounted file systems without providing a valid CIFS password.
Log in as a CIFS user using one of the usernames and passwords you added to the
smbpasswd file on the server.
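The login command itself is not shown above; assuming a cifslogin counterpart to the cifslogout utility used below, it might look like this (server and user1 are the placeholder names used in this procedure):

# /opt/cifsclient/bin/cifslogin server user1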
8. List the CIFS shares to which you have access now that you are logged in. Explore one of
the shares with the cd and ls commands.
# cifslist -A
# ls /homes
9. When you are done with the CIFS file systems, terminate your connection to the CIFS
server with the cifslogout command. Then unmount the CIFS file systems.
# /opt/cifsclient/bin/cifslogout server
# umount -aF cifs
2. Verify that you are a member of the same workgroup as your SAMBA server.
Start -> Settings -> Control Panel -> Network -> Identification
3. Launch the Network Neighborhood tool from the Desktop; an icon should appear for
your SAMBA server's hostname. Double-click on the SAMBA server icon.
4. A username dialog box should pop up. Enter one of the usernames and passwords that
you created on the SAMBA server. When you click OK, your SAMBA server shares should
appear!
• Export file systems and determine access privileges for those file systems.
Student Notes
If you decide to implement NFS, the first step is to decide exactly which file systems should
be shared. The slide above highlights several issues you should consider.
• Which files and directories should be shared? Do you want to manage home directories,
executable directories, data directories, or all of the above? Remember that disk-based
file systems generally provide better performance than NFS file systems. Also, note that
NFS can place a tremendous strain on your network infrastructure. The more file systems
you share via NFS, the greater the load NFS will place on your NFS servers and network
infrastructure.
• What is the client-to-server ratio? Generally speaking, as the number of NFS clients
increases, the load on the NFS server grows. If you have many clients, it may be
necessary to configure multiple NFS servers to share the load. The characteristics of your
applications should be considered when making this decision. If the application tends to
be disk-use intensive, and performance is important, you should aim for a lower client-to-
server ratio. If the application is less disk-intensive, it may be possible for many more
clients to share the same server.
• Which system should be used as the NFS server? Ideally, choose the biggest, fastest
system you have to be your NFS server. An underpowered NFS server may prove to be a
bottleneck for all of the NFS clients. Your HP Sales representative should be able to help
you size your NFS server appropriately.
• What are the implications if the server goes down? NFS provides a single point of
administration; however, that single point of administration becomes a single point of
failure if the NFS server crashes! If the NFS server does go down, what impact will that
have on your clients? If all of your users' home directories are stored on the NFS server,
no clients will be able to use their workstations effectively until the server comes back up
again! Ideally, you should prevent server downtime by administering the server carefully
and implementing HP's MC ServiceGuard and MirrorDisk/UX high availability solutions.
• What superuser access will be allowed? By default, the administrator of an NFS client is
not allowed root access to the files stored on an NFS server. However, this security
feature can be disabled on a client-by-client basis. Which clients require root access to
your NFS file systems? Are the root users on those clients properly trained?
All of these questions need to be answered before you begin configuring NFS!
Student Notes
This slide provides an overview of the steps required to configure NFS servers and clients. The
remaining slides in the chapter discuss each step in detail. Note that NFS can be configured
entirely via the SAM GUI/TUI interface; however, to better illustrate how NFS functions,
the slides and notes in this course concentrate on the command-line configuration method.
server:/etc/passwd                       client:/etc/passwd
user1:…:101:100:…:/home/user1:…          user1:…:103:100:…:/home/user1:…
user2:…:102:100:…:/home/user2:…          user2:…:102:100:…:/home/user2:…
user3:…:103:100:…:/home/user3:…          user3:…:101:100:…:/home/user3:…
Student Notes
Before you begin sharing files via NFS, it is critical to ensure that your UID and GID numbers
are consistent across all the hosts in your NFS environment.
UNIX file systems identify file owners by UID number, not by username. In the example on
the slide, UID 101 owns user1’s home directory. UID 102 owns user2’s home directory.
UID 103 owns user3’s home directory. These username/UID pairings are reflected in the
server's /etc/passwd file.
Unfortunately, the NFS client's /etc/passwd file disagrees with the NFS server's
username/UID assignments. As far as the client is concerned, all files owned by UID 101 are
associated with user3, and all files owned by UID 103 are associated with user1. In this
situation, it is very likely that user1 would be able to access the /home/user3 home
directory but not his or her own /home/user1 directory.
This configuration must be avoided! Users who have logins on multiple systems must have
the same UID and GID on all of those systems.
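As a quick way to spot such inconsistencies, you can compare the password files of two hosts. The sketch below assumes a hypothetical host named masterhost and working remsh access between the systems; any lines reported by diff identify accounts whose UID, GID, or other fields differ:

# remsh masterhost cat /etc/passwd | diff - /etc/passwd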
There are two ways to maintain consistent UIDs and GIDs across the network.
A cron job can be scheduled on each client to automate the propagation process:
# vi /root/cppasswd
#!/usr/bin/sh
# This script is used to copy files from the master machine
# to the localhost.
MASTER=masterhost
echo "Copying files from $MASTER:"
echo group; rcp -p $MASTER:/etc/group /etc/group
echo passwd; rcp -p $MASTER:/etc/passwd /etc/passwd
# chmod +x /root/cppasswd
# crontab -e
0 1 * * * /root/cppasswd | /usr/bin/mail root
The script above assumes that the master server's ~root/.rhosts file allows password-free
access from all other hosts on the network. This approach has several drawbacks:
• All updates must be made on the master server. If a user changes his or her password on
any other host on the network, the change will be overwritten the next time the script
executes.
• The same root password must be used on all hosts in the NFS environment, since the root
account /etc/passwd entry is propagated out to all the hosts every morning. Many
administrators prefer to assign unique root passwords on each system to improve
security.
If you change existing UID or GID assignments in this way (for example, by overwriting a
client's /etc/passwd with the master's copy), files restored from the client's existing backups
will have the wrong ownership. Make sure you save a copy of /etc/passwd to passwd.old
before making the change.
Student Notes
The LAN/9000 (networking) subsystem and the NFS subsystem must be compiled into the
server's kernel in order for NFS to work. There are several ways to verify whether the
subsystems are present in the kernel.
If either subsystem is missing, use SAM to reconfigure the kernel, then reboot.
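For example, you can search the kernel configuration file for the nfs keyword and confirm that the NFS product is installed (the same checks used in the lab at the end of this chapter; exact output varies by release):

# grep nfs /stand/system
# swlist -l product | grep -i nfs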
Startup scripts: /sbin/init.d/nfs.server and /sbin/init.d/nfs.client (run via the links in /sbin/rc3.d/*)

/etc/rc.config.d/nfsconf:
NFS_CLIENT=1
NFS_SERVER=1       #Required!
NUM_NFSD=16        #Required!
NUM_NFSIOD=16
PCNFS_SERVER=1     #Optional!
START_MOUNTD=1     #Required!
NFS_TCP=1          #Optional!
Student Notes
After configuring the NFS subsystem in the kernel, you must ensure that the required NFS
server daemons are started automatically during the boot process. NFS daemons, like most
daemons in HP-UX, are started via startup links in the /sbin/rc*.d directories, which
point to the actual startup scripts in the /sbin/init.d directory.
All three of these startup scripts share a common configuration file called
/etc/rc.config.d/nfsconf. The NFS startup scripts read this configuration file at
startup time to determine how and if NFS functionality is configured on your system. The
slide above highlights the variables in /etc/rc.config.d/nfsconf that relate to NFS
server functionality. A later slide will discuss the variables used to configure NFS client
functionality.
NFS_SERVER=1 Set this variable to "1" in order to enable NFS server functionality. If
this variable is set to "0,” the NFS server daemons will not be started
during the boot process.
NUM_NFSD=16 Every NFS client request to open, read, write or otherwise access a file
or directory on an NFS file system is processed by an nfsd daemon
running on the NFS server. Most NFS server administrators run several
nfsd daemons in parallel to enable the server to process multiple
client requests simultaneously. Generally speaking, as the number of
NFS clients increases, the number of nfsd daemons required to
service those clients will increase as well. The NUM_NFSD variable
determines how many nfsd daemons should be started at boot time.
In HP-UX 10.20 and standard HP-UX 11.00, the variable defaults to "4".
HP-UX 11i systems and HP-UX 11.00 systems that have the "NFS over
TCP" patch installed, function a bit differently. TCP NFS requests are
handled by a single, multi-threaded nfsd daemon. UDP NFS requests
are still handled by multiple independent nfsd processes. On these
systems that support NFS over TCP, the number of nfsd daemons
started to handle UDP NFS requests will be set equal to the greater of
either (a) four times the number of active CPUs or (b) the value of the
NUM_NFSD variable in /etc/rc.config.d/nfsconf. In either case,
one additional nfsd will be started to handle TCP NFS requests. In
HP-UX 11i, the default value of the NUM_NFSD variable is 16, which
yields 17 nfsd's in the process table.
PCNFS_SERVER=1 Although NFS was originally developed to share files among UNIX
systems, several vendors now offer NFS client software for the
Microsoft Windows operating systems. Sharing files with Windows
clients is complicated by the fact that Windows usernames and IDs are
entirely different from UNIX usernames and UIDs. By default, the NFS
server finesses this issue by granting all Windows clients the access
rights associated with UNIX UID -2, user "nobody.” Typically, this UID
has very few access rights on a UNIX system.
Setting PCNFS_SERVER=1 starts the rpc.pcnfsd daemon, which prompts the
Windows client for a UNIX username and password each time it
mounts an NFS file system. Note that rpc.pcnfsd is not required in
order for Windows clients to mount NFS file systems; it is required
only if the Windows users need regular user access rights to
the files on the NFS server. If your server does not have any Windows
clients, leave PCNFS_SERVER set to the default value of 0.
NFS_TCP=1 If you are running HP-UX 11.00 and have installed the NFS over TCP
patch, the TCP functionality must be enabled by setting NFS_TCP=1 in
/etc/rc.config.d/nfsconf (if the variable doesn't yet exist, add
it to the end of the file). After making this change, both the NFS server
and client daemons must be stopped and restarted. At HP-UX 11i, this
variable is no longer used; NFS over TCP is enabled by default.
NOTE: If your system requires client and server functionality, you must configure
both the server variables described here, and the client variables described
later in the chapter.
Student Notes
After configuring the /etc/rc.config.d/nfsconf file as described on the previous page,
you can either reboot your system or manually run the NFS server startup script to stop and
restart the NFS server daemons:
# /sbin/init.d/nfs.server stop
# /sbin/init.d/nfs.server start
portmap This daemon, used in HP-UX 10.20 and earlier releases, converts RPC
program numbers into port numbers. When an RPC server program
starts, it registers its RPC program number, version, and port number
with portmap. All RPC requests from clients are initially sent to the
portmap daemon on port number 111. portmap compares the "RPC
Program Number" in the incoming packet against the list of registered
program numbers to determine which port the RPC request should be
forwarded to.
rpcbind This daemon is used in HP-UX 11.00 and beyond as a replacement for
portmap.
nfsd The NFS server daemons respond to clients' file system access
requests. When a client program needs to interact with a remote file
system, it sends a request to one of the server's nfsd processes.
rpc.mountd This RPC daemon answers clients' file system mount requests. Users
may also query this daemon to determine which file systems have been
exported or mounted.
rpc.statd When an NFS client places a lock on a file via rpc.lockd, the server's
rpc.statd daemon is responsible for periodically verifying that the
client is still functioning. If the client reboots unexpectedly,
rpc.statd automatically removes all locks placed by the client to
allow other processes to again access the client's locked files.
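As a simple, illustrative sanity check, you can confirm from the process table that these daemons started, and count the nfsd processes (compare the count against the NUM_NFSD discussion earlier in this chapter):

# ps -e | grep -e nfsd -e rpc
# ps -e | grep nfsd | wc -l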
Student Notes
After starting the NFS server daemons, you must configure the /etc/exports file to
specify which file systems you want to share with your NFS clients.
Each line in the /etc/exports file has two fields. The first field identifies a file system,
directory, or file that should be made available to NFS clients. NFS provides a great deal of
flexibility. If the first field identifies a directory that serves as a mount point for a local file
system, that entire file system is made available to clients. If you only want to share a
subdirectory tree within a file system, specify that subdirectory path in the first field. In fact,
you can even export a single file!
The second field determines which clients can mount the file system and what those clients
are allowed to do in the file system. Clients that are granted "read-only" access can view the
files and directories in the file system, but cannot make changes. Clients that are granted
"read-write" access can both view and modify the files and directories in the file system. Note
that the options in /etc/exports never mention "execute" permission. As far as the export
options are concerned, clients that have "read" access should be allowed to read executable
code into memory and execute it.
The export options supplement, but do not replace, normal UNIX file permissions. If the
permissions on a file are set to "000", none of the clients will be allowed to view, modify, or
execute the file regardless of the export options specified in /etc/exports.
The table below shows the most common export option combinations. The first column
shows several common combinations of export options. The remaining three columns
indicate which clients would be able to access each file system, and how, given the access
option listed on the left (rw="read and write access allowed", ro="read-only access allowed").
Look at the table, then see if you can guess which clients will be able to mount each file
system on the slide. (The slide examples are explained at the end of the notes accompanying
this slide.)
Table 1
By default, root on the client systems is treated as user nobody when processing files on
NFS servers. In order to grant NFS clients root access, the root= export option must be
used in /etc/exports. If a file system is exported to a client with the root option, then the
user root on that client will have root permission on that file system. The table below
shows several examples using the root export option:
Table 2
Syntax of /etc/exports
A more formal description of the /etc/exports follows below. Export options in
/etc/exports are preceded with a dash, and are separated by commas. Some export
options require a list of hostnames as arguments. Hostnames in these lists must be separated
by colons.
As noted above, by default root on NFS clients is treated as the anonymous user "nobody",
whose /etc/passwd entry assigns UID -2:

nobody:*:-2:-2::/:
2. /home -access=oakland:la
Exports /home with read-write access for oakland and la. Other hosts will not be allowed
to mount the file system at all.
3. /opt/games -ro
Exports the games directory with read-only access for all hosts.
4. /opt/appl -access=oakland:la,ro
Exports with read-only access for oakland and la. No other clients will be allowed to
mount the file system.
5. /usr/local -rw=oakland
Exports with read-write access for oakland, and read-only access for all other hosts.
6. /etc/opt/appl -root=oakland,access=la
Grants root on oakland UID 0 access to the file system. Also allows read-write access
for host la. Other hosts will not be allowed to mount the file system at all.
CAUTION: Export directories and file systems on an as-needed basis only. Always use
export options to restrict access rights.
# exportfs
Student Notes
Simply adding a file system or directory to /etc/exports does not immediately make that
file system available to clients. Any time the /etc/exports file is modified, the
administrator must notify the rpc.mountd daemon that a change has occurred by executing
the exportfs command:
# exportfs -a
The superuser can execute the exportfs command at any time to alter the list or
characteristics of exported directories. It must be invoked every time /etc/exports is
modified.
If an NFS mounted directory is unexported via exportfs -u, clients that have already
mounted the file system will receive NFS file handle errors when they attempt to access the
unexported file systems. The client administrators can remove the "stale" file system from the
mount table via the umount command.
Internally, the exportfs command functions by simply adding and removing entries from a
file called /etc/xtab which the rpc.mountd daemon uses to determine which file systems
have been made available to which clients. Exporting a file system adds a line to
/etc/xtab, and unexporting a file system disables or removes an entry from the
/etc/xtab file. Executing exportfs without any options simply displays the contents of
the /etc/xtab file.
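For example (a brief sketch using the /opt/games directory from the examples above):

# exportfs -a              (export everything listed in /etc/exports)
# exportfs -u /opt/games   (unexport a single directory)
# exportfs                 (display the current contents of /etc/xtab)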
NOTE: The server must have the directory locally mounted before it can be exported.
/usr/share/man
/opt/games -ro
Which clients currently have file systems mounted from the server?
# showmount -a [server]
client:/usr/share/man
client:/opt/games
Student Notes
After completing the NFS server configuration, check your work.
# rpcinfo -p [servername]
At a minimum, make sure that you see mountd and nfs in the resulting list. If either of these
programs is missing, you may need to restart the NFS server functionality:
# /sbin/init.d/nfs.server stop
# /sbin/init.d/nfs.server start
Look in the second column of the output to determine which versions are supported. Does
your server's nfs program support NFS PV3? The third column indicates which transport
protocol(s) your nfs daemon supports. Does your system support NFS over TCP?
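Output similar to the following (illustrative only; ports and version numbers will vary) indicates a server that supports NFS PV2 and PV3 over both UDP and TCP:

   program vers proto   port  service
    100000    2   udp    111  rpcbind
    100005    3   udp    714  mountd
    100005    3   tcp    715  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs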
# showmount -e
The command should list all exported file systems, and the clients that have access to each
file system. If file systems or clients are missing, you may need to re-execute the exportfs
command.
# exportfs
Which clients currently have file systems mounted from the server?
If you want to determine which clients are actually using your NFS file systems, execute the
showmount -a command:
# showmount -a
This command displays the contents of the /etc/rmtab (remote mount table) file in a
human-readable format. Every time a client mounts a file system, the rpc.mountd daemon
adds a line to the remote mount table in /etc/rmtab. Theoretically, the rpc.mountd
daemon then removes clients from rmtab as file systems are unmounted. However, if a client
crashes or loses connectivity to the NFS server, showmount -a may list clients that no
longer have your file systems mounted. You can purge all entries from the /etc/rmtab file
by executing:
# > /etc/rmtab
Student Notes
NFS clients, like NFS servers, must have the LAN and NFS subsystems configured in the
kernel.
SAM provides the simplest mechanism for viewing and modifying the kernel:
On HP-UX 10.x systems, you can also check from the command line by searching the kernel
configuration file for the nfs keyword, as shown earlier in the server configuration section.
Startup scripts: /sbin/init.d/nfs.client and /sbin/init.d/nfs.server (run via the links in /sbin/rc3.d/*)

/etc/rc.config.d/nfsconf:
NFS_CLIENT=1       #Required!
NFS_SERVER=1
NUM_NFSD=16
NUM_NFSIOD=16      #Optional!
PCNFS_SERVER=1
START_MOUNTD=1
NFS_TCP=1          #Optional!
Student Notes
After configuring NFS client functionality in the kernel, there are several variables in the
/etc/rc.config.d/nfsconf file that may need to be modified to enable and configure
your NFS client:
NFS_CLIENT=1 Set this variable to "1" in order to enable NFS client functionality. If
this variable is set to "0", the NFS client daemons will not be started
during the boot process.
NUM_NFSIOD=16 Determines how many biod (asynchronous block I/O) daemons are
started at boot time to handle buffer cache read-ahead and
write-behind on the NFS client.
NFS_TCP=1 If you are running HP-UX 11.00 and have installed the NFS over TCP
patch, the TCP functionality must be enabled by setting NFS_TCP=1 in
/etc/rc.config.d/nfsconf (if the variable doesn't yet exist, add
it to the end of the file). After making this change, both the NFS server
and client daemons must be stopped and restarted. At HP-UX 11i, this
variable is no longer used; NFS over TCP is enabled by default.
NOTE: If your system requires client and server functionality, you must configure
both the client variables listed here and the server variables described earlier
in the chapter.
Student Notes
After modifying the /etc/rc.config.d/nfsconf file, you can either reboot or manually
execute the NFS client startup script to stop and restart the NFS client daemons:
# /sbin/init.d/nfs.client stop
# /sbin/init.d/nfs.client start
portmap This daemon, used in HP-UX 10.20 and earlier releases, converts RPC
program numbers into port numbers. When an RPC server program
starts, it registers its RPC program number, version, and port number with portmap.
All RPC requests from clients are initially sent to the portmap daemon
on port number 111. portmap compares the "RPC Program Number"
in the incoming packet against the list of registered program numbers
to determine which port the RPC request should be forwarded to.
portmap must be the first RPC program started, and the last to die. If
the portmap daemon dies at any point, then it, as well as all of the
registered RPC programs, must be restarted.
rpcbind This daemon is used in HP-UX 11.00 and beyond as a replacement for
portmap.
biod The asynchronous block I/O daemons are used by NFS clients to
handle buffer cache read-ahead and write-behind.
rpc.statd When an NFS client places a lock on a file via rpc.lockd, the server's
rpc.statd daemon is responsible for verifying that the client is still
functioning by periodically attempting to contact the
client's rpc.statd daemon. If the client reboots unexpectedly, the
server's rpc.statd daemon automatically removes all locks placed
by the client to allow other processes to again access the client's
locked files.
Student Notes
After enabling NFS client functionality, you must specify which NFS file systems you wish to
mount. You can manually mount and unmount NFS file systems via the mount and umount
commands, or you can ensure that your NFS file systems mount automatically at boot time
by adding them to the /etc/fstab file. This slide concentrates on /etc/fstab; the next
slide details some of the options available on the mount and umount commands.
NFS /etc/fstab entries are very similar to VxFS and HFS entries in the /etc/fstab file:
Server and Exported FS: Identifies the NFS server hostname and the pathname on the
server for the file system you wish to mount. The hostname
must be separated from the pathname by a colon (:).
Mount Point: Identifies the mount point that should be used on the NFS
client. The client's mount point need not match the pathname
used on the NFS server side. If any local files reside under the
specified mount point directory, the local files will be hidden as
long as the NFS file system is mounted. Ideally, the mount
point directory should be an empty directory. Be sure to use a
full pathname when specifying the mount point directory!
File System Type: Set to nfs for NFS file systems. During the system startup
process, the /sbin/init.d/nfs.client startup script
mounts all nfs type file systems that are listed in
/etc/fstab. Other startup scripts use the fstab file,
too: /sbin/init.d/localmount mounts all hfs and vxfs
file system entries, and /sbin/init.d/swap_start enables
all of the swap type entries.
Backup Frequency: This field is currently unused in HP-UX, but requires a "0"
placeholder.
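For example, a hypothetical /etc/fstab entry that mounts /home from an NFS server named server using the default mount options might look like this:

server:/home  /home  nfs  defaults  0  0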
Student Notes
The same mount and umount commands that you have used in the past to mount and
unmount local file systems can also be used to mount and unmount NFS file systems.
Mount Examples
The slide shows the most common permutations of the mount command:
1. mount server:/home /home
2. mount /home
3. mount -aF nfs
Mounts all NFS type file systems that are listed in the /etc/fstab file.
4. mount -a
5. mount -v
Umount Examples
In order to unmount NFS file systems, use the umount command. The umount command
recognizes several options and arguments:
1. umount server:/home
2. umount /home
Unmounts the NFS file system mounted under the directory /home.
4. umount -a
Unmounts all file systems, including NFS and locally mounted file systems.
The examples on the slide show the most common mount options and arguments, but NFS
also supports several other options. Some of the other NFS mount options are summarized in
the remaining sections below.
rw/ro Allow/deny users on this client the ability to make changes on the NFS
file system. The default is rw.
suid/nosuid Enable/disable "Set User ID" execution functionality in the NFS file
system. SUID functionality makes it possible for regular users to gain
temporary root privileges when executing programs that have the
SUID bit set. SUID executables have been known to cause security
problems in the past, so many NFS administrators choose to disable
this functionality wherever possible by mounting NFS file systems
nosuid. The default is suid.
quota/noquota Enable/disable quota checking. See the quota(5) man page for more
information. The default is quota.
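For instance (a hypothetical example combining several of these options), a read-only application directory could be mounted with SUID execution disabled:

# mount -o ro,nosuid server:/opt/tools /opt/tools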
There are two very distinct issues to consider when an NFS server crashes or loses
connectivity to its clients: (1) What happens to new clients that attempt to mount from the
downed server? (2) What happens to existing clients that attempt to access files and
directories in an already mounted file system? The table below summarizes the mount
options that determine the answers to these questions. Note that some mount options affect
mount request behavior, while others affect file access attempt behavior.
By default, NFS file systems are mounted with the fg,retry=1,hard,intr options from
the table above.
vers=2/3 If the client supports NFS PV3, it will attempt to mount file systems using
the PV3 protocol. If a queried server does not support PV3, the client
mounts using NFS PV2. Most administrators allow the client and server to
automatically negotiate a mutually acceptable protocol version. However,
you may force a file system to mount using PV2 by specifying the vers=2
mount option if you know that your server does not support PV3.
proto=tcp/udp When NFS was originally released for HP-UX, it used the UDP protocol
and was supported only on local area networks, not WANs. HP-UX 11i
introduced support for NFS over TCP to enable WAN access to NFS file
systems. This functionality has been backported by patch to HP-UX 11.00.
You can determine if your NFS file systems are mounted using NFS over
TCP by executing the netstat -a | grep nfs command. If your file
systems are mounted via NFS over TCP, you should see an
ESTABLISHED TCP connection between the client and server.
By default, if NFS over TCP is enabled on a client, the client will attempt
to mount all NFS file systems via TCP. If the queried server does not
support NFS over TCP, the client automatically reverts to NFS over UDP.
You can force the client to use UDP by including the proto=udp mount
option. On a local area network, UDP may be slightly more efficient, but
most administrators simply accept the default TCP behavior on clients
that support NFS over TCP.
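As an illustration (hypothetical server name), you can force a mount over UDP and then confirm the transport in use:

# mount -o proto=udp server:/home /home
# netstat -a | grep nfs     (no ESTABLISHED TCP entry appears for a UDP mount)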
In /etc/fstab, the defaults mount option is equivalent to the following option string:

rw,suid,quota,fg,retry=1,hard,intr
Thus, the following three commands all have the same effect (assuming the /etc/fstab file
uses the defaults mount option):
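For illustration, assuming the hypothetical /etc/fstab entry server:/home /home nfs defaults 0 0, each of the following invocations has the same effect:

# mount /home
# mount server:/home /home
# mount -o rw,suid,quota,fg,retry=1,hard,intr server:/home /home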
Are the NFS client daemons running?

# ps -e | grep -e rpc -e biod
1000 ?  0:00 biod
1010 ?  0:00 rpcbind
1020 ?  0:00 rpc.lockd
1030 ?  0:00 rpc.statd

What file systems are available from the server?

# showmount -e server
/usr/share/man (everyone)
/opt/games (everyone)
/home oakland,la

NFS configuration summary:
1. Keep UIDs and GIDs consistent.
2. Configure the NFS server.
   a. Ensure the NFS subsystem is in the kernel.
   b. Edit the server's configuration file.
   c. Start NFS server daemons.
   d. Create the /etc/exports file.
   e. Export the directories.
   f. Check the server configuration.
3. Configure the NFS client.
   a. Ensure the NFS subsystem is in the kernel.
   b. Edit the client's configuration file.
   c. Start NFS client daemons.
   d. Create a new entry in /etc/fstab.
   e. Mount the NFS file system.
   f. Check the client configuration.
4. Keep the time synchronized with all other nodes.
Student Notes
Several commands are available for checking your NFS client configuration.
If you set the NUM_NFSIOD variable to a value greater than zero, you should also see several
biod daemons running.
# showmount -e server
# mount -v
Student Notes
This slide is a review of all of the NFS configuration steps that we have already discussed.
Student Notes
NFS has proven to be a stable, reliable mechanism for sharing files between UNIX hosts for
over 15 years. However, most NFS administrators still inevitably need to do some NFS
troubleshooting at some point. This slide highlights some of the most common NFS problems
and misconfigurations.
• /etc/exports is missing, incomplete, or erroneous. Verify that the file system your
client is trying to mount is included in the /etc/exports file with appropriate export
options. Watch for invisible characters (control sequences) and invalid combinations of
export options. If possible, use only the tested combinations of export options that were
discussed in Tables 1 and 2 earlier in the chapter.
• /etc/exports contains the alias of an NFS client instead of its official host name. NFS
uses reverse name resolution to resolve clients' IP addresses into hostnames, then looks
for the clients' hostnames in the export list. Be sure to use official hostnames in
/etc/exports, not hostname aliases!
• The portmap/rpcbind daemon was accidentally killed. NFS uses RPC calls, and RPC
calls are all handled initially by the portmap/rpcbind daemon. Without this daemon,
NFS will not function properly! Check the process table to verify that the daemon is
running. If the daemon is missing from the process table, you will have to stop and restart
the NFS server and client daemons with /sbin/init.d/nfs.server and
/sbin/init.d/nfs.client.
• The rpc.mountd daemon is not running on the server. Clients cannot mount file
systems if rpc.mountd is not running on the server. Try running the
/sbin/init.d/nfs.server program with the start argument to restart the daemon.
• The NFS server is down. Try to ping the remote system to check for network
connectivity. If you can ping the system, but you cannot mount, the remote system may
not have the proper daemons running. Try stopping and restarting NFS on the remote
system. If you cannot ping the remote system, turn back to the Troubleshooting Network
Connectivity chapter earlier in this book.
• The NFS server is heavily loaded. NFS performance will be degraded as the client/server
ratio increases. Eventually, the server's performance may be degraded so much that
client requests time out and fail. You can check for this with the nfsstat command, as shown
below. Possible solutions include increasing the number of nfsd daemons, distributing the
exported file systems across additional NFS servers, or moving the exports to a bigger,
faster server.
# nfsstat -s
Server rpc:
Connection oriented:
calls badcalls nullrecv badlen xdrcall dupchecks dupreqs TCP
50505334 0 0 0 0 16826459 0
Connectionless oriented:
calls badcalls nullrecv badlen xdrcall dupchecks dupreqs UDP
11 0 0 0 0 0 0
Server nfs:
calls badcalls
38543 0
Version 2: (0 calls)
null getattr setattr root lookup readlink read
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
wrcache write create remove rename link symlink PV2
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
mkdir rmdir readdir statfs
0 0% 0 0% 0 0% 0 0%
Version 3: (50505345 calls)
null getattr setattr lookup access readlink read
4 0% 118 0% 2007 0% 33678605 66% 106 0% 0 0% 0 0%
write create mkdir symlink mknod remove rmdir PV3
49 0% 16822390 0% 0 0% 0 0% 0 0% 1921 0% 0 0%
rename link readdir readdir+ fsstat fsinfo pathconf
46 0% 0 0% 0 0% 0 0% 0 0% 4 0% 0 0%
Student Notes
Over time, you may wish to monitor the volume and type of NFS/RPC traffic on your
network. This may help you troubleshoot performance problems and plan for future growth.
You can use the nfsstat command to view the contents of several NFS registers
maintained by the kernel. The -z option makes it possible to reinitialize these registers.
-n Displays NFS information, but excludes general RPC statistics from the
report.
-m Displays statistics for each NFS mounted file system. This includes the server
name and address, mount flags, current read and write sizes, the
retransmission count, and the timers used for dynamic retransmission.
-z Prints the current statistics, then reinitializes them (resets them to zero).
Combine -z with any of the options to reinitialize particular sets of statistics
after printing them. The user must have write permission on /dev/kmem for
this option to work.
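For example, typical invocations (a usage sketch; see the nfsstat(1m) man page for the full option list):

# nfsstat -s     (print server-side statistics)
# nfsstat -zs    (print server-side statistics, then reset the counters)
# nfsstat -m     (print per-mount statistics on an NFS client)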
The packet traffic via NFS is cumulatively monitored. Look especially for non-zero entries in
the following fields. They indicate errors, called failures or timeouts:
badcalls
nullrecv
badlen
retrans
badxid
timeout
Directions
In this lab, you will work with a partner to experiment with some of the features of NFS. One
of you will function as an NFS server, and the other will function as an NFS client. You
should work together throughout the lab to ensure that you feel comfortable with both the
client and server functionalities of NFS. At this point, decide between yourselves who will be
the server and who will be the client.
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. (client)
You should now have two new user accounts defined in your /etc/passwd file: "mickie"
and "minnie.” The passwords for the new accounts are "mickie" and "minnie" respectively.
Note that neither user has a home directory on your machine. You will mount their home
directories from your partner's NFS server.
3. (server)
This tarball creates several new files and directories, and two new user accounts in your
/etc/passwd file for users "mickie" and "minnie.” The passwords for the new accounts
are "mickie" and "minnie" respectively. The tarball also creates home directories for
mickie and minnie.
In order for NFS to function properly, the InternetSrvcs and Networking products must
be installed on your machine. Check to ensure that both of these products have been
installed on your machine. Also, ensure that the NFS subsystem is configured in the
kernel.
# swlist -l product Networking InternetSrvcs NFS
# grep nfs /stand/system
3. (client)
Use ps -e on the client to ensure that the necessary daemons are actually running.
4. (server)
Use ps -e to ensure that the server has the necessary daemons running.
Your clients need to access several files on your server machine. Export the following
with the export options set as noted. Make the file systems available to clients
immediately, but also ensure that they will be available after the next system boot by
adding them to /etc/exports.
/home rw for your partner's machine, no access for other hosts
/opt/phone rw for your partner's machine, read only for all others
/opt/fun read only for everyone on the LAN
2. (server)
What command can you use to see what file systems you have made available? Can you
tell which export options you used?
What command can you use to see what file systems other servers have made available?
Choose another machine in the classroom and see what it has exported. Can you tell
which export options were used?
3. (client)
Create mount points for the file systems your neighbor exported, and mount them:
/home/mickie
/home/minnie
/opt/fun
/opt/phone
4. (client)
What file needs to be modified to ensure that these NFS file systems are automatically
mounted after every system boot? Make it so. (For now, use the "defaults" mount option.)
Syntax errors in the /etc/fstab file may cause the next system boot to fail. Do a
mount -a to ensure that you did not make any mistakes in the fstab file.
Finally, use mount -v to ensure that all the NFS file systems actually mounted properly.
5. (server)
What command lists the remote machines that have your exported file systems mounted?
From your client, try executing some of the programs mounted from the NFS server to
verify that this is true:
client# /opt/fun/melt
client# /opt/fun/xroach -speed 1
Another benefit of NFS is that files created in an NFS file system instantly become
available to multiple client machines. Do the following experiment to verify that this is
true:
client# ls /home/mickie
server# touch /home/mickie/data
client# ls /home/mickie
Does the client see the new file that was created on the server?
Why did this command succeed when executed on the server, but not when executed on
the client? (hint: look at /etc/exports)
3. (client)
Let's try a variation on the experiment you did back in Q#1 of this part of the lab.
client# touch /home/mickie/memo
We saw in the previous question that root on an NFS client does not (by default) have the
same file access as root on the NFS server. If a single administrator manages several
systems, however, it may be useful to allow root on NFS clients to have true root access
to exported file systems.
What would you have to do on the NFS server side to allow root on the client to have the
same full root access to the /home file system? Make it so.
Did this seem to work? While logged in as root on the client, try touching a file in mickie's
home directory. Did you have to do anything on the client side to recognize the change in
the server's exports file?
Use mount -v to see which file systems remain in the client's mount table. Also do an
ls of /home/mickie, and note that the memo and data files that were under
/home/mickie no longer appear since the file system has been unmounted.
2. (client)
Let's try a more complicated scenario. Can the client unmount an NFS file system if one
of the client's users is accessing that file system?
On the client machine, open two windows. In one of the windows, cd to the
/home/minnie directory. In the other window, issue the umount command to unmount
the minnie file system. Did this work?
The fuser command can tell you who is currently using a file system. Try the following
to see who is currently using /home/minnie.
client# fuser -cu /home/minnie
Try a fuser -cuk on /home/minnie, and see what happens. Then try unmounting the
file system again.
3. (server)
In Part 2, Question 5, we saw a command that the server administrator could use to
determine which of the exported file systems were actually mounted on client hosts. Now
try executing that command again. Was the NFS server notified when the client
unmounted mickie and minnie?
Did you successfully unmount the file system? Any errors? What happened to the client
process that was using your exported /opt?
Try the following commands on the client and note the output.
client# pwd
client# ls
client# cd ..
client# cd /
client# umount /opt/fun
On the client, could you unmount /opt/fun, even after the server was unmounted?
# /sbin/init.d/dtlogin.rc stop
Now take the server's LAN card down and note what happens to the client:
What happens when the client regains connectivity to the NFS server?
What happens if the client tries to remount that file system again while the server is still
down? Try it.
What happens when a process on the client tries to hit a file system on the downed server
(assuming the default mount options are used)? Do they hang indefinitely or time out?
What happens when a client tries to mount a file system from a downed server? (Again,
assume that the default mount options are used.)
By default, HP-UX mounts NFS file systems "hard,intr.” If the NFS server goes down with
these default mount options, we saw client attempts to access the NFS files and
directories hang indefinitely. Can the user abort a command if they get tired of waiting?
Try it.
server# ifconfig lan0 down
client# ls /opt/fun # can the user abort the ls with ^C?
server# ifconfig lan0 up
Alternatively, you can mount an NFS file system nointr. How would the nointr mount
option affect the experiment above? Try it.
The client can also override the hard option with mount -o soft. If a client has
mounted an NFS file system "soft" and the NFS server goes down, what happens to client
requests to the server? Try it.
client# mount /opt/fun # After you see the error, hit ^C.
3. (client)
From the client, try your connectivity test commands again:
client# ping server
client# rpcinfo -p server
Part 8: Cleanup
1. Before moving on to the next chapter, restore your network configuration to the state it
was in prior to this lab.
# /labs/netfiles.sh -r NEW
AutoFS Concepts
Student Notes
• Maintaining complex NFS mounts in the /etc/fstab files on multiple clients can
quickly become a support nightmare.
• Only root can mount NFS file systems. If a user needs to temporarily mount an NFS file
system on a client, the user must ask the administrator to mount and unmount the file
system for them.
• The AutoFS configuration files, known as the AutoFS “maps,” can be managed via NIS.
Instead of managing /etc/fstab files on hundreds of individual hosts, the
administrator can easily modify the NFS configuration from the central NIS server that
stores the NIS AutoFS maps.
• AutoFS only mounts NFS file systems on an as-needed basis. Thus, a downed NFS server
will only delay a client’s boot if the client references the downed server’s file systems
during the boot process.
• AutoFS may be configured to allow users to automatically mount available NFS file
systems without root’s assistance.
• By default, if an AutoFS file system is left unused for five minutes, AutoFS automatically
unmounts the file system.
• AutoFS provides some primitive load balancing across multiple replicated NFS servers. If
an NFS file system is available from several different servers, AutoFS will automatically
mount the file system from the server that provides the best response time.
NOTE: AutoFS simply generates NFS mount and unmount requests on behalf of an
NFS client. AutoFS can only mount file systems that have been exported by an
NFS server.
AutoFS Maps
Student Notes
NFS file systems may be mounted via the mount command, or via AutoFS. When
/sbin/init.d/nfs.client executes the mount command during the boot process, it
immediately mounts all of the NFS file systems listed in /etc/fstab.
AutoFS, however, mounts NFS file systems on an as-needed basis. In order to do this, AutoFS
must be told which file systems to mount, where to mount them, and which mount options to
use. The AutoFS map files answer all three questions. The map files are ASCII configuration files
managed by the system administrator. You may use the ls command to view the AutoFS
maps (if there are any!) on your system:
# ls /etc/auto*
Some AutoFS map files on your systems may be managed via NIS. These NIS-managed map
files won’t appear in the ls output.
AutoFS recognizes several different kinds of map files. Each of these maps will be discussed
in detail in the slides that follow.
The slide diagram shows user processes issuing file access requests for /net, /drawings, and
/home; the kernel mount table listing /stand as a local HFS file system and /net, /drawings,
and /home as AutoFS entries; and AutoFS passing automount requests to the automountd
daemon, which sends the NFS mount and umount requests to the NFS server (with
autofs_proc triggering umount requests for idle file systems).
Student Notes
AutoFS requires several different daemons and commands:
1. The first step required to configure AutoFS is to create the AutoFS map files. The next
few slides discuss the configuration of these files in detail.
2. Anytime you modify the AutoFS map files, you must execute the automount command.
This command reads the AutoFS maps, then adds and removes AutoFS entries in the
/etc/mnttab mount table accordingly. Note that automount doesn’t actually mount
any file systems; it is simply responsible for ensuring that the AutoFS entries in the mount
table match the AutoFS maps.
3. When processes attempt to access the AutoFS file systems recorded in the mount table,
AutoFS contacts the automountd daemon.
4. When AutoFS notifies the automountd daemon that an NFS file system is required,
automountd sends an NFS mount request to the appropriate NFS server.
5. Once automountd mounts the needed file system, the requesting process can access the
file system as it would any other NFS file system.
6. The autofs_proc kernel daemon monitors all NFS file systems mounted by AutoFS. If
an NFS file system managed by AutoFS is idle for 5 minutes, autofs_proc notifies
automountd, which then unmounts the idle file system. The allowed idle time is
configurable. This prevents unnecessary NFS file systems from cluttering the mount
table.
# /etc/rc.config.d/nfsconf
NFS_CLIENT=1
AUTOMOUNT=1
AUTOFS=1
AUTOMOUNT_OPTIONS=""
AUTOMOUNTD_OPTIONS=""
# /sbin/init.d/nfs.client start
# /sbin/init.d/nfs.client stop
Student Notes
AutoFS is an NFS client-side service. No additional server-side configuration is required,
beyond enabling the nfsd and rpc.mountd daemons, and exporting the desired file
systems.
First, verify that NFS client functionality is enabled in /etc/rc.config.d/nfsconf:

NFS_CLIENT=1
Next, verify that the AUTOMOUNT variable is set to "1". Although the AUTOMOUNT variable was
traditionally used to enable the old automount daemon, it is still required if you wish the
newer AutoFS daemons to start during the system boot process.
To specify that you wish to use AutoFS rather than the traditional Automounter, scroll to the
bottom of the file and set the AUTOFS variable, too.
AUTOMOUNT=1
AUTOFS=1
The last couple of variables may be used to define additional options for the AutoFS
daemons:
AUTOMOUNT_OPTIONS=””
AUTOMOUNTD_OPTIONS=””
For more information on the options available for these variables, see the automount(1m)
and automountd(1m) man pages.
Starting AutoFS
If the AUTOFS variable is set to “1” in /etc/rc.config.d/nfsconf, then AutoFS is
normally started automatically by the /sbin/init.d/nfs.client script at run level 2 of
the system startup process. You may re-execute this script at any time:
# /sbin/init.d/nfs.client start
Running the script with the start argument mounts all NFS file systems in /etc/fstab
and starts the AutoFS daemons.
# /usr/lib/netsvc/fs/autofs/automountd
# /usr/sbin/automount
The first command starts the automountd daemon that generates mount requests to the
NFS server. The second command copies the AutoFS map information into /etc/mnttab so
automountd knows which file systems it is responsible for mounting.
NOTE: AutoFS and Automounter cannot run concurrently on an NFS client. If you are
currently using Automounter, modify the /etc/rc.config.d/nfsconf
configuration file as shown on the slide, then reboot to stop the currently
running Automounter daemon and start AutoFS.
Stopping AutoFS
Usually, AutoFS is terminated by /sbin/init.d/nfs.client during system shutdown:
# /sbin/init.d/nfs.client stop
Alternately, you can manually shutdown AutoFS with the following commands:
# ps -e | grep automountd
# kill 1234                    (use the automountd daemon's PID here!)
# /usr/sbin/umountall -F nfs
# /usr/sbin/umountall -F autofs
If a file system mounted by AutoFS is still in use when the stop script is executed, that file
system remains mounted and must be manually unmounted later by issuing the umountall
commands shown above.
NOTE: Never kill the automountd daemon with the -9 signal! This will leave AutoFS
in an inconsistent state, and may eventually require a reboot.
Checking AutoFS
If AutoFS is functioning properly, two daemons should appear in your process table:
automountd and autofs_proc:
# ps -e | grep automountd
# ps -e | grep autofs_proc
Also, check the mount table via the mount -v command. There should be an entry for each
of the file systems managed by AutoFS. If not, check your map files! The sample mount -v
output below was taken from a host that uses AutoFS extensively. Note: Local file systems
and mount timestamps have been truncated to save space.
# mount -v
-hosts on /net type autofs ignore,indirect,nosuid,soft
/etc/auto.direct on /usr/contrib/games type autofs ignore,direct
/etc/auto.direct on /opt/tools type autofs ignore,direct
/etc/auto.direct on /var/mail type autofs ignore,direct
/etc/auto.drawings on /drawings type autofs ignore,indirect
/etc/auto.home on /home type autofs ignore,indirect
/etc/auto_master:
/net        -hosts                -soft,nosuid
/drawings   /etc/auto.drawings
/home       /etc/auto.home
/-          /etc/auto.direct

(The slide also shows the resulting directory tree, in which /net, /drawings, and /home are
AutoFS-managed mount points.)
Student Notes
The AutoFS maps determine which file systems AutoFS should mount from which NFS
servers. /etc/auto_master is a special map: it contains a catalog of mount point
directories, followed by the names of the maps AutoFS should consult to determine what
should be mounted under those directories.
The sample /etc/auto_master file on the slide references several other AutoFS maps:
• Attempts to access anything under /net will be handled by the special -hosts map.
• Attempts to access anything under /drawings will be handled by the /etc/auto.drawings map.
• Attempts to access anything under /home will be handled by the /etc/auto.home map.
• The /- entry at the end of /etc/auto_master refers AutoFS to the "direct map" in
/etc/auto.direct.
Each of these referenced maps will be discussed in detail in the slides that follow.
# ll /net/svr1

/etc/auto_master:
/net    -hosts    -soft,nosuid

Configuring the -hosts map allows users to automatically mount file systems from any NFS
server just by accessing /net/servername!
Student Notes
One of the most useful maps recognized by AutoFS is the –hosts special map. If
/etc/auto_master is configured as shown on the slide, then accessing
/net/any_NFS_server causes AutoFS to automatically mount all NFS file systems
available to the client from the specified server. This makes it possible to mount all available
NFS file systems from any NFS server without explicitly executing the mount command or
modifying /etc/fstab!
Example
If the –hosts special map is configured as shown on the slide, you would see the following
entry in your client’s mount table initially (note that local file systems and the mount time
stamps have been omitted for the sake of clarity).
# mount -v
-hosts on /net type autofs ignore,indirect,soft,nosuid
# ll /net
total 0
See what happens, though, if a user accesses a specific host name within /net:
# ll /net/svr1
dr-xr-xr-x 3 root sys 1024 Mar 28 08:50 home
dr-xr-xr-x 44 bin bin 1024 Mar 29 13:54 opt
dr-xr-xr-x 18 bin bin 1024 Mar 24 12:17 var
The output suggests that host svr1 has exported three NFS file systems: /home, /opt, and
/var. Look what appears in the mount table as a result (again, the mount –v output has
been truncated for the sake of clarity):
# mount -v
-hosts on /net type autofs ignore,indirect,soft,nosuid
svr1:/home on /net/svr1/home type nfs nosuid,soft,rsize=32768,NFSv3
svr1:/opt on /net/svr1/opt type nfs nosuid,soft,rsize=32768,NFSv3
svr1:/var on /net/svr1/var type nfs nosuid,soft,rsize=32768,NFSv3
# vi /etc/auto_master
/net -hosts -soft,nosuid
The -soft NFS mount option prevents users' access attempts from hanging if the NFS server
is unreachable. The nosuid mount option is a security feature that disables SUID bit
execution for programs accessed from the NFS server. Despite its convenience, the -hosts
map has a few drawbacks:
• If a user attempts to use /net to access an unreachable NFS server, or an NFS server
that hasn’t exported any NFS file systems, AutoFS generates a “not found” error
condition, which may confuse your users.
• Because the -hosts map allows NFS access to any reachable system, a user may
inadvertently cause an NFS mount over a WAN link, or through a slow router or gateway.
NFS mounts over slow links may cause excessive retransmissions and degrade
performance for all users on the network.
/etc/auto_master
/- /etc/auto.direct
/etc/auto.direct
/usr/contrib/games -ro gamesvr:/usr/contrib/games
/opt/tools -ro toolsvr:/opt/tools
/var/mail -rw mailsvr:/var/mail
Student Notes
A direct map may be used to automatically mount file systems on any number of unrelated
mount points.
Example
If the /etc/auto_master and /etc/auto.direct are configured as shown on the
slide, you would see the following entry in your client’s mount table initially (note that local
file systems and the mount time stamps have been omitted for the sake of clarity).
# mount -v
/etc/auto.direct on /usr/contrib/games type autofs ignore,direct
/etc/auto.direct on /opt/tools type autofs ignore,direct
/etc/auto.direct on /var/mail type autofs ignore,direct
At this point, games, tools, and mail haven't been mounted yet. However, AutoFS does
display the mount points for these file systems.
The first time a user accesses one of the directories managed by the direct map, AutoFS
automatically mounts the file system associated with that directory:
# ll /usr/contrib/games
-r-xr-xr-x 3 root sys 1024 Mar 28 08:50 tetris
-r-xr-xr-x 44 root sys 1024 Mar 29 13:54 xpilot
-r-xr-xr-x 18 root sys 1024 Mar 24 12:17 chess
# mount -v
/etc/auto.direct on /usr/contrib/games type autofs ignore,direct
/etc/auto.direct on /opt/tools type autofs ignore,direct
/etc/auto.direct on /var/mail type autofs ignore,direct
gamesvr:/usr/contrib/games on /usr/contrib/games type nfs
ro,rsize=32768,wsize=32768,NFSv3
To configure a direct map, first add a /- entry to /etc/auto_master that points AutoFS to
your direct map file:
# vi /etc/auto_master
/- /etc/auto.direct
Next, create the /etc/auto.direct file. Each entry in the direct map has three fields:
• The first field identifies the full pathname of a mount point directory that AutoFS should
monitor.
• The second field lists the mount options AutoFS should use when mounting the file
system. This field is optional.
• The third field identifies the file system to mount on the mount point identified in the first
field.
In order to mount /usr/contrib/games, /opt/tools, and /var/mail via AutoFS, the
following entries would be required in /etc/auto.direct:
# vi /etc/auto.direct
/usr/contrib/games -ro gamesvr:/usr/contrib/games
/opt/tools -ro toolsvr:/opt/tools
/var/mail -rw mailsvr:/var/mail
# /usr/sbin/automount
/etc/auto_master
/drawings /etc/auto.drawings
/etc/auto.drawings
gizmos -ro gizmosvr:/drawings/gizmos
gadgets -ro gadgetsvr:/drawings/gadgets
widgets -ro widgetsvr:/drawings/widgets
Student Notes
An indirect map proves useful when you want AutoFS to mount several NFS file systems
under a common parent directory.
Example
If the /etc/auto_master and /etc/auto.drawings are configured as shown on the
slide, you would see the following entry in your client’s mount table initially. (Note that local
file systems and the mount time stamps have been omitted for the sake of clarity.)
# mount -v
/etc/auto.drawings on /drawings type autofs ignore,indirect
At this point, none of the drawing file systems have been mounted yet. In fact, the mount
points have not even been created yet! Users that list the contents of the /drawings
directory may be somewhat perplexed by the fact that the directory appears to be empty!
# ll /drawings
total 0
The first time a user accesses one of the directories managed by the indirect map, AutoFS
creates the necessary mount point directory and mounts the associated file system.
# ll /drawings/gizmos
-r-xr-xr-x 3 root sys 1023 Mar 30 08:50 gizmo1
-r-xr-xr-x 44 root sys 405 Mar 30 13:54 gizmo2
-r-xr-xr-x 18 root sys 789 Mar 30 12:17 gizmo3
# mount -v
/etc/auto.drawings on /drawings type autofs ignore,indirect
gizmosvr:/drawings/gizmos on /drawings/gizmos type nfs
ro,rsize=32768,wsize=32768,NFSv3
The other file systems under /drawings will only be mounted as needed.
To configure an indirect map, first add an entry to /etc/auto_master that identifies the
parent directory to be managed and the indirect map file to consult, then run the automount
command:
# vi /etc/auto_master
/drawings /etc/auto.drawings
# /usr/sbin/automount
Next, create the indirect map /etc/auto.drawings file. Each entry in the indirect map
has three fields:
• The first field identifies the relative pathname of a mount point directory that AutoFS
should monitor.
• The second field lists the mount options AutoFS should use when mounting the file
system. This field is optional.
• The third field identifies the file system to mount on the mount point identified in the first
field.
# vi /etc/auto.drawings
gizmos -ro gizmosvr:/drawings/gizmos
gadgets -ro gadgetsvr:/drawings/gadgets
widgets -ro widgetsvr:/drawings/widgets
Direct maps:
• Direct mounted file system mount points are always visible to users.
• Direct mounted and local file systems may coexist in the same parent directory.
• Large direct maps quickly lead to cluttered mount tables.
• The automount command must be executed every time the direct map changes.

Indirect maps:
• Indirect mounted file systems only become visible after being accessed.
• Indirect mounted and local file systems may not coexist in the same parent directory.
• Each indirect map yields just one entry in the mount table.
• AutoFS automatically recognizes indirect map changes.
Student Notes
Determining when to use direct versus indirect maps is one of the most confusing issues
faced by AutoFS administrators. The slide above and table below compare and contrast these
two different AutoFS map types. The table references the sample direct and indirect maps
shown below:
# cat /etc/auto_master
/hosts -hosts -soft,nosuid
/drawings /etc/auto.drawings
/- /etc/auto.direct
# cat /etc/auto.direct
/usr/contrib/games -ro gamesvr:/usr/contrib/games
/opt/tools -ro toolsvr:/opt/tools
/var/mail -rw mailsvr:/var/mail
# cat /etc/auto.drawings
gizmos -ro gizmosvr:/drawings/gizmos
gadgets -ro gadgetsvr:/drawings/gadgets
widgets -ro widgetsvr:/drawings/widgets
Direct maps:
• Advantage: Direct mounted AutoFS file systems and local file systems may coexist in the
same parent directory. For example, the /usr/contrib directory on the sample system
above contains both locally stored directories (such as /usr/contrib/bin) and an AutoFS
direct map file system (/usr/contrib/games).
• Disadvantage: Large direct maps quickly lead to cluttered mount tables. Each entry added
to the direct map adds an entry to the mount table, too. Thus, the sample system shown
above would have three AutoFS entries in the mount table as a result of the direct map.
• Disadvantage: The automount command must be executed every time the direct map
changes.

Indirect maps:
• Disadvantage: Indirect mounted and local file systems may not coexist in the same parent
directory. For example, files stored locally under the /drawings directory on the sample
system above would be hidden by the /etc/auto.drawings indirect map.
• Advantage: Each indirect map yields just one entry in the mount table. The sample indirect
map shown above would create one mount table entry for /drawings.
• Advantage: AutoFS automatically recognizes indirect map changes. If you modify a
directory's entry in an indirect map, AutoFS will see the changes the next time it mounts
the directory; there is no need to execute the automount command.
NFS server "sales" exports /home/sales; NFS server "accts" exports /home/accts.

/etc/passwd
user1:x:101:101::/home/sales/user1:/usr/bin/sh
user2:x:102:101::/home/sales/user2:/usr/bin/sh
user3:x:103:101::/home/accts/user3:/usr/bin/sh
user4:x:104:101::/home/accts/user4:/usr/bin/sh

/etc/auto_master:
/home    /etc/auto.home

/etc/auto.home:
sales    sales:/home/sales
accts    accts:/home/accts
Student Notes
User home directories are among the most commonly exported directories in NFS
environments. If all of your home directories are on a single NFS server, then it might make
sense for clients to mount /home from the server via an entry in /etc/fstab. NFS
mounting home directories via /etc/fstab becomes more complicated, however, if your
home directories are stored on multiple NFS servers across your local area network. If your
home directories are scattered across multiple NFS servers, use AutoFS!
Consider the example on the slide. This organization has two NFS home directory servers.
The “sales” server stores home directories for all members of the “sales” department, and the
“accts” server stores home directories for all members of the “accts” department. The
following configuration greatly simplifies home directory management in this type of
environment. Better yet, it guarantees that any user may log onto any AutoFS client and have
access to their home directory!
1. On each NFS server, create a subdirectory under /home that matches the server’s host
name. On host “sales” create a directory called /home/sales. On host “accts,” create a
directory called /home/accts.
If you are migrating existing systems to NFS mounted home directories, you may need to
move users’ home directories from the clients’ local disks to the new NFS servers.
clients# vi /etc/auto_master
/home /etc/auto.home
5. Create the /etc/auto.home map. Create one entry in the map for each server that
exports home directories. For instance, the “sales” home directories should be mounted
from sales:/home/sales. The “accts” home directories should be mounted from
accts:/home/accts.
clients# vi /etc/auto.home
sales sales:/home/sales
accts accts:/home/accts
6. Update the home directory pathnames in the clients’ /etc/passwd files. The home
directory pathnames must be updated to reflect the new
/home/servername/username directory naming convention. Note that all of the
clients’ /etc/passwd files must be updated.
Questions
1. What type of map is being used in the example on the slide to automatically mount user
home directories?
2. Why is this type of map preferable to its alternative? (Hint: What must be done each time
a client’s direct map file changes?)
NFS servers: sales (exporting /home/sales) and accts (exporting /home/accts)
/etc/passwd
user1:x:101:101::/home/sales/user1:/usr/bin/sh
user2:x:102:101::/home/sales/user2:/usr/bin/sh
user3:x:103:101::/home/accts/user3:/usr/bin/sh
user4:x:104:101::/home/accts/user4:/usr/bin/sh
/etc/auto_master
/home /etc/auto.home
/etc/auto.home
* &:/home/&
Student Notes
The previous slide showed how AutoFS indirect maps can be used to automatically mount
user home directories. The example on the slide showed a simple /etc/auto.home file that
included references to just two NFS home directory servers:
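sales sales:/home/sales
accts accts:/home/accts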
With just two NFS servers, the /etc/auto.home file is easy to manage. Larger
organizations, however, oftentimes have complex /etc/auto.home files that reference
four, eight, sixteen, or even more NFS servers. Worse yet, changes made to
/etc/auto.home must be propagated out to every one of your NFS clients!
Fortunately, AutoFS key substitution can simplify the administrator’s life considerably in
large NFS environments by replacing references to specific servers and file systems with two
special wild card characters.
The first of these special characters is the ampersand (&). Consider the improved
/etc/auto.home file below:
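sales &:/home/&
accts &:/home/&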
Each & in the map will automatically be replaced by the key value shown in the first field of
the AutoFS map entry. Thus, the ampersands in the first line will be replaced by “sales,”
and the ampersands in the second line will be replaced by “accts.” This abbreviated map
saves the NFS client administrator a few keystrokes, while still providing the same
functionality as the /etc/auto.home map on the previous slide.
The map file may be further condensed to a single line by replacing the key field in
/etc/auto.home with an “*” wildcard. Assuming that /etc/auto.home is an AutoFS
map mounted on /home, then any attempt to access anything under /home matches the “*”
entry.
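* &:/home/&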
This simple, single-line map allows AutoFS to mount home directories from any NFS home
directory server on the network. Furthermore, the administrator can add additional home
directory servers to the environment without modifying AutoFS maps on the NFS clients.
Replicated servers provide load balancing and high availability for read-only file systems. The client polls all three servers (toolsvr1, toolsvr2, and toolsvr3) and mounts /opt/tools from the first server that responds.
/etc/auto_master
/- /etc/auto.direct
/etc/auto.direct
/opt/tools -ro toolsvr1:/opt/tools \
               toolsvr2:/opt/tools \
               toolsvr3:/opt/tools
Student Notes
All of the map files discussed in the chapter so far have listed exactly one NFS server for
each AutoFS mount point. However, it turns out that the AutoFS direct and indirect maps can
actually list two, three, or even more NFS servers for each AutoFS mount point. This
Replicated Server functionality can dramatically improve performance for AutoFS clients
that mount executables and other read-only file systems via AutoFS.
The example on the slide shows three NFS servers: toolsvr1, toolsvr2, and toolsvr3. All three
servers have identical copies of the /opt/tools application directory, which is made
available to clients via NFS.
Note that the direct map file responsible for mounting /opt/tools is a bit different than the
maps discussed up to this point: instead of listing one server as a source for mounting
/opt/tools, the map lists all three servers!
# cat /etc/auto.direct
/opt/tools -ro toolsvr1:/opt/tools \
toolsvr2:/opt/tools \
toolsvr3:/opt/tools
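The same replicated configuration may also be written more compactly; since all three servers export the same path, they can be listed in a single comma-separated entry: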
# cat /etc/auto.direct
/opt/tools -ro toolsvr1,toolsvr2,toolsvr3:/opt/tools
When a user accesses the/opt/tools directory, automountd polls all three servers and
mounts the file system from the server that responds first. This functionality provides several
advantages:
• Minimized network traffic. Since servers on the local network segment can respond more
quickly to AutoFS client polls than servers on other segments, clients are more likely to
choose a replicated server on the local network. This minimizes NFS traffic across your
routers and gateways.
• Load balancing. Since heavily-loaded servers can’t respond to client polls as quickly as
lightly-loaded servers, new clients will likely choose to mount replicated file systems
from the lightly-loaded servers.
• Reliability. Even if one of the NFS servers is down at the time of the request, the client
will still be able to mount the file system from one of the other replicated servers. Note,
however, that once AutoFS chooses a server, the selection is static. If a server becomes
unavailable after a client has mounted a file system, automountd will not dynamically
switch to one of the remaining servers.
CAUTION: To ensure data consistency regardless of the NFS server chosen by the
AutoFS client, the replicated server functionality should only be used
for read-only file systems.
The configuration on the slide shows a very simple replicated server configuration. In more
complex NFS environments, you can choose to assign weights to each replicated server. The
lower a server’s weight value, the more likely it is that that server will be chosen by AutoFS.
Servers without an explicitly assigned weight value have a weight value of 0. In the example
shown below, toolsvr1 takes precedence over toolsvr2, and toolsvr2 takes precedence over
toolsvr3.
# cat /etc/auto.direct
/opt/tools -ro toolsvr1(1):/opt/tools \
               toolsvr2(2):/opt/tools \
               toolsvr3(3):/opt/tools
Server proximity is more important than the weights you assign. A server on the same
segment as the client is more likely to be selected than a server on the other side of a
gateway, regardless of the assigned weights.
Troubleshooting AutoFS
Student Notes
If AutoFS appears to be misbehaving, try the following:
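Verify that AutoFS Is Enabled in /etc/rc.config.d/nfsconf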
# cat /etc/rc.config.d/nfsconf
NFS_CLIENT=1
AUTOMOUNT=1
AUTOFS=1
Verify that DNS Resolves the NFS Server's Host Name Properly
Since AutoFS maps reference NFS servers by host name, DNS problems can cause problems
for AutoFS. Use nsquery to verify that your client is able to resolve each of the NFS server
names to IP addresses.
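# nsquery hosts server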
# ping server
Verify that the NFS Server has Exported the File Systems in Question
AutoFS can only mount file systems that have been exported by the NFS server. Use the
showmount command to verify that the file systems you need have been properly exported.
# showmount -e server
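Restart the NFS Client Subsystem to Restart the AutoFS Daemons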
# /sbin/init.d/nfs.client stop
# /sbin/init.d/nfs.client start
# /usr/lib/netsvc/fs/autofs/automountd
# /usr/sbin/automount
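Enable Verbose Logging and Tracing for the AutoFS Daemons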
# vi /etc/rc.config.d/nfsconf
AUTOMOUNT_OPTIONS="-v"
AUTOMOUNTD_OPTIONS="-v -T"
# /sbin/init.d/nfs.client stop
# /sbin/init.d/nfs.client start
# more /var/adm/automount.log
Student Notes
AutoFS has only been supported in HP-UX since 1998. Prior to the release of AutoFS, HP-UX
provided similar functionality via the Automounter service. Automounter is still supported in
HP-UX 10.20 and 11.x, but is quickly being supplanted by AutoFS for several reasons:
• Automounter will not be supported in future releases of HP-UX. Although both
Automounter and AutoFS are supported in 10.20 and 11.x, HP has announced that future
releases of the OS will not support the older Automounter service.
• Automounter direct maps may cause "mount storms.” If an Automounter direct map
referenced several file mount points under a common parent directory, doing an ll on
the parent directory caused all of the file systems below that directory to mount
immediately – whether they were needed or not! This placed an unnecessary burden on
the NFS servers. AutoFS direct maps don't cause mount storms.
• Automounter mounts file systems under /tmp_mnt. The traditional Automounter always
mounted file systems under the /tmp_mnt directory, then used a complex web of
symbolic links to make it appear as if the file systems were mounted in the normal /usr,
/opt, /home, etc. file systems. This oftentimes confused users and administrators alike.
Automounter and AutoFS can, and usually do, coexist on the same system (both products are
installed), but they may not run concurrently. To determine which daemon you are
running, check the /etc/rc.config.d/nfsconf file. If the AUTOFS variable is set to "1",
you are running AutoFS rather than the traditional Automounter.
Fortunately, transitioning from the traditional Automounter to the newer AutoFS is a simple
procedure. See HP's Installing and Administering NFS Services with 10.20 ACE and HWE
manual for details.
Preliminary
Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
This lab assumes that the classroom has been configured with the 128.1.*.* IP addresses
configured earlier in the course. The instructor station must be assigned IP address 128.1.0.1.
Execute the following preliminary setup steps on both the student and instructor
workstations in preparation for the lab:
# /labs/autofs.lab.setup.sh
This script adds several entries to the /etc/passwd and /etc/hosts files on both the
instructor and student workstations. When executed on the instructor station, the script also
configures several additional IP addresses via IP multiplexing, and creates and exports
several directories.
1. Verify that the NFS product is installed on your system, and that the NFS client
functionality is configured in /etc/rc.config.d/nfsconf.
2. AutoFS was not included in the NFS product that was initially shipped with 10.20 and
11.00. Verify that AutoFS is included in the version of the NFS product installed on your
system by checking for the existence of the /usr/lib/netsvc/fs/autofs directory.
3. HP-UX 10.20 and 11.x support both AutoFS and the older Automounter. Is either of these
services configured on your machine? Which one, if any?
4. Enable AutoFS in /etc/rc.config.d/nfsconf, but don't try to start the daemon yet.
5. Automount and AutoFS should never run concurrently on a system. Technically, you
should be able to switch from one service to the other by tweaking the control variables
in /etc/rc.config.d/nfsconf. Realistically speaking, however, it is often difficult to
shut down automounter without rebooting since the daemon won't die until all of the
automounted file systems are unmounted. The cleanest solution is to reboot. Make it so!
# shutdown -ry 0
6. When your system comes back up again, verify that the AutoFS daemons are running.
2. Does the mount table reflect the fact that AutoFS is managing the /net mount point?
3. Test your –hosts map! What happens when you access /net/corp? Try it!
# ls /net/corp
5. Will AutoFS recognize a host referenced by IP address rather than name? Try it!
# ls /net/128.1.0.1
# mount -v
# ls /net/10.1.1.1
3. What must be done to make this change take effect? Make it so!
4. What appears in the mount table to indicate that AutoFS has recognized the new direct
map?
5. Does the games mount point appear when you list the contents of /usr/contrib? Does
listing the /usr/contrib directory cause AutoFS to mount the games file system from
the NFS server?
# ls /usr/contrib
# mount -v
# cd /usr/contrib/games
# ls
# /usr/contrib/games/oneko/bin/X11/oneko &
# mount -v
7. Add another entry to your direct map to mount the /data/contacts directory from the
corp NFS server. Users will need both read and write access to this file system. Don't
execute the automount command yet.
# umount /home
2. Add an indirect map entry for /home to /etc/auto_master. This map entry should
reference the /etc/auto.home map file.
3. What must be done anytime the master map changes? Make it so!
4. Now create the /etc/auto.home map file. The map file should be configured such that:
5. Check the mount table. How many mount table entries were created as a result of the
new indirect map? How many entries would have been created in the mount table if this
had been configured as a direct map?
6. Do an ls of /home. Can you explain the result? Did AutoFS mount any file systems?
7. Now access a specific user's home directory and see what happens to the mount table:
# ls /home/finance/user1
# mount –v
8. Will this configuration automatically mount a user's home directory at login time? Try it!
Try logging in as user "user3.” Then check the mount table to verify that the user's home
directory was in fact mounted from the proper location.
# su – user3
$ pwd
$ ls -a
$ exit
# mount -v
9. Can you shorten the /etc/auto.home file to a single line? How? Make it so! Then test
your solution:
# vi /etc/auto.home
# ls /home/sales/user5
# mount -v
Part 5: Cleanup
Before moving on to the next chapter, run the netfiles.sh cleanup script:
# /sbin/init.d/nfs.client stop
# mount -a
# /labs/netfiles.sh -r NEW
All clients share a common set of configuration files (/etc/hosts, /etc/passwd, /etc/group, and others) maintained on a server.
Student Notes
Every UNIX-based node on a network requires a certain amount of maintenance in order to
stay current and up-to-date. For example, if a new node is added to the network, every UNIX-
based system should have its /etc/hosts file updated to contain the name of the new node.
Or, if a new user is added, and the user requires potential access to all nodes, every system
will need its /etc/passwd file updated. With a few systems to update, this may seem
reasonable. As the number of nodes increases however, the administration for these types of
updates becomes time consuming and tedious.
Rather than manage the host names and user accounts on each individual system, a software
tool called Network Information Service (NIS) was developed by Sun Microsystems to allow
these files to be maintained on a single system (an NIS server) and referenced by other
systems configured as NIS clients. With NIS, when a new host is added to the network, a
single system's files are updated and these changes are propagated out to the other nodes on
the network.
Another major advantage of NIS (besides central administration) is consistency across all
nodes on the network. Because all systems reference the same set of files (referred to as NIS
database files), users do not have to worry about which systems have which login accounts
set up, or whether they will be able to reference a new node by its host name on all machines.
In HP-UX, the NIS software is bundled with the NFS product and the default operating
system.
NIS was formerly known as the Yellow Pages. However, this name is a registered trademark
of British Telecommunications in the United Kingdom, so the name of the service was
changed.
NIS Maps
The /etc/passwd source file (with entries such as chris:101:…, scott:102:…, and abby:103:…) is converted into NIS map files.
Student Notes
The ASCII files that NIS uses are converted into database files (also known as NIS map files)
when NIS is configured. Each NIS map file is sorted based on common fields used to index
into the file. For example, the /etc/passwd file is translated into NIS maps which index
based on login names (passwd.byname), and based on UIDs (passwd.byuid).
There is one NIS map called ypservers that is not built from an ASCII source file. It is
created automatically during NIS configuration. It contains a list of the master and slave
servers for the NIS domain.
Each of the maps is appended with two suffixes when created: .pag and .dir. For example:
passwd.byname.dir
passwd.byname.pag
passwd.byuid.dir
passwd.byuid.pag
If your file system only supports short file names, a file name can only have 14 characters.
This means map names can only be 10 characters in length because the .dir and .pag
suffixes are added to map names. NIS will then create short map names:
passw.byna.dir
passw.byna.pag
passw.byui.dir
passw.byui.pag
NIS Domains
Student Notes
An NIS domain is a logical grouping of nodes using the same NIS maps. There can be more
than one NIS domain within a physical network. Nodes that have the same domain name
belong to the same NIS domain.
NIS domain-related files are stored under a subdirectory beneath /var/yp on the NIS
servers. The subdirectory name corresponds to the name of the NIS domain which that
system serves. For example, maps in the research domain would be stored in directory
/var/yp/research. NIS domain names are case-sensitive. The NIS standard for systems
supporting long file names is a domain name of up to 64 characters.
NIS Roles
Student Notes
The major components of a NIS domain include the master server, the slave servers, and the
clients.
The master server is the system on which the original ASCII files are kept and modified.
These files are translated into maps on the master server.
Slave servers have copies of the maps and, along with the master, serve the information
over the network to the clients. Slave servers are optional.
Clients do not have maps or copies of the server's ASCII files (though having their own local
ASCII files as backups is desirable). They look up entries across the network from either the
master or slave servers.
NIS servers and clients are different from NFS servers and clients.
Startup sequence: /sbin/init (driven by /etc/inittab) invokes /sbin/rc, which runs the start scripts in /sbin/rc2.d/*; those scripts read their configurable parameters from the configuration file /etc/rc.config.d/namesvrs.
Student Notes
When the system starts to run level 2 or higher, the start scripts (linked scripts) in
/sbin/rc2.d will be executed to start NIS server and NIS client functionality.
The start scripts are linked to the run scripts that reside in /sbin/init.d. These scripts
fetch the configurable parameters from the configuration file
/etc/rc.config.d/namesvrs, but the daemons will only be invoked if the appropriate
variables are set to the correct values.
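For reference, the relevant variables in /etc/rc.config.d/namesvrs typically look something like the sketch below (variable names taken from a typical HP-UX 11.x namesvrs file; verify the exact names in your own copy):
# vi /etc/rc.config.d/namesvrs
NIS_MASTER_SERVER=0      # set to 1 on the NIS master server only
NIS_SLAVE_SERVER=0       # set to 1 on NIS slave servers only
NIS_CLIENT=1             # set to 1 on every system that should run ypbind
NIS_DOMAIN="research"    # the NIS domain name assigned at boot time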
Master and slave servers access the NIS maps using the same technique as clients;
therefore, both the server and client run scripts are executed when an NIS server system boots.
NIS run scripts are invoked before NFS client and server functionality is started.
The process init controls run levels of an HP-UX system. Its configuration file is
/etc/inittab. The first entry in this file defines the default run level of a system.
The following table shows you which daemons and commands are invoked by the run scripts:
Table 1: NIS Daemons

Master Server                       Slave Server                        Client
portmap (HP-UX 10.20 and earlier)   portmap (HP-UX 10.20 and earlier)   portmap/rpcbind
rpcbind (HP-UX 10.30 and beyond)    rpcbind (HP-UX 10.30 and beyond)    ypbind
ypserv                              ypserv                              keyserv
ypxfrd                              ypxfrd
rpc.yppasswdd                       keyserv
rpc.ypupdated                       ypbind
keyserv
ypbind
Student Notes
Several daemons associated with NIS follow.
ypxfrd A new daemon with HP-UX 10.0, ypxfrd, provides faster transfer of
maps between master and slave servers.
keyserv keyserv stores the private encryption keys of all users logged into the
system. This daemon is part of the secure RPC programming
enhancement, and is not needed to access NIS maps.
Student Notes
Now that you understand the major concepts surrounding NIS, we will show you how to
configure NIS. The major steps are shown on the slide. We will discuss each step individually.
NOTE: When you are creating a slave server, the maps are copied from the master
server. Therefore, you must create the master server first.
2. Collect the ASCII source files, which are used to build the maps. They should be up to
date.
After configuring the NIS master server, clients and slaves can be configured in any order.
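As a rough sketch of the whole procedure (assuming a domain named research, and using the script and variable names found on a typical HP-UX 11.x system), configuring a master server looks something like this:
# domainname research               # set the NIS domain name for this session
# ypinit -m                         # build the maps; ypinit prompts for slave servers
# vi /etc/rc.config.d/namesvrs      # set NIS_MASTER_SERVER=1, NIS_CLIENT=1, NIS_DOMAIN
# /sbin/init.d/nis.server start     # start ypserv, ypxfrd, rpc.yppasswdd, rpc.ypupdated
# /sbin/init.d/nis.client start     # start ypbind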
Testing NIS
Student Notes
After configuring NIS, there are several tools you can use to test your new configuration.
rpcinfo -p servername
First, use the rpcinfo command to verify that your NIS server is running the appropriate
daemons. NIS uses remote procedure calls, just like NFS. The rpcinfo command contacts
the server's portmap/rpcbind daemon and reports the server's registered RPCs. Master
servers should be running ypserv, ypxfrd, yppasswdd, and ypupdated. Slave servers
should be running ypserv and ypxfrd. If any of these daemons is missing, check your
server's configuration!
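yppoll -h servername mapname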
Each NIS map has an "order number" associated with it. Each time the master server rebuilds
a map, that map's order number is incremented. NIS slave servers use these order numbers to
determine if their local copies of the map files are up to date. If NIS is functioning properly,
the order numbers on the slaves' maps should always match the order numbers on the
master's maps.
domainname
If rpcinfo and yppoll both suggest that your server is functioning properly, you can begin
checking your client configuration. The domainname command will tell you to which
domain your client currently belongs.
ypwhich
The ypwhich command queries the local ypbind daemon to determine to which NIS server
you are currently bound.
ypcat -k passwd.byname
The ypcat command allows a client to dump the contents of an NIS server's maps. The -k
option prepends the key value for each map entry to the beginning of each line.
$ passwd
Changing passwd for jim
Old NIS password: *****
New Password: ******
Retype new password: ******

1. An NIS user issues the passwd command to change his or her password.
2. The /etc/passwd file on the NIS master server is updated to reflect the new password.
3. The corresponding NIS maps (passwd.byname and passwd.byuid) are regenerated to reflect the new password.
Student Notes
If a user uses the /usr/bin/passwd command to change passwords, the login ID, old
password, and new password are passed to the rpc.yppasswdd daemon of the NIS master
server. After the old password is verified, rpc.yppasswdd updates the ASCII file and
rebuilds the NIS maps (with ypmake passwd). Finally, the NIS slave servers receive a new
copy of these maps, and the change is complete.
If a user is not administered by NIS (there is a complete local entry without escape character
for this user), his or her password will be changed in the local /etc/passwd.
Prior to HP-UX 10.0, the user had to use the yppasswd command to change his or her
password in an NIS environment. This command is still available, but you no longer need to
use it.
Updating an NIS map: edit the ASCII source file on the master server (vi /etc/hosts), rebuild the maps with # /var/yp/ypmake hosts (regenerating hosts.byname and hosts.byaddr from /etc/hosts), and let ypmake push the updated NIS maps from the master server to the slave servers.
Student Notes
In order to update an NIS map, you must:
1. Modify the ASCII source file on the master server.
Example
# vi /etc/hosts
# /var/yp/ypmake hosts
Another Example
# vi /etc/hosts
# /var/yp/ypmake
For NIS domain research:
ypmake Syntax
/var/yp/ypmake [DIR=path_to_source] \
[DOM=NIS_domain] \
[NOPUSH=num] \
[PWFILE=passwd_file] [mapname]
The DOM option lets you specify an NIS domain other than the host's default domain.
When not NULL, NOPUSH inhibits copying the new or updated databases to the slave NIS
servers. (By default, the databases are copied to the slaves.) If you don't push the map
(ypmake NOPUSH=1 mapname), you can do it later with the yppush command.
Student Notes
When you set up NIS, ypinit copies maps from the master server to the slave servers.
However, if you wish to keep the slave servers up-to-date, you should set up your system to
periodically propagate maps to the slaves. This can be done with ypxfr in one of the
following ways:
1. Periodically run ypxfr via cron on each slave server.
Syntax
/usr/sbin/ypxfr [-h server] [-f] [-d domain] mapname
• Running ypxfr via cron allows you to execute ypxfr at different rates for different
maps. For example, you could choose to update the passwd map once an hour and the
protocols map once a day.
The NIS service provides some scripts in the /var/yp directory that help you decide
which maps should be updated hourly or daily. For example, one of these scripts is:
ypxfr_1perhour    Fetches the passwd.byname and passwd.byuid maps
You can use these scripts in conjunction with cron to update your maps. Your crontab
entries could look something like the following:
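(A sketch only: master stands for your NIS master server's host name, and the schedule shown is arbitrary.)
0 * * * * /var/yp/ypxfr_1perhour
0 2 * * * /usr/sbin/ypxfr -h master protocols.byname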
-d domain Allows you to copy a map from the domain specified (rather
than the domain returned by domainname).
• You can also update NIS maps by executing yppush on the master server. yppush sends
a transfer map request to each of the slave servers. In turn, ypserv on the server
executes ypxfr -C. The ypserv daemon then passes ypxfr the information needed to
identify and transfer the map. The syntax for yppush is
/usr/sbin/yppush [-d domain] [-v] mapname
For example,
# yppush passwd.byname
Example 1
/etc/nsswitch.conf:
passwd: files nis
group: files nis
/etc/passwd:
root:...
user1:...
user2:...
Who can log in?
• all users in local passwd file
• all users in NIS passwd map

Example 2
/etc/nsswitch.conf:
passwd: compat
group: compat
/etc/passwd:
root:...
user1:...
user2:...
+hubert
+cleo
Who can log in?
• all users in local passwd file
• cleo and hubert from NIS map
Student Notes
By default, when a user lookup is required, the system initially searches for the username in
the local /etc/passwd file. If the username isn't found in /etc/passwd and NIS is
configured, the system then consults the NIS passwd map. Using this approach, all the users
both in the local password file and in the NIS map have access to all nodes in the NIS
domain.
Many shops prefer to limit access to a given node to a more limited list of users. The
/etc/nsswitch.conf file makes it possible to more narrowly define the concept of if, and
how, a client uses the NIS maps. Each line in /etc/nsswitch.conf contains a type of
lookup often performed by the system (for instance: passwd, group, hosts, and so forth),
followed by a list of sources the system should consult when performing those lookups.
If a host should only use the local password and group files, and ignore the NIS passwd and
group map, you should include the following lines in /etc/nsswitch.conf:
passwd: files
group: files
If, however, the host should allow all users defined either locally or in the NIS map to log in,
include the following two lines in /etc/nsswitch.conf. (Or, simply leave the
nsswitch.conf file empty, as this is the default behavior anyway!)
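passwd: files nis
group: files nis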
If you want to allow all locally defined users, but only selected users from the NIS map to
access a host, add the following two lines to /etc/nsswitch.conf:
passwd: compat
group: compat
After adding the compat entries, you will need to add escape entries to your /etc/passwd
and /etc/group files to identify which NIS users should have access to the system.
The example below allows all locally defined users to access the system, as well as users
hubert and cleo as defined in the NIS map. Other users defined in the NIS map will not
have access to this system. Note the escape entries identified by the + signs. Allowing
additional NIS users to access the system would simply require the addition of more escape
entries.
root:ms0RtUNJemVSI:0:3::/:/sbin/sh
daemon:*:1:5::/:/sbin/sh
bin:*:2:2::/usr/bin:/sbin/sh
sys:*:3:3::/:
adm:*:4:4::/var/adm:/sbin/sh
uucp:*:5:3::/var/spool/uucppublic:/usr/lbin/uucp/uucico
lp:*:9:7::/var/spool/lp:/sbin/sh
nuucp:*:11:11::/var/spool/uucppublic:/usr/lbin/uucp/uucico
hpdb:*:27:1:ALLBASE:/:/sbin/sh
nobody:*:-2:60001::/:
+hubert
+cleo
Using escape entries in this manner allows the administrator to carefully control which users
are allowed to login to each host in an NIS domain. Your database servers' /etc/passwd
files, for instance, may only contain escape entries for the database administrators. Your
accounting department workstations' /etc/passwd files may only contain escape entries
for the users in the accounting department. Each administrator should carefully consider
which users in the NIS map need access to each machine.
NOTE: The compat source cannot be combined with any other source value on the passwd
line of the /etc/nsswitch.conf file.
We've only discussed the most common nsswitch.conf file possibilities here. The
nsswitch.conf man page discusses the file format in detail. Several sample nsswitch
files may be found in the /etc directory. Type ls /etc/nsswitch.* and copy the version
of the file that best meets your needs to /etc/nsswitch.conf, or simply leave the file
empty or nonexistent if you want to allow all NIS users to log into your NIS client.
Student Notes
By default, the master server uses /etc/passwd as the map source file. If all home
directories are available on the master server, all users can log into the master server. If you
want to restrict access to a smaller set of users than defined by the complete /etc/passwd,
perform the following steps on the master server:
1. Create an alternate password file as source for the maps.
# cp /etc/passwd /etc/passwd.nis
3. NIS will not recognize your escape entries in the /etc/passwd file unless you add the
following lines to your /etc/nsswitch.conf file:
passwd: compat
group: compat
Then modify the map-building configuration so that the passwd maps are built from
/etc/passwd.nis rather than /etc/passwd, by changing the PWFILE definitions to
PWFILE=${PWFILE:-$DIR/passwd.nis}
and
PWFILE=$(DIR)/passwd.nis
Directions
In this lab exercise, you will work with a team of two to four classmates to configure and test
NIS servers and clients in your own NIS domain. Working with the teammates assigned by
your instructor, decide on a name for your NIS domain.
Within your domain, you should configure one master server, a slave server, and one or more
clients. Decide among yourselves which machine will be your master server, which will be
the slave, and which will be the client(s):
Client(s): _________________
Note that the examples referenced in the instructions that follow refer to a domain called
"california" containing three hosts. Within this sample domain, "sanfran" is the master server,
"oakland" is the slave server, and "la" is a client.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
1. Ensure that your ASCII source files (/etc/passwd, /etc/group, etc.) are up-to-date.
Although the ASCII files may be changed after configuring NIS, it is much easier to make
changes now. For the sake of this lab exercise, you may assume that your ASCII source
files are already up-to-date.
2. The script used to configure the NIS master server must know ahead of time the name of
the domain. Do this by setting your server's NIS domain name with the domainname
command:
# domainname california # set your domain name
# domainname # check your domain name
3. Next, run the ypinit -m command to build all the maps for your domain. When asked if
you wish to "quit on non-fatal errors,” answer "n.” ypinit prompts for a list of slave
servers for the domain, then builds all the necessary maps.
# ypinit -m
6. When your machine comes back up again, check to see which processes are running.
What NIS-related processes would you expect to see on an NIS master server?
Do not begin this portion of the lab until the master server is fully configured.
1. Start by setting your domain name as you did on the master.
3. Watch the ypinit messages. What does the ypinit do to configure the slave server?
(Note: disregard the ethers, bootparams, and netmasks errors generated by ypinit.
These maps are not used in HP-UX, but the ypinit utility still attempts to download
them.)
4. ypinit should have copied the NIS maps from the master server, and stored them
under the slave server's /var/yp directory. Do an ls of /var/yp, and find the
subdirectory for your domain. What do you see in your domain’s /var/yp subdirectory?
6. Remove all of your users' entries from your local password file, since NIS will now be
providing central administration of your user account information. However, be sure to
leave all accounts with userids below 100 in /etc/passwd. Why might it be important to
leave these userids (especially root) in place?
7. Reboot.
2. As you did with your slave server, remove all user entries from /etc/passwd.
3. Reboot.
1. The ypwhich command tells you which server you are bound to. Which server are you
currently bound to?
2. The ypcat command displays the contents of NIS maps. Adding the -k option also
shows the key value associated with each entry in the map files. View the contents of your
hosts map by typing:
client# ypcat -k hosts
3. You can check the value associated with any key in an NIS map by using the ypmatch
command:
4. Do the standard UNIX utilities use NIS? To find out, try logging in as user1. Note that
user1 no longer exists in the slave or clients' local password files. Why does this login
succeed?
5. Try another system utility. Use nslookup to determine which IP address is associated
with your neighbor's host name. Does nslookup appear to use NIS? How can you tell?
2. Is the password change reflected in the password map on the master, the slave, or both?
Use the yppoll command to check the order number on the master and the slave
servers:
3. Try another change on the client. Create a user account in the /etc/passwd file on the
client, then ypcat the passwd map again. Does ypcat show the new account? Explain.
4. What happens if you make your changes to /etc/passwd on the master server instead
of the client? Try it. Add user donald to the master server's passwd file. Then ypcat
the passwd map and explain the result.
5. On the master, do whatever is necessary to rebuild the passwd map and propagate the
updates to the slave server. Use ypcat to ensure this worked properly.
6. What happens if an NIS slave is down when the master attempts to push an update? Try it
and find out.
− Stop CDE.
− Shutdown the LAN card on the slave.
− Add user pluto to the master's /etc/passwd file.
− ypmake the passwd map on the master. (Be patient.)
7. Bring the slave's LAN card back up again, then do whatever is necessary on the slave to
update the maps. Note: ypxfr does not recognize the NIS nicknames.
8. Is any harm done if you ypxfr a map that is already up-to-date? Try it. From the slave,
try another ypxfr on passwd. What happens? Why might this behavior be
advantageous?
2. Did your escape entry have the desired effect? Can your client su to user1's account? Can
your client su to user6's account? Why can user6 still log in?
3. Create a new /etc/nsswitch.conf file for yourself with the entries required to
recognize escape characters in /etc/passwd and /etc/group.
4. Try logging in with the user1 and user6 usernames again. What happens now?
2. Follow the steps suggested in the notes to restrict access to the master server so only
root can log in.
3. Try logging into your master server as user3. This should fail.
# /sbin/init.d/dtlogin.rc stop
2. What happens if the NIS master server is unreachable for a period? Take down the LAN
card on your master server.
3. Can clients still access the maps? From the client, ypcat passwd and explain the result.
(Be patient.)
4. Can changes be made to the maps while the server is down? Log in as user1 on the client
and try changing the password with passwd. What happens? (Be patient.)
6. Try a ypcat on passwd. What happens? (Be patient. Once you see a few error messages,
press return to get back to a prompt.)
2. See if your client can still access the NIS maps. Try a ypcat passwd and see what
happens (be patient).
When an NIS server goes down, the client's first access may eventually time out and
generate an error. However, ypbind immediately attempts to bind to another NIS server
on the subnet. Try another ypcat passwd and see what happens. Did the ypcat
succeed this time?
3. There are a number of RPC daemons that must be running on an NIS server in order for
clients to be able to access the NIS maps. How can the client see if these RPCs are
registered and available?
# /labs/netfiles.sh -r NEW
− /etc/hosts
− NIS
− DNS/BIND
• Configure a primary DNS server using the hosts_to_named command.
• Add or remove a host in the DNS database, using the hosts_to_named command.
− /etc/rc.config.d/namesvrs
− /etc/named.conf
− /etc/resolv.conf
DNS/BIND
Name Resolution Possibilities: /etc/hosts, NIS/NIS+, and DNS/BIND
Student Notes
Every packet that is sent across an IP network must contain a destination IP address.
However, users often prefer to identify destination hosts by hostname rather than IP address,
because IP addresses are difficult to remember. Most applications allow users to enter
destinations as hostnames, then automatically translate those hostnames to IP addresses
using the gethostbyname() resolver library function.
The resolver routines may use several different mechanisms to resolve hostnames and IP
addresses. Each method is described briefly below.
/etc/hosts
When the Internet was small, hostname resolution was handled exclusively via the
/etc/hosts file. Each entry in the /etc/hosts file has an IP address followed by the
hostname associated with that IP address. As networks grew larger and more geographically
dispersed, it became increasingly difficult to maintain consistent, updated hosts files across
all systems on the Internet. A more scalable solution was needed!
NIS
The Network Information Service simplifies host file maintenance by requiring all hosts on a
subnet to query a central NIS server for hostname lookups. Thus, using NIS, the
administrator needs only to manage one central hosts map instead of hundreds of
/etc/hosts files on individual hosts. Unfortunately, NIS does not scale well. The NIS hosts
map becomes increasingly unwieldy when it grows beyond a few hundred hostnames.
DNS/BIND
As the number of hosts on the Internet grew into the tens of thousands, a more flexible, more
scalable solution was required. The Domain Name Service (DNS) makes it possible to
manage millions of hostnames and IP addresses efficiently, and has become the primary
name resolution mechanism used on the Internet today.
There have been several implementations of DNS over the years. UNIX systems typically use
the Berkeley Internet Name Domain (BIND) implementation that was developed at UC
Berkeley. Microsoft systems use a different DNS implementation. Fortunately, both DNS
implementations use the same protocols for exchanging DNS information.
BIND has gone through many revisions over the years. Since many of these updates include
patches to security vulnerabilities, it is important to update BIND as new versions become
available. The BIND version number is included in the header information at the top of the
/usr/sbin/named executable. Use the what command to extract this version information:
# what /usr/sbin/named
The examples in this workbook were taken from a system running BIND 8.1.2.
DNS Overview
DNS Components: a hierarchical name space, name servers, and resolvers
Student Notes
There are several important components in the DNS/BIND architecture:
• DNS uses a "Hierarchical Name Space" to group related hosts together into DNS
"domains" in much the same way that UNIX uses a hierarchical file system structure to
group related files together into directories. Using a hierarchical name space makes it
possible to delegate responsibility for portions of the name space to other entities. For
instance, Hewlett Packard has been delegated responsibility for all hostnames ending in
hp.com .
• DNS name servers are specially configured hosts on the Internet that are able to resolve
hostnames to IP addresses for other client hosts. There are thousands of DNS name
servers on the Internet today, each of which is responsible for a small portion of the
overall DNS name space.
• Hosts on the Internet use DNS "Resolver Libraries" to send hostname and IP lookup
queries to DNS name servers. Any time a user uses telnet, ftp, or another network
service to access other hosts by hostname, the application uses the gethostbyname()
and gethostbyaddr() resolver library routines to send a query to a hostname
resolution service. The HP-UX resolver routines are able to do lookups using the
/etc/hosts file, NIS, or DNS. You can choose which lookup service or services you
want your resolver to use for hostname resolution.
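For example, a client that should consult DNS first and fall back to the local /etc/hosts file could use an entry like the following in /etc/nsswitch.conf (a sketch; the switch file is discussed later in this chapter):
hosts: dns files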
Domains
Beneath the root domain sit sun, hp, and ibm; beneath hp sit the il, ca, and ny subdomains, which in turn contain hosts such as rockford, la, and buffalo.
Student Notes
The traditional /etc/hosts file name resolution mechanism used a "flat" name space; all
hostnames were defined in a single monolithic /etc/hosts file that had to be updated
anytime a hostname anywhere on the Internet changed.
DNS was designed to be a distributed name resolution service. Responsibility for resolving
hostnames is delegated among thousands of DNS name servers on the Internet. Each of these
name servers is granted authority over a small portion of the hostnames in the overall name
space. This distributed approach greatly simplifies hostname allocation and management.
The DNS hierarchical name space makes it possible to distribute responsibility for the name
space among thousands of name servers by forming logical groupings of hosts called DNS
domains. By checking a host's domain name, it is possible to determine which name server is
responsible for resolving that host's hostname to an associated IP address. For instance, the
name servers for the hp.com domain are responsible for resolving all hostnames ending in
hp.com. The name servers for the ibm.com domain are responsible for resolving all host
names ending in ibm.com.
All hosts on the Internet ultimately belong to the root (.) level domain at the top of the
hierarchy. The root domain is subdivided into several hundred somewhat smaller domains.
Perhaps the best known of these "top-level" domains are com (for commercial entities), gov
(for U.S. government entities), edu (for educational institutions), and org (for
non-commercial organizations).
Each of these top-level domains is further subdivided into smaller domains. hp.com, for
instance, is a member of the com domain. Many of these domains are subdivided still further.
The example on the slide lists several theoretical regional subdomains under hp.com:
ca.hp.com (for California HP hosts), il.hp.com (for Illinois hosts), and ny.hp.com (for
New York HP hosts). Each organization may choose to subdivide their DNS domain
somewhat differently.
Hostnames in the overall DNS name space may be written in one of several different ways.
Oftentimes, we identify hosts via their relative, or unqualified, hostnames (for example,
sanfran, oakland, or la). In order to unambiguously identify a host on the Internet,
though, you should get in the habit of using absolute, or "Fully Qualified Domain Names"
(FQDNs) that specify a hostname and the DNS domain that the host belongs to (for example,
sanfran.ca.hp.com.). Officially, FQDNs always end with a dot representing the root
level domain.
The registered Internet name space (a root domain containing sun, hp, and ibm, with subdomains such as il, ca, and ny under hp) compared with a private name space (a private root domain whose hp branch contains the same il, ca, and ny subdomains).
Student Notes
There are two different types of DNS domains. The type of network to which your host is
connected will determine how you go about obtaining a domain name for your organization.
When you register your domain, you will be required to provide the IP addresses of one or
more DNS name servers that will be authoritative for your domain. When other hosts on the
Internet wish to contact hosts in your domain, their hostname resolution requests will be
forwarded to one of your authoritative name servers.
After your domain is registered, you can assign hostnames and create subdomains within
your domain as you wish. Since you have been delegated authority for your domain, changes
within your domain should be recorded on your authoritative name servers, but need not be
recorded with ICANN.
If your organization already has a registered DNS domain name, you should contact your IT
department to request a delegated subdomain or hostname.
In the private name space example on the slide, the private "." domain has only one
subdomain: com. The private com subdomain has only one subdomain: hp. The administrator
responsible for this network would have to configure a name server for both of these private,
upper-level domains, as well as the hp.com domain and its delegated subdomains. A single
name server could be configured to manage all three of these domains.
The in-addr.arpa branch of the name space parallels the forward name space: just as ca is a subdomain of hp.com, the 1.1.128.in-addr.arpa domain corresponds to the 128.1.1 subnet, with entries 1, 2, and 3 mapping back to sanfran (128.1.1.1), oakland (128.1.1.2), and la (128.1.1.3).
sanfran.ca.hp.com = 1.1.1.128.in-addr.arpa.
Student Notes
The primary purpose of the DNS name space is to map host names to IP addresses. However,
there are situations where applications may request a reverse lookup; given an IP address, a
name server may be asked to find the associated hostname. The in-addr.arpa portion of
the DNS name space makes this reverse resolution possible.
Each DNS name server is responsible for a small portion of the in-addr.arpa domain. If,
for instance, all hosts in the ca.hp.com domain had IP addresses on the 128.1.1 subnet,
then the ca.hp.com name server would also be responsible for the
1.1.128.in-addr.arpa portion of the in-addr.arpa domain.
Name servers for domains that span multiple subnets may be responsible for multiple
subdomains under in-addr.arpa.
Student Notes
Hosts on the Internet that have the ability to resolve DNS hostnames to IP addresses and
IP addresses to hostnames are called DNS "Name Servers.” DNS clients send their hostname
and IP lookup requests to DNS name servers. In some cases, the name server may already
know the hostname or IP address that a client has requested in its DNS Resolver Record
database. In other cases, however, a name server may need to query other name servers to
find the information it needs to answer a client's query.
The BIND implementation of DNS uses a daemon called named to provide name service for
DNS clients.
Beneath the root (.) domain sit the com., edu., and gov. top-level domains. The hp.com zone covers the hp.com domain minus its delegated subdomains.
Student Notes
Every DNS name server maintains a database of DNS "Resolver Records" that fully describes
a portion of the DNS name space. The portion of the name space for which a name server has
a full set of resolver records is known as the server's "Zone.”
In some cases, a name server's zone may include all of the hosts in a single domain. For
instance, if the hp.com domain had a single name server, then all hosts in the hp.com
domain would also be included in the hp.com zone of authority.
Oftentimes, though, a name server may delegate responsibility for a portion of its domain to
other name servers. In the example on the slide, the ca.hp.com is a delegated subdomain
with its own DNS name server. Since the hp.com name server has delegated responsibility
for California to another name server, the ca.hp.com subdomain is excluded from the
hp.com name server's zone of authority. il.hp.com, ga.hp.com, ny.hp.com, and
tx.hp.com are similarly excluded from the hp.com name server's zone of authority.
az.hp.com, wa.hp.com, and nc.hp.com are non-delegated subdomains that do not have
their own name servers. Instead, the hp.com name server includes these subdomains in its
zone of authority.
In summary, each name server is able to provide the following authoritative information:
• The name server's own hostname and IP address
• The hostnames and IP addresses of all hosts within the name server's zone of authority
• The IP addresses of the name server's delegated subdomain name servers
Example: a user on oakland.ca.hp.com types # telnet la.ca.hp.com. The resolver sends the query la.ca.hp.com? to the ca.hp.com name server, which holds records for sanfran (128.1.1.1), oakland (128.1.1.2), and la (128.1.1.3), and answers la = 128.1.1.3.
Student Notes
Each time you invoke an application and specify a target host by name, the application uses
the gethostbyname() resolver library routine to resolve that hostname to an IP address. The resolver must
perform several tasks for the application:
• First, the resolver must determine if the local node is using DNS, NIS, or /etc/hosts.
Our example here will assume that DNS is the client's preferred name resolution
mechanism. The /etc/nsswitch.conf file determines which lookup source the client
uses. It will be discussed later in this chapter.
• If DNS is the preferred hostname resolution mechanism, and the user provided an
unqualified hostname, the resolver builds a search list of possible fully qualified
hostnames that the user may be attempting to resolve. For instance, if the user types
"telnet la,” the resolver routine must guess which domain host la might be in. The
resolver builds a list of possible fully qualified hostnames using the domain search list
specified in the client's /etc/resolv.conf file.
If the client's search list included ca.hp.com, il.hp.com, and hp.com, the resulting
list of possible fully qualified hostnames might look something like this:
la.ca.hp.com
la.il.hp.com
la.hp.com
If the user provides a fully qualified host name (with a dot “.” at the end), the resolver
routine simply attempts to resolve that hostname without consulting the domain search
list. /etc/resolv.conf is described in detail later in this chapter.
• Finally, the resolver queries the local name server to determine if any of the hostnames
generated in the previous step can be successfully resolved into an IP address. The
/etc/resolv.conf file may specify up to three name servers. If the first name server
fails to respond within 75 seconds, the resolver tries the second name server, and
eventually the third. If DNS is unconfigured, or if the name servers fail to respond, the
resolver may automatically resort to using NIS or the local /etc/hosts file, depending
on the "switch" mechanism defined in /etc/nsswitch.conf. This switch mechanism is
described in detail later in this chapter.
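A minimal /etc/resolv.conf consistent with this example might look like the sketch below (the second name server address is assumed to be the ca.hp.com slave server; the file is covered in detail later in this chapter):
search ca.hp.com il.hp.com hp.com
nameserver 128.1.1.1
nameserver 128.1.1.2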
Example: oakland asks the ca.hp.com name server for atlanta.ga.hp.com. The ca.hp.com server queries the root (.) name server ("go to the com. NS!"), then the com. name server ("go to the hp.com. NS!"), then the hp.com. name server ("go to the ga.hp.com. NS!"), and finally the ga.hp.com. name server, which returns the authoritative answer atlanta = 128.1.3.1.
Student Notes
When accessing hostnames in other domains, the DNS client still sends the lookup request to
the local DNS name server. If a name server receives a query regarding a hostname that is not
included in the name server's own local zone data, the name server automatically performs a
recursive search for the hostname in other domains.
The sequence of events that occur when performing the recursive search are described
below:
1. The root server is queried. It provides the best answer it can: the address of the name
server closest to the destination.
2. The local DNS server then queries the name server suggested by the root-level server,
which responds with a referral to another server. After following several such referrals,
the local name server will eventually reach the name server whose zone of authority
includes the requested hostname. The answer provided by this server is said to be an
authoritative answer. The local DNS name server caches the addresses of all the name
servers, as well as the final answer.
3. If another client queries the local name server regarding the same hostname, the local
DNS server responds immediately with the cached data. Since this cached information
may be outdated, this is said to be a "non-authoritative" answer. Servers flush their
cached records on a regular (configurable) basis.
Notice that a DNS name server initially knows only the hostnames and IP addresses of the hosts
within its own zone of authority, and the IP addresses of the root level name servers. A name
server does not initially know the addresses of its sibling name servers in other portions of
the domain. However, as the name server's cache builds over time, the name server will be
able to answer more and more queries non-recursively using information stored in cache.
Example on Slide
In the example on the slide, client oakland requests atlanta.ga.hp.com's IP address
from the ca.hp.com name server.
Since the local DNS name server for the ca.hp.com domain does not know atlanta's IP
address, it queries the root level name server (.). The root name server suggests a query to
the com name server, which suggests a query to the hp.com name server, which suggests a
query to the ga.hp.com name server. Finally, ga.hp.com responds with an authoritative
answer, which the ca.hp.com name server relays back to oakland.
In the meantime, the ca.hp.com name server caches all of this information for future
queries.
Student Notes
Every DNS zone must have one "Master Server" (also known as the "Primary Name Server").
The master server is the authoritative source for information about hosts in the zone. Any
hostnames that are added to the domain must be added to the master server's zone database
files, and any hosts that are removed from the domain must be removed from the master's
zone database files. The master server can also delegate responsibility for subdomains to
other name servers.
2. Fully qualify host names in /etc/hosts. The hosts_to_named utility provided with
HP-UX can create the DNS data files on your master server using the information already
in your /etc/hosts file. In order for this to work though, all of the entries in your
hosts file need to be converted to fully qualified host names. The old host names can be
used as aliases. If you wish, you can delete lines in the /etc/hosts file that refer to
domains for which your name server is not responsible. (Note, however, that the
localhost entry must remain.) The example below shows the changes that would be
required on sanfran:
# vi /etc/hosts
127.0.0.1 localhost
128.1.1.1 sanfran.ca.hp.com sanfran
128.1.1.2 oakland.ca.hp.com oakland
128.1.1.3 la.ca.hp.com la
3. Create a directory for the DNS database files. The hosts_to_named program will create
several DNS data files. These data files are typically stored in a directory called
/etc/named.data. Create the /etc/named.data directory manually with mkdir.
# mkdir /etc/named.data
# chmod 755 /etc/named.data
# cd /etc/named.data
4. Create a param file for hosts_to_named.
The hosts_to_named utility is a powerful tool for building the DNS database files.
hosts_to_named looks for a param file to determine which domains your name server
will serve.
− Include a -d entry for each domain for which this name server will be responsible.
Since some name servers serve multiple domains, you may have multiple -d entries.
− Include a -n entry for each (sub)net included in this domain. Since many domains
include hosts on several subnets, you may have multiple -n entries.
− The -b option determines where your DNS boot configuration file will be stored.
/etc/named.conf is the standard location.
− The next slide will discuss "Secondary Servers,” which serve as backups for the
master server. The secondary (or slave) servers will need to download a configuration
file containing the IP address of the master server and other information about the
domain. The -z option in the param file creates this configuration file for the slave
servers.
− Other options may be specified in this file as well. See the hosts_to_named man
page for details.
The param file for the sanfran name server looks like this:
# vi param
-d ca.hp.com # Use your domain name(s) here
-n 128.1.1 # Use your subnet address(es) here
-z 128.1.1.1 # Use your master server's IP here
-b /etc/named.conf
5. Create the DNS data and boot files with hosts_to_named. The hosts_to_named
utility automatically creates all the DNS data files needed to resolve host names and IP
addresses in your domain using your /etc/hosts file, and the options defined in your
param file.
# hosts_to_named -f param
Translating /etc/hosts to lower case ...
Collecting network data ...
128.1
Creating list of multi-homed hosts ...
Creating "A" data (name to address mapping) for net 128.1.1 ...
Creating "PTR" data (address to name mapping) for net 128.1.1 ...
Creating "MX" (mail exchanger) data ...
Building default named.boot file ...
Building default db.cache file ...
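6. Download a current db.cache file. The db.cache file tells your name server how to reach the root-level name servers; on the Internet, an up-to-date copy is normally obtained from the InterNIC.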
7. For the exercises that we do in class, we will download this file from the instructor
station, rather than the InterNIC.
# ftp 128.1.0.1
> get /etc/named.data/db.cache
> quit
# /sbin/init.d/named start
10. Configure DNS client functionality on the master server. Most DNS servers are also DNS
clients. DNS client configuration is covered later in this chapter.
db.* files
Student Notes
Most domains have one or more slave servers (also called "secondary" name servers) in
addition to the domain master server. At boot time and at regular intervals thereafter, the
slave servers do a "zone transfer" to download copies of the zone database files from the
master server. Some slave servers store the zone data in data files on disk, while other simply
retain the downloaded data in memory.
Slave servers serve two purposes. First, slave servers provide a backup name server source if
the master server becomes unavailable. Second, slave servers reduce the load on the master
by handling some queries from clients' resolvers.
# mkdir /etc/named.data
# chmod 755 /etc/named.data
2. ftp copies of the db.cache and db.127.0.0 files from the master. The slave server will
copy over the remaining db.* files (if needed) when the named daemon is first initialized
and spawned.
# ftp 128.1.1.1
> get /etc/named.data/db.cache
> get /etc/named.data/db.127.0.0
There are two different types of slave servers. Some slave servers store copies of the
master's database files on disk. Other slave servers simply copy the master's database
information directly into cache at boot time. The first approach allows the slave server to
answer clients' queries even if the master server is unreachable when the slave server
boots. The second approach saves some disk space.
3. Create the /etc/named.conf file. The named daemon determines where its DNS
database files are stored by consulting the /etc/named.conf file at startup. Running
hosts_to_named on the master server automatically creates a boot file for the slave
servers; ftp that file from the master server, then move it to its proper location on
the slave.
If you do not want to maintain disk-based copies of the DNS database files on your slave
server, then download and install the /etc/named.data/conf.sec file instead.
4. Modify /etc/rc.config.d/namesvrs.
# vi /etc/rc.config.d/namesvrs
NAMED=1
NAMED_ARGS=""
# /sbin/init.d/named start
6. Configure DNS client functionality on the slave server. Most DNS servers are also DNS
clients. DNS client configuration is covered later in this chapter.
Student Notes
Master and slave servers both maintain authoritative database records for one or more
domains. A cache-only name server does not maintain authoritative information for any
domains (except 127.0.0.1). Any time a cache-only server receives a query regarding a new
hostname, it must do a recursive query to find the desired information. However, every
lookup on behalf of a client adds another entry to the server's cache. Over time, as the cache
grows, fewer and fewer client requests result in recursive queries.
1. On the cache-only server, create a separate directory for the database and configuration
files. Most slave servers store local copies of the domain's DNS database files in the
/etc/named.data directory.
# mkdir /etc/named.data
# chmod 755 /etc/named.data
2. ftp copies of the db.cache and db.127.0.0 files from the master. The cache-only
server only needs to be able to resolve the loopback address and find the root-level
name servers. Cache-only servers do not need copies of all of the other db.* files.
3. Create the /etc/named.conf file. The named daemon determines where its DNS
database files are stored by consulting the /etc/named.conf file at startup. Running
hosts_to_named on the master server automatically creates a boot file for the slave
servers. ftp the boot file from the master server, then move it to its proper location on
the slave. You can download an appropriate file from the master server.
4. Modify /etc/rc.config.d/namesvrs.
# vi /etc/rc.config.d/namesvrs
NAMED=1
NAMED_ARGS=""
# /sbin/init.d/named start
6. Configure DNS client functionality on the cache-only server. Most DNS servers are also
DNS clients. DNS client configuration is covered later in this chapter.
# nslookup
> server 128.1.1.1 # Choose a name server
> oakland.ca.hp.com # Resolve a hostname to an IP
> 128.1.1.2 # Resolve an IP to a hostname
> exit
Trying DNS
Name: oakland.ca.hp.com
Address: 128.1.1.2
Student Notes
You can ensure that your DNS name servers are functioning properly using the nslookup
command. If your host has already been configured with DNS client functionality, simply
type the host name you want to resolve on the nslookup command line:
# nslookup oakland.ca.hp.com
Alternatively, if you haven't yet configured DNS client functionality, or if you wish to override
the default name server listed in /etc/resolv.conf, you may wish to run nslookup
interactively:
# nslookup
> server 128.1.1.1 (try some lookups using the master server)
> corp.hp.com
> 128.1.0.1
> server 128.1.1.2 (now test the slave server, too)
> corp.hp.com
> 128.1.0.1
> exit
There are many other commands available within nslookup for troubleshooting your DNS
name servers. At the ">" prompt, you can enter a "?" for a list of available tools within
nslookup.
Question
You may notice that nslookup sometimes returns a "Non-authoritative answer.”
In fact, if you look up the same host name twice, you may notice that only the second
response from nslookup is marked as "Non-authoritative.” Can you guess why?
1. Create /etc/resolv.conf
search ca.hp.com hp.com
nameserver 128.1.1.1
nameserver 128.1.1.2
2. Modify /etc/nsswitch.conf
hosts: dns nis files
3. Modify /etc/hosts
127.0.0.1 localhost
128.1.1.3 la.ca.hp.com la
Student Notes
All hosts within a DNS domain, including the master and slave servers, should be configured
as DNS clients. Configuring a host as a DNS client ensures that the host's resolver routines
resolve host names and IPs using a designated DNS name server rather than the local hosts
file. The steps required to configure a host as a DNS client are described below.
1. Modify the resolver configuration file. The configuration file for the system host name
resolver routines is called /etc/resolv.conf. The resolv.conf file has two
important components:
a. Creating a resolv.conf "search" list
Including other domains in the search list saves your users the hassle of fully
qualifying host names for machines in the listed domains. For example, since the
resolv.conf file shown below includes ca.hp.com in the search list, users could
telnet to sanfran by simply typing telnet sanfran. Accessing atlanta,
however, would require a fully qualified host name, since ga.hp.com is not included
in the search list. Include your users' most frequently referenced domains in the
search list.
# vi /etc/resolv.conf
search ca.hp.com hp.com # replace ca.hp.com with your domain name
b. Creating a resolv.conf "nameserver" list
Your local resolver must be told which name server to use when resolving host names
and IP addresses. You may configure up to three name server IP addresses in the
/etc/resolv.conf file; if the first name server listed fails to respond, the resolver
will automatically try the second name server.
Since the resolver will always access the DNS servers in the order in which they are
listed in resolv.conf, you can provide some measure of load balancing by
alternating the order in which the servers are listed. On some hosts, list the master
server first; on others list the slave server first.
# vi /etc/resolv.conf
search ca.hp.com hp.com # replace ca.hp.com with your domain name
nameserver 128.1.1.1 # replace 128.1.1.1 with your master's IP
nameserver 128.1.1.2 # replace 128.1.1.2 with your slave's IP
2. Modify /etc/nsswitch.conf.
HP-UX can resolve host names using the local hosts file, NIS, or DNS. The
/etc/nsswitch.conf file determines which source the resolver uses for name
resolution. If you do not have an /etc/nsswitch.conf file, DNS is the default name
resolution source anyway, and you can skip this step. If you have a hosts entry in your
/etc/nsswitch.conf file, ensure that DNS is the first source listed. A later slide in the
chapter will discuss /etc/nsswitch.conf in more detail.
# cat /etc/nsswitch.conf
...
hosts: dns files
...
3. Modify /etc/hosts.
Since most host names will now be resolved using the DNS server, you may choose to
remove many of the entries in /etc/hosts. However, you should retain some critical
entries in case the name servers become unavailable. At a minimum, retain the localhost
entry, and your own host name.
On the master server, retain all the host entries for your name server's zone. They are
required by the hosts_to_named utility. Make sure that the host names that remain in
/etc/hosts are fully qualified. You may also wish to include the "non-qualified" host
names as aliases. On la.ca.hp.com, the modified hosts file might look like this:
# vi /etc/hosts
127.0.0.1 localhost
128.1.1.3 la.ca.hp.com la
Utilities that do reverse resolution to convert the IP addresses of incoming packets to host
names will now see fully qualified names, so their configuration files must be updated with
the DNS domain name appended to each host name. If the following files exist, fully qualify
each of the host names they contain:
~/.netrc
/etc/hosts.equiv
/var/adm/inetd.sec
~/.rhosts
# vi ~/.rhosts
oakland.ca.hp.com
sanfran.ca.hp.com
la.ca.hp.com
A: Check /etc/nsswitch.conf!
hosts: files
or hosts: dns nis files
or hosts: dns [NOTFOUND=continue] files
or hosts: dns [NOTFOUND=return] files
Student Notes
Applications, utilities, and daemons on an HP-UX box frequently need to resolve IP addresses
to host names, UIDs to user names, and GIDs to group names. In fact, these are just a few of
the many types of names and addresses that need to be resolved in a UNIX environment.
HP-UX can resolve many of these addresses using a variety of "databases.” Host names, for
instance, may be resolved to IP addresses via the local /etc/hosts file, DNS, or NIS.
Somehow, the administrator needs to be able to specify if and when each of these resources
should be referenced. This is the purpose of the /etc/nsswitch.conf file.
On real systems, though, things become more complicated. Many administrators prefer to
define a "fallback" mechanism. If the DNS server is down, for instance, you may want your
machine to try to resolve host names via the local hosts file. /etc/nsswitch.conf makes
this possible.
hosts: dns files

This line says that the host name resolver routines should resolve host names first via DNS. If
the DNS nameserver finds the host name requested, the resolver need look no further. If,
however, the DNS nameserver is unavailable or does not recognize the requested host name,
the resolver automatically falls back on the local /etc/hosts file for host name lookups.
If you are also a member of an NIS domain, you may wish to use the following line, which
causes the resolver to try all three lookup sources until it finds the host name or IP address it
is looking for:

hosts: dns nis files
Each lookup source returns one of four status codes: SUCCESS, NOTFOUND, UNAVAIL, or
TRYAGAIN. When the resolver receives one of these responses, you can configure it to react
in one of two ways: return (stop searching) or continue (try the next lookup source). If no
action is specified explicitly, the following defaults apply:
SUCCESS=return
NOTFOUND=continue
UNAVAIL=continue
TRYAGAIN=continue
This says that the resolver should try DNS first. If DNS recognizes the requested host name,
then use the IP address returned by DNS. If DNS is unconfigured, or if the DNS server doesn't
respond in a timely manner, or if the DNS server simply doesn't recognize the requested host
name, then the resolver should fall back on the local /etc/hosts file.
hosts: dns [NOTFOUND=return] files

With this entry in your /etc/nsswitch.conf file, the resolver will attempt host name
lookups first via DNS. NOTFOUND=return means that if the DNS name server responds to a
query, but doesn't have any record of the host name in question, the resolver will quit rather
than fall back on /etc/hosts. Since the nsswitch.conf file does not explicitly state what
should occur if the DNS lookup results in a SUCCESS, UNAVAIL, or TRYAGAIN, the resolver
uses the default actions for these results:
SUCCESS=return (default)
NOTFOUND=return (as defined in /etc/nsswitch.conf)
UNAVAIL=continue (default)
TRYAGAIN=continue (default)
In other words, DNS is referenced first. NIS will only be consulted if DNS is unconfigured or
unresponsive. The local hosts file, then, will only be consulted if NIS, too, is unconfigured.
The full list of default actions used by HP-UX 11.x when /etc/nsswitch.conf does not
exist is shown below:
passwd: files nis
group: files nis
hosts: dns [NOTFOUND=return TRYAGAIN=return] nis [NOTFOUND=return] files
networks: nis [NOTFOUND=return] files
protocols: nis [NOTFOUND=return] files
rpc: nis [NOTFOUND=return] files
publickey: nis [NOTFOUND=return] files
netgroup: nis [NOTFOUND=return] files
automount: files nis
aliases: files nis
services: nis [NOTFOUND=return] files
If /etc/nsswitch.conf doesn't exist on a 10.x system, the following policies are used:
SUCCESS=return
NOTFOUND=return
UNAVAIL=continue
TRYAGAIN=return
nsswitch.compat
nsswitch.files
nsswitch.hp_defaults
nsswitch.nis
nsswitch.nisplus
Student Notes
At HP-UX 11.x, you should use the nsquery command to test your resolver configuration:
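For example, to test how the switch resolves one of the host names used earlier in this
chapter (nsquery takes a lookup type, here hosts, followed by the name to resolve; the
host name shown is just an example):

# nsquery hosts oakland.ca.hp.com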
The nsquery command first checks your /etc/nsswitch.conf file to determine which
switch policy you have chosen to use. If you have chosen /etc/hosts, then nsquery
simply searches the /etc/hosts file for the host name or IP address you have specified.
If you have chosen to use DNS as a lookup source, nsquery checks /etc/resolv.conf to
find the address of your default name server, and forwards the resolution request
accordingly. If the first name server times out, nsquery will try the second name server
listed in /etc/resolv.conf. If none of the name servers in /etc/resolv.conf respond,
nsquery displays a message indicating that the DNS lookup failed, then follows the
“fallback” policy defined in your switch file to choose another lookup service.
nsquery reports the result of each lookup service consulted, so you can determine if your
switch policy behaves as expected.
CAUTION: At HP-UX 10.20, use the nslookup command to test your resolvers. At HP-UX
11.0, nslookup was unable to interpret the /etc/nsswitch.conf file
properly. The nsquery command is now the preferred command for testing
the fallback resolution method defined in the /etc/nsswitch.conf file.
Introducing /etc/named.data
Student Notes
DNS name servers store their zone configuration data in a series of files under the
/etc/named.data directory. This directory should contain one file for each of the domains
for which your name server is an authoritative source. The master name server for the
ca.hp.com domain would have the following files in /etc/named.data:
db.cache Contains the addresses of the root level name servers, which are used for
recursive queries. Some administrators mistakenly believe that this file
may be modified to force non-root-server addresses into cache. Not so.
This file should only contain root-level server addresses.
db.root (Not shown on slide) This file replaces the db.cache file on root level
name servers.
All of these are ASCII files that can be viewed directly and modified. For more information
about the file contents, attend HP's DNS course (Course #H3540) or buy a copy of Cricket
Liu's DNS and BIND, Third Edition, from O'Reilly and Associates (ISBN 1-56592-512-2).
Introducing /etc/named.conf
options {
    check-names response fail;
    check-names slave warn;
    directory "/etc/named.data";
};
Student Notes
When the named daemon is launched during system startup, it consults a file called
/etc/named.conf to determine which domains it is responsible for, and which db.* files
need to be loaded. The slide shows the /etc/named.conf file on sanfran, the master
name server for the ca.hp.com domain.
The options block at the top of the file defines some general parameters for the daemon. In
the example on the slide, the two check-names directives cause named to verify the format
of hostnames that this server obtains via recursive queries to other servers. If a recursive query
yields a hostname that contains an underscore or other non-standard characters, named will
refuse to send the results back to the client that requested the lookup. This directive is
designed to prevent syntax errors in other servers' database files from filtering back to your
resolver clients.
The directory directive tells named in which directory the db.* files are stored.
The remaining lines in the sample file tell named for which zones it is responsible. Each line
has several fields. The zone directive specifies a zone name. The type directive indicates
whether the server is a master or slave for the zone. The file directive specifies the name of
the database file containing the zone information. Slave servers have one more field with
each record: a master directive that specifies the IP address of the master server that the
slave should query for regular updates.
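As a rough sketch of what these zone statements might look like (assuming BIND 8 syntax;
the exact statements and file names generated by hosts_to_named may differ):

zone "ca.hp.com" {
        type master;                 # this server is the master for the zone
        file "db.ca.hp.com";         # zone data file under the directory named above
};

zone "ca.hp.com" {                   # on a slave server
        type slave;
        file "db.ca.hp.com";
        masters { 128.1.1.1; };      # IP address of the master server to query
};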
Many more options are available in the named.conf file. See the previously mentioned
O'Reilly DNS book, or read the named man page for more information.
Student Notes
When the system boots to run level 2 or higher, the /sbin/init.d/named script checks the
/etc/rc.config.d/namesvrs file and starts the named daemon if the NAMED control
variable is set to 1.
The named daemon reads /etc/named.conf to determine the zones for which it is
responsible, then reads the appropriate /etc/named.data/db.* files into memory.
Note that named reads the DNS database files only at startup. If you make any changes to the
db.* files, you must force named to re-read its database files as described on the next slide.
You can stop or start named by executing the startup script with the appropriate argument:
# /sbin/init.d/named stop
# /sbin/init.d/named start
Student Notes
Any time a hostname or IP address is added, removed, or changed in your DNS domain, the
name server data files must be updated accordingly. You could make these changes directly
with vi, but in smaller domains, it is often easier to update /etc/hosts, then rerun
hosts_to_named.
The example below adds a host named "sacramento" with IP address 128.1.1.4 to the
ca.hp.com domain.
1. Update /etc/hosts on the master server.
Add a new line to /etc/hosts for each new host name/IP pair. Be sure to use fully
qualified host names.
# vi /etc/hosts
127.0.0.1 localhost
128.1.1.1 sanfran.ca.hp.com. sanfran
128.1.1.2 oakland.ca.hp.com. oakland
128.1.1.3 la.ca.hp.com. la
128.1.1.4 sacramento.ca.hp.com. sacramento
Next, rerun hosts_to_named on the master server. This will rebuild the master server's
DNS data files to reflect the changes made in /etc/hosts.
# cd /etc/named.data
# hosts_to_named -f param
Finally, force named to reload its database files. By default, named only reads the db files at
startup. The sig_named command forces the named daemon on the master to reload any
updated database files.
# sig_named restart
Note that the slave servers will not be updated immediately. Turn to the next slide to learn
how the slave server data files are updated.
A: named consults a data file’s SOA record to determine if/when the file must be updated:
ca.hp.com. IN SOA sanfran.ca.hp.com root.sanfran.ca.hp.com (
1 ; Serial
10800 ; Refresh every 3 hours
3600 ; Retry every 1 hour
604800 ; Expire after 1 week
86400 ) ; Minimum TTL of 1 day
Student Notes
When hostname and IP address changes are required, the changes are made on the DNS
master server. Every slave server should be configured to periodically query the master
server to determine if an update is required.
Every DNS database file has a "Start of Authority" (SOA) record at the top of the file that
determines how frequently slave servers request updates from their master servers. Consider
the sample start of authority record on the slide.
The first line in the SOA identifies the domain name (ca.hp.com), the master server name
(sanfran.ca.hp.com), and the domain administrator's email address
(root.sanfran.ca.hp.com = root@sanfran.ca.hp.com).
The remaining fields determine how frequently the zone updates occur:
Serial Each zone has a serial number. Slave servers determine if their database files
are up-to-date by comparing their zone data file serial numbers against the
serial numbers on the master's data files. If the master's number is greater
than the slave's, the slave requests a zone transfer. The master server
administrator must remember to increment the serial number in the SOA any
time a db.* file is modified (hosts_to_named does this automatically).
Refresh This field determines how frequently slave servers should request updates
from the master. The interval is specified in seconds.
Retry If the master does not respond to a slave's update request, the Retry field
determines how long the slave should wait before trying again. This
parameter, too, is defined in seconds.
Expire If one week passes without a successful update from the master, the slave
shown on the slide expires the zone data and refuses to answer client queries
about the expired zone. This parameter, too, is defined in seconds.
TTL The "Time To Live" determines how long other name servers (not slave
servers) may retain this zone data in cache. This parameter, too, is defined in
seconds.
If you want to force an immediate zone transfer on your slave server, execute the
sig_named restart command. Note that there is no mechanism in DNS that allows the
master to "push" an immediate zone transfer to the slaves; slaves are expected to "pull"
updates at regular intervals.
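One way to verify that a slave has picked up the latest zone data is to compare the serial
numbers reported by the master and the slave. A quick interactive nslookup query can do this
(the IP address shown is the slave from the earlier examples):

# nslookup
> server 128.1.1.2          # query the slave server
> set type=SOA              # ask for the zone's SOA record
> ca.hp.com
> exit

If the serial number returned by the slave matches the one in the master's db file, the zone
transfer has completed.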
Introduction
In this exercise, you will configure a DNS master server, a slave server, and a DNS client. You
will also have a chance to update the DNS data on your name servers, and explore some of
the name server database files.
Your instructor will break the class into teams of 2 or 3 students each. Each team will be
assigned a DNS sub-domain under hp.com from the table below. You will then work with
your teammates to configure a master server, a slave server, and one or more DNS clients
within your assigned domain. The instructor's station will serve as a root level name server so
you can access other teams' domains as well.
The first two octets in the IP addresses will vary from classroom to classroom, but should be
consistent across all hosts within your classroom. Ask your instructor what the first two
octets should be set to.
Table 12-1.
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. Modifying IP connectivity on a running system can wreak havoc on CDE and other
applications. Kill CDE before going any further:
# /sbin/init.d/dtlogin.rc stop
3. If you haven’t already changed your IP address and hostname to match the hostname
your instructor assigned to you, do so now. Use the /labs/netsetup.sh script to
make the change.
4. Run hosts_to_named.
# hosts_to_named -f param
If hosts_to_named fails for any reason, check the syntax in /etc/hosts, remove
/etc/named.data/conf.*, /etc/named.data/boot.*,
/etc/named.data/db.*, and /etc/named.conf, and re-run hosts_to_named.
5. Copy the db.cache file from corp. Note that the FTP daemon on corp attempts to
resolve the source IP address of each incoming FTP request to a hostname. Since DNS
isn’t fully configured at this point, it may take a couple of minutes for the resolver to
time out. Be patient.
# /sbin/init.d/named start
# mkdir /etc/named.data
# chmod 755 /etc/named.data
3. FTP a copy of conf.sec.save from the master server, and move it into place on the
slave server as /etc/named.conf.
# vi /etc/rc.config.d/namesvrs
NAMED=1
NAMED_ARGS=""
# /sbin/init.d/named start
# vi /etc/resolv.conf
search state.hp.com hp.com # use your domain name here
nameserver w.x.y.z # use your master's IP here
nameserver w.x.y.z # use your slave's IP here
2. If your /etc/nsswitch.conf exists, delete it. You can experiment with the default
behavior for now. You will have a chance to re-create the file later.
# rm /etc/nsswitch.conf
3. If you are the master server, you should have modified your /etc/hosts file back in
Part 2, so you can skip this step. Slaves and clients, however, still need to modify
/etc/hosts at this point. Fully qualify and create aliases for your host in your local
domain, and remove all other entries (except localhost).
# vi /etc/hosts
127.0.0.1 localhost
128.1.1.3 city.state.hp.com city # Keep your host’s entry
Answer
2. Try the same tests that you did in the previous question, but use the slave name server
this time. Does your slave server seem to work?
Answer
3. Which name server does nslookup use by default if you simply type nslookup
corp.hp.com from the shell prompt? Try it. How can you permanently change the
default name server?
Answer
4. Try resolving a host name in your domain using the simple host name (e.g., sanfran,
rather than sanfran.ca.hp.com). Try resolving a host in another domain using the
simple host name. Your first experiment should succeed, while the second should fail.
Why?
Answer
Answer
2. Which two db.* files would you expect to be affected by the newly added host and IP?
Look at the SOA records for those two files. How can you tell that the files were updated?
Answer
3. Now that the db.* files have been updated, can you nslookup the new host using the
master server? Try it, and explain the results.
Answer
4. What do you need to do to ensure that your DNS clients can resolve the new host name?
Make it so.
Answer
5. By default, when will your slave name server recognize that a new host name and IP have
been added to the domain? How can you force the slave to do an immediate update? Do
it.
Answer
# nslookup
> server w.x.y.z # Use your slave server's IP here.
> city.state.hp.com # Use your new hostname here
> exit
Part 7: Cleanup
1. Restore your pre-DNS configuration on all hosts in your domain by running
/labs/netfiles.sh:
• Allow or prevent access to selected Internet services via the inetd.conf file.
Student Notes
The Internet Services are among the most frequently used network applications. The HP-UX
Internet Services product includes utilities for remotely logging into other hosts on the LAN,
transferring files across the LAN, delivering email, and many other basic services.
The Internet Services product includes two families of utilities: the ARPA services and
Berkeley services. The chart on the slide and the notes below give an overview of some of the features
these services provide.
ARPA Services
ARPA services are the de facto networking standards in the scientific and engineering
communities. For LANs and WANs, they define protocols for:
• terminal access (telnet)
• file transfer (ftp, the file transfer protocol, and tftp, the trivial file transfer protocol)
Berkeley Services
BSD UNIX 4.3 implements a de facto networking standard for the UNIX community.
• mapping host names to IP addresses (BIND DNS, the BIND Domain Name Service)
The Internet Services can be put in the context of the OSI model as shown.
Figure 1: The Internet Services mapped onto the OSI model (for example, SMTP at the
presentation layer, IP at the network layer, and the LAN link at the link layer).
NOTE: The Internet Services software product requires the LAN/9000 Link, FDDI
9000/Link, Token Ring/9000 Link or X.25 Link product.
The sendmail utility, dynamic routing with gated, BIND, and time synchronization with NTP
will not be discussed in this module.
roger gary
Student Notes
The Internet Services are built on a client-server model.
A client uses services that a server provides. The term client/server is most often applied to
systems rather than processes, but a server system can provide a service only when a server
process is running on it. Likewise, a client system can only use a service when its client
process is able to communicate with the appropriate server process on the server system.
A system can be a server and a client at the same time if both server and client processes are
running on it.
The slide shows a very simple example of a client/server relationship. A user executes the
rlogin command on node roger to get a virtual terminal on the remote node gary. The
rlogin program is the client process. The appropriate server process, rlogind, is then
invoked on node gary, and a network communication session is established between
rlogin and rlogind.
The following table shows other client/server relationships within the Internet Services:
Table 1
/sbin/init -> /sbin/rc -> /sbin/rc2.d/S* (linked to /sbin/init.d/*)

Execution Scripts    Configuration Files
gated                /etc/rc.config.d/netconf
inetd                /etc/rc.config.d/netdaemons
named                /etc/rc.config.d/namesvrs
rwhod                /etc/rc.config.d/netdaemons
xntpd                /etc/rc.config.d/netdaemons
sendmail             /etc/rc.config.d/mailservs
Student Notes
Many of the Internet Services have server daemons that are started at run-level 2 during the
boot process, and run continuously on the system.
Some of these services may be disabled. Be sure to check the control variables in the
/etc/rc.config.d files (especially netdaemons), to determine which services are
enabled and which are disabled on your system.
Server processes for the remaining Internet services that are not included in the list above
are all managed by the inetd “superdaemon” which is introduced on the next slide.
Slide: a user on client roger runs "$ telnet gary"; on server gary, inetd (configured via
/etc/inetd.conf, /etc/services, and /var/adm/inetd.sec) spawns telnetd to handle
the connection.
Student Notes
Although many of the internet services have daemons that run continuously on the system,
some internet service server processes are managed by the inetd "super-daemon.”
The inetd daemon starts at run-level 2 during the system boot process, and monitors the
server's ports for requests for a variety of internet services. When a client requests access to
one of the services provided by inetd, inetd starts whatever server process is necessary to
respond to the client's request. The server process handles all further communication with
the client so inetd can listen for additional service requests.
Starting server processes via inetd offers two major advantages. First, since server
processes are only started on an as-needed basis, the system load on the server is reduced.
Second, inetd makes it possible for the server to maintain connections to multiple clients
simultaneously. The inetd daemon simply starts an additional server process for each
additional client. Thus, if three clients telnet to your server, inetd will start three
telnetd server processes.
NOTE: The inetd daemon is only needed on the server side. You should be able to
telnet and ftp out to other hosts even if inetd is not running.
The inetd daemon starts at run-level 2 and runs continuously on the system until shutdown.
Unlike most other scripts executed during the boot process, /sbin/init.d/inetd does
not have a control variable. Thus, if you do not want to start inetd at boot, you must remove
the inetd start script from /sbin/rc2.d.
You can manually stop or start inetd by executing the inetd startup script:
# /sbin/init.d/inetd stop
# /sbin/init.d/inetd start
The inetd daemon references several configuration files that are described in the slides that
follow:
/etc/inetd.conf
/etc/services
/var/adm/inetd.sec
Configuring /etc/inetd.conf
inetd
Q: Should I provide FTP service?
Q: How do I start an ftp daemon?
# inetd -c
Student Notes
When inetd is invoked, it reads the /etc/inetd.conf configuration file and configures
itself to support whatever services are included in the file.
To disable an incoming service, you can use the comment sign # in /etc/inetd.conf.
NOTE: If you modify the /etc/inetd.conf file, you have to force inetd to reread
its configuration file. Use inetd -c.
service name The name of a valid service in the file /etc/services or, if the
server is RPC-based (e.g., NFS), a service name from the /etc/rpc file.
socket type Either stream or dgram, depending on whether the server socket is a
stream or a datagram socket. Sockets will be discussed later in this
module.
wait/nowait wait applies to datagram sockets only. All other sockets should
specify nowait. wait instructs inetd to execute only one
datagram server for the specified socket at any one time; nowait
instructs inetd to execute a new datagram server for the specified
socket whenever a datagram arrives.
user The name of the user as whom the server should run.
server program The absolute path name of the program which inetd executes when it
finds a request on the server's socket.
arguments The arguments to the server program starting with argv[0], which is
the name of the program.
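Putting the fields together, typical entries look something like the following (these use the
standard HP-UX daemon paths; the entries on your system may differ slightly):

ftp     stream tcp nowait root /usr/lbin/ftpd    ftpd -l
telnet  stream tcp nowait root /usr/lbin/telnetd telnetd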
Configuring /etc/services
inetd
Q: Which port should I monitor for FTP requests?
:
ftp 21/tcp # File Transfer Protocol (Control)
telnet 23/tcp # Virtual Terminal Protocol
login 513/tcp # remote login
shell 514/tcp # remote command, no passwd used
:
Student Notes
Recall that a packet's destination is determined by the packet's destination socket address.
The socket address is a concatenation of the destination host's IP address, and a port number
on the destination host. The socket address allows the system to deliver each packet to the
appropriate destination.
Each internet service has a "well-known" port number that is consistent across all hosts. The
/etc/services file associates these well-known port numbers with service names.
Lines in /etc/services may be commented out with a "#" sign to prevent access to a
particular service. However, the more conventional approach to disabling a service is to
comment the service's line out of /etc/inetd.conf.
Establishing a Connection
Let's take a closer look at what occurs when a client attempts to connect to a server. The
example considers the steps required to initiate a telnet connection between two hosts.
First, the inetd daemon is started automatically during system startup. After reading
/etc/inetd.conf and /etc/services, inetd determines that it should listen for
telnet requests on well-known port number 23. If other services are configured in
inetd.conf, inetd listens for connection requests on those services' well-known ports,
too.
Figure 2: inetd on the server listens for connection requests on well-known port 23.
When a user on the client issues the telnet command, the telnet client process opens any
available port on the client and sends a connection request to the well-known telnet port
number 23 on the server. There is no need for the client telnet process to use a well-known
port number, since nobody is trying to find the client process. Server processes, however,
must use well-known port numbers so clients know which port to address their connection
requests to.
Figure 3: The telnet client process opens an available port and sends a connection request to
well-known port 23 on the server.
The server's inetd daemon receives the request for service on port 23. Since port 23 is the
well-known port for telnet, inetd spawns a telnetd server process and establishes a
socket connection upon which the telnetd and telnet processes communicate directly
without intervention from inetd. inetd continues listening for new requests.
Figure 4: inetd continues listening on port 23 while the spawned telnetd handles the
established connection.
If additional clients request telnet service, the server's inetd daemon simply starts
additional telnetd processes on port 23 as necessary.
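If you want to observe this on the server, a quick check with netstat (filtering on the telnet
service name) should show one socket in the LISTEN state plus one ESTABLISHED socket per
connected client:

# netstat -a | grep telnet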
Configuring /var/adm/inetd.sec
inetd
Q: Which clients are allowed FTP access?
:
ftp deny 128.1.1.1
telnet deny 128.1.*.*
shell allow 192.1.1.* 192.1.3.*
login allow 192.1.1-3.* host1 host2
:
Student Notes
If you want to allow or deny selected clients access to one or more Internet services, configure
/var/adm/inetd.sec.
Each line in the file defines which clients may access a particular service managed by inetd.
The slide examples are explained below:
• The inetd daemon denies ftp service to the host at 128.1.1.1. All other hosts, however,
can ftp to the server.
• Only clients on the 192.1.1 or 192.1.3 networks can remsh to the server.
• Any host on the 192.1.1, 192.1.2, or 192.1.3 networks can rlogin to the server. The host
names host1 and host2 will also have rlogin access.
If inetd.sec does not exist, all configured services will be available to all clients. If the file
exists but does not have an entry for one or more inetd services, the unlisted services will
be available to all clients.
allow|deny Determines if the list of remote hosts in the next field is allowed or
denied access to a service. The default is to allow access.
host_specifiers The IP address, network names, or host name that should be allowed
or denied access. A wild card character (*) and a range character (-)
are allowed. These characters can be present in any fields of the
address.
NOTE: You have to use the official service name as specified in the /etc/services
file. The service for rlogin is called login. The shell service is needed for
rcp and remsh.
syslogd receives the logging output.

Edit /etc/rc.config.d/netdaemons:
export INETD_ARGS=-l    # Enable inetd logging at every boot by
                        # setting the INETD_ARGS variable here!
Student Notes
The inetd -l command toggles inetd logging. If connection logging is enabled, the
logging information is reported to the system logger (/usr/sbin/syslogd) and its log file
/var/adm/syslog/syslog.log. If you activate logging, inetd will log attempted
connections to the services. It will also log those connection attempts that fail the security
check. This can be useful when trying to determine if someone is trying to break into your
system. An example of the contents of the syslog file is shown below:
To enable inetd logging at system start up, configure the appropriate variable in the
/etc/rc.config.d/netdaemons file and restart the daemon:
# vi /etc/rc.config.d/netdaemons
export INETD_ARGS=-l
# /sbin/init.d/inetd stop
# /sbin/init.d/inetd start
Note that inetd logging records the host names that have requested internet services, but does
not record the usernames that requested those services. The /var/adm/wtmp and
/var/adm/btmp files log successful and unsuccessful login attempts, respectively. Use the
last and lastb commands to view these files:

# last
# lastb
Student Notes
System and user equivalency allows selected users to bypass password security when using
rlogin, remsh, and rcp to access hosts across the network.
System equivalency is configured via the /etc/hosts.equiv file, and user equivalency is
configured via ~/.rhosts. Both of these files will be discussed in detail in the slides that
follow.
Although these files allow your users conveniently and transparently to access their accounts
on multiple systems, they create a significant security risk. Be sure the permissions on both
files are set appropriately:
r--r--r-- /etc/hosts.equiv
rw------- ~/.rhosts
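For example, to set the permissions shown above (444 on hosts.equiv, 600 on a user's
.rhosts):

# chmod 444 /etc/hosts.equiv
# chmod 600 ~/.rhosts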
Configuring /etc/hosts.equiv
/etc/hosts.equiv on host2:    host1 -sue
/etc/hosts.equiv on host3:    host1 tom

From host1, logged in as leo:
1 $ rlogin host2
2 $ rlogin host2 -l tom
3 $ remsh host3 ll
4 $ remsh host3 -l tom ll
From host1, logged in as sue:
5 $ rcp host2:.profile .

Which command succeeds?
Student Notes
The /etc/hosts.equiv file associates remote hosts with a user's host. This association
identifies equivalent hosts that are frequently accessed by the same users. If a remote host is
listed in hosts.equiv, and the remote user's login name matches a login name on the local
host, the user is not prompted for a password. This equivalency does not apply to
superusers. If you are logged in as root and you attempt to access another system,
/etc/hosts.equiv is bypassed.
Typically, the system administrator creates the /etc/hosts.equiv file if she or he wishes
to use this feature.
/etc/hosts.equiv works only with the Berkeley Services remsh, rcp, and rlogin.
NOTE: When you list a system in hosts.equiv, all users on that system with the
same user name as on your system have access to your system, except the
root user. Root user equivalency can be set up through .rhosts.
Entries in /etc/hosts.equiv
A host name or user name can match the corresponding field in an entry in hosts.equiv in
many ways. Several of these are
Literal match A host in hosts.equiv may literally match the host name (not an alias) of
the remote host. A user name in hosts.equiv may literally match the
remote user name. If there is no user name in the hosts.equiv entry, the
remote user name must literally match the local user name.
-name If the host name in hosts.equiv is of this form, and if name literally
matches the remote host name or name with the local domain name
appended matches the remote host name, then access is denied regardless of
the user name. If the user name in hosts.equiv is of this form, and name
literally matches the remote user name, access is denied. Even if access is
denied in this way by hosts.equiv, access can still be allowed by
.rhosts.
+ Any remote host name matches the + host name in hosts.equiv. Any remote user
matches the + user name.
Examples
1. $ rlogin host2
leo has to enter the password because equivalency between different users is not
possible with /etc/hosts.equiv.
3. remsh host3 ll
leo wants to access system host3 as user leo. This will fail because there is only
equivalency configured for user tom from host1.
4. remsh host3 -l tom ll
leo wants to access system host3 as user tom. This will fail because the remote user making
the request is leo, and equivalency is configured only for user tom from host1.
5. rcp host2:.profile .
sue from host1 wants to access sue on system host2. rcp fails because sue is the
only user from system host1 who is excluded.
Configuring ~/.rhosts
On host2:
~root/.rhosts:    host1
~sue/.rhosts:     host1 sue
                  host1 joe
~leo/.rhosts:     host1 -sue
                  host1 +

From host1, logged in as leo:
1 $ rlogin host2 -l root
2 $ remsh host2 ll
3 $ remsh host2 -l sue ll
From host1, logged in as sue:
4 $ rlogin host2
5 $ rcp leo@host2:.profile .
Student Notes
$HOME/.rhosts can be created and configured by any user to specify remote login names
that are equivalent to the local user's login name. $HOME/.rhosts must be owned by the
local user.
The local host allows a remote user with a login listed in the local $HOME/.rhosts file to
log into the local user's account without specifying a password. The remote user can also
copy files or execute commands on the local user's system.
The .rhosts file works only with the Berkeley Services remsh, rcp, and rlogin.
The characters + and - can also be used. Look at the examples shown on the slide.
NOTE: .rhosts can be used to allow service to a particular user whose system has
not been granted access in /etc/hosts.equiv. You must create .rhosts
for the home directory of the superuser account if you wish to use equivalent
login names for root.
Examples
1. rlogin host2 -l root
A password is required. Root's /.rhosts is only configured for the user root from
system host1.
2. remsh host2 ll
leo wants to access user leo on system host2. This is successful because
/home/leo/.rhosts on system host2 has an entry for all users from system host1
except user sue.
3. remsh host2 -l sue ll
This fails because there is no entry for user leo from system host1 in sue's file.
4. rlogin host2
Now sue wants to log in to her account on system host2. There is no password required
because of the entry host1 sue is in /home/sue/.rhosts.
5. rcp leo@host2:.profile .
This fails. No user equivalency is configured for sue in the /home/leo/.rhosts file.
She is the only user from system host1 who is excluded.
# vi /etc/inetd.conf
login stream tcp nowait root /usr/lbin/rlogind rlogind -l
shell stream tcp nowait root /usr/lbin/remshd remshd -l
# inetd -c
Note that this does not affect root's .rhosts file. /etc/hosts.equiv will still be
consulted.
Student Notes
There are three different security issues related to the configuration of FTP.
Note that .netrc poses a possible security risk since passwords are stored in cleartext.
Make sure the .netrc permissions are set to:
rw-------
The login will fail if the permissions on the file are not set properly.
The ftpd daemon does not check the startup program field in /etc/passwd, so
accounts that have a restricted shell as the startup program should be listed in
/etc/ftpd/ftpusers. Other users who should not have ftp access may be included in
the file as well.
Figure 5: The anonymous ftp directory tree: ~ftp/usr/bin (containing ls) and ~ftp/etc
(containing passwd, group, and logingroup).
Anonymous ftp is a secure public user account. If this has been set up, users can access
the anonymous ftp account with the user name anonymous or ftp and any non-null
password (by convention, the client's email address). ftpd does a chroot() to the home
directory of user ftp, thus limiting anonymous ftp users' access to the system. The
anonymous ftp account must be present in the password file (user ftp). The password field
should be an asterisk (*), the group membership should be guest, and the login shell should
be /usr/bin/false. For example (assuming the guest group ID is 10):
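A sketch of what such an entry might look like (the UID of 500 is just a placeholder; use any
unused UID on your system):

ftp:*:500:10:Anonymous FTP:/home/ftp:/usr/bin/false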
Since ftpd does a chroot() to /home/ftp, it must have the following subdirectories and
files:
~ftp/usr/bin This directory must be owned by root and must have the permissions 555
(not writable). It should contain a copy of /usr/bin/ls. This is needed
to support directory listing by ftpd. The command should have the
permissions 111 (executable only). If the ftp account is on the same file
system as /usr/bin, ~ftp/usr/bin/ls can be a hard link, but it
cannot be a symbolic link because of the chroot().
~ftp/etc This directory must be owned by root and must have the permissions 555
(not writable). It should contain versions of the files /etc/passwd,
/etc/group, and /etc/logingroup. These files must be owned by
root and must have the permissions 444 (readable only). These files are
needed to map user and group ids to names when using the built-in ls
command of ftp, and to support (optional) sublogins of anonymous
ftp.
~ftp/pub This directory (optional) is used by anonymous ftp users to deposit files
on the system. It should be owned by user ftp and should have the
permissions 1777 (readable and writable by all). If this directory is
created, disk quotas should be used to prevent anonymous users from
filling the file system.
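A minimal sketch of creating this directory with the ownership and permissions described
above:

# mkdir /home/ftp/pub
# chown ftp /home/ftp/pub      # owned by the ftp user
# chmod 1777 /home/ftp/pub     # world-writable, with the sticky bit set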
NOTE: The directory ~ftp/pub for depositing files must have the permissions 1777. To
prevent anonymous ftp users from filling the file system you should use disk
quotas.
If you only want to make files available, you do not need the directory ~ftp/pub.
When adding or removing users with SAM, the files in /home/ftp/etc are not
customized.
sam
-> Networking and Communications
-> Network Services
-> Anonymous ftp
/etc/inetd.conf
/var/adm/inetd.sec
/etc/ftpd/ftpusers /etc/hosts.equiv
~/.netrc ~/.rhosts
Student Notes
This slide reviews the important executables and configuration files that control access to
the Internet services. An explanation of the ARPA/Berkeley service configuration files
follows below:
/etc/ftpusers Defines which usernames are not valid for ftp logins
(optional)
Note that the "server" and "client" roles assigned in this lab are relatively arbitrary. Most
HP-UX machines are configured to provide both client and server functionality.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. (server)
The server's inetd daemon must be running in order for clients to have access to any of
the internet services. Use ps -e to ensure that the inetd daemon is running on
your server.
4. (server)
Look at /etc/inetd.conf and /etc/services to determine which internet services
are configured on your server, then complete the table below:
Service Enabled? Port#
------- -------- -----
telnet
ftp
login
tftp
bootps
5. Do you currently have server processes running for these services? Explain.
6. (server)
Ensure that the services in inetd.conf that appear to be enabled actually are enabled.
Use netstat -a to check the status of each of the enabled services and ports you listed
in the table above.
Configure your /var/adm/inetd.sec file such that only the hosts in your row
(including your partner) have telnet access. Add another line to ensure that all your
classmates except your partner can ftp to your machine.
2. (client)
See if your server's configurations so far have succeeded. What messages do you see
when you attempt to telnet or ftp to the server?
3. (server)
What do you have to do to enable inetd logging? Make it so.
4. (client)
See if the logging feature works. From the client, telnet to the server, do an ls, then
immediately exit. Then attempt to ftp to the server (this should fail). Move on to the
next question to see what was recorded in the inetd log.
5. (server)
How much detail is recorded in the inetd log? On the server, do a more on the file
where ARPA/Berkeley service requests are logged.
• Does inetd log the name of the service requested?
• Does inetd log the host name of the requesting client?
• Does inetd log the username of the user making telnet requests?
• Does inetd log the commands executed during the telnet session?
• Does inetd log deny requests for Internet service?
Which telnet related processes are running on the client now? Which telnet related
processes are running on the server now?
5. (client)
Close your telnet connections to the server.
# inetd
4. (client)
Must the client be running inetd in order to establish connections to a server? Try it,
and explain the result.
2. (client)
While logged in as root, use rlogin to log into the server. What happens? Why?
Exit out of your rlogin session before proceeding to the next question.
3. (client)
Use the su command to switch your user ID to user1. Then try rlogin again. What
happens? Why?
4. (server)
What can you do on the server to give root on the client password-free access to your
machine? Make it so.
5. (client)
Terminate the rlogin and su sessions you started previously. Ensure that you are back
to the "root" userid. Then see if you can rlogin to the server without a password.
6. (server)
Remove /etc/hosts.equiv and ~root/.rhosts.
The list below suggests several different ways to corrupt the internet service configuration on
your "server" machine. Take turns being the "corrupter" and the "troubleshooter.”
The "corrupter" should perform any one of the corruption techniques from the list below on
the "server" machine. It is the duty of the "troubleshooter,” then to do whatever is necessary
on the server to enable the client to successfully telnet to the server.
Try the exercise several times, alternating roles as "corrupter" and "troubleshooter.”
/sbin/init.d/dtlogin.rc stop
4. Take down the server's LAN card with ifconfig lan0 down.
Part 7: Cleanup
Before moving on to the next chapter, restore your network configuration to the state it was
in before this lab.
# /labs/netfiles.sh –r NEW
Directions
Answer the following questions.
6. What is a port? What file associates port numbers with a service name?
9. Are the /etc/hosts.equiv and $HOME/.rhosts files optional for using the Berkeley
Services? Explain your answer.
10. What is the name and what are the features of the security file that ftpd uses?
14. If inetd logging is enabled, which file contains the logging output?
Slide: The BOOTP/TFTP client obtains its network parameters from the server, then issues a
TFTP GET request for hpnpl/myprinter.cfg; the server returns the file in the TFTP response.
Student Notes
The Bootstrap Protocol (BOOTP) allows certain network client devices such as network
printers to obtain their TCP/IP configuration and boot information from another system on
the network.
After obtaining network parameters from a BOOTP server, some BOOTP clients download
additional configuration information from the BOOTP server via the Trivial File Transfer
Protocol (TFTP). TFTP supports get, put, and several other ftp-like commands.
In HP-UX, TFTP and BOOTP services are provided via the inetd daemon, and utilize the
UDP transport protocol. When inetd receives a BOOTP broadcast on port 67, it spawns a
/usr/lbin/bootpd server process to respond to the client. When inetd receives a TFTP
request on port 69, it spawns a /usr/lbin/tftpd server process to handle the request.
If you manage multiple network printers, BOOTP provides a convenient central point of
administration to manage the printers’ TCP/IP configuration information.
Student Notes
Several configuration files must be modified to support BOOTP/TFTP.
1. Enable the BOOTP and TFTP services with the setup_bootp and setup_tftp commands.
# setup_bootp
# setup_tftp -h [dirname]
2. Verify that the services are defined in /etc/services. BOOTP and TFTP should both
appear in the /etc/services file. The BOOTP server uses UDP port#67, and the TFTP
server uses UDP port#69. These entries are added to /etc/services as part of the
InternetSrvcs product that is loaded as a standard part of the OS, so no changes should be
required.
# cat /etc/services
bootps 67/udp
tftp 69/udp
3. Verify that the services are defined and enabled in /etc/inetd.conf. The bootps and
tftp lines in /etc/inetd.conf must be commented in. The setup_bootp and
setup_tftp programs mentioned above should do this automatically. If you specified
any directories that should be made available via TFTP in addition to /home/tftpdir,
those directories should be listed as arguments on the end of the tftp line.
# cat /etc/inetd.conf
bootps dgram udp wait root /usr/lbin/bootpd bootpd
tftp dgram udp wait root /usr/lbin/tftpd tftpd [dirname]
4. Verify that the TFTP account is defined in /etc/passwd. TFTP uses this /etc/passwd
file entry to determine which directory should be made available to TFTP users. The
setup_tftp command above should take care of this automatically. The account
should be disabled to ensure that TFTP users can’t login via telnet or any other
interactive shell login.
# cat /etc/passwd
tftp:*:510:1:Trivial FTP User:/home/tftpdir:/usr/bin/false
5. Verify that /home/tftpdir/ exists. TFTP users will be chroot’ed to this directory at
login, and all files under this directory will be accessible to TFTP users. setup_tftp
should create this directory for you.
# ll -d /home/tftpdir/
dr-xr-xr-x 2 tftp other 96 Aug 27 17:17 /home/tftpdir/
Configuring /etc/bootptab
myprinter:\
hn:\
ht=ether:\
ha=080009a752c3:\
ip=128.1.1.4:\
sm=255.255.0.0:\
gw=128.1.0.1:\
dn=ca.hp.com:\
ds=128.1.1.1:\
T144=“myprinter.cfg”:\
vm=rfc1048
Student Notes
The /etc/bootptab file tells the BOOTP daemon which network parameters are required
for each BOOTP client. When the /usr/lbin/bootpd daemon receives a BOOTP
broadcast, it compares the client's MAC address to the ha (hardware address) field in each
/etc/bootptab entry. When it finds a matching record, it returns the IP address, subnet
mask, and other information back to the client.
# cat /etc/bootptab
myprinter:\
   hn:\
   ht=ether:\
   ha=080009a752c3:\
   ip=128.1.1.4:\
   sm=255.255.0.0:\
   gw=128.1.0.1:\
   dn=ca.hp.com:\
   ds=128.1.1.1:\
   T144="hpnpl/myprinter.cfg":\
   vm=rfc1048
The table below describes the fields in the example above. Read the extensive comments at
the top of the /etc/bootptab file to learn about other supported fields.
Field Purpose
myprinter Indicates the device’s hostname
hn Indicates that the hostname should be included in the BOOTP response
ht Indicates the device’s interface card type
ha Indicates the device’s hardware (MAC) address
ip Indicates the IP address to include in the BOOTP response
sm Indicates the subnet mask to include in the BOOTP response
gw Indicates the default gateway to include in the BOOTP response
dn Indicates the DNS domain name to include in the BOOTP response
ds Indicates the DNS nameserver address to include in the BOOTP response
T144 Indicates the configuration file that the client should download via TFTP
vm Indicates the “vendor magic cookie” format (see http://www.faqs.org/rfcs/rfc1048.html)
You can edit the /etc/bootptab file using any text editor, but many administrators prefer
to manage the file via automated utilities such as HP’s hppi utility, which is described on the
next page. Changes in the /etc/bootptab file take effect immediately.
Student Notes
When adding BOOTP entries for network printers, it’s easiest to edit the /etc/bootptab
file via the hppi (HP Printer Installer) utility from HP.
Before you begin, you will need to know the new printer's:
• MAC address
• IP address
• Hostname
• Subnet mask
• Default gateway address (optional)
• DNS domain name (optional)
• DNS name server address (optional)
The IP address, hostname, netmask, gateway, and DNS address all may be obtained from
your network administrator. Print a test page on the printer to determine the printer's MAC
address.
With this information in hand, you can begin configuring your printer!
1. Enable the BOOTP and TFTP services.
# setup_bootp
# setup_tftp -h
2. Install the HPNPL product (J4189-1101B). HP recommends using a menu-based utility
called hppi to configure BOOTP/TFTP service for network printers. hppi is part of the
HPNPL (HP Network Printer Library) product, which is available from the
http://www.hp.com website. Follow the instructions on the website to download and
install the HPNPL software.
# swlist HPNPL
# vi /etc/hosts
4. Run the HP Printer Installer. The next slide explains the hppi menus in detail.
# hppi
-> JetDirect Configuration
-> Create printer configuration in BOOTP/TFTP database
Student Notes
hppi is an intuitive, menu-driven utility that allows you to manage BOOTP/TFTP entries for
network printers and add HP Jetdirect-based network printers to your LP spooler
configuration.
The screen captures below demonstrate the complete process required to add a
BOOTP/TFTP entry via hppi.
# hppi
****************************************************************
*****] ****
**** ] **** JetDirect Printer Installer for UNIX
**** ]]]]] ]]]]] **** Version E.10.18
**** ] ] ] ] ****
**** ] ] ]]]]] **** M A I N M E N U
***** ] ****
****** ] **** User: (root) OS: (HP-UX B.11.11)
I N V E N T
****************************************************************
3) Diagnostics:
- diagnose printing problems
?) Help q) Quit
****************************************************************
*****] ****
**** ] **** JetDirect Printer Installer for UNIX
**** ]]]]] ]]]]] **** Version E.10.18
**** ] ] ] ] ****
**** ] ] ]]]]] **** M A I N M E N U
***** ] ****
****** ] **** User: (root) OS: (HP-UX B.11.11)
I N V E N T
****************************************************************
- OR -
Following are optional parameters you may set for JetDirect. Select
any non-zero numbers to make the changes. The settings are used to
create a BOOTP/TFTP database when '0' is selected. To abort the
operation, press 'q'
(configuring) ...
Completed creating BOOTP/TFTP configuration database for r816p1.
Please wait...
(testing, please wait) ...
Testing BOOTP with 080009000000...:
RESULT: Passed BOOTP test 1 with 080009000000.
If you are not ready for the next test (for example, the IP name
has not taken effect in your DNS server), press 'q' to return to
the configuration menu now.
During the testing phase at the end of the process you may see an error message regarding a
port conflict with rbootd. rbootd is an old network service that supported diskless
devices prior to 10.x. The /etc/bootptab entry should be added despite the rbootd
warning; hppi simply warns you that it wasn’t able to verify the configuration as a result of a
port conflict with rbootd. If you disable rbootd in /etc/rc.config.d/netdaemons,
you won’t see the error message.
# tail /etc/bootptab
myprinter:\
:ht=ether:\
:ha=080009000001:\
:sm=255.255.0.0:\
:gw=128.1.0.1:\
:hn:\
:ip=128.1.0.2:\
:T144="hpnpl/myprinter.cfg":\
:vm=rfc1048:
If you specified any of the optional parameters, you should also find a configuration file in
the TFTP home directory containing those parameters.
# cat /home/tftpdir/hpnpl/myprinter.cfg
idle-timeout: 120
location: print room
contact: darren
Answer:
Answer:
3. Verify that the bootps and tftp services are both enabled in /etc/inetd.conf and
the /etc/services file.
Answer:
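One way to check, assuming the standard service names bootps and tftp; each grep should
return uncommented entries from both files:
# grep bootps /etc/inetd.conf /etc/services
# grep tftp /etc/inetd.conf /etc/services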
4. Verify that the TFTP account exists in /etc/passwd and that a TFTP home directory was
created.
Answer:
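For example, assuming setup_tftp created the /home/tftpdir home directory used earlier in
this module:
# grep tftp /etc/passwd
# ls -ld /home/tftpdir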
Answer:
3. Using hppi, create a bootptab entry for a network printer. Use the hardware address,
IP address, host name, subnet mask, and default router address provided by your
instructor. Use your classroom's room name or number as the printer location, and your
own name as the printer contact.
Answer:
4. Check the /etc/bootptab file for changes made by hppi. Name three pieces of
information defined in the printer's new entry in bootptab.
Answer
5. At this point your machine is ready to service bootp requests from the network printer
you configured.
6. Now remove the new printer bootp configuration from your machine using hppi.
# /opt/hpnpl/bin/hppi
-> (2) JetDirect Configuration
-> (2) Remove printer configuration from BOOTP/TFTP database
− NTP server
− NTP peer
− NTP broadcast client
− NTP polling client
• Configure an NTP server.
Student Notes
Many computer applications rely on the system clock to accurately determine the current
system time.
• System backup utilities use the system clock and file time stamps to determine which
files should be included in incremental backups.
• More and more security sensitive organizations are using Kerberos or other
authentication/encryption mechanisms to protect their data. These security tools often
use authentication keys that expire after a period of time. In order for this mechanism to
function properly, the system clock must be accurate!
• Programmers oftentimes use the make utility to compile and link programs. make
depends on the system clock and file time stamps to determine when source code files
have changed.
In large, networked environments where hosts share files and other resources, it is critical
that hosts maintain accurate, or at least consistent, time to avoid causing problems for the
time-sensitive applications listed above. Humans rarely notice a discrepancy of one or two
seconds between hosts, but time-sensitive applications might!
Unfortunately, the built-in clocks in today's computers are not perfect. Even the best system
clocks may gain or lose a second or two per day. In order to ensure consistent time stamps
across their LANs, many administrators choose to synchronize their hosts' system clocks
using the Network Time Protocol, or NTP.
NTP was developed at the University of Delaware, and is bundled with HP-UX. The HP-UX
xntpd daemon is used to implement the NTP service in HP-UX.
Student Notes
NTP can be used to synchronize system clocks using a variety of time sources:
• A radio clock can be attached to the serial port of an HP-UX system. A radio clock
determines the current time using signals from GPS (Global Positioning System) satellites
or other radio time sources. Radio clocks are among the most accurate time sources, but
cost several thousand dollars. A list of radio clock suppliers is available at
http://www.ece.udel.edu/~ntp. Before purchasing a clock, verify that the model
you choose is supported by HP.
• If you cannot afford a radio clock, a public NTP timeserver on the network can be used to
synchronize a system's clock. A list of public NTP timeservers on the public internet is
available from http://www.ece.udel.edu/~ntp.
• If you do not have a radio clock or an Internet connection, select one host on your local
network as your "authoritative" time source. Other nodes on the LAN, then, can
synchronize their system clocks to the selected "authoritative" source. This guarantees
that hosts on your LAN agree on a common system time, but does not guarantee that your
hosts are synchronized with other hosts outside your local network.
S1: system with a locally attached radio clock
S2: system getting time from an S1 NTP server
S3: system getting time from an S2 NTP server
Student Notes
Hosts with directly connected radio clocks are considered stratum 1 time sources.
Timeservers that obtain the system time by polling a stratum 1 server across the Internet are
typically considered stratum 2 servers. Servers that obtain the system time from stratum 2
servers are typically considered stratum 3 servers. Thus, servers with lower stratum levels
are likely to be more accurate time sources.
NTP Etiquette
Before you configure your xntpd daemon to access a public NTP timeserver, check the
University of Delaware web page to see if the server administrator requires some sort of
registration, or imposes any restrictions on NTP clients. Ideally, you should configure two or
three NTP servers on your local network to poll a stratum 1 or 2 server on the Internet, then
configure other hosts on your local network to poll these local NTP servers. This minimizes
the load on the public timeservers.
NTP Roles
(Slide diagram: stratum 1 servers at the top, stratum 2 servers below them joined by peer
relationships, and clients at the bottom.)
Student Notes
When implementing NTP on a network, systems can play four possible roles:

NTP Servers                 An NTP server provides time service to other systems, obtaining its
                            own time from a radio clock, a higher-stratum server, or its local
                            system clock.

NTP Peers                   Many NTP servers form peer relationships with other same-stratum
                            servers. If a stratum 2 server loses connectivity to its stratum 1 time
                            source, it may temporarily use the time service provided by a
                            stratum 2 peer.

NTP Direct Polling Clients  A direct polling client regularly polls one or more NTP servers,
                            compares the servers' responses, and synchronizes the system clock
                            to the most accurate time source.

NTP Broadcast Clients       An NTP broadcast client passively listens for NTP broadcasts from
                            NTP servers on the local network. Broadcast clients generate less
                            network traffic than direct polling clients, but provide less accuracy.
The example on the slide shows a typical NTP configuration. The servers at the top of the
slide are stratum 1 servers on the Internet with locally attached radio clocks.
The second tier servers on the slide are stratum 2 servers that poll stratum 1 servers to obtain
the current system time. It is recommended that each stratum 2 NTP server consult three or
more stratum 1 servers to ensure reliability. The xntpd daemon automatically polls all of the
configured stratum 1 servers and synchronizes to the source that it deems most accurate. To further
improve reliability, each stratum 2 server should form a peer relationship with one or more
other stratum 2 servers.
Finally, the slide shows two broadcast clients that passively listen for NTP broadcasts, and
two direct polling clients that regularly poll their respective servers to obtain NTP service. If
you have several NTP servers on your local network, you may choose to have your clients
poll all of these servers to ensure reliability.
/etc/ntp.conf for server1a, with a locally attached radio clock:

# vi /etc/ntp.conf
server 127.127.26.1
peer server1b
peer server1c

/etc/ntp.conf for server2a, which polls two stratum 1 servers and provides broadcast service:

# vi /etc/ntp.conf
server server1a
server server1b
peer server2b
driftfile /etc/ntp.drift
broadcast 128.1.255.255

/etc/ntp.conf for a stratum 10 server that uses its own local system clock:

# vi /etc/ntp.conf
server 127.127.1.1
fudge 127.127.1.1 stratum 10
broadcast 128.1.255.255
Student Notes
The /etc/ntp.conf file is used to define a system's NTP relationships with other systems
on the network. The file is read by the xntpd daemon during the system startup process.
# vi /etc/ntp.conf
server 127.127.26.1
peer server1b
peer server1c
• Each radio clock server should peer with several other stratum-1 servers in case the local
radio clock becomes unavailable. This sample file defines peer relationships with
server1b and server1c.
# vi /etc/ntp.conf
server server1a
server server1b
peer server2b
driftfile /etc/ntp.drift
broadcast 128.1.255.255
• The peer entry defines a peer relationship with another stratum 2 server, server2b.
• The driftfile entry specifies the name of a file to use to track long-term drift of the
local clock.
• The broadcast entry causes xntpd to regularly broadcast the official NTP time to
broadcast clients on the 128.1.0.0/16 network.
• In the third sample file, which uses the internal system clock as its only time source, the
fudge entry defines a stratum level to be assigned to that clock. It is a good idea to treat
the internal system clock as a stratum 10 time source so clients that have access to real
NTP servers will synchronize to those servers instead.
• The broadcast entry causes the server to broadcast NTP information to broadcast
clients on the 128.1.0.0/16 network, using broadcast address 128.1.255.255.
• This method of time synchronization should only be used on networks with no access to
an external time source.
/etc/ntp.conf for a direct polling client:

# vi /etc/ntp.conf
server server2a
server server2b
driftfile /etc/ntp.drift

/etc/ntp.conf for a broadcast client:

# vi /etc/ntp.conf
broadcastclient yes
driftfile /etc/ntp.drift
Student Notes
Each NTP client should have an /etc/ntp.conf configuration file, too.
# vi /etc/ntp.conf
server server2a
server server2b
driftfile /etc/ntp.drift
• The driftfile is used to track differences between the client's time and the server's
time. As the driftfile stabilizes, the server will be polled less frequently.
# vi /etc/ntp.conf
broadcastclient yes
driftfile /etc/ntp.drift
• This method is recommended over direct server polling for large networks since it
significantly reduces NTP network traffic.
/usr/sbin/xntpd
• Daemon started at system boot
• Polls one or more NTP servers at regular intervals
• "Slews" local clock gradually to match the most accurate server
/etc/ntp.drift
• File maintained and used by xntpd
• Tracks the local clock’s accuracy over time
Student Notes
NTP provides three different mechanisms for synchronizing your system clock with other
nodes on the network.
5. Wait for NTP to establish associations with servers and peers. Be patient!
Student Notes
Several steps are required to configure an NTP server:
1. Edit the /etc/rc.config.d/netdaemons file to configure the xntpd daemon to
startup every time the system boots. Set the XNTPD variable to equal 1.
# vi /etc/rc.config.d/netdaemons
export NTPDATE_SERVER=
export XNTPD=1
export XNTPD_ARGS=
If the server uses a radio clock, or the internal system clock, leave the NTPDATE_SERVER
variable null. If the server obtains its system time from other network timeservers, the
NTPDATE_SERVER variable should be set equal to a space-separated list of timeservers.
2. Edit the /etc/TIMEZONE file and specify the correct time zone for the system. Set the
TZ variable to equal the time zone for the system. See the /usr/lib/tztab file for a list
of all the available time zones.
# vi /etc/TIMEZONE
TZ=CST6CDT
export TZ
3. Edit the /etc/ntp.conf file and define the NTP server as described earlier in this
module.
4. Run the NTP startup script, /sbin/init.d/xntpd start, to start the xntpd daemon.
5. Wait for NTP to establish associations with servers and peers. Be patient!
6. Verify the NTP server configuration (and its association with peer NTP servers) by
executing the following command:
# ntpq -p
More information on the ntpq command is contained in the upcoming slides.
5. Wait for NTP to establish associations with servers and peers. Be patient!
Student Notes
The procedure for configuring an NTP client is virtually identical to that of configuring an
NTP server — only the contents of the configuration files change.
3. Edit the /etc/ntp.conf file and define the NTP client as described earlier in this
module.
6. Verify association with NTP server(s) and peers were correctly established. Execute the
command:
# ntpq -p
Student Notes
Several tools are available to verify that NTP is functioning properly.
• Check the syslog.log log file:
# tail /var/adm/syslog/syslog.log
When the xntpd daemon starts up, it logs a number of entries to the
/var/adm/syslog/syslog.log log file.
• Verify that the xntpd daemon is running:
# ps –e | grep xntpd
• View the relationships established by your xntpd daemon by executing the ntpq -p
command.
# ntpq -p
remote refid st t when poll reach delay offset disp
---------------------------------------------------------------
*server2a server1a 3 u 64 64 377 0.87 10.56 16.11
+server2b server1b 3 u 100 264 376 9.89 5.94 16.40
server2c 0.0.0.0 16 - - 64 0 0.00 0.00 1600.00
ntpq displays several fields of information for each of the defined NTP relationships. The
fields are described below:
The NTP source that you are currently synchronized to is indicated by a “*”. Other strong
contenders are indicated by a “+”. A “-” indicates a discarded source.
Directions
Your instructor will assign you to work with a team of your classmates to configure an NTP
server, and one or more NTP clients. Record the host names and chosen roles of your
teammates' machines below.
Record the commands you use to complete the steps below, and answer all questions.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
Since you probably do not have access to a radio clock in the classroom, use the NTP server's
internal system clock as the authoritative time source for your team.
1. Set the local clock forward 2 minutes so the clients can actually see a clock "step" after
enabling NTP.
date MMDDhhmm
xclock -update 1 &
2. Add a server line to the end of the /etc/ntp.conf file defining the local clock as the
only time source. Since the internal system clock is not likely to be accurate, set the
stratum level of this time source to 10.
# vi /etc/ntp.conf
server 127.127.1.1
fudge 127.127.1.1 stratum 10
5. After xntpd starts, it takes 5 poll cycles (320 seconds) to establish the appropriate peer
and server relationships. Wait 5 minutes before proceeding on to the next question.
6. Is the xntpd daemon running? Are there any NTP errors in the syslog?
# ps -e | grep xntpd
# tail /var/adm/syslog/syslog.log
If all is well, the daemon should be running, and there should not be any XNTPD
"ERROR"s in the syslog.
7. Does ntpq -p suggest that the correct association has been formed? What stratum level
did NTP assign to your local clock?
# ntpq -p
There should be one line in the ntpq -p output showing that the local clock is being
used as a stratum 10 time source.
You may use the server's hostname rather than the IP if you wish.
Note: xntp must be able to write to the directory where the drift file is located.
Here again, you may use the server's host name in place of the IP if you wish.
3. Run the NTP startup script on the client to start the NTP daemon. Note the output as
ntpdate steps the system clock.
# /sbin/init.d/xntpd start
4. Check to ensure that your client formed the proper association by running ntpq -p.
# ntpq -p
5. Compare the time on your client against the time on the NTP server. Do they appear to be
synchronized at this point?
• Create a depot.
An SD-UX “Depot” is a repository for software that has been bundled using HP’s
Software Distributor utilities and tools. Depots may be stored on CD, tape, in a
.depot file, or in a directory on disk.
depot
Student Notes
Managing software in today’s large computing environments can be a challenging task.
Administrators often manage dozens of systems, and must contend with a constant stream of
software and patch updates.
Fortunately, all software from HP, from the HP-UX install CDs and OpenView product CDs
to patch downloads from the ITRC, is packaged using HP's Software Distributor UX (SD-UX)
utilities.
The SD-UX utilities make it fairly easy to install, remove, and catalog software on HP-UX
systems.
Administrators that manage multiple systems can streamline software management even
further by taking advantage of SD-UX software depots. An SD-UX depot is a repository for
software packaged using the SD-UX utilities. Depots can be stored using a variety of media.
• The OS and application software that you receive in the HP-UX media kit are structured
as CDROM depots.
• The Support+ patch bundles that are distributed several times each year are structured as
CDROM depots, too.
• Patches that you download from the ITRC website are stored as .depot files.
• Contributed software that you download from the HP users' group typically comes in a
.depot file, too.
• Occasionally, HP support personnel may provide a patch tape, which is also recorded in
the SD-UX depot format.
Juggling stacks of media kits, CD-ROMs, tapes, and .depot files can be challenging.
Fortunately, SD-UX offers a better solution: using the swcopy command, you can
consolidate software from multiple sources into a consolidated directory depot.
Application depot
Student Notes
After you create one or more directory depots on a system, you may wish to make those
depots available to other hosts on the network, too. Systems that are configured to share
depots with remote SD-UX clients are called “SD-UX Depot Servers”.
A depot server may have one or more depots, and can specify which depots should be shared
with clients.
Student Notes
Configuring an SD-UX depot server offers many advantages:
• Instead of managing stacks of CDROMs and tapes, SD-UX client administrators can
swinstall software and patches from your SD-UX depot server. This is especially
helpful when installing systems that don’t have a CDROM or tape drive available.
• A depot server provides a single point of administration for your software and patch
updates.
• Installing all of your hosts from a central depot server ensures that all hosts have a
similar software/patch image.
• Configuring a depot server makes it possible to remotely install and manage software.
Individual hosts on your network can “pull” software from the depot server. With HP-UX
11i, it is now possible to “push” software installs and updates from a depot server to one
or more remote targets.
• After you select a patch, product, or bundle in a depot, swinstall auto-selects other
products from the depot that your selected product requires.
• When swinstall’ing software from an SD-UX depot, if the depot contains patches for
the user-selected product(s), swinstall will automatically select and install those
patches at the same time that it installs the selected product itself. This can significantly
decrease the amount of downtime required to update software and patches on a system.
Student Notes
Several important design issues should be considered before you configure an SD-UX server.
# vgdisplay vg01
# lvcreate –L 100 –n depots vg01
# newfs –F vxfs /dev/vg01/rdepots
# mkdir /depots
# mount /dev/vg01/depots /depots
# vi /etc/fstab
Prior to HP-UX 11.00, patches and products had to be stored in separate depots. HP-UX 11.00
introduced some swinstall enhancements that made it practical to co-mingle patches and
products in a single depot.
/mydepot
Student Notes
After you create your depot directories, you can copy software to the depots from a variety of
sources using the swcopy command.
Copy all software from one directory depot to another directory depot:

svr# swcopy -s /myolddepot '*' @ /mydepot

Copy individual patch .depot files (such as PHCO_1000.depot, PHCO_2000.depot, or
PHNE_3000.depot) into the directory depot:

svr# swcopy \
     -s /tmp/PHCO_xxxx.depot \
     -x enforce_dependencies=false \
     \* @ /mydepot
Student Notes
Although a product-only depot is useful, providing patches as well as products in your SD-UX
depots offers even greater power and flexibility:
• Some of the patches that you use in your shop probably come from the Support+ CD,
some may be downloaded from the ITRC, and some may be pulled from patch tapes.
Using an SD-UX depot, you can consolidate patches from all of those sources into a
single network depot.
Since swinstall installs both the product and its patches simultaneously, this
minimizes the number of swinstall sessions and reboots necessary to install the
product and its patches. If the auto-selected patches have dependencies of their own,
swinstall selects those patches from the depot as well.
• After a product has been initially installed, depots simplify patch updates, too. Client
administrators can simply use the swinstall –x patch_match_target
command to automatically select patches from the depot that match products already
installed on the target system:
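For example, assuming the depot server is named svr and the depot is /mydepot:
client# swinstall -s svr:/mydepot -x patch_match_target=true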
Instead of manually managing patches on each individual system, simply ensure that
your depot server has the most current, tested patches, then run the swinstall
command above on each target host on a regular basis. swinstall will choose the
appropriate patches for each system.
This next example copies all the patches from a Support+ GOLDBASE11i depot to
/mydepot:
This last example copies all the patches from a patch tape to /mydepot:
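The exact source paths depend on your environment; the sketches below assume the Support+
CD is mounted at /cdrom with the GOLDBASE11i depot at /cdrom/GOLDBASE11i, and that the
patch tape is loaded in the default tape drive at /dev/rmt/0m. Substitute your actual paths:
svr# swcopy -s /cdrom/GOLDBASE11i -x enforce_dependencies=false \* @ /mydepot
svr# swcopy -s /dev/rmt/0m -x enforce_dependencies=false \* @ /mydepot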
Patch dependencies
Note that all of the swcopy examples above included the –x
enforce_dependencies=false option.
Oftentimes, in order for an HP-UX patch to function properly, one or more additional patches
may be necessary to meet the patch’s dependencies. These dependencies are typically
documented in the patch’s .text file, and on the ITRC patch database web page. By default,
the swcopy command won’t copy a patch to a depot unless the patch’s dependencies can be
resolved in the depot.
When setting up a patch depot, it is common to copy patches from multiple .depot files,
each of which contains a single patch. Since the patch in one .depot file may be dependent
on a patch in another .depot file, meeting those dependencies can be a real hassle. The
process is much simpler if you disable swcopy dependency checking.
When clients swinstall patches from the depot, however, the swinstall command must
verify that dependencies have been met. Although it is safe to override dependency checking
on swcopy, it is very dangerous to override dependency checking when running
swinstall. Doing so can render a system unstable.
Remove all products from the depot, and the depot itself
svr# swremove –d \* @ /mydepot
svr# rm -rf /mydepot
Student Notes
The command required to remove a product from a depot is fairly straightforward; for
example, swremove -d FooProd @ /mydepot removes just the FooProd product.
If you wish to remove all of the software from a depot, simply replace FooProd with a '*',
as shown on the slide above. This will also "unregister" the depot itself.
The table below summarizes the resulting swremove behavior you will see when using the
most common combinations of these options to remove a patch that has dependencies:
# Initializing...
# tgt “svr" has the following depot(s):
/mydepot
/myappdepot
# tgt: svr:/mydepot
# Bundle(s):
100BaseT-00 B.11.11.01 EISA 100BaseT
100BaseT-01 B.11.11.01 HP-PB 100BaseT
Student Notes
Listing available depots and their contents
After creating a depot, you can verify that the depot is visible to your clients by executing the
swlist command. Other hosts on the network can use the same command to see which
depots are available from your server.
You can also list the contents of a specific depot using a variation on the same command.
This feature, too, is available to anyone on the network.
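For example, assuming the depot server is named svr:
client# swlist -l depot @ svr
client# swlist -d @ svr:/mydepot
The first command lists the depots registered on svr; the second lists the software in the
/mydepot depot.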
Register a depot:
svr# swreg –l depot @ /cdrom
svr# swlist –l depot
# Initializing...
# tgt “sanfran" has the following depot(s):
/cdrom
Unregister a depot:
svr# swreg –ul depot @ /cdrom
svr# swlist –l depot
# Initializing...
# WARNING: No depot was found for "sanfran:".
Student Notes
In order for a depot to be visible to clients on the network, the depot must be “registered”. If
you have a locally mounted CDROM depot that you wish to make available to other clients on
the network, simply register the depot via the swreg command.
Before you unmount and remove the CDROM, be sure to unregister it.
When you copy software to a directory depot, swcopy automatically registers the depot for
you. Also, when you remove the last product from a depot, swremove unregisters the depot
for you.
You can always install software from a depot on your localhost, even if the depot isn’t
registered.
software pull
Student Notes
Once you have configured your depot server, your clients can use the swinstall command
to pull software from your depots, just as you would install software from a CD. Simply
specify server:/depotpath after the –s source option.
After analyzing the requirements of the selected product(s) and auto-selecting dependencies
and patches from the depot, swinstall installs and configures the software on your system.
If the product or bundle contains a kernel fileset, swinstall will automatically reboot your
system.
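For example, to pull the FooProd product used elsewhere in this module from the svr depot
server:
client# swinstall -s svr:/mydepot FooProd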
(Slide diagram: depot server svr pushing software to target hosts tgt1, tgt2, and tgt3.)
Student Notes
The 11i version of the swinstall command was enhanced to provide the ability to push
software to remote systems from a depot server. The swremove, swcopy, and swlist
commands are now all capable of performing remote operations, too, both from the command line and via
the interactive GUI interface. You can monitor the results of a remote operation using the
swjob job browser GUI.
This new functionality allows you to manage software and patches on multiple systems from
one central depot server. With sufficient network bandwidth, you could potentially maintain
consistent software loads on hundreds of systems scattered across your enterprise from one
central depot server!
Allow the depot server to push software to a client: (repeat on each client)
tgt# swinstall –s svr:/var/opt/mx/depot11 \
-x autoreboot=true \
AgentConfig.SD-CONFIG
Use the push functionality to remotely install, list, and remove software:
svr# swinstall –s svr:/mydepot FooProd @ tgt1 tgt2
svr# swlist @ tgt1 tgt2
svr# swremove FooProd @ tgt1 tgt2
Student Notes
Next, touch a file called /var/adm/sw/.sdkey. When you run the swinstall GUI,
swinstall checks to determine if this file exists. If the file exists, swinstall launches a
somewhat modified GUI that allows you to specify one or more remote target hosts to push
software to. Without this file, swinstall launches the traditional GUI interface and
assumes that all selected software should be installed on the localhost. If you don’t use the
GUI, the .sdkey file is not required; command-line remote operations work without it.
# touch /var/adm/sw/.sdkey
The depot server isn’t allowed to push software to a target client until the client explicitly
allows the depot server to do push installs. This requires several changes to the SD-UX
Access Control Lists (unrelated to HFS or JFS ACLs). The SD-UX ACL mechanism is fairly
sophisticated, and can’t be covered in this class. For more information, see the Software
Distributor Administrator Guide for HP-UX 11i (Part Number: B2355-90699) manual on
http://docs.hp.com. Fortunately, SD-UX will set the appropriate ACLs for you if you
install the SD-CONFIG fileset from the depot server on the target client.
Use the push functionality to remotely install, list, and remove software
After you have configured both the depot server and target clients, you can begin pushing
software from the depot server. Here are a few examples:
If you created the /var/adm/sw/.sdkey file above, then the GUI interface for each of these
commands will include a new screen that allows you to select a target host for the SD-UX
operations.
Limitations
• You cannot use remote operations to directly “push” an HP-UX OS update to remote
systems.
• The following commands don’t support the SD-UX push functionality: update-ux,
install-sd, swpackage, swmodify
• You can only push software from an 11i depot server, though the target hosts can be
10.20, 11.00, or 11i.
Directions
Carefully follow the directions below.
# /labs/netfiles.sh –r ORIGINAL
Answer:
Answer:
Answer:
4. List the contents of your new depot to verify that the software was copied properly.
Answer:
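One possibility, assuming your depot is the /depots/Rel_B.11.11/appl directory referenced
later in this lab:
# swlist -d @ /depots/Rel_B.11.11/appl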
5. Temporarily unregister your depot. What impact does this have on the depot list reported
by swlist –l depot?
Answer:
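Following the swreg example from earlier in this module, and assuming the same depot path:
# swreg -ul depot @ /depots/Rel_B.11.11/appl
# swlist -l depot
The depot should no longer appear in the swlist -l depot output.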
Answer:
7. Use a “pull” install to install the EchoApp product from your new depot on your
localhost. Watch the output carefully. What was installed as a result of your
swinstall?
Answer:
# /opt/echoapp/bin/echoapp
9. Remove the EchoApp product. Watch the output carefully. What was removed as a
result of your swremove?
Answer:
Answer:
Answer:
Answer:
4. Use the remote swlist functionality to verify that EchoApp installed properly on your
partner’s system.
Answer:
5. Can you remotely remove EchoApp from your partner’s system, too? Try it!
Answer:
Part 3: Cleanup
1. Remove all of the software from your /depots/Rel_B.11.11/appl depot.
Answer:
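Following the swremove pattern shown earlier in this module:
# swremove -d \* @ /depots/Rel_B.11.11/appl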
Answer:
8 08 00001000
9 09 00001001
10 0a 00001010
11 0b 00001011
12 0c 00001100
13 0d 00001101
14 0e 00001110
15 0f 00001111
16 10 00010000
17 11 00010001
18 12 00010010
19 13 00010011
20 14 00010100
21 15 00010101
22 16 00010110
23 17 00010111
24 18 00011000
25 19 00011001
26 1a 00011010
27 1b 00011011
28 1c 00011100
29 1d 00011101
30 1e 00011110
31 1f 00011111
32 20 00100000
33 21 00100001
34 22 00100010
35 23 00100011
36 24 00100100
37 25 00100101
38 26 00100110
39 27 00100111
40 28 00101000
41 29 00101001
42 2a 00101010
43 2b 00101011
44 2c 00101100
45 2d 00101101
46 2e 00101110
47 2f 00101111
48 30 00110000
49 31 00110001
50 32 00110010
51 33 00110011
52 34 00110100
53 35 00110101
54 36 00110110
55 37 00110111
56 38 00111000
57 39 00111001
58 3a 00111010
59 3b 00111011
60 3c 00111100
61 3d 00111101
62 3e 00111110
63 3f 00111111
64 40 01000000
65 41 01000001
66 42 01000010
67 43 01000011
68 44 01000100
69 45 01000101
70 46 01000110
71 47 01000111
72 48 01001000
73 49 01001001
74 4a 01001010
75 4b 01001011
76 4c 01001100
77 4d 01001101
78 4e 01001110
79 4f 01001111
80 50 01010000
81 51 01010001
82 52 01010010
83 53 01010011
84 54 01010100
85 55 01010101
86 56 01010110
87 57 01010111
88 58 01011000
89 59 01011001
90 5a 01011010
91 5b 01011011
92 5c 01011100
93 5d 01011101
94 5e 01011110
95 5f 01011111
96 60 01100000
97 61 01100001
98 62 01100010
99 63 01100011
100 64 01100100
101 65 01100101
102 66 01100110
103 67 01100111
104 68 01101000
105 69 01101001
106 6a 01101010
107 6b 01101011
108 6c 01101100
109 6d 01101101
110 6e 01101110
111 6f 01101111
112 70 01110000
113 71 01110001
114 72 01110010
115 73 01110011
116 74 01110100
117 75 01110101
118 76 01110110
119 77 01110111
120 78 01111000
121 79 01111001
122 7a 01111010
123 7b 01111011
124 7c 01111100
125 7d 01111101
126 7e 01111110
127 7f 01111111
128 80 10000000
129 81 10000001
130 82 10000010
131 83 10000011
132 84 10000100
133 85 10000101
134 86 10000110
135 87 10000111
136 88 10001000
137 89 10001001
138 8a 10001010
139 8b 10001011
140 8c 10001100
141 8d 10001101
142 8e 10001110
143 8f 10001111
144 90 10010000
145 91 10010001
146 92 10010010
147 93 10010011
148 94 10010100
149 95 10010101
150 96 10010110
151 97 10010111
152 98 10011000
153 99 10011001
154 9a 10011010
155 9b 10011011
156 9c 10011100
157 9d 10011101
158 9e 10011110
159 9f 10011111
160 a0 10100000
161 a1 10100001
162 a2 10100010
163 a3 10100011
164 a4 10100100
165 a5 10100101
166 a6 10100110
167 a7 10100111
168 a8 10101000
169 a9 10101001
170 aa 10101010
171 ab 10101011
172 ac 10101100
173 ad 10101101
174 ae 10101110
175 af 10101111
176 b0 10110000
177 b1 10110001
178 b2 10110010
179 b3 10110011
180 b4 10110100
181 b5 10110101
182 b6 10110110
183 b7 10110111
184 b8 10111000
185 b9 10111001
186 ba 10111010
187 bb 10111011
188 bc 10111100
189 bd 10111101
190 be 10111110
191 bf 10111111
192 c0 11000000
193 c1 11000001
194 c2 11000010
195 c3 11000011
196 c4 11000100
197 c5 11000101
198 c6 11000110
199 c7 11000111
200 c8 11001000
201 c9 11001001
202 ca 11001010
203 cb 11001011
204 cc 11001100
205 cd 11001101
206 ce 11001110
207 cf 11001111
208 d0 11010000
209 d1 11010001
210 d2 11010010
211 d3 11010011
212 d4 11010100
213 d5 11010101
214 d6 11010110
215 d7 11010111
216 d8 11011000
217 d9 11011001
218 da 11011010
219 db 11011011
220 dc 11011100
221 dd 11011101
222 de 11011110
223 df 11011111
224 e0 11100000
225 e1 11100001
226 e2 11100010
227 e3 11100011
228 e4 11100100
229 e5 11100101
230 e6 11100110
231 e7 11100111
232 e8 11101000
233 e9 11101001
234 ea 11101010
235 eb 11101011
236 ec 11101100
237 ed 11101101
238 ee 11101110
239 ef 11101111
240 f0 11110000
241 f1 11110001
242 f2 11110010
243 f3 11110011
244 f4 11110100
245 f5 11110101
246 f6 11110110
247 f7 11110111
248 f8 11111000
249 f9 11111001
250 fa 11111010
251 fb 11111011
252 fc 11111100
253 fd 11111101
254 fe 11111110
255 ff 11111111
Directions
Answer the following questions:
1. If a host has two LAN interface cards, will the MAC addresses of the two cards be the
same, or different?
Answer
Different. Every LAN card should have a unique MAC address.
2. Is it possible to determine which network a host is on just by looking at the host's MAC
address?
Answer
No. Given a host's IP address and netmask you can determine which network the host is
on, but a MAC address alone is insufficient.
3. Complete the following table:
Answer
IP Address Netmask Network Address Broadcast Address
167.12.132.5/16 255.255.0.0 167.12.0.0/16 167.12.255.255
124.132.12.5/8 255.0.0.0 124.0.0.0/8 124.255.255.255
213.1.231.45/24 255.255.255.0 213.1.231.0/24 213.1.231.255
4. Which of the networks listed in question 3 would allow the fewest hosts?
What is the maximum number of hosts allowed on that network?
Answer
The 213.1.231.0/24 network has the fewest host bits, so it would support the fewest hosts.
With 8 host bits, this network could have at most 2^8 = 256 addresses. Subtracting the
broadcast and network addresses means that the network could support no more than
254 hosts.
5. How many different networks are represented by the list of IP addresses below?
132.1.1.3/16
132.2.1.1/16
132.1.1.2/16
132.1.1.1/16
132.1.2.1/16
132.1.2.2/16
Answer
The /16 tells us that there are 16 network bits in each of these IP addresses. Thus, the first
two octets define the network portion of the IP. This suggests that just two networks are
represented in this list: 132.1.0.0/16 and 132.2.0.0/16.
Answer
The highest host IP is 158.153.255.254.
The lowest host IP is 158.153.0.1.
7. What is the difference between a destination port number and a destination IP address?
Answer
A destination IP determines which host should receive a packet. A destination port
number determines which application on a host should receive a packet.
Answer
TCP is a connection-oriented protocol that provides a built-in acknowledgement
mechanism. UDP is a connection-less protocol that does not provide an
acknowledgement mechanism.
9. HP-UX provides three different methods for mapping host names to IP addresses. Name
two.
Answer
/etc/hosts, DNS, and NIS may all be used to resolve host names to IP addresses.
Directions
This lab will configure a new host name and IP address for each system in your classroom.
Preliminary Steps
1. Just in case something goes wrong during this lab, make a backup copy of all of your
network configuration files. There is a shell script in your labs directory designed
specifically for this purpose. The shell script will save a tar archive backup of your
network configuration files in the file you specify. Add the –l option to verify your
backup.
# /labs/netfiles.sh -s ORIGINAL
# /labs/netfiles.sh –l
# /labs/netfiles.sh –l ORIGINAL
2. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
3. Changing your host name and IP on a running system can wreak havoc on CDE and other
applications. Kill CDE before going any further:
# /sbin/init.d/dtlogin.rc stop
1. How many LAN cards does your system have, and what are their Hardware paths?
Answer
The following commands may be used to view your LAN card hardware paths:
# lanscan
# ioscan –funC lan
2. Verify that the "Networking" product is installed on your machine. Is any additional
networking software installed on your machine to support your LAN interface cards?
Answer
# swlist Networking
Every machine should have the Networking product loaded. Other LAN software will vary
from system to system.
3. Does your kernel contain the drivers necessary to support your LAN cards? Which
command will tell you if a driver has CLAIMED your LAN cards? If your LAN card is
UNCLAIMED, install the necessary drivers.
Answer
# ioscan –funC lan
Look for "UNCLAIMED" LAN cards. The drivers should already be installed, and all cards
should be "CLAIMED.”
Answer
# ioscan -funC lan
5. List the current MAC address, IP address, netmask, and broadcast address for each of
your LAN cards.
Answer
Note that these solutions assume that your default LAN card is lan0. The default LAN
interface name on your system may be different. The IP, netmask, and broadcast
addresses will also vary from classroom to classroom.
The first two octets in the IP addresses will vary from classroom to classroom, but should be
consistent across all hosts within your classroom. Ask your instructor what the first two
octets should be set to. The last two octets must be set in accordance with the table below.
1. There should be a script in the /labs directory called netsetup.sh. This script will
ask you for your instructor-assigned hostname, and the first two IP octets that your
instructor should also provide. After you enter the requested information, the script will
display your assigned IP address and a variety of other network settings that you will use
later in the class. The script will also create a new hosts file in /tmp/hosts. Run the
script, then review the /tmp/hosts file. By default, the script doesn’t actually change
your network configuration.
# /labs/netsetup.sh
# cat /tmp/hosts
2. From the command line, change your IP to the address suggested in /tmp/hosts. Be
sure to change your netmask, too!
Answer
# ifconfig lan0 w.x.y.z netmask 255.255.0.0 # replace w.x.y.z w/ your IP
3. Is your new IP address set properly? How can you find out?
Answer
# ifconfig lan0
ifconfig should indicate that the IP and netmask have been set properly.
4. Modify the appropriate startup file to make your IP address change permanent. Allow the
system to default the broadcast address. Also, permanently change your host name in this
startup file. If a default route is currently defined, delete it. You will have a chance to
configure a new default route in the next chapter.
Answer
# vi /etc/rc.config.d/netconf
HOSTNAME=hostname                 # your instructor-assigned host name
INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=w.x.y.z             # the IP suggested in /tmp/hosts
SUBNET_MASK[0]=255.255.0.0
ROUTE_DESTINATION[0]=default
ROUTE_MASK[0]=""
ROUTE_GATEWAY[0]=""
ROUTE_COUNT[0]=""
ROUTE_ARGS[0]=""
Emptying the ROUTE_* values ensures that no default route is configured at the next boot.
5. Copy the /tmp/hosts file into place as the default /etc/hosts file.
# cp /tmp/hosts /etc/hosts
6. Define a host name alias for each of the host names in your row. Use the first name of the
user sitting at each station as the alias.
Answer
# vi /etc/hosts
w.x.y.z city student1 # use your neighbor’s IP, hostname, name here
Answer
# shutdown –ry 0
Answer
# ifconfig lan0
2. The hostname command will display your system host name. Check to ensure that your
host name is set properly.
Answer
# hostname
3. Based on your Answers to questions 1 and 2 above, what commands did the
/sbin/init.d/net script appear to execute on your behalf during the boot process?
Answer
The system should have executed the uname, hostname, and ifconfig commands on
your behalf.
# uname -S hostname
# hostname hostname
# ifconfig lan0 w.x.y.z netmask 255.255.0.0 up
Answer
# ping w.x.y.z # use your instructor’s IP address here.
Answer
# ping hostname # use your instructor's host name here.
Assuming the hostname you ping has been added to /etc/hosts, and that host is
configured properly, this should work.
6. Try to ping a neighboring machine using an alias you defined in your /etc/hosts
file. Does this seem to work?
Answer
# ping instructor
Directions
Record the commands you use to perform the tasks suggested below.
Your instructor has configured host corp as a router with two LAN interfaces. Record corp’s
IP and network addresses here. The first IP should be a /16 address whose first two octets
match your first two octets. The second IP address should be a /24 address that is entirely
different from your system’s IP address.
corp's first interface’s IP: ___ . ___ . _ 0 . 1 /16 (should be on your net)
corp's second interface’s IP: ___ . ___ . __ _ . _1__ /24 (should be on another net)
Verify that your instructor has configured corp’s second interface before proceeding.
Preliminary Steps
4. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
5. Modifying IP connectivity on a running system can wreak havoc on CDE and other
applications. Kill CDE before going any further:
# /sbin/init.d/dtlogin.rc stop
Answer
# netstat –rn
Answer
You should be able to ping corp’s first address since it is on the same IP network as your
LAN interface, which you already have a route to.
The second LAN card, however, is on a different network. Since your routing table
doesn’t have an entry for the second network, you shouldn’t be able to ping corp’s
second IP address.
3. From the command line, add a route to the second network via corp’s first LAN interface.
Then check your routing table again to verify that you were successful.
Answer
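One possible answer, substituting the second network's address and corp's first IP that you
recorded at the start of the lab:
# route add net secondnet netmask 255.255.255.0 firstIP 1
# netstat -rn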
Answer
# ping secondIP
Now that you have a route to the second network, you should be able to ping corp’s
second IP.
5. Delete the route that you just added. Then check the routing table to verify that you were
successful.
Answer
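Again substituting the addresses you recorded earlier:
# route delete net secondnet netmask 255.255.255.0 firstIP
# netstat -rn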
6. Now, define corp’s first IP as your default route. Then check your routing table again to
be sure this worked.
Answer
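For example, substituting corp's first IP:
# route add default firstIP 1
# netstat -rn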
7. Can you ping the second IP now, even though you do not have an explicit route to the
second network?
Answer
# ping secondIP
This should work! Although there isn’t an explicitly defined route to the second network,
your system uses the default route you just defined. Since the default route points to
corp, which has a connection to the second network, this ping should succeed.
8. How can you ensure that your default route is defined after every system boot? Make it
so.
Answer
# vi /etc/rc.config.d/netconf
ROUTE_DESTINATION[0]=default
ROUTE_MASK[0]=""
ROUTE_GATEWAY[0]=firstIP
ROUTE_COUNT[0]=1
9. Reboot your machine. When your machine comes back up again, check the routing table
to verify that the default route is defined.
Answer
# shutdown –ry 0
# vi /etc/hosts
firstIP corp
secondIP corp
2. If you ping corp, which of corp's IP addresses does your system appear to choose?
Watch your ping output carefully.
Answer
# ping corp
The system appears to ping the first address listed in /etc/hosts, which should be
corp’s first IP address in this case.
Answer
# vi /etc/hosts
firstIP corp corp1
secondIP corp corp2
Answer
# ping corp1
# ping corp2
# /labs/netfiles.sh –s NEW
# /labs/netfiles.sh –l
# /labs/netfiles.sh –l NEW
Directions
Answer all of the questions below. Assume that your network contains some older devices
that do not support all-0 or all-1 subnet addresses.
Part 1
1. Your company's network address is 128.20.0.0/16, but your netmask is set to
255.255.255.0. Given this netmask, how many bits are in the subnet portion of your IP
address?
Answer
The /16 appended to the end of the network IP address indicates that the first 16 bits (or
first two octets) contain network bits. The netmask indicates that the first three octets
are all masked. Thus, all 8 bits in the third octet must be subnet bits.
2. Given your answer to the previous question, how many host addresses may be configured
on each subnet?
Answer
With 8 bits, it is possible to represent 2^8 = 256 addresses. However, each subnet must
have a subnet address and a broadcast address. Thus, each subnet could have at most 254
hosts.
Answer
4. What are the lowest and highest host addresses on the first subnet?
Answer
Excluding the all-0 subnet, the first usable subnet is 128.20.1.0/24. The lowest host address
on that subnet is 128.20.1.1, and the highest is 128.20.1.254.
Part 2
Your company's network address is 192.30.40.0/24, and you need to create two subnets.
1. How many contiguous bits are needed, and in which octet?
Answer
Two contiguous subnet bits are needed, taken from the fourth octet. One bit would yield only
the all-0 and all-1 subnets, which the older devices cannot use; two bits provide 2^2 - 2 = 2
usable subnets.
Answer
We need to mask the network bits in the first three octets, as well as the two subnet bits
in the fourth octet. This yields netmask value 255.255.255.192.
255.255.255.11000000 = 255.255.255.192
Answer
192.30.40.01000000 = 192.30.40.64/26
192.30.40.10000000 = 192.30.40.128/26
Part 3
Your company's network address is 132.40.0.0/16. You need to configure nine subnetworks.
1. How many bits are needed to form 9 subnets?
Answer
Four subnet bits are needed. Three bits would provide only 2^3 - 2 = 6 usable subnets
(excluding the all-0 and all-1 subnets), while four bits provide 2^4 - 2 = 14, which is enough
for nine.
Answer
255.255.11110000.00000000 = 255.255.240.0
Answer
132.40.00010000.00000000 = 132.40.16.0/20
132.40.00100000.00000000 = 132.40.32.0/20
132.40.00110000.00000000 = 132.40.48.0/20
Answer
Since there are 4 host bits in the third octet, and 8 host bits in the fourth octet, we have a
grand total of 12 host bits. With 12 host bits, we can represent 2^12 = 4096 addresses.
Subtracting the subnet address and broadcast address, we are left with 4094 host
addresses per subnet.
5. What is the complete address for the first host on the first subnet?
Answer
The address of the first host on the first subnet must be:
132.40.00010000.00000001 = 132.40.16.1/20
6. What would be the complete address for the last host on the first subnet?
Answer
To formulate the address of the last host on the first subnet, set all but the last host bit to
"1". This yields:
132.40.00011111.11111110 = 132.40.31.254/20
7. Fill in the variable values you would expect to see in the /etc/rc.config.d/netconf
file for the last host on the first subnet. Record the variable values below, but do not
actually modify the /etc/rc.config.d/netconf file on your system.
INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=
SUBNET_MASK[0]=
Answer
INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=132.40.31.254
SUBNET_MASK[0]=255.255.240.0
8. What command would the /sbin/init.d/net script execute because of the netconf
values in the previous question?
Answer
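Based on the values above, the net script would run something equivalent to:
# ifconfig lan0 132.40.31.254 netmask 255.255.240.0 up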
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. Disabling the LAN card can cause problems for CDE, too. Before starting the lab, shut
down CDE:
# /sbin/init.d/dtlogin.rc stop
Answer
Answer
# netstat -in # shows your network address
# netstat -rn # shows your routing table
(including the default route)
3. Given a host name, how can you determine that hostname’s corresponding IP address?
Which IP address is associated with corp’s first interface?
Answer
# nslookup corp
4. Can you determine the MAC address associated with corp’s first interface, too? Record
this MAC address for future reference.
Answer
# ping corp # ping corp to add it to the
arp cache
# arp corp # now find their IP and MAC in the
arp cache
Answer
# ping corp
2. Can you still ping other hosts if your LAN interface is "DOWN"? Change the IP
configuration state of your lan0 interface to "DOWN.” Which field in the netstat –in
output indicates that the interface is down?
Answer
# ifconfig lan0 down
# netstat –in
The “*” following the interface name in the first column indicates that the card is down.
Answer
ping hangs when you attempt to hit corp. However, you may be surprised to discover
that you can ping your own hostname or your loopback address even when your LAN
interface is down.
4. Now try linkloop'ing to your corp's MAC address. Does this work? Explain.
Answer
linkloop should succeed, even though ping fails.
linkloop is an OSI layer 2 utility that succeeds regardless of the IP configuration of the
card.
5. Based on your answer to the previous question, when might linkloop be useful?
Answer
linkloop can test connectivity between any two hosts on a network even if the IP
configuration on either host is corrupted. If you can linkloop a host, but can’t ping
that same host, you may want to check the routing tables and IP addresses on both
machines.
Answer
# ifconfig lan0 up
2. There should be a shell script in your /labs directory called /labs/corrupt.sh. Run
the script. When prompted, enter a number between 1 and 5. Based on your response, the
script will corrupt your LAN configuration in one of five different ways. When the script
terminates, your task is to fix your LAN configuration so the command ping corp
succeeds. Take advantage of all the tools we discussed in this chapter.
3. Once you successfully troubleshoot and fix your configuration, run the script again,
choose a different number, and again fix the resulting problem. If time permits, try each
of the five options provided by the script.
Good luck!
Part 4: Cleanup
Before moving on to the next chapter, restore your network configuration to the state it was
in before this lab.
# /labs/netfiles.sh –r NEW
Directions
Work on your own to perform the following tasks.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
# ls /sbin/rc*.d/S*
Answer the questions below using the output from the ls command above.
1. At which run level does NFS client functionality start?
Answer
Answer
3. At which run level does your system set its host name?
Answer
4. At which run level does the net script set your IP address?
Answer
Run level 2.
5. At which run level does the sendmail daemon begin delivering mail?
Answer
Run level 2.
Answer
Run level 2.
7. At which run level does the system enable access to ftp, telnet, and other Internet
services?
Answer
Run level 2.
Answer
# ps -e | grep sendmail
Answer
# /sbin/init.d/sendmail stop
Answer
# ps -e | grep sendmail
Answer
# /sbin/init.d/sendmail start
# ps -e | grep sendmail
Setting a control variable to "1" enables that service at next boot, while setting the control
variable to "0" disables the service at next boot. Control variables are set in configuration files
in /etc/rc.config.d/*. Sometimes the configuration file matches the name of the
service. You can always use the grep command to find the proper configuration file for a
service. For instance, the output from the following grep command suggests that the
sendmail control variable is defined in /etc/rc.config.d/mailservs.
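One possible invocation (the exact string you grep for may vary):
# grep -li sendmail /etc/rc.config.d/*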
See if you can find the /etc/rc.config.d configuration files for each of the services
below, and determine which of those services are enabled on your system.
Service       Configuration File             Enabled?
nfs.client    /etc/rc.config.d/nfsconf       Y
nis.server    /etc/rc.config.d/namesvrs      N
nis.client    /etc/rc.config.d/namesvrs      N
sendmail      /etc/rc.config.d/mailservs     Y
xntpd         /etc/rc.config.d/netdaemons    N
# cp /sbin/init.d/template /sbin/init.d/pfs_mountd
b. Scroll down to the stop_msg portion of the case statement that looks like this:
'stop_msg')
# Emit a _short_ message relating to running this script
# with the "stop" argument; this message appears as part
# of the checklist.
echo "Stopping the <specific> subsystem"
;;
c. Scroll down to the start argument in the case statement that looks like this:
# Check to see if this script is allowed to run...
if [ "$CONTROL_VARIABLE" != 1 ]; then
rval=2
else
# Execute the commands to start your subsystem
:
fi
;;
d. Next, scroll down to the stop argument in the case statement that looks like this:
fi
;;
3. Create a configuration file and a control variable for your new startup script:
# vi /etc/rc.config.d/pfs_mountd
PFS_MOUNTD=1
4. Create a start link to start the new service at run level 3 using the “don’t care” 900
sequence number, and a kill link to kill the new service with sequence number 100 at run
level 2:
# ln –s /sbin/init.d/pfs_mountd /sbin/rc3.d/S900pfs_mountd
# ln –s /sbin/init.d/pfs_mountd /sbin/rc2.d/K100pfs_mountd
5. Test your new startup script by executing both the start and kill links.
# /sbin/rc3.d/S900pfs_mountd start
# ps –ef | grep pfs_mountd
# /sbin/rc2.d/K100pfs_mountd stop
# ps –e
6. Assuming the previous test succeeded, change run levels a few times to further test your
scripts.
# init 2
# init 3
# init 2
Note that the first init 2 may fail. Can you explain why?
Answer
The pfs_mountd daemon is not running initially, so the kill script executed by the first
init 2 fails. Bouncing back up to run level 3 starts the daemon, so the kill script
succeeds the second time you drop to run level 2.
Directions
In this lab you will work with a partner to experiment with some of the features of NFS. One
of you will function as an NFS server, and the other will function as an NFS client. You
should work together throughout the lab to ensure that you feel comfortable with both the
client and server functionalities of NFS. At this point, decide between yourselves who will be
the server and who will be the client.
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. (client)
Install the lab files needed on your client:
# cd /labs
# tar -xvf nfs.client.tar
You should now have two new user accounts defined in your /etc/passwd file: "mickie"
and "minnie.” The passwords for the new accounts are "mickie" and "minnie" respectively.
Note that neither user has a home directory on your machine. You will mount their home
directories from your partner's NFS server.
3. (server)
Install the lab files needed on your server:
# cd /labs
# tar -xvf nfs.server.tar
This tarball creates several new files and directories, and two new user accounts in your
/etc/passwd file for users "mickie" and "minnie.” The passwords for the new accounts
are "mickie" and "minnie" respectively. The tarball also creates home directories for
mickie and minnie.
Answer
Your machines should be configured with both NFS server and NFS client functionality.
3. (client)
What daemons should you see on an NFS client?
Use ps -e on the client to ensure that the necessary daemons are actually running.
Answer
portmap/rpcbind (optional)
biod (optional)
rpc.statd
rpc.lockd
4. (server)
What daemons should you see on an NFS server?
Use ps -e to ensure that the server has the necessary daemons running.
Answer
portmap/rpcbind
rpc.mountd
nfsd
rpc.statd
rpc.lockd
rpc.pcnfsd (optional)
Answer
server
# vi /etc/exports
/home -access=client
/opt/phone -rw=client
/opt/fun -ro
server
# exportfs -a
2. (server)
What command can you use to see what file systems you have made available? Can you
tell which export options you used?
What command can you use to see what file systems other servers have made available?
Choose another machine in the classroom and see what it has exported. Can you tell
which export options were used?
Answer
server# exportfs
The exportfs command shows what is exported, and which export options were used.
client# showmount -e server
The showmount -e command lists the file systems another server has made available. It
shows who can mount each file system, but does not indicate what export options were used.
3. (client)
Create mount points for the file systems your neighbor exported, and mount them:
/home/mickie
/home/minnie
/opt/fun
/opt/phone
Answer
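A minimal sketch, assuming your partner's server host name is server:
client# mkdir -p /home/mickie /home/minnie /opt/fun /opt/phone
client# mount -F nfs server:/home/mickie /home/mickie
client# mount -F nfs server:/home/minnie /home/minnie
client# mount -F nfs server:/opt/fun /opt/fun
client# mount -F nfs server:/opt/phone /opt/phone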
Syntax errors in the /etc/fstab file may cause the next system boot to fail. Do a
mount -a to ensure that you did not make any mistakes in fstab file.
Finally, use mount -v to ensure that all the NFS file systems actually mounted properly.
Answer
client# vi /etc/fstab
server:/home/mickie /home/mickie nfs defaults 0 0
server:/home/minnie /home/minnie nfs defaults 0 0
server:/opt/fun /opt/fun nfs defaults 0 0
server:/opt/phone /opt/phone nfs defaults 0 0
client# mount -a
client# mount -v
5. (server)
What command lists the remote machines that have your exported file systems mounted?
Answer
server# showmount -a
From your client, try executing some of the programs mounted from the NFS server to
verify that this is true:
client# /opt/fun/melt
client# /opt/fun/xroach -speed 1
Another benefit of NFS is that files created in an NFS file system instantly become
available to multiple client machines. Do the following experiment to verify that this is
true:
client# ls /home/mickie
server# touch /home/mickie/data
client# ls /home/mickie
Does the client see the new file that was created on the server?
Answer
Yes, the client should see the new file that was created on the server.
Why did this command succeed when executed on the server, but not when executed on
the client? (hint: look at /etc/exports)
Answer
The file system was exported with write permission, but without root permission. Thus,
on the client, user root is treated as "nobody" inside /opt/fun.
3. (client)
Let's try a variation on the experiment you did back in Q#1 of this part of the lab.
client# touch /home/mickie/memo
Do whatever is necessary to successfully execute the touch command on the client. (You
should not have to type anything on the server. Hint: Which user on the client has write
permission on Mickie's home directory?)
Answer
The file system was neither exported nor mounted with read only permissions. However,
root on the client is treated as nobody. User nobody only has r-x permissions on mickie's
home directory.
client# su - mickie
$ touch /home/mickie/memo
$ exit
4. (client and server)
We saw in the previous question that root on an NFS client does not (by default) have the
same file access as root on the NFS server. If a single administrator manages several
systems, however, it may be useful to allow root on NFS clients to have true root access
to exported file systems.
What would you have to do on the NFS server side to allow root on the client to have the
same full root access to the /home file system? Make it so.
Did this seem to work? While logged in as root on the client, try touching a file in mickie's
home directory. Did you have to do anything on the client side to recognize the change in
the server's exports file?
Answer
server# vi /etc/exports
/home -root=client
server# exportfs -a
client# touch /home/mickie/junk
This should work, even without remounting the file system
Use mount -v to see which file systems remain in the client's mount table. Also do an
ls of /home/mickie, and note that the memo and data files that were under
/home/mickie no longer appear since the file system has been unmounted.
Answer
client# umount /home/mickie
client# mount -v
client# ls /home/mickie
2. (client)
Let us try a more complicated scenario. Can the client unmount an NFS file system if one
of the client's users is accessing that file system?
On the client machine, open two windows. In one of the windows, cd to the
/home/minnie directory. In the other window, issue the umount command to unmount
the minnie file system. Did this work?
The fuser command can tell you who is currently using a file system. Try the following
to see who is currently using /home/minnie.
client# fuser -cu /home/minnie
Try a fuser -cuk on /home/minnie, and see what happens. Then try unmounting the
file system again.
Answer
This should fail. You cannot unmount a file system on the client while a process on the
client is using the file system.
This command kills the window that was using /home/minnie. Once that process is dead,
the file system can be unmounted without a problem.
3. (server)
In Part 2, Question 5, we saw a command that the server administrator could use to
determine which of the exported file systems were actually mounted on client hosts. Now
try executing that command again. Was the NFS server notified when the client
unmounted mickie and minnie?
Answer
server# showmount -a
We saw that the administrator can force users out of a mounted file system with the
fuser command. If fuser is executed on the NFS server, does it kill processes on the
NFS clients, or just on the server itself? Try it.
client# cd /opt/fun
server# fuser -cuk /opt
Unfortunately, there is no mechanism in NFS to kill client processes from the server.
Answer
You should see that the fuser command, when executed on the server, only kills
processes on the server. The clients should be unaffected.
We just discovered in the previous question that the NFS server has no way of killing
processes on client hosts. Local file systems cannot be unmounted until all processes
using them die. Does this mean that an NFS server administrator is unable to unmount
his/her exported file systems until the clients that have mounted those file systems
voluntarily unmount? Let's find out.
server# fuser -cuk /opt # kill any proc's on the svr using /opt
server# umount /opt # unmount the local /opt file system
Did you successfully unmount the file system? Any errors? What happened to the client
process that was using your exported /opt?
Try the following commands on the client and note the output.
client# pwd
client# ls
client# cd ..
client# cd /
client# umount /opt/fun
On the client, could you unmount /opt/fun, even after the server unmounted?
Answer
No errors. The client process does not appear to have been affected — yet.
pwd works.
cd / works.
Answer
Answer
server# mount -a
server# exportfs -a
client# mount -a
# /sbin/init.d/dtlogin.rc stop
Now take the server's lan card down and note what happens to the client:
What happens when the client regains connectivity to the NFS server?
Answer
The ls hangs indefinitely. After a short time, you should see an "NFS server not responding"
message on the console.
What happens if the client tries to remount that file system again while the server is still
down? Try it.
Answer
The umount actually occurs immediately. However, the client attempts to notify the
server that the file system has been unmounted. It may take several minutes for this to
time out.
What happens when a process on the client tries to hit a file system on the downed server
(assuming the default mount options are used)? Does it hang indefinitely or time out?
What happens when a client tries to mount a file system from a downed server? (Again,
assume that the default mount options are used.)
Answer
When the NFS server becomes unavailable, no client processes are killed. However, if a
client process attempts to access the server, the process hangs indefinitely. The client can
always unmount a file system, even if the NFS server is down.
By default, HP-UX mounts NFS file systems "hard,intr.” If the NFS server goes down with
these default mount options, we saw client attempts to access the NFS files and
directories hang indefinitely. Can the user abort a command if they get tired of waiting?
Try it.
server# ifconfig lan0 down
client# ls /opt/fun # can the user abort the ls with ^C?
server# ifconfig lan0 up
Alternately, you can mount an NFS file system nointr. How would the nointr mount
option affect the experiment above? Try it.
Answer
With the default intr mount option, the user can ^C out of a process that hangs because
of a downed NFS server.
If the file system is mounted nointr, however, a process hung as the result of a downed
NFS server hangs indefinitely. The user will get a prompt back only when it regains
connectivity to the NFS server.
2. (server and client)
Soft versus hard mounts
The client can also override the hard option with mount -o soft. If a client has
mounted an NFS file system "soft" and the NFS server goes down, what happens to client
requests to the server? Try it.
Answer
Eventually, ls times out with a message saying: "NFS access failed.” In contrast to this
behavior, the "hard" option would have hung indefinitely.
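A minimal sketch of the soft-mount experiment (again, the file system and server name are
assumptions based on this lab's setup):
client# umount /opt/fun
client# mount -F nfs -o soft server:/opt/fun /opt/fun
server# ifconfig lan0 down
client# ls /opt/fun # should eventually fail rather than hang
server# ifconfig lan0 up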
3. (client)
From the client, try your connectivity test commands again:
client# ping server
client# rpcinfo -p server
Will clients be able to access already mounted NFS file systems? Why?
Answer
You can still ping the server, but the mount and nfs RPC services are no longer registered
with the server's portmapper. The client cannot mount new file systems since the mountd
daemon is no longer running on the server. Nor can the client access already mounted NFS
file systems, since the nfsd daemons are not running either.
Part 8: Cleanup
1. Before moving on to the next chapter, restore your network configuration to the state it
was in before this lab.
# /labs/netfiles.sh -r NEW
Preliminary
Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
This lab assumes that the classroom has been configured with the 128.1.*.* IP addresses
configured earlier in the course. The instructor station must be assigned IP address 128.1.0.1.
Execute the following preliminary setup steps on both the student and instructor
workstations in preparation for the lab:
# /labs/autofs.lab.setup.sh
This script adds several entries to the /etc/passwd and /etc/hosts files on both the
instructor and student workstations. When executed on the instructor station, the script also
configures several additional IP addresses via IP multiplexing, and creates and exports
several directories.
2. AutoFS was not included in the NFS product that was initially shipped with 10.20 and
11.00. Verify that AutoFS is included in the version of the NFS product installed on your
system by checking for the existence of the /usr/lib/netsvc/fs/autofs directory.
Answer
# ll /usr/lib/netsvc/fs/autofs
3. HP-UX 10.20 and 11.x support both AutoFS and the older Automounter. Is either of these
services configured on your machine? Which one, if any?
Answer
# more /etc/rc.config.d/nfsconf
Answer
# vi /etc/rc.config.d/nfsconf
AUTOMOUNT=1
AUTOFS=1
There is no need to change the defaults for any of the other AutoFS and Automount
related variables in the file at this point.
5. Automount and AutoFS should never run concurrently on a system. Technically, you
should be able to switch from one service to the other by tweaking the control variables
in /etc/rc.config.d/nfsconf. Realistically speaking, however, it is often difficult to
shut down automounter without rebooting since the daemon will not die until all of the
automounted file systems are unmounted. The cleanest solution is to reboot. Make it so!
# shutdown -ry 0
6. When your system comes back up again, verify that the AutoFS daemons are running.
Answer
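One way to check (a sketch; the daemon names assume the HP-UX AutoFS product, and the
output may vary slightly by release):
# ps -ef | grep automount
You should see the automountd daemon running once AutoFS has started.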
Answer
# cat /etc/auto_master
/net -hosts -nosuid,soft
2. Does the mount table reflect the fact that AutoFS is managing the /net mount point?
Answer
# mount -v
Yes! You should see an entry in your mount table showing that -hosts is mounted on
/net. The file system type field in the mount table should indicate that this is an autofs
file system.
3. Test your -hosts map! What happens when you access /net/corp? Try it!
# ls /net/corp
Answer
Several NFS file systems should have been mounted under /net/corp on your behalf, and
should appear in the ls output.
Answer
# mount -v
The -hosts entry in the mount table remains. Also, you should see one entry in the
mount table for each of the NFS file systems mounted under /net/corp/*.
5. Will AutoFS recognize a host referenced by IP address rather than name? Try it!
# ls /net/128.1.0.1
# mount -v
It works! You may reference hosts under /net by either host name or IP address.
# ls /net/10.1.1.1
Answer
The resulting AutoFS mount request fails, and AutoFS returns a "not found" message.
Answer
# vi /etc/auto_master
/- /etc/auto.direct
Answer
# vi /etc/auto.direct
/usr/contrib/games -ro corp:/usr/contrib/games
3. What must be done to make this change take effect? Make it so!
Answer
# automount
4. What appears in the mount table to indicate that AutoFS has recognized the new direct
map?
Answer
# mount -v
5. Does the games mount point appear when you list the contents of /usr/contrib? Does
listing the /usr/contrib directory cause AutoFS to mount the games file system from
the NFS server?
# ls /usr/contrib
# mount -v
Answer
The games mount point directory does appear in the ls output. However, the file system
does not actually mount until the contents of /usr/contrib/games are first accessed.
# cd /usr/contrib/games
# ls
# /usr/contrib/games/oneko/bin/X11/oneko &
# mount -v
Answer
Any attempt to access the contents of an AutoFS managed mount point should cause the
associated NFS file system to mount. Any one of the three actions in this question – cd,
ls, or running the executable – would have been sufficient to cause the file system to
mount.
Viewing the mount table should verify this. You should see /usr/contrib/games
mounted from the NFS server.
7. Add another entry to your direct map to mount the /data/contacts directory from the
corp NFS server. Users will need both read and write access to this file system. Do not
execute the automount command yet.
Answer
# vi /etc/auto.direct
/usr/contrib/games -ro corp:/usr/contrib/games
/data/contacts -rw corp:/data/contacts
Answer
# ls /data/contacts
This should generate a "not found" error message. The automount command must be
executed to notify AutoFS any time the direct map changes.
Answer
# automount
# mount -v
# ls /data/contacts
# mount -v
# umount /home
2. Add an indirect map entry for /home to /etc/auto_master. This map entry should
reference the /etc/auto.home map file.
Answer
# vi /etc/auto_master
/home /etc/auto.home
3. What must be done anytime the master map changes? Make it so!
Answer
You must rerun the automount command anytime the master map changes:
# automount
# mount -v
4. Now create the /etc/auto.home map file. The map file should be configured such that:
Answer
# vi /etc/auto.home
finance finance:/home/finance
business business:/home/business
sales sales:/home/sales
It is not necessary to execute automount after modifying an indirect map. This is one
key advantage that the indirect map has over a direct map!
5. Check the mount table. How many mount table entries were created because of the new
indirect map? How many entries would have been created in the mount table if this had
been configured as a direct map?
Answer
# /usr/sbin/mount -v
There should be just one new entry in the mount table indicating that /etc/auto.home
is mounted on /home. If this had been configured via a direct map, there would have
been three new entries in the mount table.
6. Do an ls of /home. Can you explain the result? Did AutoFS mount any file systems?
Answer
# ls /home
# mount -v
The ls command does not list anything! This is expected. The home directories will not
be mounted until they are actually accessed.
7. Now access a specific user's home directory and see what happens to the mount table:
# ls /home/finance/user1
# mount -v
Answer
AutoFS intercepts the /home/finance access attempt, and automatically mounts the
needed file system from the finance server. This is reflected in the mount table.
8. Will this configuration automatically mount a user's home directory at login time? Try it!
Try logging in as user "user3.” Then check the mount table to verify that the user's home
directory was in fact mounted from the proper location.
# su - user3
$ pwd
$ ls -a
$ exit
# mount -v
Answer
The user login should succeed. The login process attempts to cd to the home directory
specified by the user's entry in the /etc/passwd file. Assuming /etc/passwd and
AutoFS are configured properly, users will never know that their home directories are
mounted by AutoFS.
9. Can you shorten the /etc/auto.home file to a single line? How? Make it so! Then test
your solution:
# vi /etc/auto.home
# ls /home/sales/user5
# mount -v
Answer
# vi /etc/auto.home
* &:/home/&
# ls /home/sales/user5
# mount -v
AutoFS key substitution provides the solution to this problem. The /etc/auto.home
file suggested below will automatically NFS mount any user's home directory if each NFS
server's home directories are named according to the following convention:
/home/servername/username. The ls command should succeed.
Part 5: Cleanup
Before moving on to the next chapter, run the netfiles.sh cleanup script:
# /sbin/init.d/nfs.client stop
# mount -a
# /labs/netfiles.sh -r NEW
Directions
In this lab exercise, you will work with a team of two to four classmates to configure and test
NIS servers and clients in your own NIS domain. Working with the teammates assigned by
your instructor, decide on a name for your NIS domain.
Within your domain, you should configure one master server, a slave server, and one or more
clients. Decide among yourselves which machine will be your master server, which will be
the slave, and which will be the client(s):
Client(s): _________________
Note that the examples referenced in the instructions that follow refer to a domain called
"california" containing three hosts. Within this sample domain, "sanfran" is the master server,
"oakland" is the slave server, and "la" is a client.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
1. Ensure that your ASCII source files (/etc/passwd, /etc/group, etc.) are up-to-date.
Although the ASCII files may be changed after configuring NIS, it is much easier to make
changes now. For the sake of this lab exercise, you may assume that your ASCII source
files are already up-to-date.
2. The script used to configure the NIS master server must know ahead of time the name of
the domain. Do this by setting your server's NIS domain name with the domainname
command:
# domainname california # set your domain name
# domainname # check your domain name
3. Next, run the ypinit -m command to build all the maps for your domain. When asked if
you wish to "quit on non-fatal errors," answer "n". ypinit prompts for a list of slave
servers for the domain, then builds all the necessary maps.
# ypinit -m
Answer
# vi /etc/rc.config.d/namesvrs
NIS_MASTER_SERVER=1
NIS_SLAVE_SERVER=0
NIS_CLIENT=1
NIS_DOMAIN=california
Answer
# cd /
# shutdown -ry 0
6. When your machine comes back up again, check to see which processes are running.
What NIS-related processes would you expect to see on an NIS master server?
Answer
Among others, you should see portmapper/rpcbind, ypserv, rpc.yppasswd, and
ypbind. A complete list of NIS-related daemons was provided earlier in the chapter.
Do not begin this portion of the lab until the master server is fully configured.
1. Start by setting your domain name as you did on the master.
Answer
# domainname california
3. Watch the ypinit messages. What does the ypinit do to configure the slave server?
(Note: disregard the ethers, bootparams, and netmasks errors generated by ypinit.
These maps are not used in HP-UX, but the ypinit utility still attempts to download
them.)
Answer
ypinit automatically downloads all the NIS maps from the master server.
4. ypinit should have copied the NIS maps from the master server, and stored them
under the slave server's /var/yp directory. Do an ls of /var/yp, and find the
subdirectory for your domain. What do you see in your domain’s /var/yp subdirectory?
Answer
All NIS maps are stored in subdirectories under /var/yp. The california maps, for
instance, would be found in /var/yp/california.
5. Next, modify the NIS startup configuration file, /etc/rc.config.d/namesvrs.
Enable your machine as an NIS_SLAVE_SERVER and NIS_CLIENT and define your
DOMAINNAME.
Answer
# vi /etc/rc.config.d/namesvrs
NIS_MASTER_SERVER=0
NIS_SLAVE_SERVER=1
NIS_CLIENT=1
NIS_DOMAIN=california
6. Remove all of your users' entries from your local password file, since NIS will now be
providing central administration of your user account information. However, be sure to
leave all accounts with userids below 100 in /etc/passwd. Why might it be important to
leave these userids (especially root) in place?
Answer
# vipw # remove all user account definition lines
If there are problems with NIS, you should ensure that at least the critical system
accounts are still available so root can log on and fix the problem.
7. Reboot.
Answer
# cd /
# shutdown -ry 0
Answer
# vi /etc/rc.config.d/namesvrs
NIS_CLIENT=1
NIS_DOMAIN=california
2. As you did with your slave server, remove all user entries from /etc/passwd.
Answer
# vipw
# remove all user entries, but leave userids 0-100
3. Reboot.
Answer
# cd /
# shutdown -ry 0
client# ypcat -x
3. You can check the value associated with any key in an NIS map by using the ypmatch
command:
client# ypmatch user1 passwd.byname
client# ypmatch 0 passwd.byuid
4. Do the standard UNIX utilities use NIS? To find out, try logging in as user1. Note that
user1 no longer exists in the slave or clients' local password files. Why does this login
succeed?
Answer
client# login user1 # login as user1
client# exit # log back out again
The system calls used to look up usernames and passwords are smart enough to
reference the NIS maps instead of the local password file.
5. Try another system utility. Use nslookup to determine which IP address is associated
with your neighbor's host name. Does nslookup appear to use NIS? How can you tell?
Answer
client# nslookup oakland
nslookup notes in its output: "Trying NIS.”
Even if the /etc/hosts file did not exist, your client could resolve host names using the
NIS hosts map.
client# exit
2. Is the password change reflected in the password map on the master, the slave, or both?
Use the yppoll command to check the order number on the master and slave servers.
yppoll -h slave passwd.byname
yppoll -h master passwd.byname
yppoll -h slave passwd.byuid
yppoll -h master passwd.byuid
Answer
The order numbers should be the same, which indicates that both servers' maps were
updated.
4. Try another change on the client. Create a user account in the /etc/passwd file on the
client, then ypcat the passwd map again. Does ypcat show the new account? Explain.
client# useradd donald
client# ypcat passwd
Answer
ypcat does not reflect the changes. NIS consults the NIS maps (which haven't changed
yet) rather than the local passwd file.
5. What happens if you make your changes to /etc/passwd on the master server instead
of the client? Try it. Add user donald to the master server's passwd file. Then ypcat the
passwd map and explain the result.
master# vi /etc/passwd
master# ypcat passwd
Answer
Even changing the ASCII source files on the master will not yield an immediate change in
the ypcat output. Remember, ASCII source files are distinct from NIS maps. The master's
NIS maps must be rebuilt and pushed out to the slaves anytime the ASCII source files
change.
6. On the master, do whatever is necessary to rebuild the passwd map and propagate the
updates to the slave server. Use ypcat to ensure this worked properly.
Answer
master# /var/yp/ypmake passwd
master# ypcat passwd
ypmake rebuilt both password maps and automatically pushed them out to the slave.
7. What happens if an NIS slave is down when the master attempts to push an update? Try it
and find out.
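A minimal sketch of the experiment (the interface name and the map you change are
assumptions; any small passwd change will do):
slave# ifconfig lan0 down # simulate a downed slave
master# vipw # make a small change to /etc/passwd
master# /var/yp/ypmake passwd # rebuild and push the maps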
ypmake should display a "Timeout talking to slave" warning. However, the final message
from ypmake says: "no errors encountered." Make a habit of reading ALL the messages
from ypmake so you do not miss timeout warnings.
8. Bring the slave's LAN card back up again, then do whatever is necessary on the slave to
update the maps. Note: ypxfr does not recognize the NIS nicknames.
Answer
slave# ifconfig lan0 up
slave# ypxfr passwd.byuid
slave# ypxfr passwd.byname
9. Is any harm done if you ypxfr a map that is already up-to-date? Try it. From the slave,
try another ypxfr on passwd. What happens? Why might this behavior be
advantageous?
Answer
slave# ypxfr passwd.byname
slave# ypxfr passwd.byuid
ypxfr only downloads new copies of the maps if there have been changes. Since the
maps on the master have not changed since the last ypxfr, there was no need to
download the maps again. The slave's maps remain unchanged.
Answer
client# vipw # add the following lines to the end of the file
+user1
+user2
+user3
2. Did your escape entry have the desired effect? Can your client su to user1's account? Can
your client su to user6's account? Why can user6 still log in?
Answer
Both users appear to be able to log in despite the escape entry. By default, HP-UX 11.x
does not recognize escape entries. In order to force the system to recognize the escape
entries, you must modify /etc/nsswitch.conf.
3. Create a new /etc/nsswitch.conf file for yourself with the entries required to
recognize escape characters in /etc/passwd and /etc/group.
Answer
client# vi /etc/nsswitch.conf
passwd: compat
group: compat
4. Try logging in with the user1 and user6 usernames again. What happens now?
Answer
client# su - user1 # succeeds.
client# su - user6 # fails.
Answer
The /etc/passwd file on the master is used to build the passwd map. Deleting all the
user lines would leave the passwd maps empty after the next ypmake.
2. Follow the steps suggested in the notes to restrict access to the master server so only
root can log in.
Answer
master # vi /etc/nsswitch.conf
passwd: compat
group: compat
master# vi /etc/rc.config.d/namesvrs
master# vi /var/yp/ypmake
3. Try logging into your master server as user3. This should fail.
# /sbin/init.d/dtlogin.rc stop
2. What happens if the NIS master server is unreachable for a period of time? Take down
the LAN card on your master server.
Answer
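A minimal sketch (assuming lan0 is the master server's active interface):
master# ifconfig lan0 down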
3. Can clients still access the maps? From the client, ypcat passwd and explain the result.
(Be patient.)
Answer
This should work. If the client was bound to the master, it may take a few minutes to
timeout, but eventually ypbind should send out a broadcast to find a new server to
which it can bind. The slave should be able to provide the requested map.
4. Can changes be made to the maps while the server is down? Log in as user4 on the client
and try changing the password with passwd. What happens? (Be patient.)
Answer
The passwd command fails. No changes may be made to the maps until the master server
returns.
Answer
6. Try a ypcat on passwd. What happens? (Be patient. Once you see a few error messages,
press return to get back to a prompt.)
Answer
Eventually, you should get "NIS server not responding" messages, and the command
times out with a failure message since no servers are available.
Answer
Answer
client # ps -e
2. See if your client can still access the NIS maps. Try a ypcat passwd and see what
happens (be patient).
When an NIS server goes down, the client's first access may eventually time out and
generate an error. However, ypbind immediately attempts to bind to another NIS server
on the subnet. Try another ypcat passwd and see what happens. Did the ypcat
succeed this time?
Answer
3. There are a number of RPC daemons that must be running on an NIS server in order for
clients to be able to access the NIS maps. How can the client see if these RPCs are
registered and available?
Answer
client# rpcinfo -p sanfran # test the master
client# rpcinfo -p oakland # test the slave
The master server should be running rpcbind (portmap), ypbind, ypserv, ypxfrd,
yppasswd, and ypupdated.
The slave should be running all of the above except rpc.yppasswd and
rpc.ypupdated.
Introduction
In this exercise, you will configure a DNS master server, a slave server, and a DNS client. You
will also have a chance to update the DNS data on your name servers, and explore some of
the name server database files.
Your instructor will break the class into teams of 2 or 3 students each. Each team will be
assigned a DNS sub-domain under hp.com from the table below. You will then work with
your teammates to configure a master server, a slave server, and one or more DNS clients
within your assigned domain. The instructor's station will serve as a root level name server so
you can access other teams' domains as well.
The first two octets in the IP addresses will vary from classroom to classroom, but should be
consistent across all hosts within your classroom. Ask your instructor what the first two
octets should be set to.
Table 12-1.
Preliminary Steps
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
2. Modifying IP connectivity on a running system can wreak havoc on CDE and other
applications. Kill CDE before going any further:
# /sbin/init.d/dtlogin.rc stop
3. If you haven’t already changed your IP address and hostname to match the hostname
your instructor assigned to you, do so now. Use the /labs/netsetup.sh script to
make the change.
4. Run hosts_to_named.
# hosts_to_named -f param
If hosts_to_named fails for any reason, check the syntax in /etc/hosts, remove
/etc/named.data/conf.*, /etc/named.data/boot.*,
/etc/named.data/db.*, and /etc/named.conf, and re-run hosts_to_named.
5. Copy the db.cache file from corp. Note that the FTP daemon on corp attempts to
resolve the source IP address of each incoming FTP request to a hostname. Since DNS
isn’t fully configured at this point, it may take a couple minutes for the resolver to
timeout. Be patient.
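One possible way to copy the file (a sketch only; it assumes corp permits an ftp login and
keeps db.cache in /etc/named.data):
# cd /etc/named.data
# ftp corp
ftp> cd /etc/named.data
ftp> get db.cache
ftp> quit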
# /sbin/init.d/named start
# mkdir /etc/named.data
# chmod 755 /etc/named.data
3. FTP a copy of conf.sec.save from the master server, and move it into place on the
slave server as /etc/named.conf.
# vi /etc/rc.config.d/namesvrs
NAMED=1
NAMED_ARGS=""
# /sbin/init.d/named start
# vi /etc/resolv.conf
search state.hp.com hp.com # use your domain name here
nameserver w.x.y.z # use your master's IP here
nameserver w.x.y.z # use your slave's IP here
2. If your /etc/nsswitch.conf exists, delete it. You can experiment with the default
behavior for now. You will have a chance to re-create the file later.
# rm /etc/nsswitch.conf
3. If you are the master server, you should have modified your /etc/hosts file back in
Part 2, so you can skip this step. Slaves and clients, however, still need to modify
/etc/hosts at this point. Fully qualify and create aliases for your host in your local
domain, and remove all other entries (except localhost).
# vi /etc/hosts
127.0.0.1 localhost
128.1.1.3 city.state.hp.com city # Keep your host’s entry
Answer
# nslookup
> server w.x.y.z # Tell nslookup to use your master server
> city.state.hp.com. # Resolve a host name in your domain
> corp.hp.com. # Resolve corp.hp.com
> w.x.y.z # Resolve an IP in your domain
> w.x.y.z # Resolve corp's IP
> exit
2. Try the same tests that you did in the previous question, but use the slave name server
this time. Does your slave server seem to work?
Answer
# nslookup
> server w.x.y.z # Tell nslookup to use your slave server
> city.state.hp.com. # Resolve a host name in your domain
> corp.hp.com. # Resolve corp.hp.com
> w.x.y.z # Resolve an IP in your domain
> w.x.y.z # Resolve corp's IP
> exit
3. Which name server does nslookup use by default if you simply type nslookup
corp.hp.com from the shell prompt? Try it. How can you permanently change the
default name server?
Answer
The default name server is defined by the first nameserver entry in
/etc/resolv.conf. Reversing the order of the nameserver entries in
/etc/resolv.conf changes the default name server.
4. Try resolving a host name in your domain using the simple host name (eg: sanfran,
rather than sanfran.ca.hp.com). Try resolving a host in another domain using the
simple host name. Your first experiment should succeed, while the second should fail.
Why?
Answer
# nslookup city # use a simple host name in your domain
# nslookup othercity # use a simple host name in another team's domain
The second example fails since the external host’s domain isn’t included in your
/etc/resolv.conf search list.
Answer
The example that follows adds host sacramento.ca.hp.com to sanfran, the master
server for ca.hp.com. Your hostnames and IPs will be different.
# vi /etc/hosts
w.x.y.z city.state.hp.com # add another city to your hosts file
# cd /etc/named.data
# hosts_to_named -f param
2. Which two db.* files would you expect to be affected by the newly added host and IP?
Look at the SOA records for those two files. How can you tell that the files were updated?
Answer
The forward-lookup file for your domain and the reverse-lookup file for your network
address should both be affected. This is reflected by the serial number in the SOA records at
the top of both files; the serial number has been incremented by one.
3. Now that the db.* files have been updated, can you nslookup the new host using the
master server? Try it, and explain the results.
Answer
# nslookup
> server w.x.y.z # Tell nslookup to use your master server
> city.state.hp.com. # Resolve the new host name
> exit
This should fail. named must be forced to reread its data files before it will resolve the
new hostname. If you run nslookup non-interactively, though, it may find the hostname
in the /etc/hosts file.
4. What do you need to do to ensure that your DNS clients can resolve the new host name?
Make it so.
Answer
Run sig_named on the master server to force the named daemon to reload its data files.
# sig_named restart
# nslookup
> server w.x.y.z # Use your master server's IP here.
> city.state.hp.com # Use your newly added host here
> exit
5. By default, when will your slave name server recognize that a new host name and IP have
been added to the domain? How can you force the slave to do an immediate update? Do
it.
Answer
By default, the slave will only refresh its DNS data at the interval specified in the SOA
records. Typically, the refresh interval is 3 hours. You can force an immediate refresh by
restarting named on the slave server:
# sig_named restart
# nslookup
> server w.x.y.z # Use your slave server's IP here.
> city.state.hp.com # Use your new hostname here
> exit
Part 7: Cleanup
1. Restore your pre-DNS configuration on all hosts in your domain by running
/labs/netfiles.sh:
# /labs/netfiles.sh -r NEW
Directions
This lab offers an opportunity to configure, use, and troubleshoot the ARPA/Berkeley service
configuration on your machine. For a portion of the lab, you will need to work with a partner.
Choose a partner, and decide which machine will be the internet service "server" during the
experiments that follow, and which will be the "client".
Note that the "server" and "client" roles assigned in this lab are relatively arbitrary. Most HP-
UX machines are configured to provide both client and server functionality.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
Answer
# swlist -l product InternetSrvcs
2. (server) The server's inetd daemon must be running in order for clients to have access
to any of the internet services. Use ps -e to check to ensure that the inetd daemon is
running on your server.
Answer
# ps -e | grep inetd
3. (server and client) Which script starts inetd during the boot process? At which run level
does inetd start?
Answer
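A typical answer (a sketch; the exact start link name may vary by release):
# ls -l /sbin/init.d/inetd /sbin/rc2.d/*inetd*
inetd is started by the /sbin/init.d/inetd script, which is linked into /sbin/rc2.d, so the
daemon starts when the system enters run level 2.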
Answer
# more /etc/inetd.conf
# more /etc/services
The list of services enabled may vary from machine to machine, depending on the
contents of /etc/inetd.conf. Services that are commented out are not available.
The port numbers for the services may be found in the second field of the
/etc/services file.
Most likely, there are no server processes running for any of the listed services. Server
processes for these services are only started on an as-needed basis.
6. (server) Ensure that the services in inetd.conf that appear to be enabled actually are
enabled. Use netstat -a to check the status of each of the enabled services and ports
you listed in the table above.
Answer
# netstat -a
netstat -a lists the status of all configured ports. Unless the services are currently in
use, all ports associated with the services listed in the table should all be in a "LISTEN"
state.
Configure your /var/adm/inetd.sec file such that only the hosts in your row
(including your partner) have telnet access. Add another line to ensure that all your
classmates except your partner can ftp to your machine.
Answer
vi /var/adm/inetd.sec
telnet allow 128.1.1.1-4 # actual IP addresses will vary
ftp deny 128.1.1.2 # actual IP addresses will vary
2. (client) See if your server's configurations so far have succeeded. What messages do you
see when you attempt to telnet or ftp to the server?
Answer
telnet succeeds; the ftp attempt is refused, since inetd.sec denies ftp access from your
partner's (the client's) address.
Answer
# vi /etc/rc.config.d/netdaemons
export INETD_ARGS="-l"
# /sbin/init.d/inetd stop
# /sbin/init.d/inetd start
4. (client) See if the logging feature works. From the client, telnet to the server, do an ls,
then immediately exit. Then attempt to ftp to the server (this should fail). Move on to
the next question to see what was recorded in the inetd log.
Answer
# ftp server # server host name will vary
# telnet server # server host name will vary
5. (server) How much detail is recorded in the inetd log? On the server, do a more on the
file where ARPA/Berkeley service requests are logged.
• Does inetd log the name of the service requested?
• Does inetd log the host name of the requesting client?
• Does inetd log the username of the user making telnet requests?
• Does inetd log the commands executed during the telnet session?
• Does inetd log denied requests for Internet service?
Answer
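A sketch of what to expect (the log location and level of detail may vary slightly by release;
inetd logs through syslogd):
# more /var/adm/syslog/syslog.log
inetd records the service name and the requesting host for each connection, including denied
requests, but it does not record the telnet user's username or the commands executed during
the session.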
Answer
Which telnet related processes are running on the client now? Which telnet related
processes are running on the server now?
Answer
How many telnetd server processes are running on the server? How many telnet
processes are running on the client? Explain.
Answer
There should be two telnet processes on the client, and two telnetd server processes
on the server.
Every telnet service request first goes to the server's inetd daemon, at which point
inetd starts the appropriate server process to manage interaction with the requesting
client.
You will have multiple telnetd server processes running on your machine if there are
multiple simultaneously connected clients.
4. (client and server) Take a look at the ports that are being used by your telnet
processes:
client# netstat -a | grep telnet
server# netstat -a | grep telnet
How many telnet connections are ESTABLISHED? What process do you suppose is
monitoring the port in the LISTEN state? Do the client side telnet processes share a
port or use different ports? Which well-known port number are the telnetd daemons on
the server sharing?
Answer
On the client side, the telnet processes each have a separate port.
On the server side, however, all the telnet daemons receive data on port 23.
5. (client) Close your telnet connections to the server.
Answer
The connection fails. Clients cannot connect until the server's inetd daemon returns.
2. (client and server) What happens if the server's inetd daemon goes down AFTER a
session has been established -- does the existing connection remain, or are all client
connections immediately terminated? Try it, then explain the result.
client# telnet server # establish a connection to the server
server# inetd -k # kill the server's inetd.
server# ps -e | grep telnetd # does the telnet daemon remain?
Answer
Existing connections remain, even if inetd is killed. After the initial connection, inetd
is no longer involved in the client - server communication.
3. (client and server) What happens if the server's telnetd server process is killed while a
client is connected? Try it.
server# ps -e | grep telnetd # find the server process's PID
server# kill _____ # kill telnetd's PID
Does the client telnet process exist after the server's telnetd daemon is killed?
# inetd
Answer
Killing the telnetd process on the server severs the connection. The client telnet
process dies as a result.
4. (client) Must the client be running inetd in order to establish connections to a server?
Try it, and explain the result.
client# inetd -k # kill the client's inetd
client# telnet server # can the client still telnet out?
client# inetd # restart the client's inetd
Answer
Even if the client's inetd process dies, the client should still be able to telnet out.
Answer
# vi /etc/hosts.equiv # list all hosts in your row, one per line, by host name
2. (client)
While logged in as root, use rlogin to log into the server. What happens? Why?
Exit out of your rlogin session before proceeding to the next question.
Answer
# rlogin server
You should still be prompted for a password. Remember, host equivalency does not apply
to the root account.
# exit
3. (client)
Use the su command to switch your user ID to user1. Then try rlogin again. What
happens? Why?
Answer
# su - user1
# rlogin server
This should work. /etc/hosts.equiv on the server grants host equivalency to users
on the client.
4. (server)
What can you do on the server to give root on the client password-free access to your
machine? Make it so.
Answer
# vi ~root/.rhosts # add the client's host name to the file
5. (client)
Terminate the rlogin and su sessions you started previously. Ensure that you are back
to the "root" userid. Then see if you can rlogin to the server without a password.
Answer
# exit
# exit
# rlogin server # should work!
6. (server)
Remove /etc/hosts.equiv and ~root/.rhosts.
Answer
# rm /etc/hosts.equiv ~root/.rhosts
The list below suggests several different ways to corrupt the internet service configuration on
your "server" machine. Take turns being the "corrupter" and the "troubleshooter.”
The "corrupter" should perform any one of the corruption techniques from the list below on
the "server" machine. It is the duty of the "troubleshooter,” then to do whatever is necessary
on the server to enable the client to successfully telnet to the server.
Try the exercise several times, alternating roles as "corrupter" and "troubleshooter.”
/sbin/init.d/dtlogin.rc stop
4. Take down the server's LAN card with ifconfig lan0 down.
Part 7: Cleanup
Before moving on to the next chapter, restore your network configuration to the state it was
in before this lab.
# /labs/netfiles.sh -r NEW
Directions
Answer the following questions.
Answer
A daemon is a software process that runs continuously (in the background) and provides
services upon request. A server process runs one time, when called by a daemon, and
then stops.
Answer
Answer
inetd is a "superdaemon"; it is responsible for invoking other Internet servers when they
are needed. By allowing this one daemon to invoke many servers, the system load is
reduced. (The alternative would be to have one daemon for each of the services, which
would significantly increase the load.)
Answer
/etc/inetd.conf
Answer
# inetd -c
6. What is a port? What file associates port numbers with a service name?
Answer
A port is an address within a host that is used to differentiate between multiple sockets
with the same Internet address. Ports are identified by port numbers. (A socket address
consists of the Internet address plus the port number.)
The /etc/services file associates a port number with a service name. These ports are
called well-known ports.
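For example, you could confirm the well-known port assigned to telnet (a sketch; the exact
layout of the entry in /etc/services may differ slightly):
# grep telnet /etc/services
telnet 23/tcp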
Answer
• /etc/hosts.equiv
• $HOME/.rhosts
• /etc/ftpusers
• /etc/inetd.conf
• /var/adm/inetd.sec
• inetd logging
Answer
The servers remshd and rlogind use these files, if the files are present.
9. Are the /etc/hosts.equiv and $HOME/.rhosts files optional for using the Berkeley
Services? Explain your Answer.
Answer
The answer is yes and no. The files are optional; they are used only if they are present.
However, you must configure them (user or host equivalency) if a remote user is to access
one of your password-protected accounts with rcp or remsh, since those commands cannot
prompt for a password.
10. What is the name and what are the features of the security file that ftpd uses?
Answer
The /etc/ftpusers file lists local accounts that are denied ftp access; remote users cannot
log in via ftp as any account named in the file.
Answer
A public user account without password security that allows a user to copy files with ftp
to or from a remote system. The ftp daemon performs a chroot() to the anonymous FTP
user's HOME directory.
Answer
It allows or denies specific hosts or networks access to individual services that are
administered by inetd.
This will not work. The official service name (see /etc/services) is login, not
rlogin.
14. If inetd logging is enabled, which file contains the logging output?
Answer
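On HP-UX, inetd logs through syslogd, so the connection log normally appears in
/var/adm/syslog/syslog.log:
# tail /var/adm/syslog/syslog.log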
Answer:
# swlist InternetSrvcs
Answer:
# setup_bootp
# setup_tftp -h
3. Verify that the bootps and tftp services are both enabled in /etc/inetd.conf and
the /etc/services file.
Answer:
# grep -e bootp -e tftp /etc/inetd.conf
# grep -e bootp -e tftp /etc/services
4. Verify that the TFTP account exists in /etc/passwd and that a TFTP home directory was
created.
Answer:
Answer:
# swlist HPNPL
3. Using hppi, create a bootptab entry for a network printer. Use the hardware address,
IP address, host name, subnet mask, and default router address provided by your
instructor. Use your classroom's room name or number as the printer location, and your
own name as the printer contact.
Answer:
# /opt/hpnpl/bin/hppi
-> (2) JetDirect Configuration
-> (1) Create printer configuration in BOOTP/TFTP database
(Answer the questions that follow according to instructor's directions.)
4. Check the /etc/bootptab file for changes made by hppi. Name three pieces of
information defined in the printer's new entry in bootptab.
Answer
The following are a few of the most common fields found in the /etc/bootptab file:
:ht= #Network interface card type (ether, ieee, etc.)
:ha= #MAC address
:hn: #Should BOOTP provide the printer a host name?
:ip= #IP address
:sm= #Subnet mask
5. At this point your machine is ready to service bootp requests from the network printer
you configured.
6. Now remove the new printer bootp configuration from your machine using hppi.
# /opt/hpnpl/bin/hppi
-> (2) JetDirect Configuration
-> (2) Remove printer configuration from BOOTP/TFTP database
Directions
Your instructor will assign you to work with a team of your classmates to configure an NTP
server, and one or more NTP clients. Record the host names and chosen roles of your
teammates' machines below.
Record the commands you use to complete the steps below, and Answer all questions.
Preliminary Step
1. Portions of this lab may disable your lan0 interface card. If you are using remote lab
equipment, login via the GSP/MP console interface for the duration of the lab.
Since you probably do not have access to a radio clock in the classroom, use the NTP server's
internal system clock as the authoritative time source for your team.
1. Set the local clock forward 2 minutes so the clients can actually see a clock "step" after
enabling NTP.
date MMDDhhmm
xclock -update 1 &
2. Add a server line to the end of the /etc/ntp.conf file defining the local clock as the
only time source. Since the internal system clock is not likely to be accurate, set the
stratum level of this time source to 10.
# vi /etc/ntp.conf
server 127.127.1.1
fudge 127.127.1.1 stratum 10
5. After xntpd starts, it takes 5 poll cycles (320 seconds) to establish the appropriate peer
and server relationships. Wait 5 minutes before proceeding on to the next question.
6. Is the xntpd daemon running? Are there any NTP errors in the syslog?
# ps -e | grep xntpd
# tail /var/adm/syslog/syslog.log
If all is well, the daemon should be running, and there should not be any XNTPD
"ERROR"s in the syslog.
7. Does ntpq -p suggest that the correct association has been formed? What stratum level
did NTP assign to your local clock?
# ntpq -p
There should be one line in the ntpq -p output showing that the local clock is being
used as a stratum 10 time source.
You may use the server's hostname rather than the IP if you wish.
Note: xntp must be able to write to the directory where the drift file is located.
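A minimal sketch of the client-side /etc/ntp.conf (the server address is a placeholder, and
/etc/ntp.drift is simply a commonly used drift file location):
# vi /etc/ntp.conf
server w.x.y.z # your team's NTP server (hostname or IP)
driftfile /etc/ntp.drift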
3. Run the NTP startup script on the client to start the NTP daemon. Note the output as
ntpdate steps the system clock.
# /sbin/init.d/xntpd start
4. Check to ensure that your client formed the proper association by running ntpq -p.
# ntpq -p
5. Compare the time on your client against the time on the NTP server. Do they appear to be
synchronized at this point?
Answer
Execute the date command on both machines.
They should agree.
Directions
Carefully follow the directions below.
# /labs/netfiles.sh -r ORIGINAL
Answer:
# mkdir -p /depots/Rel_B.11.11/appl
Answer:
Answer:
4. List the contents of your new depot to verify that the software was copied properly.
Answer:
# swlist -s localhost:/depots/Rel_B.11.11/appl
5. Temporarily unregister your depot. What impact does this have on the depot list reported
by swlist -l depot?
Answer:
The new depot should no longer appear in the swlist -l depot output.
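A sketch of the unregister (and later re-register) commands, using this lab's depot path:
# swreg -u -l depot /depots/Rel_B.11.11/appl # unregister the depot
# swlist -l depot # depot no longer listed
# swreg -l depot /depots/Rel_B.11.11/appl # re-register it when needed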
Answer:
7. Use a “pull” install to install the EchoApp product from your new depot on your
localhost. Watch the output carefully. What was installed as a result of your swinstall?
Answer:
# swinstall -s localhost:/depots/Rel_B.11.11/appl \
-x autoreboot=true EchoApp
# /opt/echoapp/bin/echoapp
9. Remove the EchoApp product. Watch the output carefully. What was removed as a
result of your swremove?
Answer:
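A sketch of the removal (EchoApp is the product installed earlier in this lab):
# swremove EchoApp
The product files under /opt/echoapp should be removed, along with EchoApp's entries in
the installed products database.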
Answer:
Answer:
Answer:
# swinstall -s server:/depots/Rel_B.11.11/appl \
-x autoreboot=true \
EchoApp @ partner
4. Use the remote swlist functionality to verify that EchoApp installed properly on your
partner’s system.
Answer:
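A sketch of the remote check (it assumes remote SD-UX operations are permitted on your
partner's host):
# swlist EchoApp @ partner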
5. Can you remotely remove EchoApp from your partner’s system, too? Try it!
Answer:
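A sketch, again assuming remote SD-UX operations are permitted on the partner host:
# swremove EchoApp @ partner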
Part 3: Cleanup
1. Remove all of the software from your /depots/Rel_B.11.11/appl depot.
Answer:
# swremove -d \* @ /depots/Rel_B.11.11/appl
Answer:
# rm -rf /depots/Rel_B.11.11/appl