Network Operating Systems
P.O. Box 13495-00100 GPO Nairobi. Email: [email protected], 0700-912353, 0702-041042.
©2014
Course Description
The Network Operating Systems module investigates the network operating system and how it makes shared resources available to the network. It considers pre-installation and design issues for the logical network structure and how the requirements can be represented within the network operating system. It investigates how resources are securely shared over the network, and also examines the services provided by the network operating system and additional systems. Consideration is given to the management of the network from the operating system perspective, including management tools, and to the process of developing new management tools. Detailed functional and architectural aspects of the operating system are also investigated.
Course Structure
Contact hours: 42
Course Outline
WEEK 1 & WEEK 2
WEEK 3
CHAPTER TWO: Evolution of Operating Systems
o Earliest Computers
o Von Neumann architecture
o Bare Hardware, Monitors, Microprocessors
o System Calls
o Device drivers and library functions
WEEK 4
o Spooling
WEEK 9
WEEK 10 & WEEK 11
WEEK 12
o Clocks and Timers
o Blocking and Non-blocking I/O
o Kernel I/O Subsystem, Scheduling
o Buffering, Caching
o Spooling and Device Reservation, Error Handling
o Device Drivers
Course Assessment
Examination - 70%
Assignments - 10%
Total - 100%
Reading Text
CHAPTER 1: Introduction to Network Operating Systems
Just as a computer cannot operate without a computer operating system, a network of computers
cannot operate without a network operating system. Without a network operating system of some
kind, individual computers cannot share resources, and other users cannot make use of those
resources. This section provides a general introduction to network operating systems (sometimes
referred to as NOSs). It describes the basic features and functions of a NOS and contrasts these
with the capabilities of a stand-alone operating system.
A network operating system (NOS) provides services to clients over a network. Both the
client/server and peer-to-peer networking models use network operating systems, and as such,
NOSs must be able to handle typical network duties such as the following:
o Providing access to remote printers, managing which users are using which printers when, managing how print jobs are queued, and recognizing when devices aren't available to the network
o Enabling and managing access to files on remote systems, and determining who can access what, and who can't
o Granting access to remote applications and resources, such as the Internet, and making those resources seem like local resources to the user (the network is ideally transparent to the user)
o Providing routing services, including support for major networking protocols, so that the operating system knows what data to send where
o Monitoring the system and security, so as to provide proper security against viruses, hackers, and data corruption
o Providing basic network administration utilities (such as SNMP, the Simple Network Management Protocol), enabling an administrator to manage network resources and users
1. Introduction
Network operating systems extend the facilities and services provided by computer operating systems to support a set of computers connected by a network. The environment managed by a network operating system consists of an interconnected group of machines that are loosely connected. By loosely connected, we mean that such computers possess no hardware connections at the CPU-memory bus level, but are connected by external interfaces that run under the control of software. Each computer in this group runs an autonomous operating system, yet cooperates with the others to provide a variety of facilities, including file sharing, data sharing, peripheral sharing, remote execution and cooperative computation. Network operating systems are autonomous operating systems that support such cooperation. The group of machines comprising the management domain of the network operating system is called a distributed system. A close cousin of the network operating system is the distributed operating system. A distributed operating system is an extension of the network operating system that supports even higher levels of cooperation and integration of the machines on the network (features include task migration, dynamic resource location, and so on).
An operating system is low-level software controlling the inner workings of a machine. Typical
functions performed by an operating system include managing the CPU among many concurrently
executing tasks, managing memory allocation to the tasks, handling of input and output and
controlling all the peripherals. Application programs, and often the human user, are unaware of the existence of these features, as they are embedded and hidden below many layers of software; thus, the term low-level software is used. Operating systems have been developed, in many forms, since the early 1960s and matured in the 1970s. The emergence of networking in the 1970s and its explosive growth since the early 1980s have had a significant impact on the networking services provided by an operating system. As more network management features moved into operating systems, network operating systems evolved.
Like regular operating systems, network operating systems provide services to the programs that run
on top of the operating system. However, the type of services and the manner in which the services
are provided are quite different. The services tend to be much more complex than those provided by
regular operating systems. In addition, the implementation of these services requires the use
of multiple machines, message passing and server processes.
The set of typical services provided by a network operating system includes (but is not limited to):
1. Remote logon and file transfer
2. Transparent, remote file service
3. Directory and naming service
4. Remote procedure call service
5. Object and Brokerage service
6. Time and synchronization service
7. Remote memory service
The network operating system is an extensible operating system. It provides mechanisms to easily add and remove services, to reconfigure resources, and to support multiple services of the same kind (for example, two kinds of file systems). Such features make network operating systems indispensable in large networked environments.
2. History
In the early 1980’s network operating systems were mainly research projects. Many network and
distributed operating systems were built. These include such names as Amoeba, Argus, Berkeley
Unix, Choices, Clouds, Cronus, Eden, Mach, Newcastle Connection, Sprite, and the V-System.
Many of the ideas developed by these research projects have now moved into the commercial
products. The commonly available network operating systems include Linux (freeware),
Novell Netware, SunOS/Solaris, Unix and Windows NT.
In addition to the software technology that goes into networked systems, theoretical foundations of distributed (or networked) systems have been developed. Such theory includes topics such as distributed algorithms, concurrency control, state management, deadlock handling and so on.
3. Services
The services provided by a network operating system form the substrate used by those applications which need to interact beyond the simplistic boundaries imposed by the process concept.
A service is provided by a server and accessed by clients. A server is a process or task that
continuously monitors incoming service requests (similar to telephone operators). When a service
request comes in, the server process reacts to the request, performs the task requested and then
returns a response to the requestor. Often, one or more such server processes run on a computer
and the computer is called a server. However, a server process does not have to run on a server machine, and the two terms are often, confusingly, used interchangeably.
What is a service? In regular operating systems, the system call interface or API (Application
Programming Interface) defines the set of services provided by the operating system. For example,
operating system services include process creation facilities, file manipulation facilities and so on.
These services (or system calls) are predefined and static. However, this is not the case in a network operating system. Network operating systems do provide a set of static, predefined services, or system calls, like a regular operating system, but in addition provide a much larger, richer set of dynamically creatable and configurable services. Additional services are added to the network operating system by the use of server processes and associated libraries.
Any process making a request to a server process is called a client. A client makes a request by
sending a message to a server containing details of the request and awaiting a response. For each
server, there is a well-defined protocol defining the requests that can be made to that server and the
responses that are expected. In addition, any process can make a request; that is anyone can become
a client, even temporarily. For example, a server process can obtain services from yet another server
process, and while it is doing so, it can be termed a temporary client.
Services provided by a network operating system include file service, name service, object service,
time service, memory service and so on.
Peripherals connected to one computer are often shared by other computers, by the use of
peripheral sharing services. These services go by many names, such as remote device access, printer
sharing, shared disks and so on. A computer having a peripheral device makes it available by
exporting it. Other computers can connect to the exported peripheral. After a connection is made, the peripheral appears, to a user on the connecting machine, to be local (that is, connected to the user's machine). The sharing service is the most basic service provided by a
network operating system.
The most common service that a network operating system provides is file service. File services allow users of a set of computers to access files and other persistent storage objects from any computer connected to the network. The files are stored on one or more machines called the file server(s). The machines that use these files, often called workstations, have transparent access to these files.
Not only is the file service a common service, it is also the most important service in the network operating system. Consequently, it is the most heavily studied and optimized service. There are many different, often non-interoperable, protocols for providing file service.
The first full-fledged implementation of a file service system was done by Sun Microsystems and is
called the Sun Network File System (Sun-NFS). Sun-NFS has become an industry standard network
file system for computers running the Unix operating system. Sun-NFS can also be used from
computers running Windows (all varieties) and MacOS but with some limitations.
A network of computers managed by a network operating system can get rather large. A particular
problem in large networks is the maintenance of information about the availability of services and
their physical location. For example, a particular client needs access to a database. There are many
different database services running on the network. How would the client know whether the particular service it is interested in is available, and if so, on what server?
Directory services, sometimes called name services, address such problems. Directory services are the mainstay of large network operating systems. When a client application needs to access a server process, it contacts the directory server and requests the address of the service. The directory server identifies the service by its name (all services have unique names) and informs the client of the address of the service; the address contains the name of the server. The directory server is responsible for knowing the current locations and availability of all services, and hence can inform the client of the unique network address (somewhat like a telephone number) of the service.
The directory service is thus a database of service names and service addresses. All servers register
themselves with the directory service upon startup. Clients find server addresses upon startup.
A client can retain the result of a directory lookup for the duration of its life, or can store it in a file and thus retain it potentially forever. Retaining addresses of services is termed address caching. Address caching improves performance and reduces the load on the directory server. Caching also has disadvantages: if the system is reconfigured and a service address changes, then the cached data is wrong, and can indeed cause serious disruptions if some other service is assigned that address. Thus, when caching is used, clients and servers have to verify the accuracy of cached information.
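As an illustration, the following C sketch shows the client side of such a lookup with address caching and verification. The helpers directory_lookup and service_reachable are hypothetical, standing in for the directory protocol and the verification step; they are not part of any particular system.

    /* Hypothetical helpers standing in for the real directory protocol. */
    extern int directory_lookup(const char *name, char *addr_out, int len);
    extern int service_reachable(const char *addr); /* verifies a cached entry */

    static char cached_addr[64];   /* the cached service address */
    static int  have_cached = 0;

    /* Resolve a service name, preferring the cached address and falling
     * back to the directory server when the cached entry has gone stale. */
    const char *resolve(const char *service_name)
    {
        if (have_cached && service_reachable(cached_addr))
            return cached_addr;            /* cache hit: no directory load */

        have_cached = 0;                   /* stale or empty: ask the server */
        if (directory_lookup(service_name, cached_addr,
                             sizeof cached_addr) == 0) {
            have_cached = 1;               /* cache the fresh answer */
            return cached_addr;
        }
        return 0;                          /* service is not registered */
    }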
The directory service is just like any other service, i.e. it is provided by a server process. So there are two problems:
1. How does the client find the address of the directory service?
2. What happens if the machine running the directory service fails?
Making the address of the directory service a constant solves the first problem. Different systems have different techniques for doing this, but a client always has enough information to contact the directory service.
To ensure the directory service is robust and not dependent on one machine, the directory service is
often replicated or mirrored. That is, there are several independent directory servers and all of them
contain (hopefully) the same information. A client is aware of all these services and contacts any
one. As long as one directory server is reachable, the client gets the information it seeks. However, keeping the directory servers consistent, i.e. ensuring they all have the same information, is not a simple task. This is generally done by using one of many replication control protocols (see the section on Theoretical Foundations).
The directory service has been subsequently expanded not just to handle service addresses, but
higher level information such as user information, object information, web information and so on. A
standard for worldwide directory services over large networks such as the Internet has been
developed and is known as the X.500 directory service. However, the deployment of X.500 has been low and thus its importance has eroded. Currently, a simpler directory service called LDAP (Lightweight Directory Access Protocol) is gaining momentum, and most network operating systems provide support for this protocol.
A particular mechanism for implementing the services in a network operating system is called
Remote Procedure Calls or RPC. The RPC mechanism is discussed later in the section entitled
Mechanisms for Network Operating Systems. The RPC mechanism needs the availability of an RPC server accessible by an RPC client. However, a particular system may contain tens, if not hundreds or even thousands, of RPC servers. In order to avoid conflicts and divergent communication protocols, the network operating system provides support for building, managing and accessing RPC servers.
Each RPC service is an application-defined service. However, the operating system also provides an RPC service, which is a meta-service that allows the application-specific RPC services to be used in a uniform manner. This service provides several features:
1. A scheme of system-managed unique numbers (addresses) by which individual RPC servers are identified.
2. Tools for building client and server stubs for packing and unpacking (also known as marshalling and unmarshalling) of arguments between clients and servers.
The RPC service defines a set of unique numbers that can be used by all RPC servers on the
network. Each specific RPC server is assigned one of these numbers (addresses). The operating
system manages the creation and assignment of these identifiers. The operating system also provides
tools that allow the programmers of RPC services to build a consistent client-server interface. This is
done by the use of language processing tools and stub generators, which embed routines in the
client and server code. These routines package the data sent from the client to the server (and vice
versa) in some predefined format, which is also machine independent.
When a client uses the number to contact the service, it looks up the directory and finds the name of the physical machine that contains the service. Then it sends an RPC request to the RPC listener on that machine. The RPC listener is an operating-system-provided service that redirects RPC calls to the actual RPC server process that should handle the call.
RPC services are available in all network operating systems. The three most common types of RPC
systems are Sun RPC, DCE RPC and Microsoft RPC.
The success and popularity of RPC services, coupled with the object-orientation frenzy of the mid-1980s, led to the development of Object Services and then to Brokerage services. The concept of object services is as follows.
Services in networked environments can be thought of as basic services and composite services.
Each basic service is implemented by an object. An object is an instance of a class, while a class is
inherited from one or more base or composite classes. The object is a persistent entity that stores
data in a structured form, and may contain other objects. The object has an external interface, visible
from clients and is defined by the public methods the object supports.
Composite services are composed of multiple objects (basic and composite), which can be embedded or linked. Thus, we can build a highly structured service infrastructure that is flexible, modular and has unlimited growth potential.
3.6. Group Communication Service
Group communication is an extension of multicasting for communicating process groups. When the recipient of a message is a set of processes, the message is called a multicast message (with a single recipient, the message is a unicast; when all processes are recipients, it is a broadcast). A process group is a set of processes whose membership may change over time. If a process sends a multicast message to a process group, all processes that are members of the group will receive this message. Simple implementations of multicasting do not work for group communication, for a variety of reasons, such as the following:
1. A process may leave the group and then get messages sent to the group from a process that is not yet aware of the membership change.
2. Process P1 sends a multicast. In response to the multicast, process P2 sends another multicast.
However, P2’s message arrives at P3 before P1’s message. This is causally inconsistent.
3. Some processes that are members of the group may not receive a multicast, due to message loss or corruption.
Group communication protocols solve such problems by providing several important multicasting
primitives. These include reliable multicasting, atomic multicasting, causally-related multicasting as
well as dynamic group membership maintenance protocols.
The main feature of a group communication system is the provision of multicasting primitives. Some of the important ones are:
Reliable Multicast: The multicast is sent to all processes, and then retransmitted to processes that did not get the message, until all processes get the multicast. Reliable multicasts may nevertheless fail to deliver all messages if network problems arise.
Atomic Multicast: Similar to the reliable multicast, but guarantees that all processes will receive the
message. If it is not possible for all processes to receive the message, then no process will receive the
message.
Totally Ordered Multicast: All the multicasts are ordered strictly; that is, all the receivers get all the messages in exactly the same order. Totally ordered multicasting is expensive to implement and is not necessary in most cases; causal multicasting is powerful enough for applications that need ordered multicasting.
Causally Ordered Multicast: If two multicast messages are causally related in some way then
all recipients of these multicasts will get them in the correct order.
Inherent in the notion of multicasting is the notion of dynamic process groups. A multicast is sent to a process group, and all current members of that group receive the message. The sender does not have to belong to the group.
Group communication is especially useful in building fault-tolerant services. For example, a set of separate servers providing the same service is assigned to a group, and all service requests are sent via causally ordered multicasting. All the servers then do exactly the same thing, and if one server fails, it can be removed from the group. This approach is used in the ISIS system (4).
Managing time on a distributed system is inherently difficult. Each machine runs its own clock, and these clocks drift independently. In fact, there is no method to even "initially" synchronize the clocks. Time servers provide a notion of time to any program interested in time, based on one of many clock algorithms (see the section on Theoretical Foundations). Time services have two functions: to provide consistent time information to all processes on the system, and to provide a clock synchronization method that ensures all clocks on all systems appear to be logically synchronized.
Memory services provide a logically shared memory segment to processes not running on the same
machine. The method used for this service is described later. A shared memory server provides the
service and processes can attach to a shared memory segment which is automatically kept consistent
by the server.
There is often a need for a process to lock a resource on the network. This is especially true in systems using shared memory. While locking is quite common and simple on single computers, it is not so easy on a network. Thus, networks use a locking service. A locking service is typically a single server process that tracks all locked resources. When a process asks for a lock on a resource, the server grants the lock if that lock is currently not in use; otherwise it makes the requesting process wait till the lock is released. A minimal sketch of such a server's decision logic follows.
A plethora of other services exists in network operating systems. These services can be loosely divided into two classes: (1) services provided by the core network operating system, and (2) services provided by applications.
Services provided by the operating system are generally low-level services used by the operating
system itself, or by applications. These services of course vary from one operating system to
another. The following is a brief overview of services provided by most operating systems that use
the TCP/IP protocol for network communications:
1. Logon services: These include telnet, rlogin, ftp, rsh and other authentication services that allow
users on one machine to access facilities of other machines.
2. Mail services: These include SMTP (Simple Mail Transfer Protocol), POP (Post Office
Protocol), and IMAP (Internet Message Access Protocol). These services provide the underlying
framework for transmitting and accessing electronic mail. The mail application provides a nicer
interface to the end user, but uses several of these low-level protocols to actually transmit and
receive mail messages.
3. Publishing services: These include HTTP (Hyper Text Transfer Protocol), NNTP (Network
News Transfer Protocol), Gopher and WAIS. These protocols provide the backbone of the Internet
information services such as the WWW and the news network.
CHAPTER 2: Mechanisms for Network Operating Systems
Network operating systems provide three basic mechanisms that are used to support the services provided by the operating system and applications. These mechanisms are (1) Message Passing, (2) Remote Procedure Calls and (3) Distributed Shared Memory. These mechanisms support a feature called Inter-Process Communication, or IPC. While all of the above mechanisms are suitable for all kinds of inter-process communication, RPC and DSM are favored over message passing by programmers.
Message passing is the most basic mechanism provided by the operating system. This mechanism allows a process on one machine to send a packet of raw, uninterpreted bytes to another process.
In order to use the message passing system, a process wanting to receive messages (the receiving process) creates a port (or mailbox). A port is an abstraction of a buffer in which incoming messages are stored. Each port has a unique system-wide address, which is assigned when the port is created. A port is created by the operating system upon a request from the receiving process, and is
created at the machine where the receiving process executes. Then the receiving process may choose
to register the port address with a directory service.
After a port is created, the receiving process can request the operating system to retrieve a message
from the port and provide the received data to the process. This is done via a receive system call. If
there are no messages in the port, the process is blocked by the operating system until a message
arrives. When a message arrives, the process is woken up and is allowed to access the message.
A message arrives at a port after a process sends a message to that port. The sending process creates the data to be sent and packages the data in a packet. Then it requests the operating system to deliver this message to the particular port, using the address of the port. The port can be on the same machine as the sender, or on a machine connected to the same network.
When a message is sent to a port that is not on the same machine as the sender (the most common case), the message traverses a network. The actual transmission of the message uses a networking protocol that provides routing, reliability, accuracy and safe delivery. The most common networking protocol is TCP/IP. Other protocols include IPX/SPX, AppleTalk, NetBEUI, PPTP and so on. Network protocols use techniques such as packetizing, checksums, acknowledgements, gatewaying, routing and flow control to ensure that messages sent are received correctly and in the order they were sent.
Message passing is the basic building block of distributed systems. Network operating systems use message passing for inter-kernel as well as inter-process communication. Inter-kernel communication is necessary because the operating system on one machine needs to cooperate with the operating systems on other machines to authenticate users, manage files, handle replication and so on.
Two better inter-process communication techniques are RPC and DSM, described below.
In concept, RPC works as follows. A client process wishes to get service from a server. It makes a remote procedure call on a procedure defined in the server. In order to do this, the client sends a message to the RPC listening service on the machine where the remote procedure is stored. In the message, the client sends all the parameters needed to perform the task. The RPC listener then activates the procedure in the proper context, lets it run, and returns the results generated by the procedure to the client program. However, much of this task is automated and not under programmer control.
An RPC service is created by a programmer who (let us assume) writes the server program as well as the client program. In order to do this, he or she first writes an interface description using a special language called the Interface Description Language (IDL). All RPC systems provide an IDL definition and an IDL compiler. The interface specification of a server documents all the procedures available in the server, the types of arguments they take, and the results they provide. The IDL compiler compiles this specification into two files, one containing C code to be used for writing the server program, and the other containing code used to write the client program.
The part for the server contains the definitions (or prototypes) of the procedures supported by the server. It also contains some code called the server loop. To this template, the programmer adds the global variables, private functions and the implementation of the procedures supported by the interface. When the resulting program is compiled, a server is generated. The server loop inserted by the IDL compiler contains code to:
1. Register the service with a name server.
2. Listen for incoming requests (could be via the listening service provided by the operating
system).
3. Parse the incoming request and call the appropriate procedure using the supplied parameters.
This step requires the extraction of the parameters from the message sent by the client. The
extraction process is called unmarshalling. During unmarshalling some type-checking can also be
performed.
4. After the procedure returns, the server loop packages the return results into a message (marshalling) and sends a reply message to the client.
Note that all of the above functionality is automatically inserted into the RPC server by the IDL compiler; the programmer does not have to write any of it.
Then the programmer writes the client. In the client program, the programmer #include's the header file for clients generated by the IDL compiler. This file has the definitions and pseudo-implementations (or proxies) of the procedures that are actually in the server. The client program is written as if the calls to the remote procedures were in fact local procedure calls. When the client program is run, the stubs inserted via the header files play an important role in the execution of the RPCs.
When the client process makes a call to a remote procedure, it actually calls a local procedure, which
is a proxy for the remote procedure. This proxy procedure (or stub) gets all the arguments passed to
it and packages them in some predefined format. This packaging is called marshalling. After the arguments are marshalled, they are sent to the RPC server that handles requests for this procedure. Of course, as described above, the RPC server unmarshals the arguments, runs the procedure and marshals the results. The results flow back to the client, and the proxy procedure gets them. It unmarshals the results and returns control to the calling statement, just like a regular local procedure.
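A hand-written sketch of such a proxy in C, for a hypothetical remote procedure int add(int a, int b): rpc_send and rpc_recv are assumed placeholders for the transport, and the big-endian packing stands in for a real encoding such as XDR.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl/ntohl: host to network byte order */

    /* Hypothetical transport supplied by the RPC runtime. */
    extern void rpc_send(const void *buf, unsigned len);
    extern void rpc_recv(void *buf, unsigned len);

    /* Client-side proxy (stub) for the remote procedure int add(int, int). */
    int add(int a, int b)
    {
        unsigned char req[8];
        uint32_t net;

        /* marshalling: pack the arguments in a machine-independent format */
        net = htonl((uint32_t)a);
        memcpy(req, &net, 4);
        net = htonl((uint32_t)b);
        memcpy(req + 4, &net, 4);
        rpc_send(req, sizeof req);

        /* unmarshalling: unpack the result from the reply message */
        rpc_recv(&net, sizeof net);
        return (int)ntohl(net);
    }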
One problem remains: how does the client know the address of the server handling a particular procedure call? This function is automated too. The IDL compiler, when
compiling an interface definition, obtains a unique number from the operating system and inserts it
into both the client stub and the server stub, as a constant. The server registers this number with its
address on the name service. The client uses this number to look up the server’s address from the
name service.
The net effect is that a programmer can write a set of server routines which can be used from multiple client processes running on a network of machines. Writing these routines takes minimal effort, and calling them from remote processes is not difficult either. There is no need to write communication routines, or routines to manage arguments and handle type checking. Automation greatly reduces the chance of bugs. This has led to the acceptance of RPC as the preferred distributed programming tool.
CHAPTER 3
4.3. Distributed Shared Memory (DSM)
While message passing and RPC are the mainstays of distributed programming, and are available on all network operating systems, Distributed Shared Memory, or DSM, is not at all ubiquitous. On a distributed system, DSM provides a logical equivalent to (real) shared memory, which is normally available only on multiprocessor systems.
Multiprocessor systems can provide the same physical memory to multiple processors. This is a very useful feature, and has been utilized heavily for parallel processing and inter-process communication in multiprocessor machines. While RPC and message passing are also possible on multiprocessor systems, using shared memory for communication and data sharing is more natural and is preferred by most programmers.
While shared memory is naturally available in multiprocessors, due to the physical design of the computer, it is neither available nor was it thought to be possible on a distributed system. However, the DSM concept has proven that a logical version of shared memory, which works just like the physical version, albeit at reduced performance, is both possible and quite useful.
DSM is a feature by which two or more processes on two or more machines can map a single shared memory segment to their address spaces. This shared segment behaves like real shared memory; that is, any change made by any process to any byte in the shared segment is seen by all the processes that map the segment. Of course, the segment cannot be at all the machines at the same time, and updates cannot be propagated instantaneously, due to the limitations of the speed of the network.
DSM is implemented by having a DSM server that stores the shared segment; that is, it holds the data contained in the shared segment. The segment is an integral number of pages. When a process maps the
segment to its address space, the operating system reserves the address range in memory and marks
the virtual addresses of the mapped pages as inaccessible (via the page table). If this process accesses
any page in the shared segment, a page fault is caused. The DSM client is the page fault handler of
the process.
The workings of DSM are rather complex due to the enormous number of cases the algorithm has
to handle. Modern DSM systems provide intricate optimizations that make the system run faster but
are hard to understand. In this section, we discuss a simple, unoptimized DSM system which, if implemented, would work, but would be rather inefficient.
DSM works with memory by organizing it as pages (similar to virtual memory systems). The mapped
segment is a set of pages. The protection attributes of these pages are set to inaccessible, read-only
or read-write:
1. Inaccessible: This denotes that the current version of the page is not available on this machine
and the server needs to be contacted before the page can be read or written.
2. Read-only: This denotes that the most recent version of the page is available on this machine, i.e.
the process on this machine holds the page in read mode. Other processes may also have the page in
read-only mode, but no process has it in write mode. This page can be freely read, but not updated
without informing the DSM server.
3. Read-write: This denotes that this machine has the sole, latest version of the page, i.e. the
process on this machine holds the page in write mode. No other process has a copy of this page. It
can be freely read or updated. However, if this page is needed anywhere else, the DSM server may
yank the privileges by invalidating the page.
The DSM client or page fault handler is activated whenever there is a page fault. When activated, the
DSM client first determines whether the page fault was due to a read access or a write access. The
two cases are different and are described separately, below:
On a read access fault, the DSM client contacts the server and requests the page in read mode. If no process currently holds the page in write mode, the server sends a copy of the page to the client. The client installs the page at the correct address, and sets the protection of the page to read-only. It then restarts the process that caused the page fault.
If there is one client already holding the page in write mode (there can be at most one client in write
mode) then the server first asks the client to relinquish the page. This is called invalidation. The
client relinquishes the page by sending it back to the server and marking the page as inaccessible.
After the invalidation is done, the server sends the page to the requesting client, as before.
On a write access fault, the DSM client contacts the server and requests the page in write mode. If
the page is not currently used in read or write mode by any other process, the server provides a copy
of the page to the client. The client then copies the page to memory, sets the protection to read-
write and restarts the process.
If the page is currently held by some processes in read or write mode, the server invalidates all these
copies of the page. Then it sends the page to the requesting client, which installs it and sets the
protection to read-write.
The net effects of this protocol are as follows:
1. Only pages that are used by a process on a machine migrate to that machine.
2. Pages that are read by several processes migrate to the machines these processes are running on.
Each machine has a copy.
3. Pages that are being updated migrate to the machines they are being updated on; however, there is at most one update copy of the page at any point in time. If the page is being simultaneously read and updated by two or more machines, then the page shuttles back and forth between these machines.
Page shuttling is a serious problem in DSM systems, and many algorithms are used to prevent it. Effective prevention of page shuttling is achieved by relaxing the memory coherence requirements, as in release consistency. Also, with careful design of applications, page shuttling can be minimized.
The first system to incorporate DSM was Ivy (5). Several DSM packages are available; these include TreadMarks, Quarks, Avalanche and Calypso.
CHAPTER 4
5. Kernel Architectures
Operating systems have always been constructed (and often still are) using the monolithic kernel approach. The monolithic kernel is a large piece of protected software that implements all the services the operating system has to offer via a system call interface (or API). This approach has some significant disadvantages. The kernel, unlike an application program, is not a sequential program; it is an interrupt-driven program. That is, different parts of the kernel are triggered and made to execute at different (and unpredictable) points in time, due to interrupts. The net effect of this structure is that:
1. The kernel is hard to program. The dependencies among the independently interrupt-triggerable parts are hard to keep track of.
2. The kernel is hard to debug. There is no way of systematically running and testing the kernel. When a kernel is deployed, random parts start executing quite unpredictably.
3. The kernel is crucial. A bug in the kernel causes applications to crash, often mysteriously.
4. The kernel is very timing dependent. Timing errors are hard-to-catch problems that are not repeatable, and the kernel often contains many such glitches that go undetected.
The emergence of network operating systems saw a sudden, drastic increase in the size of kernels. This was due to the addition of a whole slew of facilities to the kernel, such as message passing, protocol handling, network device handling, network file systems, naming systems, RPC handling, time management and so on. Soon it was apparent that this bloat led to kernel implementations that were unwieldy, buggy and doomed to fail.
This rise in complexity resulted in the development of an innovative kernel architecture, targeted at network operating systems, called the microkernel architecture. A true microkernel places in the kernel only those features that positively have to be there. This includes low-level services such as CPU scheduling, memory management, device drivers and network drivers. It then places a low-level message passing interface in the kernel. The user-level API consists essentially of just the message passing routines.
All other services are built outside the kernel, using server processes. It has been shown that almost
every API service and all networking services can be placed outside the kernel. This architecture has
some significant benefits, a few of which are listed below:
1. Services can be programmed and tested separately. Changes to a service do not require recompiling the microkernel.
2. All services are insulated from each other; bugs in one service do not affect another service. This is not only a good feature in itself, but it also makes debugging significantly easier.
Microkernel operating systems that proved successful include Amoeba (10), Mach (12) and the V-
System (14). A commercial microkernel operating system called Chorus is marketed by Chorus
Systems (France).
A hybrid system starts as a microkernel. Then, as services are developed and debugged, they are migrated into the kernel. This retains some of the advantages of the microkernel, while the migration of services into the kernel significantly improves performance.
CHAPTER 5
7. System Features
The following paragraphs outline the salient features of a set of network (or distributed) operating systems that either are in operation or have made significant contributions to the state of the art.
7.1. Amoeba
Amoeba, developed at the Vrije Universiteit in Amsterdam (10), is a microkernel-based distributed operating system that presents an entire network of machines as a single system to its users. Objects and services are named and protected by capabilities, and most system services, including the file system, run as user-level server processes.
7.2. Clouds
Clouds, developed at Georgia Tech (11), is a system designed to support large-grained persistent objects. Each object is an address space that is backed up on disk and hence is persistent. The
system paradigm uses a thread-object model, where threads are distributed and can access objects
via a modified RPC mechanism. The object invocation causes the thread to move between address
spaces rather than use a server for processing the RPC request. The entire system is supported on
top of a low-level Distributed Shared Memory mechanism thus making all objects available at all
computers. Services are built into objects and can be accessed using the RPC mechanism. Message
passing is not supported at the API level. Clouds has been used for research in reliability, transaction
processing, replication and distributed debugging.
7.3. Mach
Mach, developed at Carnegie-Mellon University (12), is a Unix compatible operating system that is
built on a microkernel. The microkernel supports message passing, tasks and threads. Mach supports
an innovative user-level external paging system that causes messages to be sent to a paging
process whenever there is a page-fault generated by a user process. These external pagers allowed
Mach to support a variety of emulation features. The Unix operating system is supported on top of
Mach as a user-level process, providing the Unix service. Mach is also heavily customizable, making
it an ideal platform for research with operating systems.
7.4. Sprite
Sprite, developed at University of California – Berkeley (13), is an operating system that provides a
single system image to a cluster of workstations. Much of the focus of research with Sprite has been
directed at improving file system performance. As a result, Sprite provides a very high performance
file system through client and server caching. It has process migration to take advantage of idle
machines. It was used as a testbed for research in log-structured file systems, striped file systems, crash recovery, and RAID file systems.
7.5. Unix
Unix is a commercial product of Unix Systems Laboratories. Various other companies sell variants
of Unix, using other trade names, the most well known being SunOS/Solaris. SunOS was the first
system to provide a commercial, robust, full-featured network file system (NFS). Linux is a free Unix-compatible operating system. The kernel of Unix is monolithic, and most network-based
services are added as separate user processes. Unix is an older operating system, adapted for
network use. Due to the prevalence of Unix in research institutions, all services developed for
networking are developed on Unix platforms first. Hence, everything is available for Unix, though
not from the commercial providers of Unix. Unix is the mainstay of network operating systems in
the academic and research communities.
7.6. V-System
The V-System, developed at Stanford University (14), is a microkernel operating system with
support for fast message passing. Services are added to V by running user-level servers. The innovative use of low-latency protocols for inter-machine messaging gives V excellent performance in a networked environment. Also innovative are the uniform support for input-output, a capability-based naming scheme and the clean design of the kernel.
7.7. Windows NT
Windows NT is a commercial product of Microsoft Corporation. This operating system has a hybrid kernel; that is, the inner core of the operating system follows microkernel technology, but the services are not at the user level. Services are added to Windows NT as modules called DLLs (dynamic link libraries). The operating system is extensible and allows for a variety of
pluggable modules at the level of device drivers, kernel extensions as well as services at the user
level. Windows NT provides many of the services described in this article in a commercial product
and competes with the various forms of Unix in the marketplace. Windows NT can also run applications written for DOS, Windows 3.1 and Windows 95, all of which are completely
different operating systems. For network use, Windows NT provides file service, name service,
replication service, RPC service and messaging using several protocols.
CHAPTER 6
8.1. Distributed Operating Systems
Distributed operating systems are network operating systems with significantly more integration between the autonomous operating systems running on each machine. The distributed operating system is hence able to provide services that are beyond the capability of network operating systems. A few of the additional facilities are summarized below:
Dynamic Distributed Data Placement: A data item or file is located close to where it is used, and its location changes dynamically as its usage pattern changes. The logical location (such as the directory a file is in) is not an indicator of its physical location. For example, a directory may contain three files, but the files may be located on three different machines at some point in time.
Process Scheduling: When a process is started, it is not necessarily started on the same machine as its parent; the process scheduler decides where to start the process. The chosen machine may be the machine with the lightest load, or a machine close to the data the process will be accessing.
Process Migration: Processes may move from machine to machine (automatically) depending upon their data access patterns, or resource needs, or just for load balancing.
Fault Tolerance: Failures of sites do not affect any of the computations. Failed computations are automatically restarted, and inaccessible data is made available through replicated copies. Users connected to the failed machine are transparently relocated.
The bastion of parallel processing used to be large, expensive machines called parallel processors.
The advent of network operating systems has shifted the focus of parallel processing platforms to
cheaper hardware – a network of smaller machines. Parallel processing involves splitting a large task
into smaller units, each of which can be executed on a separate processor, concurrently. This method uses more hardware, but causes the task to run faster and complete sooner. Parallel processing is very necessary in applications such as weather forecasting, space exploration, image processing, large database handling and many scientific computations.
Parallel processing on network operating systems uses toolkits, also known as middleware, which sit between the application and the operating system and manage the control flow and the data flow. A particularly popular package is PVM (Parallel Virtual Machine) (15). PVM augments the message passing system provided by the operating system with simpler-to-use primitives that allow control of spawning processes on remote machines, transmission of data to those machines, and collection of the results of the computations. Another package with similar characteristics is MPI (16).
An interesting system that uses a radically different approach to parallel processing is Linda (17).
Linda integrates the notion of work and data into a unified concept called the tuple-space. The
tuple-space contains work tuples and data tuples. Processes called workers run on many machines
and access the tuple-space to get work, to get input and to store the results.
Some recent parallel processing systems use distributed shared memory to hold the data, mimicking the facilities available on large parallel processors. Such systems are easier to program, as they insulate the programmer from the idiosyncrasies of data placement and data transmission. TreadMarks (18) is a product that provides a high-performance distributed shared memory system using a method called release consistency. Calypso (19) is another system that supports easy-to-program parallel processing, and also provides load balancing and fault tolerance at no additional cost. Calypso uses a manager-worker model that creates a logical parallel processor, and can dynamically change the number of workers depending upon physical network characteristics. Other systems in use include Amber, Avalanche, GLU, P4, Piranha and Quarks.
Novell's NetWare is the most familiar and popular example of a NOS in which the client computer's
networking software is added on to its existing computer operating system. The desktop computer
needs both operating systems in order to handle stand-alone and networking functions together.
6. Theoretical Foundations
The theoretical study of autonomous but networked computing systems was propelled by the need for algorithms for use in networked environments. This active field of research has produced some interesting and seminal results. Much of the foundational work has resulted in the development of distributed algorithms (20). These algorithms are designed to allow a set of independent processes, running on independent computers (or machines, or nodes), to cooperate and interact to achieve a common goal. Many such algorithms are used for application programming. Some of the algorithms, however, are relevant to the management of distributed systems and are used in network operating systems. In the following sections, we present a few algorithms which form the theoretical foundations of network and distributed operating systems. These include time management, mutual exclusion, checkpointing, deadlock detection, concurrency control, consensus and replication control.
Each physical machine on a network has its own clock, which is a hardware counter. This clock runs freely, and cannot be physically synchronized with the other clocks. This makes the notion of time on a distributed system hard to define and obtain. The first clock synchronization algorithm provided a method of logically synchronizing clocks such that no application running on the system could ever detect any drift amongst the physical clocks (even though the clocks do drift). Clocks on systems built using this technique are called Lamport Clocks, after the inventor of the algorithm (6).
The Lamport Clock algorithm works by stamping a time on every message outgoing from any
machine. When the operating system on system Si sends out a message, it stamps it with the time Ti,
where Ti is the time according to the physical clock on Si.
Suppose the message is received by the operating system on system Sj. The operating system on Sj compares the timestamp Ti in the message with the time according to the local clock on Sj, i.e. Tj:
1. If Ti < Tj, then no action is needed.
2. If Ti > Tj, then the clock on Sj is advanced to Ti + 1.
The above action, at the least, ensures that no messages are received “before” they are sent.
However, it also has some interesting side effects. These are:
The clocks are not physically synchronized, but they are logically synchronized. That is, to all applications running on the systems, the clocks appear completely synchronized.
If two actions or events on two different machines are transitively related, that is, there is a chain of events from the occurrence of event i to the occurrence of event j, then the time of occurrence of i will always be lower than the time of occurrence of j, even if i and j happened on two different machines with two different clocks.
The Lamport Clock is a very simple algorithm which produces properly synchronized (logical) distributed clocks. However, it has the shortcoming that clocks cannot be "set back", and hence real-time clocks cannot use this method; in fact, setting back a clock will cause it to race ahead to catch up with the fastest clock. This problem is solved by the use of vector clocks.
In the vector clock scheme, each system clock is independent and is never updated by the
clock algorithm. Every system maintains its own time, and information about the time on other
systems. That is, there is a local clock on each system, as well as registers containing some
approximation of the time on the sibling systems.
The time is maintained as an n-tuple (or vector), where n is the number of systems on the network. Each machine maintains its own n-tuple. On machine Si, the n-tuple (or time vector) is Ti. Ti, of course, has n fields, and Ti[i] is the local clock time. The other fields are updated in accordance with the following algorithm.
When a message is sent from Si to Sj, the value of Ti is sent along with the message. When Sj
receives the message, it updates its time vector Tj by updating each field in Tj to the larger of the
values contained in the corresponding fields of Ti and Tj.
Now it can be shown that any two timestamps can be compared using vector clock algebra. Suppose
we want to compare two timestamps Ta and Tb. Each has n fields, Ta[0] to Ta[n-1]. The
comparison operators are defined below.
Equal: For all i, Ta[i] is equal to Tb[i].
Not Equal: For some i, Ta[i] is not equal to Tb[i].
Less than or equal: For all i, Ta[i] is less than or equal to Tb[i].
Not less than or equal: For some i, Ta[i] is not less than or equal to Tb[i].
Less than: (Ta is less than or equal to Tb) and (Ta is not equal to Tb).
Concurrent: not (Ta less than Tb) and not (Tb less than Ta).
The vector clock thus provides all the functions of Lamport Clocks as far as timestamps and event ordering are concerned. It is also just as simple to implement, but the time on one machine can be adjusted without affecting the time on other machines.
A related problem is distributed mutual exclusion (DME): ensuring that, among processes spread over many machines, at most one at a time is inside a critical section. The easy solution is to use a lock server. Each process asks the lock server for permission to enter. The lock server permits only one process at a time. When a process leaves the critical section, it informs the lock server, and the lock server can then allow another process to enter. This solution is called the centralized solution to the DME problem.
The above solution is called centralized, because all decisions are made at one site. In a problem
such as DME, we can define two sets for each site. A site i has a Request Set Qi and a Response set
Ri. Qi is the set of sites that i will contact when it wants to enter the critical section. Ri is the set of
sites that contact i if they want to enter the critical section (7).
In order for a mutual exclusion algorithm to be distributed, two rules must apply. These are:
Equal Responsibility Rule: For all i, j, |Qi| = |Qj|.
Equal Effort Rule: For all i, j, |Ri| = |Rj|.
In the centralized case, Qi for every site i is just the lock server site, while Ri is empty for every site except the lock server itself, for which Ri is the set of all sites. Thus, the centralized solution fails these rules. Many different DME algorithms can meet such rules.
Lamport proposed the first solution. In the Lamport algorithm, there are three steps:
Step 1: When a process wants to enter the critical section, it sends a request message, along with a
timestamp, to all other processes, including itself. Upon receiving such a message, each process
queues the request in timestamp order in a local request queue and sends an acknowledgment. The
requesting process waits for all acknowledgments before proceeding.
Step 2: A process can enter when it notices that its own request is the first request in its own local
request queue.
Step 3: Whenever a process exits the critical section, it informs all processes, and they remove the exiting process's request from their local request queues.
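A compact C sketch of the three steps for one process. The broadcast routine, the handlers that queue incoming requests and count acknowledgments, and the process ids are assumptions of the sketch; ties between equal timestamps are broken by process id.

    enum { REQUEST, ACK, RELEASE };

    #define N 8                   /* number of processes (assumed fixed) */
    extern int  self;             /* this process's id, 0..N-1 */
    extern void broadcast(int kind, long ts, int from); /* hypothetical */

    static long req_ts[N];        /* pending request per process; 0 = none */
    static int  acks;             /* acknowledgments received so far */

    void dme_request(long my_ts)             /* Step 1 */
    {
        req_ts[self] = my_ts;
        acks = 0;
        broadcast(REQUEST, my_ts, self); /* receivers queue it and ACK */
    }

    int dme_may_enter(void)                  /* Step 2 */
    {
        if (acks < N)                    /* must hear from everyone first */
            return 0;
        for (int i = 0; i < N; i++)      /* own request must be earliest */
            if (i != self && req_ts[i] != 0 &&
                (req_ts[i] < req_ts[self] ||
                 (req_ts[i] == req_ts[self] && i < self)))
                return 0;
        return 1;
    }

    void dme_release(void)                   /* Step 3 */
    {
        req_ts[self] = 0;
        broadcast(RELEASE, 0, self);     /* receivers dequeue our request */
    }

    /* Message handlers (not shown): on REQUEST(ts, from), set
     * req_ts[from] = ts and send an ACK; on ACK, increment acks;
     * on RELEASE(from), set req_ts[from] = 0. */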
The above algorithm meets the equal responsibility and equal effort rules. It uses 3n messages per entry into a critical section. The number of messages can be reduced to a multiple of sqrt(n) by using a type of algorithm first proposed by Maekawa (7). Currently there are a large number of algorithms, each having some advantage over the others.
Note that in most practical situations the centralized algorithm works better and uses the smallest number of messages (just 2 messages per entrance); thus it is the most commonly used algorithm.
Checkpointing is a technique for saving the state of a computation so that it can be rolled back and restarted after a failure. On a single machine, the process can simply be stopped and its state saved. In a networked or distributed system, this technique does not work. Consider two processes P1 and P2. P1 sends a message to P2. We ask both P1 and P2 to stop and checkpoint themselves. P1 does so, and then continues, and sends a message to P2. P2 receives the message from P1, then receives the checkpoint notification, and then checkpoints itself. Now, if we compare the checkpoints of P1 and P2, we find P2 has received a message that has not yet been sent by P1. This is called an inconsistent checkpoint.
The classic consistent checkpoint algorithm was proposed by Chandy and Lamport and is called the
Snapshot Algorithm (7). In the snapshot algorithm, to initiate a checkpoint, a marker message is sent
to any one process. When a process gets a marker message for the first time, it checkpoints itself and
then sends out marker messages to all the processes it communicates with. If a process receives a
marker message subsequent to its first time, it ignores the message. It can be shown that the markers
eventually disappear, and when the markers disappear, all processes have recorded a set of
consistent checkpoints.
Many other checkpointing algorithms have since been proposed, with characteristics and features
beyond those of the basic algorithm outlined above.
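A minimal Python sketch of the marker propagation follows. It is illustrative only: each process's communication partners are assumed to be known in a dictionary, checkpointing is reduced to recording the process in a set, and the recording of channel state is omitted.

    def snapshot(start, neighbours):
        checkpointed = set()
        pending = [start]                  # marker messages in flight
        while pending:
            p = pending.pop()
            if p in checkpointed:
                continue                   # markers after the first are ignored
            checkpointed.add(p)            # checkpoint on first marker
            pending.extend(neighbours[p])  # forward markers to all peers
        return checkpointed

    # Four processes: 0 talks to 1 and 2, both of which talk to 3.
    peers = {0: [1, 2], 1: [3], 2: [3], 3: []}
    print(snapshot(0, peers))  # every reachable process records a checkpoint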
Resource management in operating systems can lead to deadlocks. A resource is any entity, such as
a file, a peripheral, memory, and so on. Deadlocks occur, for instance, when processes acquire locks
on resources. For example, suppose a process P1 locks resource x and then a process P2 locks
resource y. Thereafter P1 requests a lock on y and P2 requests a lock on x. Now neither P1 nor P2
can progress any further, and each has to wait forever. This situation is called a deadlock, and it
needs to be detected and then resolved by terminating one of the processes. Deadlock detection on
centralized systems is easier than deadlock detection on distributed systems.
Consider the following situation, similar to the deadlock described above, but in the context
of a distributed system. A process P1 requests and obtains a lock on resource x. The resource x is
located on a machine Mx and hence is controlled by a lock server running on machine Mx. Now,
process P2 requests and obtains a lock on resource y, which is located on a machine My and
controlled by a lock server on machine My. Then process P1 requests a lock on y and process P2
requests a lock on x.
The above situation is a deadlock. However, the lock servers cannot detect this deadlock by
themselves. At the lock server on machine Mx, a process (P1) holds a lock on x and another process
(P2) has requested a lock on x. This is a perfectly normal, legal situation that is not a deadlock.
Similarly there is no deadlock at machine My. However, a global or distributed deadlock exists,
involving two lock servers.
In a system consisting of a large number of lock servers and large numbers of processes and
resources, detection of deadlocks becomes a serious issue. Most early distributed deadlock detection
algorithms tried to consolidate the data about resource allocation from multiple lock servers
in order to find deadlocks. Such algorithms proved to be complicated, expensive in terms of
computational complexity and prone to detect deadlocks even if there are no deadlocks (a
phenomenon called false deadlocks).
A distributed deadlock detection algorithm by Chandy and Misra was a breakthrough that solved the
deadlock problem in a simple fashion. The solution is called the probe algorithm (9). In this scheme,
a process waiting for a resource sends a probe message to the lock server handling the resource. The
lock server forwards the probe to a process that is currently holding the resource. When a process
receives a probe, and the process is not currently waiting for a resource, it ignores the probe. If the
process is currently waiting for a resource, then it forwards the probe to the lock server that controls
the resource. If the originator of the probe gets the probe returned to it, then there is a
deadlock. A careful implementation of this protocol can be shown to be free from detection of false
deadlocks.
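The probe's journey can be sketched as follows (illustrative Python; the dictionaries holder and waiting_for are our stand-ins for the tables that the lock servers and processes would keep):

    def probe_finds_deadlock(origin, waiting_for, holder):
        resource = waiting_for[origin]      # origin probes the resource's server
        while resource is not None:
            p = holder[resource]            # server forwards probe to the holder
            if p == origin:
                return True                 # probe returned to sender: deadlock
            resource = waiting_for.get(p)   # holder not waiting: the probe dies
        return False

    # P1 holds x and waits for y; P2 holds y and waits for x.
    holder = {"x": "P1", "y": "P2"}
    waiting_for = {"P1": "y", "P2": "x"}
    print(probe_finds_deadlock("P1", waiting_for, holder))  # True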
Concurrency control ensures serializability. Serializability is a property ensuring that the concurrent
execution of a set of processes has results equivalent to some serial execution of the same set of
processes. Serializability is an important property for any system that handles persistent,
interrelated data. It can be provided by many techniques; the two best known are two-phase locking
and timestamping.
In the two-phase locking scheme, a process that reads or writes data has to obtain a lock on each
data item before accessing it, and may release the lock after the access is over. If multiple data items
are accessed, then no lock can be released until all locks have been acquired. This ensures
serializable updates to the data.
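A minimal sketch of the two-phase discipline in Python (our illustration): once a transaction releases any lock, it may not acquire another.

    class TwoPhaseTransaction:
        def __init__(self):
            self.locks = set()
            self.shrinking = False   # becomes True at the first release

        def lock(self, item):
            if self.shrinking:
                raise RuntimeError("two-phase rule violated: lock after release")
            self.locks.add(item)     # growing phase

        def unlock(self, item):
            self.shrinking = True    # shrinking phase begins
            self.locks.discard(item)

Any schedule in which every transaction obeys this growing-then-shrinking discipline is serializable.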
In the timestamp scheme, all data items bear two timestamps: the read-timestamp and the
write-timestamp. All processes (or transactions) also bear timestamps. The process timestamp is the
time at which the process was created. The read-timestamp on a data item is the largest of the
process timestamps of the processes that have read the item. The write-timestamp is the process
timestamp of the process that last wrote the item.
The timestamp protocol works as follows. Suppose a process bearing a timestamp pt wants to read a
data item with a read-timestamp rt and a write-timestamp wt. If pt < wt then the process is aborted
or restarted. Otherwise it is allowed to read the item, and if pt > rt then the read-timestamp of the
item is updated to pt. If the process tries to write a new value to the data item, then pt must be
higher than both rt and wt (else the process is aborted). After the write, both the read- and
write-timestamps of the data item are set to pt. The timestamp protocol is termed an optimistic
protocol, as it has no locking delays: all operations are processed immediately or aborted.
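The read and write checks can be sketched as follows (illustrative Python; a data item is modelled as a dictionary holding its value and its two timestamps):

    def read(item, pt):
        rt, wt = item["rt"], item["wt"]
        if pt < wt:
            return "abort"            # would read a value written in its future
        item["rt"] = max(rt, pt)      # record the youngest reader
        return item["value"]

    def write(item, pt, value):
        if pt < item["rt"] or pt < item["wt"]:
            return "abort"            # a younger process already read or wrote it
        item["value"] = value
        item["rt"] = item["wt"] = pt  # both timestamps become pt
        return "ok"

    x = {"value": 0, "rt": 0, "wt": 0}
    print(read(x, 5))       # allowed; x's read-timestamp becomes 5
    print(write(x, 3, 9))   # "abort": a younger process (pt = 5) has read x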
The two-phase locking and timestamp protocol can be adapted to distributed systems. To
implement two- phase locking, one or more distributed lock servers have to be provided. If multiple
lock servers are provided, then distributed deadlock detection has to be added. In addition,
the two-phase commit protocol may have to be used for consensus (next section).
Even Lamport clocks can be used, but to ensure uniqueness, the identifier of the site that assigns
the timestamp is appended to the end of the timestamp.
Consider, for example, a single bit with value 0 replicated on three machines. A process wants to
update the value to 1 on all three machines, so it tells servers on all three machines to do it. The
servers now want to ensure that all of them do it, or none of them do it (to preserve consistency).
So they need to agree (or arrive at a consensus) to either perform the operation (flip the 0 to 1) or
abort the operation (leave it as 0).
In theory, it can be shown that consensus in a distributed system is impossible to achieve if there is
any chance of losing messages on the network. The proof is quite involved, but consider the
following conversation:
Machine 1 to Machine 2: Flip the bit from 0 to 1, and tell me when you are done so that I will flip it
too.
Machine 2 to Machine 1: OK, I have flipped it. But, please acknowledge this message, or else I will
think you did not get my reply and you chose not to flip – in which case I will flip mine back to 0.
Machine 1 to Machine 2: Everything is fine. Got your message. But, please acknowledge this
message, as I need to know that you got this message, or you may flip the bit back.
Machine 2 to Machine 1: Got it. But now I need another acknowledgment, to ensure......
As is obvious, this bickering continues forever. It can be shown that there is no finite length
sequence of messages that achieves consensus, even if messages are not lost, as long as there is a fear
of a message getting lost.
In reality, however, there is a need for consensus, and impossibility is not a deterrent. Many systems
just assume messages are not lost and thus implement consensus trivially (machine 1 tells machine 2
to flip it and assumes it will be done). In more critical applications, the two-phase commit protocol
is used.
The two-phase commit protocol works as follows. A machine is selected as the leader (e.g. the one
that started the process that made the updates) and the rest of the machines are cohorts. The leader
tells all the cohorts to “flip the bit”. Each of them flips it, retains a copy of the old value, and sends
an OK to the leader. This is called the pre-commit phase. At this point, all the cohorts have both the
old value and the new value. After all the OKs are received, the leader sends a commit message,
which causes all the cohorts to install the new (flipped) value. If some OKs are not received, the
leader tells all the cohorts to abort, that is, to install the old value back. It can be shown that this
protocol (with some extensions for failure handling) works for most cases of message loss and
machine failure.
In distributed systems, data is often replicated, that is multiple copies of the same data are stored on
multiple sites. This is for reliability, performance, or both. Performance is enhanced if regularly
accessed data is scattered over the network, rather than in one place – it evens out the access load.
In addition, if one site holding the data fails, the data is still available from the other sites.
Replication works very well for read-only data. But, to be useful, replication should work with read-
write data also. Replication control protocols ensure that data replication is consistent, in spite of
failures for read-write data. There are many protocols, a few are outlined below.
Read one, write all: In this scheme, a reader can read from any copy, but a writer has to update all
copies. If not all copies are available, the writer cannot update. This is the most commonly used scheme.
Primary Copy: A variant of the above, read any copy, write to the primary copy. The machine
holding the primary copy then propagates the update.
Read majority, write majority: If there are N copies, then read from N/2+1 copies and take the value
from the most recent of them. Writing to any N/2+1 copies is good enough.
Voting: Each copy has a certain number of votes, and the total number of votes is v. Choose a read
quorum r and a write quorum w such that r + w = v + 1. Now, to access, find enough copies that the
total vote is equal to (or greater than) r for reading, and w for writing.
Depending on the read traffic, the write traffic, and the failure probabilities, one of the above
protocols is chosen. Note that voting is a general protocol: giving each copy one vote and setting r
to 1 and w to N yields the read-one-write-all protocol. Similarly, it can mimic the majority protocol.
There are other protocols more general than voting (such as quorum consensus).
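As a quick check of the quorum arithmetic (a sketch; the function simply tests the overlap condition, under which any read quorum must intersect any write quorum):

    def valid_quorums(v, r, w):
        # r + w >= v + 1 guarantees every read quorum overlaps every write quorum.
        return r + w >= v + 1

    N = 5                                         # five copies, one vote each
    print(valid_quorums(N, 1, N))                 # True: read-one-write-all
    print(valid_quorums(N, N//2 + 1, N//2 + 1))   # True: majority protocol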
Network operating system software is integrated into a number of popular operating systems,
including Windows Server 2008, Windows Server 2003, Windows 2000 Server, Windows NT
Server/Workstation, and Linux/UNIX platforms.
A computer's operating system coordinates the interaction between the computer and the programs
(applications) it is running. It controls the allocation and use of hardware resources such as:
Memory
CPU time
Disk space
Peripheral devices
In a networking environment, servers provide resources to the network clients, and client network
software makes these resources available to the client computer. The network and the client
operating systems are coordinated so that all portions of the network function properly.
1.2 Multitasking
A multitasking operating system, as the name suggests, provides the means for a computer to process
more than one task at a time. A true multitasking operating system can run as many tasks as there
are processors (CPUs). If there are more tasks than processors, the computer must arrange for the
available processors to devote a certain amount of time to each task, alternating between tasks until
all are completed. With this system, the computer appears to be working on several tasks at once.
Because the interaction between the stand-alone operating system and the NOS is ongoing, a pre-
emptive multitasking system offers certain advantages. For example, when the situation requires it,
the pre-emptive system can shift CPU activity from a local task to a network task.
In a stand-alone system, when the user types a command that requests the computer to perform a
task, the request goes over the computer's local bus to the computer's CPU. For example, if you
want to see a directory listing on one of the local hard disks, the CPU interprets and executes the
request and then displays the results in a directory listing in the window. In a network environment,
however, when a user initiates a request to use a resource that exists on a server in another part of
the network, the request has to be forwarded, or redirected, away from the local bus, out onto the
network, and from there to the server with the requested resource. This forwarding is performed by
the redirector.
A redirector processes forwarding requests. Depending on the networking software, this redirector is
sometimes referred to as the "shell" or the "requester." The redirector is a small section of code in
the NOS that:
Intercepts requests in the computer
Determines if the requests should continue in the local computer's bus or be redirected over
the network to another server
Redirector activity originates in a client computer when the user issues a request for a network
resource or service. Figure 1 shows how a redirector forwards requests to the network. The user's
computer is referred to as a client because it is making a request of a server. The request is intercepted
by the redirector and forwarded out onto the network. The server processes the connection
requested by client redirectors and gives them access to the resources they request. In other words,
the server services - or fulfils - the request made by the client.
Using the redirector, users don't need to be concerned with the actual location of data or
peripherals, or with the complexities of making a connection.
The role of the NOS on a server is to process and act upon requests from clients (redirectors) for
network resources managed by the server. For example, in Figure 2, a user is requesting a directory
listing on a shared remote hard disk. The request is forwarded by the redirector on to the network,
where it is passed to the file and print server containing the shared directory. The request is granted,
and the directory listing is provided.
The server is also responsible for controlling the way in which resources are shared over the network.
Sharing is the term used to describe resources made publicly available for access by anyone on the
network. Most NOSs not only allow sharing, but also determine the degree of sharing. For example,
an office manager wants everyone on the network to be familiar with a certain document (file), so
she shares the document. However, she controls access to the document by sharing it in such a way
that:
Some users will be able only to read it
Some users will be able to read it and make changes in it
It is the responsibility of the network administrator to ensure that network resources will be safe
from both unauthorised access and accidental or deliberate damage. Policies for assigning
permissions and rights to network resources are at the heart of securing the network.
Two security models have evolved for keeping data and hardware resources safe:
Password-protected shares
Access permissions
These models are also called "share-level security" (for password-protected shares) and "user-level
security" (for access permissions).
Access-permission security involves assigning rights on a user-by-user basis. A user types a
password when logging on to the network. The server validates this user name and password
combination and uses it to grant or deny access to shared resources, checking each access against a
user-access database on the server. Access-permission security provides a higher level of control
over access rights: it is easy for one person to give another a shared printer password, as in
share-level security, but far less likely that someone will give away a personal password. Because
user-level security is more extensive and can determine various levels of security, it is usually the
preferred model in larger organizations.
Network operating systems also allow a network administrator to determine which people, or
groups of people, will be able to access network resources. A network administrator can use the
NOS to:
Create user privileges, tracked by the network operating system, that indicate who gets to use
the network
Grant or deny user privileges on the network
Remove users from the list of users that the network operating system tracks
To simplify the task of managing users in a large network, NOSs allow for the creation of user
groups. By classifying individuals into groups, the administrator can assign privileges to the group.
All group members have the same privileges, which have been assigned to the group as a whole.
When a new user joins the network, the administrator can assign the new user to the appropriate
group, with its accompanying rights and privileges.
CHAPTER 7
The major server-based network operating systems are Microsoft Windows NT 4 and Windows
2000 Server, Novell NetWare 3.x, 4.x and 5.x, and UNIX (including Linux and Solaris). The
principal peer-to-peer network operating systems are AppleTalk, Windows 95 and 98, and UNIX.
Each operating system has its own strengths and weaknesses, and its own supporters and detractors.
Windows 2000 Server is one of the most popular server-based network operating systems. When
you install and configure Windows 2000 Server it establishes a domain. The domain contains
information such as what users are allowed to use the network and what computers are parts of the
network. Computers must be joined to the domain before they can start to access its resources. The
server that is in charge of managing the domain is called the domain controller. The domain controller
provides a number of different services (i.e. programs) that carry out different network management
functions. Three of the most useful are the Active Directory, the Dynamic Host Configuration Protocol, and
the Domain Name Service.
The Active Directory service performs a number of functions. One of these is to keep a track of
which users are allowed to log on to the network, and what privileges and restrictions have been
placed on these users. As was discussed above it is usually desirable to restrict the network privileges
of some or all users, to prevent unauthorised access to sensitive information. Different user
accounts will have different sets of privileges and restrictions. There is normally one special account,
the administrator, which has access to do everything on the network. Only the network administrator
knows the password for this account.
Another function of the Active Directory is to manage which computers are joined to the domain.
Just because a computer is physically connected to the domain controller via some form of cabling it
does not mean that it is able to access all of the network resources available from it. First it must
request permission to join from the domain controller. This permission is only granted if the user
attempting to join it is using the administrator account, or another account with sufficient privileges.
Every computer on a network must have a unique address. This address is attached to any packets
of data that are intended for transmission to the computer. If the network is using the TCP/IP
protocol, these addresses will be IP addresses (i.e. they will consist of 4 numbers between 0 and 255
separated by dots).
There are two ways of assigning IP addresses to computers. The first is static addressing. In static
addressing the network administrator manually assigns a different IP address to each computer. The
computer will keep this IP address until the network administrator changes the software settings. If
two computers have the same IP address a conflict will occur. If the conflict goes undetected then
both computers will compete to receive packets of data sent to their IP address. However, normally
the NOS will detect when an IP conflict has occurred and warn the administrator. Static addressing
is a simple and easy solution and is commonly used in small networks where significant expansion is
not envisaged.
The second way of assigning IP addresses is called dynamic addressing. In dynamic addressing a
program run on the server is responsible for assigning IP addresses to each computer. When a
computer is first joined to the server’s domain, it requests an IP address from this program, which
then assigns an address chosen from a pool of free addresses that it maintains. The address is
typically leased to the computer, i.e. it is not permanently assigned. Eventually the computer’s IP
address lease will expire, and it will need to request a new one. This is why the scheme is called
dynamic addressing: the IP address of a given computer can change over time, whereas in the static
addressing scheme it is fixed, or static.
In Windows 2000 Server the program that is responsible for leasing IP addresses is called the
Dynamic Host Configuration Protocol (DHCP). DHCP maintains an address pool (a list of free IP
addresses) and a list of address leases (the addresses that have already been leased).
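The pool-and-lease bookkeeping can be pictured with a toy Python model (illustrative only; real DHCP is a network protocol with timed leases, not local function calls, and the address range below is made up):

    class AddressPool:
        def __init__(self, addresses):
            self.free = list(addresses)   # the address pool
            self.leases = {}              # computer name -> leased address

        def lease(self, computer):
            if not self.free:
                raise RuntimeError("address pool exhausted")
            addr = self.free.pop()
            self.leases[computer] = addr
            return addr

        def expire(self, computer):
            # On expiry the address returns to the pool for reuse.
            self.free.append(self.leases.pop(computer))

    pool = AddressPool("192.168.0." + str(i) for i in range(10, 20))
    print(pool.lease("AWASA"))   # e.g. 192.168.0.19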
As well as having a unique IP address, each computer on a network has a unique computer name.
On a local network, this name can just be a single word, for example FBE-SERVER or AWASA.
On the Internet the name will consist of a sequence of words separated by dots, for example
www.yahoo.com or www.bbc.co.uk. There is a one-to-one mapping between these computer names
and IP addresses: every IP address corresponds to a single computer name and vice versa. The
reason for using computer names instead of just IP addresses to identify computers is that they are
easier for people to understand and remember.
If this one-to-one mapping exists then clearly the NOS must maintain a list of which IP address
maps to which computer name, so that it can translate between the two. For instance, if a user
requests a directory listing from the computer AWASA then the NOS must first find out the IP
address that corresponds to the name AWASA, and then send a request for the directory listing to
that IP address. The process of translating a computer name into an IP address is known as name
resolution.
In Windows 2000 Server the Domain Name Service (DNS) is responsible for keeping the list of IP
addresses and computer names and for providing a translation service between the two for client
computers.
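From a program's point of view, name resolution is a single call to the operating system's resolver, which consults DNS behind the scenes. For example, in Python (the host name is illustrative, and the result depends on the network you run this on):

    import socket

    # Translate a computer name into the IP address it maps to.
    address = socket.gethostbyname("www.yahoo.com")
    print(address)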
1.5.3.1 Naming hierarchies
For example, Figure 3 shows a sample naming hierarchy for the ‘.et’ domain. If there were a computer
called fbe-server in the fbe subdivision of the domain, it would have the name fbe-server.fbe.mekelle.edu.et.
The number of segments in a computer name (in this example it is 5) is determined by the
naming hierarchy. There is no global standard: each organisation can choose how to structure
names in its hierarchy.
The Internet contains a number of DNS servers. None of these servers knows the names and
addresses of every computer on the Internet. DNS uses a system known as distributed lookup to
enable every DNS server to translate any address. This means that each DNS server is
responsible for providing a translation service for a certain subset of computers only. If it receives a
request that it cannot answer, it forwards the request to another DNS server that will know the
answer. For example, in Figure 3 the DNS server at mekelle.edu.et provides a translation service for
the ‘.edu.et’ subdivision. If it receives a request for an address that does not end in ‘.edu.et’ it will
forward it to the root DNS server for the ‘.et’ domain.
2. Network Applications
Computer networking has revolutionised the way people use computers. This section will briefly
examine some of the applications of computer networking that have led to this massive change. In
particular we will look at the Internet and electronic mail (or email).
The Internet is a vast network of networks, the ultimate WAN, consisting of tens of thousands of
businesses, universities, and research organizations with millions of individual users and using a
variety of different network architectures.
What is now known as the Internet was originally formed in 1969 as a military network called
ARPAnet (Advanced Research Projects Agency network), part of the United States Department
of Defense. The network opened to non-military users in the 1970s, when universities and
companies doing defence-related research were given access, and flourished in the late 1980s as
most universities and many businesses around the world started to use it. In 1993, when
commercial Internet service providers were first permitted to sell Internet connections to
individuals, usage of the network grew tremendously. There were millions of new users within
months, and a new era of computer communications began. Today, it is estimated that over 500
million people use the Internet worldwide. The table below breaks this number down by region.
Every site on the Internet has an address, just as people have PO Box numbers at their local post
office. On the Internet addresses are called URLs (Uniform Resource Locators). URLs are written as
a number of words separated by dots, for example www.yahoo.com. The word after the final dot
(e.g. com) is the top-level domain of the address. The domain indicates the category of the web site.
The table below lists some of the more common categories of address on the Internet.
The World Wide Web (WWW) is a way of browsing the information on the Internet in a pleasant,
easy-to-understand form. Text can be mixed with graphics, video, and audio to provide multimedia
(i.e. many different media) Internet content.
This is all made possible by a special communications protocol called the Hypertext Transfer
Protocol (HTTP). You may have noticed when using the Internet that many URLs begin with the
letters “http://” – this means that the page of information will be transmitted using the Hypertext
Transfer Protocol. Pages of multimedia Internet content are commonly written in a special
language called HTML (the Hypertext Markup Language).
There are two other important communications protocols for use on the Internet. Both are quite old
now but still in common use.
The File Transfer Protocol (FTP) uses TCP as its underlying transport protocol. (TCP is
part of the TCP/IP protocol suite.) The purpose of FTP is to transport files safely and efficiently
over computer networks.
Secondly the TELNET protocol is used for providing remote terminal access over a network. For
example, using TELNET a user can log in to another computer somewhere else on the network and
take part in an interactive session on that computer. TELNET also uses TCP as its underlying basis
for communications.
One of the more recent innovations in the use of the Internet is instant messaging. Using instant
messaging software, two users in different parts of the world can take part in an on-line conversation
using their personal computers. Text typed at one computer is “instantly” transmitted to the
screen of the other. Instant messaging provides much faster and more interactive communication
than electronic mail.
When most people think of applications of the Internet they probably think first of electronic mail,
or email. Originally email was a way of sending simple text messages to different users over local area
networks. However, nowadays email can be used to send multimedia content such as audio, video or
even computer software to a user anywhere in the world.
Email is made possible by the Simple Mail Transfer Protocol (SMTP). SMTP specifies how
electronic mail messages are exchanged between computers using TCP. In order to use email, it is
necessary to install software on both the sending and receiving computers. Email uses the client-
server method to allow mail to be exchanged. Client computers exchange messages with a mail server
that is responsible for ensuring that each message reaches its destination. On the server computer
each user is assigned a specific mailbox. This electronic mailbox is just like a normal PO Box – mail is
stored there until the user logs on to collect it. Each electronic mailbox has a unique email
address. Email addresses are divided into two parts, the user name and the mail server name,
separated by an “@” character. For example, Elizabeth@telecom.net.et is a valid email
address. The user name is “Elizabeth”, and the mail server that is responsible for collecting the mail
is located at the computer called “telecom.net.et”. In this case “telecom.net.et” is a mail server
running at Ethiopian Telecom in Addis Ababa. Remember from Handout 4 (Protocols) that this
computer name will also have an associated IP address to identify it on the Internet.
SMTP is the protocol used to send email on the Internet. The receiving computer will need to use
another protocol to access the incoming mail. Two different protocols exist for this purpose: the
Post Office Protocol (POP3) and the newer alternative, Internet Message Access Protocol (IMAP).
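For illustration, Python's standard smtplib module speaks SMTP directly. This is a minimal sketch: the server name and the two addresses are placeholders, and most real mail servers additionally require TLS and authentication.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "elizabeth@example.net"       # placeholder sender
    msg["To"] = "recipient@example.com"         # placeholder recipient
    msg["Subject"] = "Test message"
    msg.set_content("Hello over SMTP!")

    # Connect to the mail server and hand the message over via SMTP.
    with smtplib.SMTP("mail.example.net") as server:   # placeholder server
        server.send_message(msg)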
The potential of the computer networks and the Internet to change our lives still further is great. As
processor speed and network bandwidth increases many new applications will undoubtedly emerge.
Already it is becoming possible to view television programs, films and other multimedia content on
demand over the Internet. Once this becomes more commonplace it will fundamentally change the
way we organise our leisure activities. In the workplace too further changes will occur. One
interesting current development is known as the grid. The Internet consists of hundreds of
thousands of computers, most of which are idle most of the time. The grid is a way of utilising this
unused processor power. In the future it may be possible to run complex and processor-intensive
software by simultaneously using CPUs in many different parts of the world.
CHAPTER 8
This chapter takes a deeper look at network operating system installation for various
platforms and their distinguishing features.
The major server-based network operating systems are Microsoft Windows NT 4, Windows 2000
Server, and Windows Server 2008, Novell NetWare 3.x, 4.x and 5.x, and UNIX (including Linux and
Solaris). The principal peer-to-peer network operating systems are AppleTalk and UNIX. Each
operating system has its own strengths and weaknesses, and its own supporters and detractors.
Client/Server Communication
Logon process
Redirector: intercepts requests and determines where to handle them
File access protocol
Windows XP client communication with Windows Server 2008: CIFS (Common Internet File
System); older protocol: SMB (Server Message Block)
Broad support allows every client type to authenticate and access resources
Middleware: translates requests and responses between client and server
3-tier architecture: a client/server environment incorporating middleware
Users and Groups
After NOS client authentication, the client gains access to NOS services and resources
Administrator account
The most privileged user account, with unlimited rights to server and domain resources and objects
Created by default; called root on UNIX or Linux systems
User names
The NOS grants each network user access to files and other shared resources
Groups
The basis for resource and account management
Assist in resource sharing and security control
Example: a network administrator for a public elementary school
Nesting, or hierarchical group arrangement, simplifies management
Group arrangement affects the permissions granted to each group’s members
Inherited permissions
Passed down from parent group to child group
After user and group restrictions are applied, the client is allowed to share network resources
Identifying and Organizing Network Elements
Modern NOSs use similar patterns for organizing information: users, printers, servers, data files,
and applications
Directory
A list that organizes resources and associates each resource with its characteristics
Example: a file system directory
LDAP (Lightweight Directory Access Protocol)
Used to access information stored in a directory
Object
A thing or person associated with the network
Attributes
Properties associated with an object
Schema
The set of definitions of the kinds of objects and object-related information contained in a directory
Two types of definitions:
Classes (object classes): identify the types of objects specified in the directory
Attributes: store information about an object
Containers (OUs, or organizational units)
Logically defined receptacles that assemble similar objects
Account
A user record containing all of the user’s properties
LDAP standard
Directories and their contents form trees
Tree
A logical representation of multiple, hierarchical levels within a directory: root, branches, leaves
Before installing a NOS, plan the directory tree, considering both current and future needs
Book example: a new manufacturing firm, Circuits Now
Sharing Applications
Shared applications are often installed on a file server specifically designed to run applications
Application licensing types
Per user licensing
Per seat licensing
Site license
Installing an application on a server:
Purchase the appropriate type and number of licenses
Verify server resources
Install the application
Make the application available
Provide users access to the application
The NOS is responsible for arbitrating file access
Problem with shared file access: multiple users simultaneously accessing the same data files or the
same program files
Sharing Printers
Increases resource management efficiency and reduces costs
Print server
Manages print services
The printer attaches to the print server directly or at a convenient network location
All NOSs perform common tasks in managing printers
To create a new printer:
Install the printer driver
Provide printer availability to users
Ensure appropriate printer queue user rights
Networked printers appear as icons in the Printers folder
Client redirector
Determines where a print request should be transmitted: network or workstation
Managing System Resources
Limited server system resources are required by multiple users
Modern NOS capabilities maximize server memory, processor, bus, and hard drive use
Accommodate more client requests faster
Improve overall network performance
Memory
Virtual memory can boost the total memory available
Physical memory: RAM chips
The physical memory required by a server varies, depending on its tasks
Virtual memory: stored on the hard drive
Page file (paging file, swap file), managed by the operating system
Paging: moving blocks (pages) from RAM into virtual memory
Virtual memory advantages
Easily expands the memory available to server applications
Engaged by default
Virtual memory disadvantage
Slows operations: hard drive access versus physical memory access
Multitasking
Execution of multiple tasks at one time; all operating systems perform it
Does not mean performing more than one operation simultaneously
Preemptive multitasking (time sharing on UNIX)
Task switching happens quickly, giving the appearance of tasks occurring simultaneously
Multiprocessing
Process
A routine of sequential instructions that runs until its goal is achieved
Thread
A self-contained, well-defined task within a process
Main thread: all processes have one
Single-processor systems handle one thread at any time
Multiprocessing supports the use of multiple processors to handle multiple threads
A technique to improve response time
Splits tasks among more than one processor
Expedites task completion
Symmetric multiprocessing
Splits all operations equally among two or more processors
Asymmetric multiprocessing
Assigns each subtask to a specific processor
Multiprocessing is an advantage for servers with high processor usage, handling numerous tasks
simultaneously
Windows Server 2008
Released February 2008; an enhancement of Windows Server 2003
GUI (graphical user interface)
A pictorial representation of computer functions
NOS GUIs enable administrators to manage files, users, groups, security, and printers
Enhanced security, reliability, remote client support, and performance
New server management features
Editions
Standard Edition
Web Edition
Enterprise Edition
Datacenter Edition
A popular NOS
Addresses most network administrators’ needs well
Well-established vendor
Device and program compatibility
A large market offers technical support
General benefits
Offers simple user interfaces
Disadvantage
Past criticism for performance and security
Hardware Requirements
Server components: processing power, memory, and hard drive space
Windows Server Catalog
Lists computer components compatible with Windows Server 2008
Available online; consult it prior to hardware purchases
Memory Model
Addressing schemes: 32-bit and 64-bit
The 32-bit scheme assigns each application (process) its own 32-bit memory area, a logical
subdivision of the memory available to the server
An important Windows Server 2008 feature: more physical memory can be installed in the server
than earlier versions allowed
Uses virtual memory
NTFS (New Technology File System)
File system
Methods of organizing, managing, and accessing files through logical structures and software routines
NTFS is installed by default
Disk data distribution
Disks are divided into allocation units (clusters)
Allocation units combine to form a partition, a logically separate hard disk storage area
Advantages
Secure, reliable, and allows file compression
Handles massive files
Allows fast access to resources
Used on all Windows operating system versions since Windows NT
Offers many features
Drawback
Cannot be read by older operating systems (e.g. Windows 98)
Active Directory
Directory service
Originally designed for Windows 2000 Server; enhanced in Windows Server 2008
A Windows Server 2008 network uses either the workgroup model or the domain model
Workgroups
Peer-to-peer network with decentralized management
Each computer has its own database of user accounts and security privileges
Significantly more administration effort
Practical for small networks with few users
Simple to design and implement
Domains
A group of users, servers, and other resources that share a centralized account and security
information database
Client/server network
Active Directory contains the domain databases
Domains make it easier to organize and manage resources and security
A domain is not confined by geographical boundaries
Domain controllers
Contain the directory holding information about objects in the domain
Member servers
Do not store directory information
Replication
The process of copying directory data to multiple domain controllers
OUs (Organizational Units)
Hold multiple objects having similar characteristics
Can be nested
Allow simpler, more flexible administration
Trees and Forests
Directory structure above domains; large organizations use multiple domains
Domain tree
Organizes multiple domains hierarchically
Root domain
The base of the Active Directory tree
Child domains
Branch off from the root domain
OUs
Separate groups of objects with the same policies
Forest
A collection of one or more domain trees sharing a common schema
Domains within a forest can communicate
Domains within the same tree share a common Active Directory database
Trust Relationships
A relationship between two domains in which one domain allows another domain to authenticate
its users
Active Directory supports two trust relationship types:
Two-way transitive trusts
Explicit one-way trusts
Naming Conventions
Active Directory naming (addressing) conventions are based on LDAP naming
Internet namespace
The complete hierarchical names database used to map IP addresses to hosts’ names
Active Directory namespace
The collection of object names and their associated places in a Windows Server 2003 or Server 2008
network
The two namespaces are compatible
A Windows Server 2008 network object has three different names:
DN (distinguished name): DC (domain component) and CN (common name); the long,
complete name
RDN (relative distinguished name): unique within a container
UPN (user principal name): like an email address
GUID (globally unique identifier)
A 128-bit number that ensures no two objects have duplicate names
Server Management
Setting up and managing the server
Choose a role reflecting the server’s primary purpose
Conduct server management tasks with Server Manager, a GUI tool
Many functions are available through the Server Manager window
Proprietary UNIX implementations include:
Apple Computer: Mac OS X Server
Sun Microsystems: Solaris
IBM: AIX
Proprietary UNIX system advantages
Accountability and support
Optimization of hardware and software
Predictability and compatibility
Proprietary UNIX system drawbacks
No source code access
No customization
Open Source UNIX
Customizable; not owned by any one company; no licensing fees
Open source software (freely distributable software): GNU, BSD, and Linux
A variety of implementations run on a wider range of systems
Key difference from proprietary implementations: the software license
Two Flavors of UNIX
Solaris
From Sun Microsystems; runs on SPARC-based servers
All the benefits of a commercially supported operating system
Use: running intensive applications
Examples: large, multiterabyte databases, weather prediction systems, and large
economic modeling applications
Linux
Follows standard UNIX conventions
Highly stable and free
Development begun by Linus Torvalds (1991)
All UNIX and Linux versions offer a host of features:
The TCP/IP protocol suite
Applications to support the networking infrastructure
Support for other protocols such as SLIP and AppleTalk
Programs necessary for routing, firewall protection, DNS services, and DHCP services
Operation over many different network topologies and physical media
Efficient and secure handling of growth and change
Source code that is widely used and thoroughly debugged
UNIX and Linux Hardware Requirements
Similar to those of Windows Server 2003 and Server 2008
Key differences
UNIX and Linux can act as either workstation or server operating systems
The GUI (graphical user interface) remains optional
No single “right” server configuration exists
Solaris Hardware Requirements
Computers containing Sun SPARC processors or Intel-based processors
Linux Hardware Requirements
UNIX Multiprocessing
UNIX and Linux support processes and threads
Separate resources (memory space) are allocated to each process when it is created
Access to resources is managed
Advantage: prevents one program from disrupting the system
Both support symmetric multiprocessing
Different versions support different numbers of processors
The UNIX Memory Model
Uses physical and virtual memory efficiently
Allocates a memory area for each application
Shares memory between programs when possible
With the 32-bit addressing scheme, programs can access 4 GB of memory
Most systems also run on CPUs employing 64-bit addresses: 18 exabytes (2^64 bytes) of memory
Virtual memory
A disk partition or file
The UNIX Kernel
Kernel
The core of all UNIX and Linux systems
Kernel module
A file containing instructions for performing a specific task, such as reading data from and writing
data to the hard drive
UNIX System File and Directory Structure
Hierarchical file system
Disk directories may contain files and other directories
/boot directory: kernel and system initialization files
/sbin directory: applications and services
/var directory: variable data
/home directory: created for new users
UNIX File Systems
Two broad categories: disk file systems and network file systems
Disk File Systems
Organize, manage, and access files through logical structures and software routines
Linux native file system type: ext3, the “third extended” file system
Solaris native file system: UFS (UNIX file system)
Network File Systems
Analogous to Windows shares
Attach shared file systems (drives) from Windows or other UNIX servers
Share files with users on other computers
The popular UNIX and Linux remote file system type is Sun Microsystems’ NFS (Network File System)
Samba: an open source application implementing the Windows SMB and CIFS file system protocols
A UNIX and Linux Command Sampler
Many system administrators prefer the command line
A GUI executes commands in response to mouse clicks
The command interpreter (shell) accepts keyboard commands and runs them
Man pages (manual pages)
Full documentation of UNIX commands, organized in nine sections
The apropos command helps find possible man page entries
Commands function like sentences; rules guide UNIX command use
A significant difference between the UNIX and Windows command-line interfaces is the character
separating directories
Windows separator character: ( \ )
UNIX separator character: ( / )
The most frequently used UNIX command is ls
Provides file information, which is stored in the file’s inode (information node)
The ls -l command shows the access permissions field and file type designations
Pipe
Directs one command’s output to the input of another command
UNIX pipe symbol: the vertical bar ( | )
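For example, a hypothetical session (the file name, permissions, and output are made up for illustration):

    $ ls -l notes.txt
    -rw-r--r-- 1 elizabeth staff 2048 Jan 5 10:30 notes.txt
    $ ls -l /usr/bin | more

The first column of the long listing shows the file type and access permissions; the pipe in the second command sends the long listing through more, one screen at a time.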