
Chapter 9
An Introduction to Operating System

This chapter will make the reader familiar with the basic concepts, services and
types of operating systems, along with their advantages and disadvantages. At the
end of the chapter, the structure of an operating system is discussed in detail.

9.1 DEFINITION OF AN OPERATING SYSTEM


“An operating system is a program that manages
the resources of a computer system. It acts as an interface between the computer
hardware and its users. The operating system’s job is to control the computer at
the most fundamental level. The operating system manages memory, controls access
to the peripheral devices and serves as an intermediary between the user and the
hardware, providing the means for the user and application programs to tell the
hardware what to do.”

9.1.1 Basic Concepts


The user of a computer system uses its components and peripheral devices to carry
out computing and various other operations. These hardware devices usually require
instructions to work, as they cannot work by themselves. The memory has to be
managed, I/O operations have to be performed and other computer resources have
to be managed, which is done by giving the necessary instructions in machine-
understandable form. Each machine differs from the others, so it was difficult
for the programmers to write instructions for the above operations. It became
necessary to develop a computer program capable of performing these common
tasks, as a result of which operating systems were developed.


An operating system is the most vital component of a computer system. The
operating system acts as a “Resource Manager”, as it manages all the resources
of a computer system. The operating system ensures that each application gets the
necessary resources. It also ensures that all the hardware components of a computer
system work in coordination with each other. Windows, Linux, UNIX, DOS, OS/2,
Macintosh are good examples of operating systems. These operating systems can run
9.2 An Introduction to System Software
on hardware provided by thousands of vendors. They can accommodate thousands
of different printers, disk drives and special peripherals in any possible combination.
The purpose of an operating system is to provide an environment in which
a user can execute programs. The primary goal of an operating system is to make
the computer system convenient to use; a secondary goal is to use the computer
hardware in an efficient manner.
9.1.2 Services Provided by an Operating System


The operating system provides certain services to programs and to the users of the
programs. The following are the common services provided by an operating system:
1. Program Execution: The purpose of a computer system is to allow the users
to execute programs. The operating system provides an environment where
the user can conveniently run his programs. Running a program involves
allocation and deallocation of memory and processor scheduling (in case of
multiprogramming). These things are taken care of by the operating system.
2. I/O Operations: A running program may require input/output (I/O), which may
involve a file or an I/O device. For efficiency and protection, users
usually cannot control I/O devices directly. Therefore, the operating system must
provide a means to do so.
3. File System Manipulation: Programs need to read, write, create and delete
files. The users and their programs do not have to worry about these tasks. The
user just needs to give a command and the operating system performs the task
on behalf of the user.


4. Communication: In many circumstances, a process may need to exchange
information with another process. Such communication may take place
between processes running on the same computer or between processes
running on different computer systems connected by a computer network.
Communication may be implemented by shared memory or by the technique
of message passing, in which packets of information are moved between
processes by the operating system.
5. Error Detection: An error in one part of the system may cause malfunctioning of the
complete system. To avoid such a situation, the operating system constantly
monitors the system for errors. When an error occurs, the operating
system takes the appropriate action to ensure correct and consistent computing.
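The message-passing form of interprocess communication described above can be sketched as follows. Threads stand in for processes here purely to keep the sketch self-contained; the producer/consumer names and message contents are illustrative, not from the text.

```python
import queue
import threading

def producer(q: "queue.Queue") -> None:
    # Send a few packets of information, then a sentinel marking the end.
    for i in range(3):
        q.put(f"message {i}")
    q.put(None)

def consumer(q: "queue.Queue", received: list) -> None:
    # Receive packets until the sentinel arrives.
    while (msg := q.get()) is not None:
        received.append(msg)

channel, received = queue.Queue(), []
t1 = threading.Thread(target=producer, args=(channel,))
t2 = threading.Thread(target=consumer, args=(channel, received))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)   # ['message 0', 'message 1', 'message 2']
```

In a real system the two sides would live in separate address spaces and the operating system would move the packets between them; the queue here plays that intermediary role.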

9.1.3 Role of an Operating System as a “Resource Manager”


A computer system consists of a number of resources such as processors, memory,
disks, network interfaces, printers and a variety of other devices. The purpose of
an operating system is to provide for an orderly and controlled allocation of these
resources among the various processes competing for them. In case of a multi-user
system, the need for managing and protecting these resources is even greater, since
the users might otherwise interfere with each other. In addition, users often need to
share not only hardware but information as well. This view of the operating system
holds that its primary task is to grant resources to the processes, keep track of which
process is using which resource, account for the usage of resources and mediate
conflicting requests from different programs and users. The operating system
performs the following functions as a resource manager:


1. Resource Allocation: When multiple users are logged on to the system or
multiple jobs are running at the same time, resources must be allocated to
each of them. Many different types of resources are managed by the operating
system. The operating system should allocate these resources to the processes
fairly and efficiently, deciding which resources will be used by which process
and for how long. The following are the most commonly used resources:
Processor: The processor is the most important resource needed by a
process that is ready to be executed. The operating system schedules
the processor among processes based on some scheduling policy, such as
the priority of a process or its burst time, or it may schedule the processor
in such a way that each process receives an equitable fraction of the
available time.
Memory: Whenever a program is to be executed, it must be loaded
into memory. The operating system uses different memory management
schemes to allocate and deallocate memory to the various programs in
need. It also decides which programs should be loaded into memory when
memory space becomes available.
Input/Output devices: The operating system manages all the I/O devices.
It keeps track of requests of processes for I/O devices, issues commands
to the I/O devices and ensures that correct data transmits to and from I/O
devices.
2. Accounting: The operating system keeps track of which users use how much
and what kind of computer resources. This record keeping may be used for
accounting, so that users can be billed according to their resource usage.
3. Protection: When several disjoint processes execute concurrently, it should not
be possible for one process to interfere with the others. Protection of information
in a multi-user computer system is also very important. The operating system
ensures that all access to system resources is controlled. Security is also
a part of the functions performed by an operating system.
4. Resolving Resource Conflicts: In a multiprogramming
system, two or more processes may request a resource simultaneously.
It may also happen that the resource requested by a process is allocated
to another process that is itself waiting for some other unavailable resource. It is the
responsibility of the operating system to provide some mechanism to handle
such conflicts, using some strategy to decide which of the competing processes
should be allocated the requested resource when it becomes available.
5. File and Disk Management: Management of files is one of the important
functions performed by an operating system. The operating system takes care
of storing files on secondary storage media such
as disk. The operating system is also responsible for the creation and deletion of
files and directories and for maintaining them
on stable storage media.


Modern computer systems use disk as the principal online storage
medium for storing both programs and data. Hence, the proper management of disk
storage is of central importance to the computer system. The operating system is
concerned with the allocation of storage space on the disk, the management of free
space and disk scheduling.

The performance of a computer system can be measured by the following factors:


1. Throughput: It is the total volume of work performed by the system over a
given interval of time.
2. Turnaround time: It is the interval between the time a user submits his job to
the system for processing and the time he receives the results. It is especially
important in case of multi-user systems because the overall progress of their
work depends upon their receiving prompt results from the system.
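Both measures above can be computed directly from a job log. The submit and finish times below are made-up figures for illustration only.

```python
# Hypothetical job log: name -> (submit_time, finish_time), in minutes.
jobs = {"job1": (0, 10), "job2": (2, 14), "job3": (5, 20)}

# Throughput: jobs completed per unit time over the observed interval.
interval = max(f for _, f in jobs.values()) - min(s for s, _ in jobs.values())
throughput = len(jobs) / interval

# Turnaround time: finish minus submit, per job.
turnaround = {name: f - s for name, (s, f) in jobs.items()}

print(throughput)    # 0.15 jobs per minute over the 20-minute interval
print(turnaround)    # {'job1': 10, 'job2': 12, 'job3': 15}
```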

9.2 TYPES OF OPERATING SYSTEMS


9.2.1 Batch Processing Systems
In the early days, the most common way of using a computer was to run a
program punched into punch cards. The user did not interact directly with the
system; instead, the user prepared a job that contained the program, the data and other
information about the job. This job was submitted to an operator, who
received jobs from many users. To speed up processing, jobs with similar
requirements were batched together and were run through the computer as a batch.
Thus, the programmers would leave their programs with the operator. The operator
would sort programs into batches with similar requirements and, as the computer
became available, would run each batch. At some later time, the output appeared,
which consisted of the result of the program, as well as a dump of memory and
registers in case of a program error. The output from each job was sent back to the
appropriate programmer.
The operating system in these early computers was very simple. Its major
task was to transfer control automatically from one job to the next, and the operating
system was always resident in memory. In those days, the compilers of
different languages were available on magnetic tapes, which had to be physically
mounted on the tape reader so that the compiler could be moved into memory through
the card reader. Thereafter, jobs in that language could be compiled and executed. If the
jobs submitted in sequence were not all in the same language, most of
the time was wasted in physically moving the compilers into memory and back again.
So, batch processing systems came into existence.
In batch processing systems, the operator would periodically collect all the
submitted programs and would batch them together and then load them all into the
input devices of the system one at a time. The operator would give a command to
the system to execute the job. The jobs were then automatically loaded from the
input device and executed by the system without operator intervention. When all
the jobs in the submitted batch were processed, the operator would separate and
send the output to the concerned users.
This is one of the oldest methods of running programs. It reduced
the idle time of a computer system because the transition from one program to another
does not require the operator’s intervention. This method of processing was mostly
used in payroll applications or the preparation of customer statements, because it was
not necessary to update records on a daily basis.

Fig. 9.1 Batch Processing Systems

Advantages of Batch System


1. Reduces the idle time of a computer system because transition from one job
to another does not require operator intervention.
2. Performance increases since it is possible for a job to start as soon as the
previous job finishes.
Disadvantages of Batch System
1. Turnaround time can be large from the user’s standpoint. The time required to
accumulate data into batches destroys much of the value of the data, and the
information that results from eventual processing is no longer timely.
2. It is difficult to provide the desired priority scheduling. If two
high-priority jobs were to be executed but were in separate batches, one would
have to wait until the other’s batch was completely processed.
3. Due to lack of any protection scheme, one batch job can affect the pending
jobs.

9.2.2 Multiprogramming Systems


As far as batch processing was concerned, a number of programs were loaded
in sequence into the main memory, and each program remained in the
main memory until its execution was completed. This led to under-utilization
of the CPU and memory. To get
rid of this problem, the concept of multiprogramming was introduced.


Multiprogramming refers to the interleaved execution of two or more
independent programs by the same computer. The idea of multiprogramming is as
follows: The operating system keeps several jobs in memory at a time. This set of
jobs is a subset of the jobs kept in the job pool. This pool consists of all the processes
residing on mass storage awaiting allocation of main memory. The operating
system picks and begins to execute one of the jobs in memory. Eventually,
the job may have to wait for some task, such as an I/O operation, to complete. In a
non-multiprogramming system, the CPU would remain idle. But in a multiprogramming
system, the operating system simply switches to and executes another job. When
that job needs to wait, the CPU switches to yet another job. As long as there is some
job to execute, the CPU will never be idle.


Normally, any job has to use various resources like I/O devices, memory
and the processor. It was observed that a job does not need the CPU for the entire
duration of its processing. This is because, in addition to doing computation, a job often
needs to perform I/O operations during its processing.

CPU Bound Programs: These programs mostly perform numerical
calculations with little I/O activity. They are so called because they
heavily utilize the CPU during their execution. Programs used for scientific and
engineering computations usually fall in this category.
I/O Bound Programs: These programs do very little computation; most
of their execution time is spent performing I/O operations.
Programs used for commercial data processing applications fall in this category.
In case of multiprogramming, more than one job is loaded into the main
memory. These jobs must be intermixed, i.e., a few jobs should be CPU bound and
a few should be I/O bound.
9.2.2.1 Requirements of Multiprogramming Systems

Multiprogramming systems achieve better utilization of the CPU and other resources
than non-multiprogramming systems. However, these systems are sophisticated
because they require additional hardware and software features such as:
1. Large Memory: For multiprogramming to work satisfactorily, a large main memory
is required to accommodate a good number of user programs along with the
operating system.
2. Memory Protection: Computers designed for multiprogramming must provide
some type of memory protection mechanism to prevent a job in one memory
partition, say job A, from inadvertently changing programs or data of a
completely independent job B or job C. In a multiprogramming
system, this is achieved by the memory protection feature, a combination of
hardware and software, which prevents one job from addressing beyond the
limits of its own allocated memory area.
3. Proper Job Mix: A proper mix of I/O bound and CPU bound jobs is
required to effectively overlap the operations of the CPU and the I/O devices.
If all the loaded jobs need I/O at the same time, the CPU will be idle. It
is necessary that when a program is waiting for an I/O operation, another
program has enough computation to keep the CPU busy. Hence, the
main memory should contain some CPU bound and some I/O bound programs
so that there is always at least one job ready to utilize the CPU.
4. CPU Scheduling: In a multiprogramming system, often there will be situations


in which two or more jobs will be in the ready state, waiting for CPU to be
allocated for execution. When more than one process is in the ready state,
the operating system must decide to which process the CPU should be allocated.
The part of the operating system concerned with this decision is called the CPU
scheduler, and the algorithm it uses is called the CPU scheduling algorithm.
5. Job Status Preservation: In a multiprogramming system, when a running
job waits for I/O operation, the CPU is taken away from that program and
given to another program, ready for execution. Later, the former job will be
allocated the CPU to continue its execution. This requires preserving the
complete status information of a job when the CPU is taken away from it and
restoring this information back, before the CPU is given back to it again. This
is known as program status preservation.
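The CPU scheduler described above can follow many policies. A minimal sketch of one of them, first-come first-served, is shown below; the burst times are hypothetical and all jobs are assumed ready at time 0.

```python
# First-come first-served scheduling: each job waits for all earlier jobs
# to finish, then runs for its full burst time (all arrival times are 0).
def fcfs(bursts):
    clock, stats = 0, []
    for burst in bursts:
        stats.append({"waiting": clock, "turnaround": clock + burst})
        clock += burst
    return stats

print(fcfs([5, 3, 8]))
# [{'waiting': 0, 'turnaround': 5}, {'waiting': 5, 'turnaround': 8},
#  {'waiting': 8, 'turnaround': 16}]
```

Note how a long first job inflates the waiting time of everything behind it; this is exactly the kind of system-wide trade-off the scheduler must weigh.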

9.2.2.2 Advantages of Multiprogramming

1. Increased CPU utilization.
2. Increased throughput.

Fig. 9.2 Demonstration of the Concept of Multiprogramming


Here, Job A is busy performing an input/output operation on the disk while Job
B is being executed by the CPU, and Job C is waiting to
get the CPU. As soon as Job B is completed or it needs to do an I/O operation, the
CPU will start executing Job C. Similarly, after Job A completes its I/O operation,
it will wait for its turn to get the CPU. Thus, in multiprogramming systems, the CPU
is almost always busy and has very little idle time.

9.2.3 Time Sharing Systems


It is a technique of allocating computer resources in a time-dependent fashion
to several programs simultaneously. Thus, it helps to provide a large number of
users access to the main computer. In a time-sharing system, the CPU time is divided
among different users on a scheduled basis, so each user is given a brief share
of the CPU time. This very brief share of CPU time is called the time slice or time
quantum. Since each user gets some CPU time, and the next time slice
is allotted after only a brief interval, each user gets the illusion that he is the only user
of the computer system.
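The effect of a time slice can be seen by simulating round-robin allocation; the user names and remaining CPU needs below are invented for illustration.

```python
from collections import deque

def round_robin(burst_by_user, quantum):
    """Return the order in which the CPU serves users, one quantum at a time."""
    queue = deque(burst_by_user.items())
    schedule = []
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)               # this user holds the CPU now
        if remaining > quantum:
            queue.append((user, remaining - quantum))  # back of the line
    return schedule

print(round_robin({"user1": 4, "user2": 2, "user3": 3}, quantum=2))
# ['user1', 'user2', 'user3', 'user1', 'user3']
```

Each user reappears quickly in the schedule, which is what produces the illusion that each one has the machine to himself.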

Fig. 9.3 Time-sharing System

In time-sharing systems, the users get the CPU one by one in circular
order, i.e., the CPU switches over to user 2 after serving user 1 and then to user 3, and
so on. The CPU is switched from one user to the next when:
The time slice for that user has expired.
That user’s program has encountered an I/O operation.
Since a large number of users are accessing the system simultaneously and
the total amount of memory available is limited, it is not possible to keep all
users’ programs in memory. The time-sharing operating system keeps only a few
programs in memory; the rest are stored on disk. A program remains on the disk until
its turn comes; when the CPU is allocated to it, the program is
brought into main memory. This process of transferring programs from disk to
main memory and vice versa is called swapping. This process is repeated many
times within a few seconds.
Even though it may appear that several users are using the computer system
at the same time, a single CPU can execute only one instruction at a time. Thus
with a time-sharing system, only one program can be in control of the CPU at any
given time. As a result, at any instant, all the users who are using a time-sharing
system will fall in one of the following three states:
Active: The user’s program currently has control of the CPU. Only one user
will be in the active state at a time.
Ready: The user’s program is ready to continue but is waiting for its turn to
get the CPU. More than one user can be in the ready state at a time.
Wait: The user has made no request for execution of his job or the user’s
program is waiting for some I/O operation. Again more than one user can be
in the wait state at a time.

Fig. 9.4 Process States

9.2.3.1 Advantages of Time-sharing Systems

1. Provides quick response to a large number of different users.
2. Reduces CPU idle time – The CPU is busy most of the time as it switches
from one program to another in a rapid succession. This increases throughput
and lowers turnaround time.
3. Avoids duplication of software which is used by most of the users. This
software is stored in the system libraries.
4. Since the time slice is only a few milliseconds, users can get the output of
their programs more quickly as compared to other systems.

9.2.3.2 Disadvantages of Time-Sharing Systems


1. Security and Integrity of user programs and data may not be maintained as a
large number of users access the system simultaneously.
2. Since a time-sharing system caters to the needs of several users, it
must provide some backup mechanism to offer continuous service in case
of trouble.
3. In a time-sharing system, the users interact with the main computer system
through remote terminals that require data transmission facilities. Data
transmission charges are very high in case of these systems.
4. Regular swapping needs to be done when a large number of users access
the system, and the CPU must be switched from user to user, so overhead is
involved. If the system is overloaded with too many users, the overhead
may get out of control, leading to very poor response.

9.2.4 Multitasking Systems


The term “multitasking” refers to the ability to execute more than one task at the
same time. With reference to computers, multitasking means having more than
one application open at a time; for example, you might be downloading
something from the internet while writing mail to a friend and
listening to music. In order to provide multitasking, a computer system must
have good processing power and a large storage space.
In the case of a computer with a single processor, only one task is actually
running at any point in time, meaning the processor is actively executing instructions
for that task only. Multitasking solves the problem by scheduling which task is the
one running at any given time and when a waiting task gets its turn.
The processor is switched from one program to another so quickly that it
gives the appearance of executing all the programs at the same time. The act of
reassigning the CPU from one task to another one is called Context Switch. When
context switches occur frequently, it gives the illusion of parallelism.
A multitasking operating system should provide some degree of protection of
one task from another to prevent tasks from interacting in unexpected ways such as
accidentally modifying the contents of each other’s memory areas.
Being able to do multitasking does not mean that an unlimited number of
tasks can be juggled at the same time. Each task consumes system storage and
other resources. As more tasks are started, the system may slow down or begin to
run out of shared storage.

9.2.4.1 Types of Multitasking Systems


There are two basic types of multitasking:
Non-preemptive or Cooperative Multitasking
Preemptive Multitasking
Non-preemptive Multitasking: In non-preemptive multitasking, the use of
the processor is never taken from a task; rather a task must voluntarily yield the
control of the processor before any other task can run.
Non-preemptive multitasking has many shortcomings. A non-preemptive
multitasking system must rely on each process to give time to the other processes on
the system. Any single poorly designed program or any single “hung” process
can effectively bring the system to a halt, because such a process never
relinquishes the CPU to the other processes. Another disadvantage of this approach is
that the scheduler cannot make system-wide decisions as to how long each process
should be allowed to run.
Note: The scheduler is the part of the operating system that decides which
process should get the CPU next.
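The voluntary hand-over that defines non-preemptive multitasking can be mimicked with Python generators, where each task runs until it voluntarily yields. This is only an analogy for the mechanism described above; task names and step counts are invented.

```python
# Each task keeps the "processor" until it voluntarily yields control.
def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # the voluntary yield point

def cooperative_scheduler(tasks):
    trace, queue = [], list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            trace.append(next(t))  # run the task until it yields
            queue.append(t)        # then send it to the back of the line
        except StopIteration:
            pass                   # task finished; drop it
    return trace

print(cooperative_scheduler([task("A", 2), task("B", 1)]))
# ['A step 0', 'B step 0', 'A step 1']
```

A task that never yields (an infinite loop without a `yield`) would monopolize this scheduler forever, which is precisely the "hung process" failure mode described above.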
Preemptive Multitasking: In preemptive multitasking systems, a running
process may be involuntarily suspended so that other processes can be executed. In
preemptive multitasking, the hardware can interrupt an executing process
and instruct the CPU to execute a different task. Such a system does not need to rely
on processes voluntarily relinquishing the processor; rather, the currently running
task can be preempted and control returned to the operating system, which can later
restore each preempted process in exactly the same state in which it was interrupted.
A preemptive multitasking operating system takes control of the processor
from a task:
When the task’s time slice runs out – any given task is only given control
for a set amount of time before the operating system interrupts it and
schedules another task to run.
When a task with higher priority becomes ready to run – the currently
running task loses control of the processor when a higher-priority task
is ready to run, regardless of whether it has time left in its quantum or not.
Programs give up the CPU and other system resources when ordered to by the
operating system on a time-shared basis, and relinquish resources when they are
needed by another program. This also permits the system to respond immediately to
important external events, such as incoming data from a communication line, and
provides better program performance because processes can switch in and out of
the processor with less overhead, i.e., with less and simpler code.
Multitasking introduces overhead because the processor spends some time
choosing the next job to run and in saving and restoring task state, but it reduces
the total time jobs spend waiting, since one job no longer has to finish completely
before the next one starts. The concurrently running processes can represent
different programs, different parts of a single program, or different instances of a
single program. The total number of processes that can run on the system depends
on the amount of memory and other system resources available.
9.2.5 Parallel Systems


Parallel processing systems are designed to perform simultaneous data processing
of different jobs for increasing the computation speed of the computer system.
These systems are multi-processor systems having more than one processor in close
communication sharing the computer bus, the clock and sometimes memory and
peripheral devices. These systems are also known as Tightly Coupled Systems.
There are a number of advantages to building such systems. One advantage
is increased throughput: by increasing the number of processors, we hope to get
more work done in a shorter period of time. However, the speed-up ratio with n processors
is not n, but rather less than n. When multiple processors co-operate on a task, a
certain amount of overhead is incurred in keeping all the parts working correctly.
This overhead, plus contention for shared resources, lowers the expected gain from
additional processors.
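One common way to quantify why the speed-up with n processors falls short of n is Amdahl's law, in which any serial (non-parallelizable) fraction of the work caps the gain. The 10% serial fraction below is an assumed figure, not from the text.

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n), where s is the serial
# fraction of the work and n is the number of processors.
def speedup(n_processors, serial_fraction):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_processors)

print(round(speedup(4, 0.10), 2))   # 3.08 with 4 processors, not 4
```

Even a modest serial fraction dominates as processors are added, which is one reason contention and coordination overhead matter so much in parallel systems.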
Multiprocessors can also save money as compared to multiple single-processor
systems, because the processors can share peripherals, cabinets and power supplies. If several
programs are to operate on the same set of data, it is cheaper to store that data on one
disk and have all the processors share it, rather than having many computers
with local disks and many copies of the data.
Another reason for using parallel systems is that they increase reliability. If
the functions can be distributed properly among several processors, then the failure
of one processor will not halt the system, but rather will only slow it down. If we
have ten processors and one fails, then each of the remaining nine processors must
pick up a share of the work of the failed processor. Thus, the entire system runs only
about 10% slower, rather than failing altogether. This ability to continue providing
service proportional to the level of surviving hardware is called graceful degradation.
Parallel processing systems are employed in supercomputers. They are useful
for applications involving large volumes of data and massive amounts of computation.
9.2.5.1 Advantages of Parallel Systems

1. As there are a number of processors, a job can be completed in a shorter
span of time.
2. It saves money, as multiprocessors can share peripherals, cabinets and power
supplies.
3. These systems provide more reliability, i.e., due to multiple processors the
system is far less likely to stop functioning: in case one of the processors fails,
the remaining processors take up the job of the failed processor.

9.2.5.2 Disadvantages of Parallel Systems


1. A large amount of memory is needed.
2. An operating system, which can perform scheduling of jobs on multiple
processors, is required.
3. Due to large memory requirements and multiple processors, the initial expenses
are quite high.

9.2.6 Distributed Systems


A distributed system is a collection of autonomous computer systems capable of
communication and co-operation via their hardware and software interconnections.
The processors in a distributed system do not share memory or a clock.
Instead, each processor has its own local memory. The processors are connected
and communicate with each other through various communication lines such as
high-speed buses or telephone lines. A distributed system is also referred to as a
loosely coupled system.
In a distributed system, the users are not aware of where their programs are
being run or where their files are located. The processors in a distributed system are
referred to as sites, nodes, computers, etc.


There are a variety of reasons for building distributed system, the important
ones being:
1. Resource Sharing: If a number of different sites are connected to one
another, then a user at one site may be able to use the resources available at
another site.
2. Computation Speed-Up: If a particular computation can be partitioned into
a number of sub-computations that can run concurrently, then a distributed
system may allow us to distribute the computation among the various sites
to run it concurrently.
In addition, if a particular site is currently overloaded with jobs, some of them
may be moved to other lightly-loaded sites. This movement of jobs is called
load-sharing.
3. Communication: There are many instances in which programs need to
exchange data with one another. Window systems are one example, since
they frequently share or transfer data between displays. When many sites
are connected to one another by a communication network, the processes
at different sites have the opportunity to exchange information. Users may
communicate by electronic mail, for instance: a
user can send mail to another user at the same site or at a different site.
4. Reliability: If one site fails in a distributed system, the remaining sites can
continue operating. If the system is composed of a number of large general
purpose computers, the failure of one of them should not affect the rest. If,
on the other hand, the system is composed of a number of small machines,
each of which is responsible for some crucial system function, then a single
failure may effectively halt the operation of the whole system. In general, if
enough redundancy exists in the system, it can continue
operation even if some of its sites have failed.
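The exchange of information between processes at different sites, as in item 3 above, can be sketched with a connected socket pair. Both endpoints live in one program here purely for illustration; in a real distributed system they would sit on different machines linked by a network.

```python
import socket

# Two connected endpoints standing in for processes at different sites.
site_a, site_b = socket.socketpair()

site_a.sendall(b"hello from site A")   # send a message down the line
reply = site_b.recv(1024)              # the other site receives it
print(reply)                           # b'hello from site A'

site_a.close(); site_b.close()
```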

9.3 SYSTEM COMPONENTS


An operating system is a complex software package that manages the resources of
a computer system and provides the base upon which applications can be written.
Even though not all systems have the same structure, many modern operating
systems share the goal of supporting the following types of system components:

Fig. 9.5 Components of an Operating System

9.3.1 Process Management


The operating system manages many kinds of activities, ranging from user programs
to system programs. Each of these activities is encapsulated in a process. A process
includes the complete execution context (code, data, program counter, registers,
OS resources in use, etc.).
It is important to note that a process is not a program. A process is only ONE
instance of a program in execution, and many processes can be running the same
program. The major activities of an operating system in regard to process
management are:
Creation and deletion of user and system processes.
Suspension and resumption of processes.
A mechanism for process synchronization.
A mechanism for process communication.
A mechanism for deadlock handling.

9.3.2 Main Memory Management


Primary memory or main memory is a large array of words or bytes, each with its
own address. Main memory provides storage that can be accessed directly by the
CPU: for a program to be executed, it must be present in main memory.
The major activities of an operating system in regard to memory management are:
Keep track of which parts of memory are currently being used and by whom.
Decide which processes are to be loaded into memory when memory space becomes
available.
Allocate and deallocate memory space as needed.
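The allocate/deallocate bookkeeping above can be sketched with a toy first-fit allocator. The memory size and process names are invented, and real allocators also coalesce adjacent free regions, which this sketch omits.

```python
# A toy first-fit allocator over a fixed-size "main memory", tracking
# which regions are in use and by which process.
class Memory:
    def __init__(self, size):
        self.holes = [(0, size)]      # free regions as (start, length)
        self.owner = {}               # start address -> (process, length)

    def allocate(self, process, length):
        for i, (start, hole_len) in enumerate(self.holes):
            if hole_len >= length:    # first hole big enough wins
                self.holes[i] = (start + length, hole_len - length)
                self.owner[start] = (process, length)
                return start
        return None                   # no hole large enough

    def deallocate(self, start):
        process, length = self.owner.pop(start)
        self.holes.append((start, length))   # no coalescing in this sketch

mem = Memory(100)
print(mem.allocate("P1", 40))   # 0
print(mem.allocate("P2", 50))   # 40
print(mem.allocate("P3", 20))   # None -- only 10 units remain free
```

After `mem.deallocate(0)` frees P1's region, the same 20-unit request would succeed, illustrating why the operating system must track exactly which parts of memory are used and by whom.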

9.3.3 File Management

A file is a collection of related information defined by its creator. Computer systems
store files on several types of physical media; examples of storage media are
magnetic tape, magnetic disk and optical disk. Each of these media has its own
properties, such as speed, capacity, data transfer rate and access method.
The major activities of an operating system in regard to file management are:
Creation and deletion of files.
Creation and deletion of directories.

9.3.4 I/O System Management

One of the purposes of an operating system is to hide the peculiarities of specific
hardware devices from the user, so that the user need not know the details of the
devices to which requests are assigned. The I/O subsystem of the operating system
consists of:
A memory management component that includes buffering, caching and
spooling.
A general device driver interface.
Drivers for specific hardware devices.

9.3.5 Secondary Storage Management


Generally speaking, systems have several levels of storage, including primary
storage, secondary storage and cache storage. Instructions and data must be placed
in primary storage or cache to be referenced by a running program. Because main
memory is too small to accommodate all data and programs, and its data are lost
when power is lost, the computer system must provide secondary storage to back up
main memory. Secondary storage consists of tapes, disks and other media designed
to hold information that will eventually be accessed in primary storage. Each
location in the primary storage has an address; the set of all addresses available to
a program is called an address space.
The three major activities of an operating system in regard to secondary
storage management are:
Managing the free space available on the secondary storage device.
Allocation of storage space as needed.
Scheduling the requests for disk access.
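Disk request scheduling is often illustrated with the classic SCAN (elevator) policy, in which the arm sweeps in one direction servicing requests in cylinder order before reversing. A sketch (the cylinder numbers are invented):

```python
def scan_order(requests, head, direction=1):
    """SCAN disk scheduling: service requests in the sweep direction,
    then reverse and service the remainder."""
    ahead  = sorted(r for r in requests if (r - head) * direction >= 0)
    behind = sorted((r for r in requests if (r - head) * direction < 0),
                    reverse=True)
    if direction < 0:
        ahead = ahead[::-1]            # a downward sweep visits high-to-low
    return ahead + behind

# Head at cylinder 50, sweeping toward higher-numbered cylinders.
order = scan_order([98, 27, 37, 122, 14, 67], head=50, direction=1)
```

The arm services 67, 98, 122 on the way up, then reverses for 37, 27, 14, avoiding the back-and-forth seeks of first-come-first-served ordering.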

9.3.6 Networking
A distributed system is a collection of processors that do not share memory,
peripheral devices, or a clock. The processors communicate with one another
through communication lines, called a network. The communication network design
must consider routing and connection strategies and the problems of contention
and security.

9.3.7 Protection System


If a computer system has multiple users and allows the concurrent execution of
multiple processes, then the various processes must be protected from one another’s
activities. Protection refers to a mechanism for controlling the access of programs,
processes, or users to the resources defined by a computer system.
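One common protection mechanism is an access matrix recording which operations each subject may perform on each resource. A toy sketch (the users, files, and rights are made up for illustration):

```python
# Toy access matrix: which operations each user may perform on each resource.
access_matrix = {
    ("alice", "payroll.dat"): {"read", "write"},
    ("bob",   "payroll.dat"): {"read"},
}

def check_access(user, resource, op):
    """Grant the operation only if the matrix explicitly lists it."""
    return op in access_matrix.get((user, resource), set())

allowed = check_access("alice", "payroll.dat", "write")
denied  = check_access("bob",   "payroll.dat", "write")
```

The default-deny lookup (an absent entry grants nothing) is the essential property: access must be explicitly conferred, never assumed.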
9.3.8 Command Interpreter System


A command interpreter is an interface of the operating system with the user. The
user gives commands which are executed by the operating system (usually by turning
them into system calls). The main function of a command interpreter is to get and
execute the next user-specified command.
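The get-and-execute loop can be sketched as a dispatch table that maps command names to the routines implementing them; here ordinary functions stand in for system calls, and the command names are invented:

```python
def cmd_echo(args):
    return " ".join(args)

def cmd_add(args):
    return str(sum(int(a) for a in args))

# The 'shell' maps command names onto the routines that implement them.
dispatch = {"echo": cmd_echo, "add": cmd_add}

def interpret(line):
    """Get a command line, parse it, and execute the matching routine."""
    name, *args = line.split()
    handler = dispatch.get(name)
    return handler(args) if handler else f"{name}: command not found"

out1 = interpret("echo hello world")
out2 = interpret("add 2 3")
out3 = interpret("frob")
```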
9.4 OPERATING SYSTEM STRUCTURE


The lowest level of any operating system is its kernel, the first layer of
software loaded into memory when a system boots or starts up. The kernel provides
access to various core services to all other system and application programs. These
services include, but are not limited to: disk access, memory management, process
scheduling, and access to other hardware devices. Like the term “operating system”
itself, the question of what exactly should form the “kernel” is subject to some
controversy, with disagreement over which services and functions belong
in the kernel. Various camps advocate micro kernels, monolithic kernels, and so on.
A modern operating system must be engineered carefully if it is to function
properly and be modified easily. A common approach is to partition the task into
small components rather than have one monolithic system. Each of these modules
should be a well-defined portion of the system, with carefully defined interfaces
and functions. We have already discussed the common components of operating
systems. In this section, we discuss how these components are interconnected and
melded into a kernel.

9.4.1 Simple Structure

Many operating systems started as small, simple, and limited systems and then grew beyond
their original scope. MS-DOS is an example of such a system. It was originally
designed and implemented by a few people who had no idea that it would become
so popular. It was written to provide the most basic functionality in the least space,
so it was not divided into modules carefully.
In MS-DOS, the interfaces and levels of functionality are not well separated.
For instance, application programs are able to access the basic I/O routines to write
directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable
to errant (or malicious) programs, causing the entire system to crash when a user
program fails.
Another example of limited structuring is the original UNIX operating system.
UNIX is another system that initially was limited by hardware functionality. It
consists of two separable parts: the kernel and the system programs. The kernel is
further separated into a series of interfaces and device drivers, which have been
added and expanded over the years as UNIX has evolved. The kernel provides the
file system, CPU scheduling, memory management, and other operating system
functions through system calls. Taken in sum, that is an enormous amount of
functionality combined into one level, which makes the system difficult
to implement and maintain.


The operating system formed a software layer between the user and the
computer system’s hardware. The user interface was provided by a command
language interpreter, which itself ran as one of the user
processes. Both the command language interpreter and user processes invoked OS
functionalities and services through system calls.
Two kinds of problems with the monolithic structure surfaced over a
period of time. The operating system layer had an interface with the bare machine,
so architecture dependent code was spread throughout the operating system.
Consequently, the OS was highly architecture dependent and possessed poor
portability. Different functionalities and services of the OS use knowledge about
each other’s data in their code, so changes made in one functionality could affect
other functionalities as well. Both of these problems led
to high costs of maintenance and enhancement.


These problems led to the search for alternative ways to structure an operating
system. In the following sections, we discuss two methods of structuring an operating
system that have been proposed to address these problems:
Layered Structure: The layered structure attacks the complexity and cost of
developing and maintaining an operating system by structuring it into a number
of layers. The THE multiprogramming system is a well known example of
a layered operating system.
Micro-kernel-based Operating System Structure: The micro-kernel-based
operating system structure provides many advantages of a kernel based
structure and also provides extensibility. Consequently, a micro kernel can
be used to build more than one operating system.

9.4.2 Layered Approach


A system can be made modular in many ways. One method is the layered approach,
in which the operating system is broken up into a number of layers (levels). The
bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
An operating system layer is an implementation of an abstract object made up
of data and the operations that can manipulate those data. A typical operating
system layer—say, layer M—consists of data structures and a set of routines that
can be invoked by higher-level layers. Layer M, in turn, can invoke operations on
lower-level layers. Each layer uses the interface provided by the layer below it and
provides a more intelligent interface to the layers above it. The basic discipline in
a layered OS design is that the routines of one layer use only the facilities of the
layer directly below it; all
access to routines of a lower layer must take place strictly through the interface
between layers. Thus, unlike in a monolithic design, a routine situated in one layer
does not know the addresses of data structures or instructions in the lower layer – it
only knows how to invoke a routine of the lower layer.
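This discipline can be sketched with three toy layers, each invoking only the interface of the layer directly beneath it (the layer names and operations are illustrative):

```python
class Hardware:                        # layer 0: the bare machine
    def read_block(self, n):
        return f"raw-block-{n}"

class DiskDriver:                      # layer 1: uses only layer 0's interface
    def __init__(self, hw):
        self._hw = hw
    def read(self, n):
        # The driver sees only Hardware's interface, not its internals.
        return self._hw.read_block(n).upper()

class FileSystem:                      # layer 2: uses only layer 1's interface
    def __init__(self, driver):
        self._driver = driver
    def read_file(self, n):
        return "file:" + self._driver.read(n)

fs = FileSystem(DiskDriver(Hardware()))
data = fs.read_file(7)
```

FileSystem never touches Hardware directly; a request trickles down one layer at a time, exactly the routing (and the overhead) described in the text below.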

Fig. 9.6 Layered Approach


The layered structure provides good modularity; each layer of the operating
system is a module with a well-defined interface and functionality.
The internal details of a module, i.e., the arrangement of its data and programs, are
hidden from other modules. This property of a module prevents misuse or corruption
of one layer’s data by routines situated in other layers of the operating system.

It also implies that the cause of an error,
e.g., an incorrect value in a data element belonging to a layer, must lie within that layer
itself. Each layer is implemented with only those operations provided by lower level
layers. A layer does not need to know how these operations are implemented; it
needs to know only what these operations do. Hence, each layer hides the existence
of certain data structures, operations, and hardware from higher-level layers.
Information hiding also implies that an operating system module may be
modified without affecting other modules. This simplifies the construction and
debugging of an operating system. The layers are selected so that each uses functions
and services of only lower-level layers. Once the first
layer is debugged, its correct functioning can be assumed while the second layer is
debugged, and so on. If an error is found during the debugging of a particular layer,
the error must be on that layer, because the layers below it are already debugged.

The layered approach to an operating system design suffers from two
problems. The first is that the operation of a system may be slowed down by the
layered structure.
Recall that each layer can only interact with adjoining layers. This implies that a
request for operating system service made by a user process must trickle down from
the highest numbered layer to the lowest before the required action is performed by
the hardware. For example, when a user process wants to perform an I/O operation,
it executes a system call that is trapped to the I/O layer, which calls the memory
management layer, which in turn calls the CPU-scheduling layer, which is then
passed to the hardware. At each layer, parameters may be modified and data may
need to be passed, and so on. Each layer adds overhead to the system call; the net
result is a system call that takes longer than does one on a non-layered system.
The second problem concerns the appropriate ordering of the layers. A
layer can access only the immediately lower layer, so all the features and facilities
needed by it must be available in the lower layers. This requirement may pose a
problem in ordering the layers. This problem is often solved by splitting a layer
into two and putting other layers between them.
These limitations have caused a small backlash against layering in recent
years; designs now tend toward fewer layers, each with more functionality.
9.4.3 Microkernel
A concept that has received much attention recently is the microkernel. A
microkernel is a small operating system core that provides the foundation for modular
extensions; the Mach operating system is perhaps the best-known example. The aim
of the approach is improved flexibility and
modularity.

9.4.3.1 Microkernel Architecture


The early operating systems developed in the mid to late 1950s were designed with
little concern about structure. In these monolithic operating systems, virtually any
procedure can call any other procedure. Such lack of structure was unsustainable as
operating systems grew to massive proportions. Modular programming techniques
led to layered operating systems, in which functions are organized hierarchically
and interaction only takes place between adjacent layers. With the layered approach,
most or all of the layers are executed in the kernel mode.
Problems remain even with the layered approach. Each layer possesses
considerable functionality. Major changes in one layer can have numerous effects,
many difficult to trace, on code in adjacent layers. As a result, it is difficult to
implement tailored versions of a base operating system with a few functions added
or subtracted. Security problems may also arise because of the many interactions
between adjacent layers.


The philosophy underlying the microkernel is that only absolutely essential
core operating system functions should be in the kernel. Less essential services and
applications are built on the microkernel and execute in user mode. The result is a
smaller kernel. Although the dividing line between what is in and what is outside
the microkernel varies from one design to the next, the common characteristic is
that many services that traditionally have been part of the operating system are now
external subsystems that interact with the kernel and with each other; these modules
include, for example, device drivers, file systems, and virtual memory managers.
Operating system components external to the microkernel are implemented as
server processes; these interact with each other on a peer basis, typically by means
of messages passed through the microkernel. Thus, the microkernel functions as a
message exchanger. It validates messages, passes them between components and
grants access to hardware. The microkernel also performs a protection function; it
prevents messages from being passed unless the exchange is permitted. For example,
if an application wishes to
create a process or thread, it sends a message to the process server. Each of the
servers can send messages to other servers and can invoke the primitive functions
in the microkernel. This is client/server architecture within a single computer.
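Message exchange through a microkernel can be sketched as a router that validates each message's destination and forwards it to the addressed server process; the server names, message format, and reply values below are invented:

```python
class Microkernel:
    """Toy message exchanger: validates messages and routes them to servers."""
    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        # Servers announce themselves to the kernel by name.
        self.servers[name] = handler

    def send(self, to, payload):
        # Validation / protection check: refuse messages to unknown servers.
        if to not in self.servers:
            return {"error": f"unknown server: {to}"}
        return self.servers[to](payload)

kernel = Microkernel()
# A 'process server' living outside the kernel, reached only via messages.
kernel.register("process", lambda msg: {"pid": 42, "request": msg})

reply = kernel.send("process", "create-thread")   # routed and answered
bad   = kernel.send("printer", "spool")           # rejected by the kernel
```

Clients and servers never call each other directly; every interaction passes through, and is checked by, the kernel's message exchanger.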
9.4.3.2 Advantages of a Micro-kernel Organization
Microkernels offer a number of advantages:
Uniform interfaces
Extensibility
Flexibility
Portability
Reliability
Microkernel design imposes a uniform interface on requests made by a process.
Processes need not distinguish between kernel-level and user-level services because
all such services are provided by means of message passing.
Any operating system will inevitably need to acquire features not in its current
design, as new hardware devices and new software techniques are developed. The
microkernel architecture facilitates extensibility, allowing the addition of new
services without modifying the core services
available in the kernel. Thus, users can choose from a variety of services the one
that best suits their needs.
Not only can new features be added to the operating system, but existing
features can be subtracted to produce a smaller, more efficient implementation. A
microkernel based operating system is not necessarily a small system. Indeed, the
structure lends itself to adding a wide range of features. But not everyone needs,
for example, a high level of security or the ability to do distributed computing.
Portability becomes an attractive feature of an operating system. In the
microkernel architecture, all or at least much of the processor-specific code is in
the microkernel. Thus, changes needed to port the system to a new processor are
fewer and tend to be arranged in logical groupings.

The larger a software product, the more difficult it is to ensure
its reliability. Although modular design helps to enhance reliability, even greater
gains can be achieved with microkernel architecture. A small microkernel can be
rigorously tested. Its use of a small number of application programming interfaces
(APIs) improves the chance of producing quality code for the operating system
services outside the kernel.
