OS Module 5
9 An Introduction to Operating System
This chapter familiarizes the reader with the basic concepts, services and
types of operating systems, along with their advantages and disadvantages. At the
end of the chapter, the structure of an operating system is discussed in detail.
for the programmers to write instructions for the above operations, it became
necessary to search for a computer program capable of performing these common
usually cannot control I/O directly. Therefore, the operating system must
provide a means to do so.
3. File System Manipulation:
programs do not have to worry about these tasks. The user just needs to give a
system takes the appropriate action to ensure correct and consistent computing.
resources will be used by which process and for how long. The following are
the most commonly used resources:
Processor: A processor is the most important resource needed by a
process that is ready to be executed. The operating system schedules
the processor among processes based on some scheduling policy such as
the priority of a process, its burst time, or it may schedule the processor
in such a way that each process receives an equitable fraction of the
available time.
Memory: Whenever a program is to be executed, it must be loaded
into memory. The operating system uses different memory management
schemes to allocate and deallocate memory to the various programs in
need. It also decides which programs should be loaded into memory when
memory space becomes available.
Input/Output devices: The operating system manages all the I/O devices.
It keeps track of requests of processes for I/O devices, issues commands
to the I/O devices and ensures that correct data transmits to and from I/O
devices.
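The processor-scheduling decision described above can be sketched in a few lines. This is a minimal illustration, assuming a simple priority policy in which a lower number means higher priority; the process fields are hypothetical, not taken from any particular operating system.

```python
def pick_next(ready):
    """Pick the ready process with the highest priority (lowest number)."""
    return min(ready, key=lambda p: p["priority"])

ready_queue = [
    {"pid": 1, "priority": 3},
    {"pid": 2, "priority": 1},  # highest priority in the queue
    {"pid": 3, "priority": 2},
]
print(pick_next(ready_queue)["pid"])  # process 2 is dispatched next
```

A real scheduler may instead weigh burst times or give each process an equitable fraction of the available time, as noted above; in this sketch, changing the key function is enough to express such policies.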
2. Accounting: The operating system keeps track of which users use how many
and what kind of computer resources. This record keeping may be used for
accounting so that the users can be billed according to the resource usage.
3. Protection: When several disjoint processes execute concurrently, it should not
be possible for one process to interfere with the others. Protection of information
in a multi-user computer system is also very important. The operating system
ensures that all access to the system resources is controlled. Security is also
a part of the functions performed by an operating system.
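The record keeping described under Accounting amounts to a per-user, per-resource tally that a later billing step can consume. The sketch below makes that concrete; the resource names and the rates (in cents per unit) are illustrative assumptions.

```python
from collections import defaultdict

usage = defaultdict(lambda: defaultdict(int))  # user -> resource -> amount consumed

def record(user, resource, amount):
    """Accumulate a user's consumption of one resource."""
    usage[user][resource] += amount

def bill(user, rates):
    """Total charge for a user, given a rate per unit of each resource (in cents)."""
    return sum(amount * rates[res] for res, amount in usage[user].items())

record("alice", "cpu_seconds", 120)
record("alice", "disk_mb", 50)
print(bill("alice", {"cpu_seconds": 2, "disk_mb": 1}))  # 120*2 + 50*1 = 290 cents
```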
In a multiprogramming
system, two or more processes may request a resource simultaneously.
9.4 An Introduction to System Software
It may also happen that the resource requested by a process is already allocated
to another process that is waiting for some other unavailable resource. It is the
responsibility of the operating system to provide some mechanism to handle
as disk. The operating system is also responsible for creation and deletion of
high priority jobs were to be executed but were in separate batches, one would
have to wait until the other’s batch was completely processed.
3. Due to the lack of any protection scheme, one batch job can affect pending
jobs.
engineering computations usually fall in this category, which can later restore
each preempted process in exactly the same state where it was interrupted.
A preemptive multitasking operating system takes control of the processor
from a task:
When a task’s time slice runs out – Any given task is given control only
for a set amount of time before the operating system interrupts it and
schedules another task to run.
When a task having higher priority becomes ready to run – The currently
running task loses control of the processor when a task with a higher priority
is ready to run, regardless of whether it has time left in its quantum.
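The two preemption rules above can be captured in a small predicate. This is a sketch under assumed conventions: a fixed quantum measured in clock ticks, and a lower priority number meaning higher priority.

```python
QUANTUM = 4  # illustrative time-slice length, in clock ticks

def should_preempt(running, ticks_used, ready):
    """Take the CPU from `running` when its time slice is exhausted,
    or when any ready task has a strictly higher priority."""
    if ticks_used >= QUANTUM:
        return True  # rule 1: the time slice ran out
    # rule 2: a higher-priority task (lower number) became ready
    return any(t["priority"] < running["priority"] for t in ready)

editor = {"name": "editor", "priority": 2}
print(should_preempt(editor, 4, []))                                  # True: quantum expired
print(should_preempt(editor, 1, [{"name": "isr", "priority": 1}]))    # True: higher-priority task ready
print(should_preempt(editor, 1, [{"name": "batch", "priority": 5}]))  # False: the editor keeps the CPU
```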
I/O Bound Programs: These programs do very little computation and most
Programs used for commercial data processing applications fall in this category.
In the case of multiprogramming, more than one job is loaded into main
memory. These jobs must be intermixed, i.e., a few jobs should be CPU bound and
a few should be I/O bound.
9.2.2.1 Requirements of Multiprogramming Systems
2. Increased throughput.
get the CPU. As soon as Job B is completed or needs to perform an I/O operation, the
CPU will start executing Job C. Similarly, after Job A completes its I/O operation,
it will wait for its turn on the CPU. Thus, in multiprogramming systems, the CPU
is almost always busy and has very little idle time.
In time-sharing systems, the users get the CPU one by one in circular
order, i.e., the CPU switches over to user 2 after serving user 1, then to user 3, and so on for the
different users.
2. Reduces CPU idle time – The CPU is busy most of the time as it switches
from one program to another in rapid succession. This increases throughput
and lowers turnaround time.
3. Avoids duplication of software that is used by most of the users. This
software is stored in the system libraries.
4. Since the time slice is only a few milliseconds, users get the output of their
programs more quickly than on other systems.
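The circular order described above is, in effect, a round-robin queue: the user at the head gets one time slice and then moves to the back. A minimal sketch, with the user names purely illustrative:

```python
from collections import deque

def serve(users, slices):
    """Give the CPU to each user in circular order, one time slice at a time."""
    q = deque(users)
    order = []
    for _ in range(slices):
        u = q.popleft()
        order.append(u)  # u runs for one time slice
        q.append(u)      # back of the queue; the CPU moves to the next user
    return order

print(serve(["user1", "user2", "user3"], 5))
# ['user1', 'user2', 'user3', 'user1', 'user2'] -- the circular order
```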
CPU and other system resources when ordered by the operating system on a time
resources when they are needed by another program. It also permits the system
to respond immediately to important external events, such as incoming data from
programs and provides better program performance because processes can switch
in and out of the processor with less overhead i.e., with less and simpler code.
Multitasking introduces overhead because the processor spends some time in
choosing the next job to run and in saving and restoring task state, but it reduces
before the next one starts. The concurrently running processes can represent
different programs, different parts of a single program and different instances of a
single program. The total number of processes that can run on the system depends
Instead, each processor has its own local memory. The processors are connected
and communicate with each other through various communication lines such as
high-speed buses or telephone lines. A distributed system is also referred to as a
loosely coupled system.
In a distributed system, the users are not aware of where their programs are
user can send mail to another user at the same site or a different site.
4. Reliability: If one site fails in a distributed system, the remaining sites can
continue operating. If the system is composed of a number of large general
purpose computers, the failure of one of them should not affect the rest. If,
on the other hand, the system is composed of a number of small machines,
each of which is responsible for some classical system function, then a single
failure may effectively halt the operation of the whole system. In general, if
management are:
Creation and deletion of user and system processes.
Suspension and resumption of processes.
examples of storage media are magnetic tape, magnetic disk and optical disk. Each
of these media has its own properties, such as speed, capacity, data transfer rate
and access methods.
9.3.6 Networking
A distributed system is a collection of processors that do not share memory,
peripheral devices, or a clock. The processors communicate with one another
through communication lines called a network. The communication network design
must consider routing and connection strategies and the problems of contention
and security.
in the kernel. Various camps advocate micro kernels, monolithic kernels, and so on.
A modern operating system must be engineered carefully if it is to function
small components rather than have one monolithic system. Each of these modules
operating systems started as small, simple, and limited systems and then grew beyond
their original scope. MS-DOS is an example of such a system. It was originally
designed and implemented by a few people who had no idea that it would become
so popular. It was written to provide the most basic functionality in the least space,
so it was not divided into modules carefully.
In MS-DOS, the interfaces and levels of functionality are not well separated.
directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable
to errant (or malicious) programs, causing the entire system to crash when a user
program fails.
Another example of limited structuring is the original UNIX operating system.
UNIX is another system that initially was limited by hardware functionality. It
consists of two separable parts: the kernel and the system programs. The kernel is
further separated into a series of interfaces and device drivers, which have been
added and expanded over the years as UNIX has evolved. The kernel provides the
processes. Both the command language interpreter and user processes invoke OS
functionalities and services through system calls.
Two kinds of problems with the monolithic structure surfaced over
time. The operating system layer had an interface with the bare machine,
so architecture dependent code was spread throughout the operating system.
Consequently, the OS was highly architecture dependent and possessed poor
portability. Different functionalities and services of the OS used knowledge about
each other’s data in their code, so changes made in one functionality could affect
access to routines of a lower layer must take place strictly through the interface
between layers. Thus, unlike in a monolithic design, a routine situated in one layer
does not know the addresses of data structures or instructions in the lower layer – it
only knows how to invoke a routine of the lower layer.
The internal details of a module, i.e., the arrangement of its data and programs, are
hidden from other modules. This property of a module prevents misuse or corruption
of one layer’s data by routines situated in other layers of the operating system.
e.g., an incorrect value in a data element belonging to a layer, must lie within that layer
itself. Each layer is implemented with only those operations provided by lower level
layers. A layer does not need to know how these operations are implemented; it
needs to know only what these operations do. Hence, each layer hides the existence
of certain data structures, operations, and hardware from higher-level layers.
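The discipline described above, where each layer invokes only the interface of the layer directly below it and sees none of its internals, can be sketched as follows. The layer names and the single `request` operation are illustrative, not an actual OS interface.

```python
class Layer:
    """A layer holds a reference only to the layer directly below it and
    never touches that layer's internal data (information hiding)."""

    def __init__(self, name, below=None):
        self.name = name
        self._below = below  # the only link a layer has to the rest of the system

    def request(self, op):
        """Handle an operation, delegating strictly through the lower interface."""
        trace = [f"{self.name}:{op}"]
        if self._below is not None:
            trace += self._below.request(op)
        return trace

hardware = Layer("hardware")
memory   = Layer("memory-mgmt", below=hardware)
io       = Layer("io", below=memory)

print(io.request("read"))
# ['io:read', 'memory-mgmt:read', 'hardware:read'] -- each layer calls only the next lower one
```

An error surfacing in, say, the memory-mgmt layer can then be localized there, because the only way anything reached it was through this one interface.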
Information hiding also implies that an operating system module may be
debugging of an operating system. The layers are selected so that each uses functions
layer is debugged, its correct functioning can be assumed while the second layer is
debugged, and so on. If an error is found during the debugging of a particular layer,
the error must be in that layer, because the layers below it have already been debugged.
it executes a system call that is trapped to the I/O layer, which calls the memory
management layer, which in turn calls the CPU-scheduling layer, which is then
need to be passed, and so on. Each layer adds overhead to the system call; the net
result is a system call that takes longer than one on a non-layered system.
The second problem concerns the ordering of layers. A
layer can access only the immediately lower layer, so all the features and facilities
needed by it must be available in the lower layers. This requirement may pose a
problem in ordering the layers. This problem is often solved by splitting a layer
into two and putting other layers between them.
These limitations have caused a small backlash against layering in recent years.
9.4.3 Microkernel
A concept that has received much attention recently is the microkernel. A
microkernel is a small operating system core that provides the foundation for modular
modularity.
and interaction only takes place between adjacent layers. With the layered approach,
most or all of the layers are executed in the kernel mode.
Problems remain even with the layered approach. Each layer possesses
considerable functionality. Major changes in one layer can have numerous effects,
implement tailored versions of a base operating system with a few functions added
create a process or thread, it sends a message to the process server. Each of the
servers can send messages to other servers and can invoke the primitive functions
in the microkernel. This is a client/server architecture within a single computer.
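This client/server structure within one machine can be sketched as follows: user-level servers register with a tiny kernel whose only primitive is message delivery, and a client reaches a (hypothetical) process server purely by sending messages. All names and the message format are illustrative assumptions.

```python
class Microkernel:
    """A tiny kernel core: its only job is routing messages to servers."""

    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        """A user-level server announces itself under a well-known name."""
        self.servers[name] = handler

    def send(self, server, message):
        """The kernel's message-passing primitive: deliver and return the reply."""
        return self.servers[server](message)

kernel = Microkernel()
kernel.register(
    "process-server",
    lambda msg: f"created {msg['what']}" if msg["op"] == "create" else "error",
)

# A client asks the process server to create a thread, purely by message passing.
reply = kernel.send("process-server", {"op": "create", "what": "thread"})
print(reply)  # created thread
```

Note that the process server itself runs outside the kernel; the kernel never interprets the request, it only delivers it, which is what keeps the core small.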
9.4.3.2 Advantages of a Micro-kernel Organization
Microkernels offer a number of advantages:
Uniform interfaces
Extensibility
Portability
Reliability
Microkernel design imposes a uniform interface on requests made by a process.
Processes need not distinguish between kernel-level and user-level services because
all such services are provided by means of message passing.
Any operating system will inevitably need to acquire features not in its current
design, as new hardware devices and new software techniques are developed. The
microkernel architecture facilitates extensibility, allowing the addition of new
available in the kernel. Thus, users can choose from a variety of services the one
Not only can new features be added to the operating system, but existing
microkernel-based operating system is not necessarily a small system. Indeed, the
structure lends itself to adding a wide range of features. But not everyone needs,
for example, a high level of security or the ability to do distributed computing.
Portability becomes an attractive feature of an operating system. In the
the microkernel. Thus, changes needed to port the system to a new processor are
fewer and tend to be arranged in logical groupings.
its reliability. Although modular design helps to enhance reliability, even greater
gains can be achieved with microkernel architecture. A small microkernel can be
rigorously tested. Its use of a small number of application programming interfaces
(APIs) improves the chance of producing quality code for the operating system
services outside the kernel.