Operating System
An operating system is a set of programs, implemented in software or firmware or both, that makes the hardware usable. It is an organized collection of programs and data that acts as an interface
between the computer hardware and the users, providing users with a set of facilities for program
design, coding, maintenance, etc.
It can also be viewed as a resources manager. The resources it manages are:
Processor
secondary storage devices
I/O devices
memory
data
Features of an O.S
Defining the user interface
allowing users to share data
sharing hardware among users
scheduling resources among users
facilitating I/O
recovering from errors
Users of an O.S
user
administrative personnel
computer operator
system programmer
application programmer
2 Categories of O.S
Single user O.S
Example
o MS-DOS
o OS/2
Multi-user O.S
Example
o UNIX
o XENIX
o PC MOS
o LINUX
o NOVELL Netware
o WINDOWS –NT
o WINDOWS-XP
Types of Operating System
The two main types of O.S. are:
(a) Command-driven O.S.: commands or instructions are entered through the keyboard or by using
batch files. It is the most flexible way to use an O.S. It is also the most difficult, because the
syntax of the commands has to be learned, e.g. MS-DOS commands.
(b) Menu-driven O.S.: some O.S. employ menus which allow the operator to use icons on the window.
The operator selects O.S. functions from a list of choices displayed on the screen.
Some O.S. offer on-screen help messages to remind the user how to use system functions, while
others employ windows as a menu to display the O.S. functions available and the status of the files
on which the user is working.
A window is an area of the display screen set aside to show the content or status of files; such O.S.
allow users to display many windows (say, 3) on the screen at the same time.
UNIT 2
HISTORICAL DEVELOPMENT OF OPERATING SYSTEM
The O.S., like computer hardware, has undergone a series of revolutionary changes called generations. In
computer hardware, generations have been marked by major advances in components, from Vacuum
Tubes to Transistors to Integrated Circuits to Very Large Scale Integrated Circuits.
Setting up: this involved putting the machine in an active state and loading a job individually, usually
from cards (which were introduced about 1880). The job had the whole memory for its duration.
Tearing down: when a job ran to completion or terminated because of some error situation, an
operator would load a program to dump the memory. He then removed the cards and printed output and
took the machine back to its initial state; no other job could be run or be in the active state
if another job was to be processed.
Thus, a small program requiring a little CPU time would take so long to complete because of the set
up and tear down times.
Computing systems had reduced in size (due to the introduction of transistors), though they were
still large.
The first generation of operating systems was designed to automate the set up and tear down of jobs (i.e.
to smooth the transition between jobs). This was achieved through batch processing. Jobs were
gathered in ‘batches’ such that one job was processed after the other without user interference.
This meant that once a job was running, it had total control of the machine. As each job terminated, control
was returned to the O.S., which performed housekeeping operations, and the next job was read in.
1. Single stream batch processing, i.e. program-to-program transition capabilities, in order to reduce the
overhead involved in starting a new job.
2. Error recovery techniques that automatically ‘cleaned up’ after a job terminated abnormally and
allowed the next job to be initiated with minimal operator intervention.
3. Job control languages that allowed users to specify much of the detail for running their jobs, the
resources the jobs requested, and accounting.
4. Operating systems had standard I/O routines, called IOCS (Input/Output Control System), so that users did not
have to be concerned with the messy details of machine-level coding of input and output
operations.
5. Paging and virtual storage concepts were introduced but not implemented. Assembly language was
introduced.
These systems are often heavily under-utilized. It is far more important for them to be available
when needed and to respond quickly than for them to be busy throughout the time. This fact helps
explain their cost.
Multiprocessing systems emerged in which several processors cooperate sometimes as independent
computer systems communicating with each other, and sometimes as multiple processors sharing a
common memory.
Still in the early sixties, time sharing systems using an interactive mode were developed, in which
users could interact directly with the computer through typewriter terminals. This helped to eradicate
the delays of hours or days in the batch processing environment. Users could now share
data and programs. This increased productivity and creativity among users. Most time
sharing users then spent their time developing programs or running specially designed application
programs. Errors in the earliest phases of projects were not located until long after the projects
were delivered to customers.
This led to the emergence of the field of software engineering in order to facilitate a disciplined and
structured approach to the construction of reliable, understandable and maintainable software, e.g.
Burroughs introduced an operating system called MCP (Master control program) in 1960.
Throughout the early 70’s, vendors sold hardware and gave away O.S., support programs, application
programs, documentation and educational manuals at no charge. Thus computer vendors didn’t take
much responsibility for the O.S.
Unbundling the software from the hardware:
IBM was the first company to unbundle its software from its hardware, i.e. they charged separately
for each, although IBM continued to supply some basic software at no charge.
Customers began taking responsibility for the quality of their software.
Vendors began to design their software more modularly so that modules could be sold as individual units.
Users could now shop around for their software.
Other manufacturers unbundled rapidly.
Some concepts disappeared and later reappeared in different forms, e.g. paging and virtual storage
concepts.
O.S. were developed for families of machines.
The merging of the multiprogramming batch philosophy with time sharing technology formed O.S.
capable of handling both batch and time sharing operations.
The highly symbolic, mnemonic, acronym-oriented user environment was replaced with menu-driven
systems that guided users through the various available options in English. The concept of
virtual machines became widely used. Today’s user is not concerned with the internal functioning
of the machine, but with accomplishing work with a computer. Database systems have gained wide
acceptance and importance.
Thousands of online databases have become available for access via terminals over communication
networks.
The concept of distributed data processing has become firmly established.
UNIT 3
BRIEF HISTORY OF MS-DOS AND WINDOWS O.S
A summary of the most significant features of the versions of MS-DOS is given below.
3.2 1986 -Support for 3.5 inch disks
-Support for IBM Token Ring Network
3.3 1987 -Support for new IBM PS/2 computers
-1.44MB floppies
-Multiple 32MB disk partitions
-Support for expanded memory systems
4.0 1988 -Simple window-based command shell
-Up to 2 gigabyte disk partitions
5.0 1991 -Improved memory management
-Improved shell and extended commands
-2.88MB floppies
6.0 1993 -Improved memory management
-Doubling disk space by compressing files on floppies and hard disks
-Interlink: a program that transfers files between computers
-An antivirus facility that can remove more than 800 viruses from your system
-Improved extended commands
-A diagnostic facility: a program that gathers and displays technical information about your system
-Power: a program that conserves battery power when applications and hardware devices are idle
Parts of DOS
DOS is made up of several programs: a command processor, an input/output system and several
utilities. DOS is normally supplied on several magnetic disk media. The first diskette contains
the command processor and the other necessary system files for starting up the system.
The other diskettes are supplementary diskettes that contain the various utility programs.
MSDOS.SYS, IO.SYS and the command processor/shell (also known as
COMMAND.COM, on disk #1) are the only files needed to start up (or boot) the computer
system.
WINDOWS OPERATING SYSTEM
Definition of Windows
Windows is an Operating System that supervises other programs in the computer. Windows provides
a user friendly graphic environment, referred to as Graphical User Interface (GUI) for the user. An
interface is the common boundary between the user and the computer. Its features influence the
effectiveness of the user on the system.
Features/Benefits of Windows
Windows provides an easy to use graphical user interface (GUI) to run our programs and
applications.
Windows allows us to run more than one task or program at the same time. This is called multi-
tasking. For example, you can be typing your document while listening to your favourite CD playing in
the CD-ROM drive.
Windows allows us to use more than one input device to work with. You can use the keyboard, a
mouse or a joystick.
Windows allows us to add new equipment to our computer without switching it off. We can add a
printer, CD-ROM, etc. and the computer will recognize it automatically. This is called Plug and Play
capability.
Windows allows us to save our files with names we like. These names can be as long as 255 characters,
contrary to DOS, which allowed a maximum of eight characters for filenames.
Enhanced backup and restore functionality supports more drives and the latest hardware.
Application loading, system startup and shutdown times are faster.
Support for DVD and digital audio delivers high-quality digital movies and audio directly to your TV or
PC monitor.
Windows supports Microsoft Internet Explorer, which includes Outlook Express, a new e-mail client and
collaboration tool. The Active Desktop interface puts Internet and intranet pages directly on a user’s
desktop.
A brief history of Windows systems
provides pre-emptive scheduling, threads and memory protection.
95 1995 Also known as Chicago.
Upgrade from, and compatible with, 3.1, but with support for 32-bit
applications and ‘MS-DOS free’.
Usable on smaller machines than NT.
Support for OLE 2.
New design of user interface with more object-oriented features.
NT 4.0 1996 Introduction of the new-look Windows 95-style user interface.
Incorporates support for the Internet and DCOM.
98 1998 Windows 98 introduced
UNIT 4
COMPONENTS OF AN OPERATING SYSTEM
An O.S. is primarily a provider and manager of machine resources: processors, main memory,
secondary storage, and I/O devices. Access to these resources is centralized and controlled by the various
modules of the system. In addition, the O.S. provides other services such as the user interface, data security,
etc. The O.S. structure, or shell, can therefore be illustrated thus:
(c) Memory management
It is concerned with the management of RAM. It includes the allocation of RAM for various
purposes, background program priorities and virtual memory systems.
Its operations include:
Keeping track of which parts of memory are in use and which are free
Deciding which processes are to be allocated memory
Allocating memory
De-allocating a portion of memory when it is no longer in use.
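These four operations can be sketched as a toy first-fit allocator; the class and names below are illustrative, not part of any real O.S.:

```python
# Toy first-fit memory manager illustrating the four operations above.
class MemoryManager:
    def __init__(self, size):
        self.size = size
        self.allocated = {}  # job name -> (start, length)

    def free_blocks(self):
        # Keeping track of which parts of memory are in use and which are free.
        blocks, pos = [], 0
        for start, length in sorted(self.allocated.values()):
            if start > pos:
                blocks.append((pos, start - pos))
            pos = start + length
        if pos < self.size:
            blocks.append((pos, self.size - pos))
        return blocks

    def allocate(self, name, length):
        # Deciding where to allocate (first fit), then allocating the memory.
        for start, free_len in self.free_blocks():
            if free_len >= length:
                self.allocated[name] = (start, length)
                return start
        return None  # no hole large enough

    def deallocate(self, name):
        # De-allocating the portion of memory when it is no longer in use.
        self.allocated.pop(name, None)

mm = MemoryManager(100)
a = mm.allocate("job1", 40)   # placed at address 0
b = mm.allocate("job2", 30)   # placed at address 40
mm.deallocate("job1")
c = mm.allocate("job3", 20)   # first fit reuses the freed hole at 0
```

Real memory managers add priorities and virtual memory on top of this basic bookkeeping.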
Functions
To provide control from job to job or program to program;
this is enhanced by the availability of job control languages (JCL).
specialize in this activity alone. Most users and system programmers use such terms as system
commands and DOS commands when referring to instructions that tell the O.S. what to do.
Job control functions include executing programs on demand and batch command files for the
automatic execution of O.S. functions.
e.g.
In an old mainframe system (IBM 360/370), the JCL for a Fortran compilation included the following:
//job s426 name = Ayo, O. Dept=computer
// option link
// exec Fortran
FORTRAN program
// exec linked
// exec
/* Data
/&
The job statement identifies the user’s job and its requirements to the system.
The JCL for GWBASIC (a microcomputer-based BASIC interpreter) includes commands such as:
load
list
run
save
Sometimes, a sequential series of system commands is stored as a separate file for execution as a program.
Such a file of system commands is known as a batch file, .exe file, shell program or JCL program.
Batch files are very useful for setting up memory configurations, I/O device identifiers, and other
housekeeping chores that the computer operator must perform regularly. They are also used to
establish a sequence of application programs to run.
(g) Transient utilities – utilities on the system disks: COPY, FORMAT, DISKCOPY, etc.
(ii) Interpreter: an interpreter translates a program written in a high level language one statement at a time, i.e.
completely translating and executing each instruction before going on to the next statement.
(iii) Assembler: a program which translates a source program written in an assembly
language into a machine code object program.
Compiler / Interpreter
1 Translates the whole program at once / Translates one statement at a time
2 Execution is faster / Execution is slower
3 Harder to use and learn / Easy to use and learn
4 All errors are given at once / Errors are given one at a time
(b) Application Packages
These are ready-made programs designed in a standard way for applications which are common to a
group of users. They are either:
tailor made (developed by a team of computer people) or
bought off the shelf.
The disk on which the O.S. resides is usually called the system resident device, sys-res or DOS-res.
This disk is usually a diskette or the hard disk. Not all programs making up the shell are available in
RAM at once; the programs available in RAM are called resident or internal system programs. They
may be executed by the user directly or through application programs. The programs that are not
available in RAM are called transient or external system programs (transients) and have to be loaded
into RAM prior to their execution.
More recently, a layer for the user interface was introduced between the basic functions of the O.S.
and the translators/application packages layer. The user interface is the user’s gateway into the
computer, enabling the required human-computer interaction to take place.
MODULE TWO
PROCESS MANAGEMENT 1
UNIT 1
BASIC CONCEPT
1.1 INTRODUCTION
The concept of a process is central to the study of a modern operating system. It was first used by the
designers of Multics in the 1960s. Since then, process (which is used interchangeably with task) has been
given many definitions.
Definition of a Process
a program in execution
that which a processor executes
that unit of code (i.e. program) that needs a processor
the entity to which processors are assigned
the animated spirit of a procedure
the locus of control of a procedure in execution
that which is manifested by the existence of a process control block (the dispatchable unit).
Many other definitions have been given and there is no universally agreed definition, but the
‘program in execution’ concept seems to be the most frequently used.
When a user initiates a program, the O.S. creates a process to represent the execution of the
program. The process thus created consists of the machine code image of the program in
memory, the Process Control Block (PCB) structure (discussed below) and possibly other structures
used to manage the process during its lifetime.
The processor at any instant can only be executing one instruction from one program, but several
processes can be sustained over a period of time by assigning the processor to each process in turn;
a process not currently assigned the processor becomes temporarily inactive. The process, when started,
has complete control of the processor until either:
(i) the process issues an I/O request by means of a system call, or
(ii) an interrupt occurs.
Each process has a data structure called the process control block (PCB), which is created by the O.S.
This block contains the following information that helps the O.S. locate all key information about
the process:
the current state of the process
the process identification
the priority level of the process
pointers to locate the process in memory
pointers to allocated resources
a register save area
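As a sketch, the PCB fields listed above can be grouped into a simple record; the field names below are hypothetical, and real systems store many more:

```python
from dataclasses import dataclass, field

# A toy Process Control Block mirroring the information listed above.
@dataclass
class PCB:
    pid: int                    # process identification
    state: str = "READY"        # current state of the process
    priority: int = 0           # priority level of the process
    memory_base: int = 0        # pointer locating the process in memory
    resources: list = field(default_factory=list)   # allocated resources
    registers: dict = field(default_factory=dict)   # register save area

pcb = PCB(pid=42, priority=3)
pcb.registers = {"pc": 0x100, "sp": 0x7FF0}  # saved when an interrupt occurs
```

The register save area is what lets the O.S. restart the process later exactly where it was interrupted.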
High level scheduling (long term or job scheduling) (HLS) decides whether a new job should be
admitted into the system or not. It is sometimes known as admission scheduling. It was mainly useful in
older systems.
Medium level scheduling (or intermediate scheduling) is concerned with the decision to
temporarily remove a process from the system (in order to reduce the system load).
Low level scheduling (short term or processor scheduling) decides which ready process is to be
assigned to the processor. This level is often called the dispatcher, but that term more accurately
refers to the actual activity of transferring control to the selected process.
UNIT 2
PROCESS STATE
2.1 INTRODUCTION
A process goes through a series of discrete states during its lifetime. In a uniprocessor system (i.e. a
mono- or multiprogramming system with a single CPU), only one process can be running at any
instant of time. Several processes may be ready while many may be blocked.
This is illustrated in a 3-state model referred to as the process state diagram:
[Process state diagram: a new process enters the READY state via the H.L.S.; dispatch moves it from READY to RUNNING; time out moves it from RUNNING back to READY; an I/O wait moves it from RUNNING to BLOCKED; I/O completion moves it from BLOCKED to READY; a RUNNING process may also terminate.]
A process is in the READY state when it could use a CPU if one were available.
A process is in the RUNNING state if it currently has the CPU.
A process is in the BLOCKED state if it is waiting for an event to happen (e.g. I/O completion)
before it can proceed.
The following state transitions are therefore possible from the model above:
Dispatch (process name): ready → running
Time out (process name): running → ready
Wake up (process name): blocked → ready
Block (process name): running → blocked
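The four transitions above can be captured in a small lookup table; a minimal sketch, with event names taken from the list:

```python
# The 3-state model as a transition table: (state, event) -> new state.
TRANSITIONS = {
    ("READY",   "dispatch"): "RUNNING",
    ("RUNNING", "timeout"):  "READY",
    ("RUNNING", "block"):    "BLOCKED",
    ("BLOCKED", "wakeup"):   "READY",
}

def move(state, event):
    new_state = TRANSITIONS.get((state, event))
    if new_state is None:
        raise ValueError(f"illegal transition: {event} from {state}")
    return new_state

s = "READY"
s = move(s, "dispatch")   # READY -> RUNNING
s = move(s, "block")      # RUNNING -> BLOCKED (I/O wait)
s = move(s, "wakeup")     # BLOCKED -> READY (I/O completion)
```

Any pair not in the table, e.g. dispatching a BLOCKED process, is an illegal transition and raises an error.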
The only state transition initiated by the user process itself is block; the other three transitions are
initiated by entities external to the process.
An example of the life cycle of a process in UNIX is given below:
We assume that there are two processes involved:
(a) User A arrives and, using the shell command interpreter, types in a program name, say, SOLVIT.
(b) The shell attempts to find the program and, if successful, the program code is loaded and a
system call is used to generate a process corresponding to the execution of the SOLVIT program.
(c) In order to physically represent the process within the computer, the O.S creates a data structure
called process control block (PCB) in memory.
(d) The process SOLVIT will now run; the process is said to be in the RUNNING state.
(e) After a while, SOLVIT needs to read some data from the hard disk (or diskette) and issues an
appropriate system call. Since it will now have to wait until the file management subsystem complies
with its request, the process is unable to continue. The process is now in the BLOCKED state.
(f) In the meantime, user B wants to run a program, say MYPROG, and types a suitable command. A new
process is created for MYPROG and, since SOLVIT is currently idle, execution begins with MYPROG,
which is now RUNNING.
(g) The I/O delay which is blocking SOLVIT now ends and SOLVIT wants to restart. However, it cannot,
because MYPROG is using the processor. SOLVIT is now said to be in the READY state. Processes in
the READY state are held in a queue and dealt with using various scheduling schemes, which will be
discussed later.
(h) The O.S. scheduler now decides that MYPROG has had enough processor time and moves MYPROG to
the READY queue. Note that MYPROG becomes READY, not BLOCKED, since it did not issue an I/O
request.
(i) SOLVIT is restarted and enters the RUNNING state once more.
(j) This switching between active processes and waiting for I/O transfers continues throughout
the lifetime of the processes. Usually there are more than two processes competing for the processor, but
the same general principles apply.
(k) Eventually SOLVIT completes its task and terminates. It leaves the RUNNING state and disappears from
the system.
Create a process
Destroy a process
Suspend a process
Resume a process
Change a process’s priority
Block a process
Wake up a process
Dispatch a process
Processes could also be initiated by a user process, such that a single program activated by a user could
eventually result in several separate processes running simultaneously.
(c) Suspend a process
When a process is suspended, it cannot proceed until another process resumes it. Suspension is an
important operation and has been implemented in a variety of ways on different systems.
Suspensions normally last for brief periods of time. They are often performed by the O.S. to remove
certain processes temporarily during a peak loading situation. For long-term suspension, the process’s
resources and memory space are freed (this depends on the nature of the resource).
When a process is suspended, it becomes dormant until it is resumed by the system or the user. A
process can be suspended for a number of reasons:
(a) The most significant: the process is swapped out of memory by the memory management system
in order to free memory for other processes (this decision is taken by the scheduling system).
(b) The process is suspended by the user, for example during debugging, to
investigate a partial result of the process.
(c) Some processes are designed to run periodically, e.g. to monitor system usage.
A process can be suspended while in one of the following states: READY, RUNNING or
BLOCKED. This gives rise to two other states, namely READY SUSPENDED and BLOCKED
SUSPENDED. A RUNNING process which is suspended becomes READY SUSPENDED.
[Process state diagram (5-state model): ENTRY leads to READY; dispatch moves READY to RUNNING; time out moves RUNNING to READY; an I/O wait moves RUNNING to BLOCKED; I/O completion moves BLOCKED to READY; suspend/resume move READY to and from READY SUSPENDED, and BLOCKED to and from BLOCKED SUSPENDED; a suspended RUNNING process goes to READY SUSPENDED; I/O completion moves BLOCKED SUSPENDED to READY SUSPENDED; a RUNNING process may terminate.]
The following transitions are therefore possible from the model above.
A ready process may be suspended by another process: SUSPEND (process name): ready → ready-suspended.
A ready-suspended process may be made ready by another process: RESUME (process name): ready-suspended → ready.
A blocked process may be suspended by another process: SUSPEND (process name): blocked → blocked-suspended.
A blocked-suspended process may be resumed by another process: RESUME (process name): blocked-suspended → blocked.
A running process may be suspended by another process: SUSPEND (process name): running → ready-suspended.
In addition, all the transitions discussed in the 3-state model remain:
A ready process may be dispatched: DISPATCH (process name): ready → running.
A running process may be timed out: TIME OUT (process name): running → ready.
A running process may be blocked: BLOCK (process name): running → blocked.
A blocked process may complete its I/O: COMPLETION (process name): blocked → ready.
A blocked-suspended process may complete its I/O: COMPLETION (process name): blocked-suspended → ready-suspended.
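Extending the 3-state transition table of Unit 2 with the suspend/resume transitions gives ten legal moves; a sketch:

```python
# The 5-state model as a transition table: (state, event) -> new state.
TRANSITIONS = {
    ("READY",             "dispatch"):   "RUNNING",
    ("RUNNING",           "timeout"):    "READY",
    ("RUNNING",           "block"):      "BLOCKED",
    ("BLOCKED",           "completion"): "READY",
    ("READY",             "suspend"):    "READY-SUSPENDED",
    ("READY-SUSPENDED",   "resume"):     "READY",
    ("BLOCKED",           "suspend"):    "BLOCKED-SUSPENDED",
    ("BLOCKED-SUSPENDED", "resume"):     "BLOCKED",
    ("RUNNING",           "suspend"):    "READY-SUSPENDED",
    ("BLOCKED-SUSPENDED", "completion"): "READY-SUSPENDED",
}

# A running process is suspended, later resumed and re-dispatched:
s = "RUNNING"
s = TRANSITIONS[(s, "suspend")]    # RUNNING -> READY-SUSPENDED
s = TRANSITIONS[(s, "resume")]     # READY-SUSPENDED -> READY
s = TRANSITIONS[(s, "dispatch")]   # READY -> RUNNING
```

Note there is no direct move from a suspended state to RUNNING: a suspended process must first be resumed, then dispatched.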
UNIT 3
INTERRUPTS AND THEIR PROCESSING
DO 10 I = 1, N, 2
10 CONTINUE
(iii) Arithmetic errors, e.g. an attempt to divide a number by zero.
An I/O channel ends its job, i.e. an I/O channel end interrupt occurs when the channel finishes its
job before the device does, which is normal. E.g. the output channel transfers data from memory to
the printer’s buffer; the channel will finish transferring the last batch of data before the
printer finishes printing the job.
A machine check interrupt is generated by the malfunctioning of the hardware.
Typical events are:
the screen showing ‘fixed disk controller bad’
the VDU showing ‘keyboard bad’
The sequence of events that occurs when an interrupt occurs (interrupt processing) is:
(i) The processor stops executing the current job or program.
(ii) The O.S. saves the current state of the CPU (i.e. of the interrupted process) in the OLD PSW.
(iii) Control is transferred to the interrupt handler (I.H.).
(iv) The interrupt becomes the current process. The appropriate I.H. is selected; the address of
this I.H. is stored in the NEW PSW.
(v) The NEW PSW is loaded into the CURRENT PSW (the CURRENT PSW now
contains the address of the appropriate I.H.).
(vi) The I.H. analyses and processes the interrupt (the problem is solved).
(vii) The I.H. signals the system at the completion of its task.
(viii) The OLD PSW is reloaded into the CURRENT PSW and the next instruction of the
interrupted program is executed.
This implies that there is only one CURRENT PSW, though its content changes periodically. The
action of holding the current state of a process which has been temporarily stopped and starting
another process is called context switching (or a context change).
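The PSW exchange can be mimicked with three named slots; the values below are illustrative strings, not real machine addresses:

```python
# Simulating the OLD/NEW/CURRENT PSW exchange during interrupt processing.
psw = {
    "CURRENT": "user-program",       # the interrupted process
    "OLD": None,
    "NEW": "io-interrupt-handler",   # address of the appropriate I.H.
}

def handle_interrupt(psw):
    psw["OLD"] = psw["CURRENT"]      # save the interrupted process's state
    psw["CURRENT"] = psw["NEW"]      # the handler becomes current
    # ... the I.H. analyses and processes the interrupt here ...
    psw["CURRENT"] = psw["OLD"]      # reload the OLD PSW; the program resumes
    return psw

handle_interrupt(psw)
```

There is only ever one CURRENT slot; its content changes on every interrupt, which is exactly the context switch described above.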
MODULE THREE
PROCESS MANAGEMENT 2
UNIT 1
SCHEDULING
Objectives
The overall scheduling is intended to meet some objectives in terms of the system’s performance and
behaviour. The scheduling system should:
Maximize the system throughput.
Be ‘fair’ to all users. This does not mean all users must be treated equally, but consistently, relative
to the importance of the work being done.
Provide tolerable response (for on-line users) or turn-around time (for batch users).
Degrade performance gracefully. If the system becomes overloaded, it should not ‘collapse’, but
avoid further loading (e.g. by inhibiting any new jobs or users) and/or temporarily reduce the level of
service (e.g. response time).
Be consistent and predictable. The response and turn-around time should be relatively stable from
day to day.
I/O or CPU bound; i.e. whether the job uses predominantly I/O time or processor time. This criterion is
often of consequence because of the need to balance the use of the processor and the I/O system. If the
processor is absorbed in CPU-intensive work, it is unlikely that the I/O devices are being serviced
frequently enough to sustain maximum throughput.
Resources used to date; e.g. the amount of processor time already consumed.
Waiting time to date; i.e. the amount of time spent waiting for service so far
It can be seen that some of these factors are ‘static’ characteristics which can be assessed prior to
commencement of the process’s execution. Of particular interest in this respect is the notion of a
priority. This is a value which can be assigned to each process and indicates the relative ‘importance’ of
the process, such that a high priority process will be selected for execution in preference to a lower
priority one. Scheduling on the basis of a single priority value enables rapid decisions to
be made by the scheduler. An initial priority can be assigned to each process; in some schemes, the
priority is static and is used as a basis for scheduling throughout the life of the process, while in
other schemes the priority is dynamic, being modified to reflect the changing importance of the
process. The priority can be supplied by a user or could be derived from the characteristics of the job,
or both.
UNIT 2
SCHEDULING SCHEMES
New jobs entered into the system will be put into a queue awaiting acceptance by the HLS. The
principal control which the HLS exercises is ensuring that the computer is not overloaded, in the
sense that the number of active processes (the degree of multiprogramming) is below a level
consistent with efficient running of the system.
If the system loading is considered to be at its acceptable maximum, new processes may only be
admitted when a current process terminates.
If the loading level is below maximum, a waiting process will be selected from the queue on the
basis of some selected algorithm. This may be a simple decision such as First-Come-First-Served
(FCFS) or it may attempt to improve the performance of the system using a more elaborate scheme.
A possibility in this respect is known as shortest job first (SJF) which selects the waiting job which
has the shortest estimated run time.
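The two admission rules just described, FCFS and SJF, can be contrasted in a few lines; the job names and estimated times below are invented for illustration:

```python
# Admission queue entries: (job name, estimated run time in minutes).
queue = [("j1", 9), ("j2", 2), ("j3", 5)]

fcfs_choice = queue[0]                        # First-Come-First-Served
sjf_choice = min(queue, key=lambda j: j[1])   # shortest estimated run time
```

FCFS admits j1 simply because it arrived first; SJF admits j2 because its estimated run time (2 minutes) is the shortest in the queue.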
3. LOW LEVEL SCHEDULING (LLS)
The low level scheduler (LLS) is the most complex and significant of the scheduling levels.
Whereas the high and medium level schedulers operate over time scales of seconds or minutes, the
LLS makes critical decisions many times every second. The LLS is invoked whenever the current
process relinquishes control, which, as we have seen, will occur when the process calls for an I/O
transfer or some other interrupt arises. A number of different policies have been devised for use in
low level schedulers, each of which has its own advantages and disadvantages. These policies can be
categorized as either preemptive or non-preemptive.
A preemptive scheme will incur greater overheads, since it will generate more context switches, but it is
often desirable in order to prevent one (possibly long) process from monopolizing the processor and to
guarantee a reasonable level of service for all processes. In particular, a preemptive scheme is
generally necessary in an on-line environment and absolutely essential in a real-time one.
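As one concrete preemptive policy (round robin, not detailed in this unit), each READY process runs for at most one time quantum before being timed out to the back of the queue; a sketch with made-up jobs:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, run time) pairs. Returns completion order."""
    queue = deque(jobs)
    done = []
    while queue:
        name, remaining = queue.popleft()      # dispatch
        remaining -= min(quantum, remaining)   # run for up to one quantum
        if remaining == 0:
            done.append(name)                  # termination
        else:
            queue.append((name, remaining))    # time out: back of the queue
    return done

order = round_robin([("A", 3), ("B", 1), ("C", 2)], quantum=1)
# short jobs B and C finish before the longer job A
```

Every pass through the loop is a context switch, which is the extra overhead a preemptive scheme pays for its fairness.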
Cooperative scheduling
Earlier versions of Windows (up to version 3.11) appear to provide non-preemptive scheduling. In
fact, the technique used is rather primitive; the responsibility for releasing control is placed in the
hands of the application programs and is not managed by the operating system. That is, each
application, when executing and therefore holding the processor, is expected periodically to
relinquish control back to the Windows scheduler. The operation is generally incorporated into the
event processing loop of each application; if there are no event messages for that application
requiring action, control is passed back to the scheduler. This mode of working is called cooperative
scheduling. Its main disadvantage is that the operating system does not have overall control of the
situation and it is possible for the whole computer to freeze. Note that it differs from a conventional
non-preemptive system in that, in the latter, the process will lose control as soon as it requires an I/O
operation. At this point, the operating system regains control.
If we make the notionally fair assumption that the waiting time for a process should be
commensurate with its run time, then the ratio of waiting time to run time should be about the same
for each job. However, we can see from column (c) above that this ratio for small jobs 3 and 4 is
very large, while being reasonable for long jobs. The above example is admittedly somewhat
contrived, but it does indicate how the method can be unfair to short processes.
Another problem with FCFS is that if a CPU-bound process gets the processor, it will run for
relatively long periods uninterrupted, while I/O bound processes will be unable to maintain I/O
activity at a high level. When an I/O-bound process eventually gets the processor, it will soon incur
an I/O wait, possibly allowing a CPU- bound process to re-start. Thus the utilization of the I/O
devices will be poor.
FCFS is rarely used on its own but is often employed in conjunction with other methods.
Merits/Demerits
It favors long jobs over short ones.
If a CPU-bound process gets the processor, it will run for relatively long periods uninterrupted, while
I/O-bound processes will be unable to maintain high I/O activity.
It is not useful in a time sharing environment because it cannot guarantee good response times.
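The unfairness of FCFS to short jobs can be reproduced in a few lines; assume four jobs arrive together, in the order shown, with the run times below (numbers invented for illustration):

```python
# FCFS: each job's waiting time is the total run time of the jobs before it.
run_times = [20, 15, 1, 1]   # jobs 1-4 in order of arrival

ratios, elapsed = [], 0
for run in run_times:
    ratios.append(elapsed / run)   # waiting-time / run-time ratio
    elapsed += run
# small jobs 3 and 4 wait 35 and 36 times their run time;
# job 2 waits only about 1.3 times its run time
```

The two one-minute jobs wait 35 minutes behind the long jobs, which is exactly the disproportionate wait-to-run ratio discussed above.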
This appears to be much more equitable, with no process having a large wait-to-run-time ratio. The
example does not reveal a difficulty in the scheme, however: a long job in the queue may be delayed
indefinitely by a succession of smaller jobs arriving in the queue. In the example of table 2, it is
assumed that the job list is constant but, in practice, before time 3 is reached when job 5 is due to
start, another job of length, say, 10 minutes could arrive and be placed ahead of job 5. This queue-
jumping effect could recur many times, effectively preventing job 5 from starting at all; this situation
is known as starvation.
SJF is more applicable to batch working, since it requires that an estimate of run time be available,
which could be supplied in the job control language (JCL) commands for the job. It is possible for the
operating system to derive a substitute measure for interactive processes by computing an average of
run durations (i.e. periods when the process is in the RUNNING state) over a period of time. This
measure is likely to indicate the amount of time the process will use when it next gets the
processor.
Merits/Demerits
A long job may be delayed indefinitely by a succession of smaller jobs arriving in the queue.
It is more applicable to batch jobs, since this method requires that an estimate of run time be
available, which could be supplied in the JCL commands for the job.
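The SJF selection and the resulting waiting times can be sketched as follows. This is a minimal illustrative sketch, not production scheduler code; the function names and the sample jobs are invented for this example:

```python
def sjf_order(jobs):
    """Order (name, run_time) jobs shortest-first, as SJF would select them."""
    return sorted(jobs, key=lambda j: j[1])

def waiting_times(ordered):
    """Waiting time incurred by each job when run in the given order."""
    waits, elapsed = {}, 0
    for name, run in ordered:
        waits[name] = elapsed   # a job waits for all jobs ahead of it
        elapsed += run
    return waits

# A long job submitted first still waits for the two shorter jobs under SJF.
jobs = [("J1", 8), ("J2", 1), ("J3", 3)]
order = sjf_order(jobs)            # [("J2", 1), ("J3", 3), ("J1", 8)]
waits = waiting_times(order)       # {"J2": 0, "J3": 1, "J1": 4}
```

Note that if short jobs keep arriving, `sjf_order` would keep placing them ahead of J1, which is exactly the starvation effect described above.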
Merits/Demerits
It favors short jobs better than SJF. Since a currently running long job could be ousted (put cut) by a
new shorter one.
The danger of starvation of long jobs also exists in this scheme. The implementation of SJF requires
an estimate of total run time and measurement of elapsed run time.
(d) Highest Response Ratio Next (HRN)
This scheme is derived from the SJF method, to reduce SJF’s bias against long jobs and to avoid the
danger of starvation. In effect, HRN derives a dynamic priority value based on the estimated run time
and the incurred waiting time. The priority for each process is calculated from the formula:
Priority, P = (time waiting + run time) / run time
The process with the highest priority value will be selected for running. When a process first appears
in the READY queue, the ‘time waiting’ will be zero, and hence P will be equal to 1 for all processes.
After a short period of waiting however, the shorter jobs will be favored; e.g. consider two jobs A
and B, with run times of 10 and 50 minutes respectively. After each has waited 5 minutes, their
respective priorities are:
A: P = (5 + 10)/10 = 1.5
B: P = (5 + 50)/50 = 1.1
On this basis, the shorter job A is selected. Note, however, that if A had just started (wait time = 0),
B would be chosen in preference to A. As time passes, the wait time becomes more significant. If B
had been waiting for, say, 30 minutes, then its priority would be P = (30 + 50)/50 = 1.6.
Merits/Demerits
A job cannot be starved, since ultimately the effect of the wait in the numerator of the priority
expression will predominate over short jobs with a smaller wait time.
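The HRN formula and selection rule above can be sketched directly in code. This is an illustrative sketch; the function names are invented for this example:

```python
def hrn_priority(waiting, run):
    """HRN priority: P = (time waiting + run time) / run time."""
    return (waiting + run) / run

def hrn_select(jobs):
    """jobs: list of (name, waiting, run); return the name with highest priority."""
    return max(jobs, key=lambda j: hrn_priority(j[1], j[2]))[0]

# The worked example from the text: after both wait 5 minutes,
# the short job A (run 10) beats the long job B (run 50)...
assert hrn_select([("A", 5, 10), ("B", 5, 50)]) == "A"
# ...but if A has just started and B has waited 30 minutes, B wins.
assert hrn_select([("A", 0, 10), ("B", 30, 50)]) == "B"
```

The growing numerator is what prevents starvation: a waiting job's priority rises without bound.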
(e) Round Robin (RR)
In the Round Robin scheme, a process is selected for running from the READY queue in FIFO
sequence. However, if the process runs beyond a certain fixed length of time, called the time
quantum, it is interrupted and returned to the READY queue. In other words, each active process is
given a ‘time slice’ in rotation. The RR technique is illustrated in figure 6 below:
[Figure 6: Round Robin scheduling. Processes are dispatched from the READY queue to the CPU; on timeout a process returns to the end of the READY queue.]
The timing required by this scheme is obtained by using a hardware timer which generates an
interrupt at pre-set intervals. RR is effective in timesharing environments, where it is desirable to
provide an acceptable response time for every user and where the processing demands of each user
will often be relatively low and sporadic. The RR scheme is preemptive, but preemption occurs only
by expiry of the time quantum.
By its nature, RR incurs a significant overhead since each time quantum brings a context switch.
This raises the question of how long the time quantum should be. As is often the case, this decision
has to be a compromise between conflicting requirements. On the one hand, the quantum should be as
large as possible to minimize the overheads of context switches, while on the other hand, it should
not be so long as to reduce the users’ response times. It is worth noting that if the quantum size is
increased sufficiently, the scheduling approaches FCFS. In the FCFS scheme, context switches will
take place when the current process cannot continue due to issuing an I/O request. If the time
quantum in the RR scheme is comparable in length to the average time between I/O requests, then
the two schemes will be performing in a similar fashion. Ideally, in an interactive environment, most
processes will be I/O bound, so that they will be incurring I/O waits, and hence yielding the
processor, before expiry of the time quantum. This indicates the general order of size for the time
quantum, but depends in a somewhat unpredictable way on the particular loading and job mix on the
system. In practice, the quantum is typically of the order of 10 to 20 milliseconds.
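The rotation of time slices described above can be sketched as a small simulation. This is an illustrative sketch only (the function name and jobs are invented); a real scheduler is driven by the hardware timer interrupt rather than a loop:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate RR scheduling; jobs maps name -> total run time.
    Returns the order in which jobs complete."""
    queue = deque(jobs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()   # dispatch head of READY queue
        if remaining <= quantum:
            order.append(name)              # finishes within its time slice
        else:
            # timeout: pre-empted and returned to the end of the queue
            queue.append((name, remaining - quantum))
    return order

# With a quantum of 2, the shortest job C finishes first.
order = round_robin({"A": 3, "B": 5, "C": 2}, quantum=2)  # ["C", "A", "B"]
```

Each pass around the loop corresponds to one context switch, which is the overhead the quantum size must balance against response time.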
Merits/Demerits
i. It is effective in a time-sharing environment, where it is desirable to provide an acceptable response
time for every user and where the processing demands of each user will often be relatively low and
sporadic.
ii. It incurs a significant overhead, since each time slice brings a context switch. However, the overhead
is kept low by an efficient context-switching mechanism and by providing adequate storage for the
processes to reside in RAM at the same time.
(f) Multilevel Feedback Queues (MFQ)
[Figure 4: A multilevel feedback queue. New processes enter the level 1 (highest priority) FIFO queue; on timeout a process moves down to the next level's FIFO queue; the lowest level (level 4) queue is serviced round-robin.]
Figure 4 above shows a typical set-up for an MFQ system. It consists of a number of separate queues of
entries which represent active processes. Each queue represents a different priority, with the top queue
being the highest priority and lower queues successively lower priorities. Within each queue, the queued
processes are treated in a FIFO fashion, with a time quantum being applied to limit the amount of
processor time given to each process. Processes in a lower-level queue are allocated the processor
only when all the queues above that queue are empty. A new process enters the system at the end of the
top queue and will eventually work its way to the front and be dispatched. If it uses up its time
quantum, it moves to the end of the queue at the next lower level, with the exception of the lowest
queue, where a round-robin scheme applies (i.e. it simply moves to the end of that queue). If it
relinquishes the processor due to a wait condition, the process leaves the queuing system. As a
process uses more and more CPU time, it will migrate down the levels, thus obtaining a reducing
level of access.
This arrangement militates to some extent against long processes, and starvation is a possibility.
Various modifications exist to the basic scheme which attempt to meet some of its problems. One
option is to use an increasing time quantum size on lower levels, so that when a lower-queue process
does get a chance it holds the processor for a longer period. Also, some systems promote processes
to a higher-level queue when they have spent a certain amount of time in a queue without being
serviced.
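The demotion behaviour described above can be sketched as a simple simulation. This is a minimal sketch under simplifying assumptions (no I/O waits, jobs known in advance, invented function name), not a real dispatcher:

```python
from collections import deque

def mfq_run(jobs, quanta):
    """Multilevel feedback queue sketch.
    jobs maps name -> total run time; quanta[i] is level i's time quantum.
    Returns (name, level it finished at) in completion order."""
    levels = [deque() for _ in quanta]
    for item in jobs.items():
        levels[0].append(item)          # new processes enter the top queue
    finished = []
    while any(levels):
        # service the highest-priority non-empty queue
        i = next(i for i, q in enumerate(levels) if q)
        name, remaining = levels[i].popleft()
        if remaining <= quanta[i]:
            finished.append((name, i))
        else:
            # used up its quantum: demote one level; the lowest level
            # is round-robin, so a process there re-joins the same queue
            nxt = min(i + 1, len(quanta) - 1)
            levels[nxt].append((name, remaining - quanta[i]))
    return finished

# A short job completes at level 0; a longer one migrates down to level 1.
result = mfq_run({"short": 1, "long": 5}, quanta=[2, 4])
```

The `min(i + 1, ...)` clamp is what implements the "lowest queue is round-robin" rule from the text.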
Merit/Demerits
The scheme does not favor long jobs.
Starvation is a possibility.
PROCESSES IN WINDOWS
Current versions of Windows have very elaborate process management facilities. The major
features are:
every process is created as a single executing thread; the process can create additional threads
the scheduler operates over all threads.
In contrast to earlier Windows versions, each process has its own virtual address space, so that one
process cannot affect the memory space of another.
Like UNIX, OS/2 creates processes in a hierarchical fashion. When a process spawns another
process, the latter is considered a ‘child’ of the former and it inherits its environment. In contrast,
Windows 95 and NT maintain no formal parent-child relationship between processes, although the
environment is copied.
Tutorial Questions
1. What would be the effect of the system running too many I/O-intensive jobs?
Answer
The jobs can be sustained by relatively little processor activity, and the processor will be under-utilized.
3. What would be the effect, using the FCFS scheme, if the running process got stuck in an infinite
CPU loop?
Answer
The process, once running, would dominate the processor. Interrupts may occur (from I/O
devices, say) and will be serviced, but the process will be restarted thereafter. The process could be
stopped by a kill command.
MODULE FOUR
MEMORY MANAGEMENT
UNIT 1
REAL MEMORY MANAGEMENT
Introduction
The terms ‘memory’ and ‘storage’ have been used interchangeably in the literature. There are two
major types of storage, thus:
(a) primary storage (RAM, MAIN MEMORY)
(b) secondary storage (BACKING STORAGE)
The main memory is essential within the computer for various reasons:
to enable processes to exist,
to store instructions which are interpreted by the processor, and
to provide a work space and transient storage medium for various kinds of data (objects), such as O.S. data
(process tables, file description tables etc.), user program code and data, video storage space etc.
In systems with several levels of storage, a great deal of shuffling goes on, in which programs and data
are moved back and forth between the various levels. The shuffling consumes system resources, such as
CPU time, that would otherwise be used productively.
[Figure: the storage hierarchy: cache, primary and secondary storage.]
Cache storage was introduced in the 1960s. It is high-speed storage that is faster than primary storage.
It is also extremely expensive when compared with RAM, and therefore only relatively small caches
are used. Instructions that need to be processed many times (1000 or more) are moved to cache
memory. Historically, a number of different memory management techniques have been used, and
in the process of evolution each has been superseded by superior methods. To some extent, history has
repeated itself, in that many older techniques were resurrected for application in the microcomputers
which appeared in the late 1970s.
Note that although several processes may be active and hence occupying memory space, at any instant of
time only one instruction is being executed.
Storage Allocation
There are two broad types of storage allocation:
contiguous storage allocation
non-contiguous storage allocation
The earliest computing systems required contiguous storage allocation: each program had to occupy a
single contiguous block of storage locations.
In non-contiguous storage allocation, a program is divided into several blocks or segments that may
be placed throughout main storage in pieces, not necessarily adjacent to one another.
[Figure: single contiguous allocation: the operating system at the bottom of memory, a single user process above it, and the remainder unused.]
Merits/Demerits
Programs were limited to the size of the memory
Computing resources were generally wasted.
Protection facilities
A single boundary register in the CPU contained the highest-numbered address used by the O.S.
Each time a user program referred to a storage address, the boundary register was checked to
ascertain that the user program was not about to destroy the O.S. If the user tried to enter the O.S. area,
the instruction would be intercepted and the job terminated with an appropriate error message.
However, the user needs to access the O.S. for services such as I/O, for which the user is given a
specific instruction (called the SVC instruction) with which to request services.
Usefulness
In simple systems such as games computers.
Early MS-DOS operated this way.
[Figure: memory map showing three fixed partitions of 200K, 300K and 400K.]
The memory above is shown consisting of three areas of sizes 200K, 300K and 400K respectively,
each of which holds a process.
In practice, the number of partitions will be controlled by the system manager. This control would
depend on:
The amount of memory available and
The size of processes to be run.
The partitions would be set up with a range of partition sizes, so that a mixture of large and small
processes could be accommodated. Each partition would typically contain unused space, which might
be large when put together. The occurrence of wasted space is referred to as ‘internal fragmentation’.
The word ‘internal’ refers to wastage within the space allocated to a process.
Several processes reside in the memory and compete for system resources; for example, a job currently
waiting for I/O will yield the CPU to another job that is ready to perform calculations. Thus both I/O
and CPU operations can occur simultaneously. This greatly increases system throughput and
CPU/I/O utilization.
Merits/Demerits
Utilization of CPU and I/O devices are greatly improved.
System throughput (work/unit time) increases.
Multiprogramming requires more storage space than a single-user system.
The fixed partition sizes can prevent a process from running due to the unavailability of a sufficiently
large partition.
Internal fragmentation wastes space which, collectively, could accommodate another process.
Protection Facilities
Several bound (limit) registers are employed, one for each process. Each limit register contains the
addresses of the low and high boundaries of each fixed partition.
Usefulness
Employed in early multiprogramming computer systems such as the IBM 360 machine.
When a process terminates, the space it occupied is freed and becomes available for the loading of
a new process. As processes terminate and space is freed, the free spaces appear as a series of ‘holes’
between the active memory areas. The O.S. must attempt to load an incoming process into a space
large enough to accommodate it. It can often happen that a process cannot be started because none
of the holes is large enough, even though the total free space is more than the required size. This situation
is illustrated in the figure below, in which processes B and D have terminated. This distribution of the free
memory space is termed ‘external fragmentation’. The term ‘external’ refers to space outside any
process allocation.
[Figure: memory maps before and after coalescing. The O.S. occupies 0-100K, process A 100K-250K and process C 450K-750K; the holes left by terminated processes B and D (250K-450K and 750K-1000K) remain free.]
It will frequently happen that a process adjacent to one or more holes terminates and frees its
space allocation. This results in two or three adjacent holes, which can then be viewed and utilized as
a single hole, as indicated in the figure above. The process of merging adjacent holes to form a single
larger hole is called coalescing and is a significant factor in maintaining fragmentation within
usable limits.
Merit/Demerits
External fragmentation reduces the utilization of the memory.
Variable number of processes can be handled at any time.
Usefulness
It has been successfully used in many computer systems.
[Figure: memory map with the OS followed by process A (200K), process B (300K), process C (200K) and process D (400K).]
Overhead
Any memory management scheme will incur some operating overhead. In the case of the
variable partition scheme, the system must keep track of the position and size of each hole, taking
account of the effect of coalescing.
The first fit scheme would simply select the first item on the list, while the best fit and the worst fit
schemes would need to scan the full list before deciding.
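The three placement policies can be sketched as follows. This is an illustrative sketch; the function names and the hole list are invented for this example:

```python
def first_fit(holes, size):
    """Index of the first hole large enough for the request, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits (full scan), else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole, provided it fits (full scan), else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

# Free holes of 100K, 500K, 200K and 300K; a 210K request:
holes = [100, 500, 200, 300]
# first fit stops at the 500K hole; best fit picks 300K; worst fit picks 500K
```

Note how `first_fit` can return as soon as it finds a match, while the other two must examine every hole, which is exactly the scanning cost mentioned above.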
Overhead
It is clear that compaction has the desired effect of making the free space more usable by incoming
processes, but this is achieved at the expense of large-scale movement of current processes in memory. All
processes would be suspended while the reshuffle takes place, with attendant updating of process
context information such as load addresses. Such activity would be a major overhead in any system.
Merits/Demerits
Total free space is more usable for incoming processes.
It consumes system resources that could otherwise be used productively.
The system must stop everything when performing the compaction. This can result in long response times
for interactive users and could be devastating in real-time systems.
Compaction involves relocating processes in storage.
Reduced fragmentation.
With a rapidly changing process mix, it is necessary to compact frequently. The consumed system
resources might not justify the benefit of compaction.
Protection Facilities
Several limit registers are employed, one for each process. Each limit register is established when the
process is dispatched, as in the variable partition scheme.
In practice, the compaction scheme has been little used, due to the fact that its overhead and added
complexity tend to minimize its advantages over the non-compacted scheme.
5. Simple Paging
In a paged system, each process is divided into a number of fixed-size ‘chunks’ called pages, typically
4KB in length. The memory space is also viewed as a set of page frames of the same size.
The loading process now involves transferring each process page to some memory page frame.
The figure above shows that there are 3 free pages in memory which are available for use. Suppose that
process B terminates and releases its allocation of pages, giving us the situation in the figure below.
We now have two disconnected regions of free pages. However, this is not a problem in a paging
system, because allocation is on a page-by-page basis. The pages of processes held in memory frames
need not be contiguous or even in the correct order.
Assume that two processes require to be loaded: process D needs three pages and process E,
four pages. These are allocated to any free memory pages (fig. b). Paging reduces the problem of
fragmented free space, since a process can be distributed over a number of separate holes. After
a period of operation, the pages of active processes could become extensively intermixed,
producing (fig. c).
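Because pages need not occupy contiguous or ordered frames, every memory reference must be translated through the process's page table. The sketch below illustrates the idea, assuming 4KB pages as in the text (the function name and the sample page table are invented):

```python
PAGE_SIZE = 4096  # 4KB pages, as in the text

def translate(page_table, virtual_addr):
    """Map a virtual address to a physical address via the page table.
    page_table[p] is the frame number holding page p; frames may be
    scattered anywhere in memory and in any order."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# Page 0 of the process sits in frame 5, page 1 in frame 2, page 2 in frame 9.
page_table = {0: 5, 1: 2, 2: 9}
phys = translate(page_table, 4097)   # page 1, offset 1 -> frame 2, offset 1
```

The offset within a page never changes; only the page-to-frame mapping does, which is why scattered free frames are not a problem.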
Merits/Demerits
1. Paging alleviates the problem of fragmented free space, since a process can be distributed over a
number of separate holes.
2. Space utilization and consequently the system throughput are improved.
Usefulness
1. It is seldom used in practice; virtual memory systems are preferred. This is because, having got to the level of
sophistication required by these schemes, superior systems can be obtained with relatively little
effort.
6. Simple Segmentation
Paging achieves its objective by subdividing a process into a number of fixed-size chunks.
Segmentation subdivides a process into a number of variable-length chunks called segments.
Segmentation is similar to the variable partition allocation method, except that a process can be
loaded into several partitions (segments), solving the problem of allocation into available free space.
Segments can be of any length, up to a maximum value determined by the design of the system.
They can be positioned independently in the memory and therefore provide more efficient utilization of
free areas.
Under a paging system, the process subdivisions are physical entities, not related to the logical
structure of the process in any way. However, the segments in a segmentation scheme correspond to
the logical divisions of the process and are defined explicitly by the programmer.
Typically, the segments defined by a programmer would reflect the modular structure of the process:
e.g. data in one segment, each subroutine or a group of related subroutines in a number of code
segments. The programmer must be aware of the maximum segment size during design of the
segments.
Merits/Demerits
1. The full process is stored in memory.
2. The scheme is fully defined by the programmer's logical structure.
3. Some external fragmentation is possible.
UNIT 2
VIRTUAL MEMORY MANAGEMENT
Virtual memory is an extension of the main memory. It is storage space that does not physically exist
within main memory but is treated as if it did. The extended storage space is on devices such as diskettes,
hard disks, tapes, etc. This is needed to meet the users' various needs, because primary storage is expensive.
The benefits of virtual memory are obtained at some cost in system complexity. This is explained
further below:
1. Virtual Paging
When a new process is initiated, the system loader must load at least one page from secondary
storage into real memory, i.e. the page containing the execution start point for the process. This
operation is known as PAGE IN. When execution of the process commences, it will proceed
through subsequent instructions beyond the start point. This can continue as long as the memory
references generated by this page are also within the same page. However, after some time, the
references (addresses) generated will refer to pages outside real memory; a virtual address is
created. An interrupt is generated indicating that the requested page is not in memory.
Page Fault: a signal demanding the requested page which is not in RAM; hence, the term
demand paging is used for this technique. The system loader will try to oblige by loading the requested page
into a free memory page frame (PAGE IN), and execution can proceed. Through a series of page faults
generated this way, pages are accumulated in real memory. This subset is referred to as the resident set of
the process. When the process terminates, the O.S. releases all pages belonging to the process,
making them available for other processes.
Usually, there will be many processes competing for real memory space. Consequently, the available
real memory will become full of pages belonging to these processes. If a page fault then occurs, a
currently loaded page must be removed from its page frame back to secondary storage; this is
referred to as PAGE OUT, and the event is called page replacement.
Merits/Demerits
1. Minimal space wastage.
2. Large virtual address space
3. Page replacement scheme required.
4. Protection facilities
A hardware register holds the page table of the current process
Sharing
While it is possible for 2 or more processes to share real memory pages, this is rarely attempted,
since the content of the pages is generally unknown. One of the merits of a paging system is that it is
largely transparent to the programmer.
Page Replacement Policy
When a new page is required to be brought into memory, it may be necessary to remove one
currently in residence. When a page is removed from memory, it will be necessary to write it back to
secondary storage if it is ‘dirty’, i.e. it has been modified while in memory. The M bit in the page table is
used to indicate dirty pages.
The algorithms used to choose which page will be replaced are referred to as page replacement policies:
Least recently used (LRU): replace the page which has least recently been used
Not recently used (NRU): replace a page which has not been used during some
immediately preceding time interval
First-in-first-out (FIFO): replace the page which has been resident longest.
The Least Recently Used (LRU) policy selects for replacement the page whose time since last
reference is greatest. This would notionally require that a time stamp recording be made for a page
frame at the time of each reference. The selected page would then have the oldest time stamp. The
overhead of maintaining such a value would be considerable, as would the time taken to find the oldest
value.
In practice, a related but simpler policy is used: NRU. Each page frame has associated with it a
‘page referenced’ bit; at intervals, the operating system resets all of these bits to zero. A subsequent
reference to a page will set its page referenced bit to 1, indicating that this page has been used
during the current interval. The NRU policy simply selects for replacement any page with a page
referenced bit of zero.
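The referenced-bit mechanism can be sketched as follows. This is an illustrative sketch only (the function names are invented), with the bits held in a plain dictionary rather than hardware:

```python
def nru_victim(pages):
    """pages maps page -> referenced bit (0 or 1).
    NRU selects for replacement any page with bit 0."""
    for page, ref in pages.items():
        if ref == 0:
            return page
    return None  # every page was referenced this interval

def reset_bits(pages):
    """At the end of each interval the O.S. clears all referenced bits."""
    return {p: 0 for p in pages}

# Pages 1 and 3 were referenced this interval; page 2 was not, so it is evicted.
victim = nru_victim({1: 1, 2: 0, 3: 1})
```

In real hardware the referenced bit is set by the memory management unit on each access; only the periodic reset and the victim search are done in software.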
The CPU also has a storage protection key. If the storage protection key in the CPU is, say, 2,
corresponding to user B, then user B's program may refer only to blocks of storage with the same
storage protection key of 2; these keys are strictly under the control of the O.S.
The First-in-First-out (FIFO) method selects for removal the page which has been resident in
memory for the longest time. The motivation for this approach is the assumption that such a page is
likely to be no longer in use.
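The FIFO and LRU policies can be compared by counting page faults over a reference string. This is a minimal sketch (the function name and reference string are invented for this example):

```python
def count_faults(refs, frames, policy):
    """Count page faults for a reference string under 'fifo' or 'lru'.
    The front of the resident list is always the next victim."""
    resident = []
    faults = 0
    for page in refs:
        if page in resident:
            if policy == "lru":
                resident.remove(page)   # a hit refreshes recency under LRU
                resident.append(page)
        else:
            faults += 1
            if len(resident) == frames:
                # evict the front: oldest arrival (FIFO) or least recent (LRU)
                resident.pop(0)
            resident.append(page)
    return faults

# With 3 frames and references 1,2,3,1,4,2 the two policies differ:
fifo = count_faults([1, 2, 3, 1, 4, 2], 3, "fifo")  # 4 faults
lru = count_faults([1, 2, 3, 1, 4, 2], 3, "lru")    # 5 faults
```

The only difference between the two policies in this sketch is whether a hit moves the page to the back of the list, which is why LRU tracks usage while FIFO tracks only arrival order.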
Execute access: the segment may be executed.
Append access: the segment may have data added to the end.
It is useful in systems:
Where several users require the same software, e.g. a compiler or text editor. These can be loaded as
a sharable segment and accessed by each user on-line.
Windowed environments: this is because they offer the use of shared libraries which contain a number
of routines necessary for commonly required functions, e.g. window management.
Merits
A paged segmented system is quite complex, but it provides a powerful environment for modern
computers, giving the programmer control over process structure while efficiently managing memory
space, e.g. OS/2, Microsoft Windows, IBM MVS/ESA.
Usefulness
1. Provides continuity in the use of earlier machines or O.S. IBM VM allowed users to continue to run
other application systems based on the earlier IBM 360 processors, such as DOS/VS, while developing
new applications using the DOS/VSE (virtual storage extension) and DOS/VMS (virtual machine
storage) O.S. It was also used in ICL VME systems.
2. The enhanced mode of the Intel 80386 processor can emulate multiple 8086 processors.
UNIT 3
MS DOS MEMORY MANAGEMENT
MS-DOS evolved from a humble beginning and now incorporates many features not envisaged at the
beginning of its history. As a consequence, it is in many respects untidy, due to fundamental
limitations of its early design.
The original Intel 8080 processor on which MS-DOS was designed used a simple 16-bit address
scheme, giving an addressable range of only 64KB. In order to improve on this, the newer Intel 8086
and 8088 chips were later employed, which were able to address up to 1MB while presenting the
same basic 16-bit address scheme, thus being compatible with the older system. This was achieved by
the introduction of segment registers to the chip architecture. The Intel 8086 and 8088 processors use
a set of 4 segment registers, each of 16 bits, which provide a base address for the addressing of
separate segments of the active process. The segment registers are CS (code), DS (data), SS (stack) and ES (extra).
The basic memory model of MS-DOS was that a process consists of 4 segments, locatable
independently within the available address space. The 16 bits of a segment register provide only
64KB of addressability. To obtain 1MB, which needs 20 bits (2^20 = 1MB), the 16-bit segment register value is
shifted left 4 bits, effectively multiplying it by 16. Hence, the effective base addresses can only adopt
values at intervals of 16 bytes, called paragraphs, but these values extend up to 1Mbyte. E.g.
the maximum segment base address is 1111 1111 1111 1111 0000 (Hex FFFF0), which is 16 bytes less
than 1Mbyte. Based on this addressing scheme, MS-DOS was mapped out within the 1MB space.
Note that the available user program space, called the transient program area (TPA), is 640KB less the
space for the O.S. etc. Prior to MS-DOS V5, this left about 560KB; with the introduction of V5, much of
the O.S. and the device drivers were repositioned above the 640KB line, and consequently about 600KB is
available for user programs.
Upper Memory Area
The 384KB of space above the conventional memory area of 640KB is called the upper memory area
(UMA).
The UMA is not considered as part of the total memory of your computer, because programs cannot
store information in this area. This area is normally reserved for running your system's hardware, such
as the monitor. Information can be mapped or copied from other memory into the parts of the UMA left
unused by the system. These unused parts are called upper memory blocks.
Usefulness
For running programs that use expanded memory.
Overlaying
This technique was available in older systems. MS-DOS reintroduced it as a means of overcoming
the 640KB limit. The essence of overlaying is that the object program is constructed as a number of
separate modules called overlays, which can be loaded individually and selectively into the same
memory area. MS-DOS provides a system call to enable a program to load another object file, execute
it and then regain control.
The object program consists of a root section, which is always in memory, and two or more loadable
overlays. The whole system has to be managed at the program level, and care must be taken to avoid
frequent use of overlays.
Merits/Demerits
1. It reduces the code size in memory by putting infrequently used code into separate overlays, e.g. initialization and
error routines which are rarely required.
2. Useful in splitting large object programs with small sizes of data.
3. Not useful in applications with large amounts of data in memory, such as spreadsheet programs.
Merits
Fast and efficient for programs that use it.
A program called the XMS manager (HIMEM.SYS) makes it easier for programs to use XMS.
the choice of these being dependent on the processor available and other factors. These modes are
described below:
Real Mode: In this mode, Windows uses only the basic 640 Kbytes of main memory accessible to
MS-DOS, and can run using an Intel 8086 or better.
Standard Mode: This is the normal Windows 3 mode; it allows use of extended memory (i.e. main
memory above 1 MByte).
Enhanced Mode: This mode utilizes the virtual memory capabilities of the Intel 80386 processors or
better. In addition to the processor, enhanced mode requires at least 2 Mbytes of memory. It allows
multi-tasking of non-Windows programs.
Although the enhanced mode utilizes virtual memory techniques, all the running processes share the
same address space. To control fragmentation, Windows is capable of moving blocks of code and
data within memory. Memory addressing is performed using 16-bit segmented addressing; i.e.
addresses consist of the contents of a segment register plus a 16-bit displacement, as in MS-DOS. The
application programmer's interface to this memory system is termed the Win16 API.
Windows also uses DLLs (dynamic link libraries) to conserve memory space; DLLs are executable
program files containing shareable code that is linked with an application program at run time.
Windows itself consists of a number of DLLs, and ‘common’ code such as device drivers are
implemented as DLLs. This technique reduces the demands on memory space since the same code is
used by several running processes.
With the introduction of Windows NT and 95 and OS/2, memory management has improved
dramatically. These systems use 32-bit, flat memory addressing, providing a massive 4 Gbytes of
address space without the need to use segmented addressing. This memory model can be managed
using the Win32 API.
MODULE FIVE
INPUT – OUTPUT
Organization of I/O software and hardware
The Input-Output system constitutes one of the four pillars on which a computer stands, the others
being the processor, the main memory and the file system. It is generally viewed as being the least
satisfactory member of this quartet, because of its relative slowness and lack of consistency. These
characteristics are a consequence of the nature of I/O devices and their role in trying to provide
communication between the microsecond domain of the computer and the much slower outside
world. The range of I/O devices and the variability of their inherent nature, speed, specific design,
etc., make it difficult for the operating system to handle them with any generality.
Efficiency
Perhaps the most significant characteristic of the I/O system is the speed disparity between it and the
processor and memory. Because I/O devices inevitably involve mechanical operations, they cannot
compete with the microsecond or nanosecond speeds of the processor and memory. The design of the
I/O system largely reflects the need to minimize the problems caused by this disparity. Of central
importance is the need to keep the I/O devices busy; in other words, to operate them at maximum
efficiency.
The principal objectives of the I/O systems are:
To maximize the utilization of the processor
To operate the devices at their maximum speed and
To achieve device independence as far as possible.
[Figure: I/O software and hardware layers. Within the operating system, an application program passes requests to the input-output control system, then to the device driver, then to the device controller (hardware) and finally to the device itself (hardware).]
Application Program
Within the application, I/O activity is expressed in user-oriented terms, such as ‘read record 21 from
file xyz’. Such instructions in a high level language are translated into corresponding system calls
which invoke operating system functions. Note that even at the system call level, the instructions are
expressed in logical terms, largely independent of the device used.
Device drivers
A device driver is a software module which manages the communication with, and the control of, a
specific I/O device, or type of device. It is the task of the device driver to convert the logical requests
from the user into specific commands directed to the device itself. For example, a user request to
write a record to a floppy disk would be realized within the device driver as a series of actions, such
as checking for the presence of a disk in the drive, locating the file via the disk directory, positioning
the heads etc.
Device controllers
A device controller is a hardware unit which is attached to the I/O bus of the computer and provides
a hardware interface between the computer and the I/O device itself. Since it connects to the
computer bus, the controller is designed for the purposes of a particular computer system, while at the
same time it conforms in interface terms with the requirements of the actual I/O device.
Device
I/O devices are generally designed to be used in a wide range of different computer systems. For
example, the same laser printer could be used on MS-DOS, Apple and UNIX systems.