

Module 1
Operating System Overview

# Introduction

What is an operating system?


➔ An operating system is a program that controls the execution of application programs and acts as
an interface between applications and the computer hardware.
➔ An operating system is a program that manages a computer’s hardware.
➔ An operating system is a set of software programs.
➔ Mainframe operating systems are designed primarily to optimize utilization of hardware, whereas
personal computer (PC) operating systems support complex games and business applications. Operating
systems for mobile computers provide an environment in which a user can easily interface with the
computer to execute programs. Thus, some operating systems are designed to be convenient, others
to be efficient, and others to be some combination of the two.
➔ An operating system is system software that manages, operates and communicates with the
computer hardware and software.
➔ It is also called a Resource Manager.
➔ Modern operating systems allow multiple programs to run at the same time. For example, imagine
what would happen if three programs running on a computer all tried to print their output
simultaneously on the same printer. The first few lines of printout might be from program 1, the
next few from program 2, then some from program 3, and so forth. The result would be chaos. The
operating system can bring order to the potential chaos by buffering all the output destined for the
printer on the disk. When one program is finished, the operating system can then copy its output
from the disk file where it has been stored to the printer, while at the same time the other
program can continue generating more output, oblivious to the fact that the output is not really
going to the printer.
➔ A computer system can be divided roughly into four components:
1) the hardware,
2) the operating system,
3) the application programs, and
4) the users


Fig: Abstract view of the components of a computer system.

➔ The hardware—the central processing unit (CPU), the memory, and the input/output (I/O)
devices—provides the basic computing resources for the system.
➔ The application programs—such as word processors, spreadsheets, compilers, and Web
browsers—define the ways in which these resources are used to solve users’ computing problems.
➔ The operating system controls the hardware and coordinates its use among the various application
programs for the various users.
➔ The users are the actual users of the applications. Such users can perform limited tasks as defined
by the application programs they are using.

[ The operating system is the one program running at all times on the computer—usually called the kernel.
There are two other types of programs: system programs, which are associated with the operating system but are
not necessarily part of the kernel, and application programs, which include all programs not associated with
the operation of the system. ]

# Objectives
➔ Convenience: An OS makes a computer more convenient to use.
➔ Efficiency: An OS allows the computer system resources to be used in an efficient manner.
➔ Ability to evolve: An OS should be constructed in such a way as to permit the effective development,
testing, and introduction of new system functions without interfering with service.
Users’ View
➔ The user’s view of the computer varies according to the interface being used.
➔ The users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a
system is designed for one user to monopolize its resources.
➔ A user sits at a terminal connected to a mainframe or a minicomputer. Other users are accessing
the same computer through the other terminals. These users share resources and may exchange
information. The operating system in such cases is designed to maximize resource utilization— to
assure that all available CPU time, memory, and I/O are used efficiently and that no individual user
takes more than its fair share.
➔ The users sit at workstations connected to networks of other workstations and servers. These
users have dedicated resources at their disposal, but they also share resources such as networking
and servers, including file, compute, and print servers. Therefore, their operating system is designed
to compromise between individual usability and resource utilization.

Systems’ View
➔ From the computer’s point of view, the operating system is the program most intimately involved
with the hardware. In this context, we can view an operating system as a resource allocator. A
computer system has many resources that may be required to solve a problem: CPU time, memory
space, file-storage space, I/O devices, and so on. The operating system acts as the manager of
these resources. Facing numerous and possibly conflicting requests for resources, the operating
system must decide how to allocate them to specific programs and users so that it can operate the
computer system efficiently.
➔ A different view of an operating system emphasizes the need to control the various I/O devices and
user programs. An operating system is a control program. A control program manages the execution
of user programs to prevent errors and improper use of the computer.

# Functions
1) Process Management
2) Memory Management
3) File Management
4) Device Management
5) Protection and Security
6) User Interface or Command Interpreter
7) Booting the Computer
8) Perform basic computer tasks

1) Process Management:
➔ To provide controlled access to shared resources like files, memory, I/O and the CPU.
➔ Control the execution of user applications.
➔ Creation, execution and deletion of user and system processes.
➔ Resuming or cancelling the execution of a process.
➔ Scheduling of processes.
➔ Synchronization, interprocess communication and deadlock handling for processes.
2) Memory Management:
➔ It allocates primary memory as well as secondary memory to user and system
processes.
➔ Reclaims the allocated memory from all the processes that have finished their execution.
➔ Once used, a block becomes free, and the OS allocates it again to other processes.
➔ Monitoring and keeping track of how much memory is used by each process.
3) File Management:
➔ Files and directories are created and deleted by the OS.
➔ The OS offers services to access files and allocates storage space for them using
different allocation methods.
➔ It keeps back-ups of files.
➔ It offers security for files.
4) Device Management:
➔ Device drivers are opened, closed and written by the OS.
➔ The OS communicates with, controls and monitors the device drivers.
5) Protection and Security:
➔ The resources of the system are protected by the OS.
➔ In order to offer the needed protection, the operating system makes use of user authentication, file
attributes such as read and write permissions, encryption, and back-up of data.
6) User Interface or Command Interpreter:
➔ The user interacts with the computer system through the operating system.
➔ Hence, the OS acts as an interface between the user and the computer hardware.
➔ This user interface is offered through a set of commands or a graphical user interface (GUI).
➔ Through this interface the user interacts with the applications and the machine
hardware (a minimal command interpreter loop is sketched just after this list).
7) Booting the Computer:
➔ The process of starting or restarting a computer is known as booting.
➔ Starting a computer that was switched off completely is called cold booting.
➔ Warm booting is the process of using the operating system to restart the computer.
8) Perform basic computer tasks:
➔ The management of various peripheral devices such as the mouse, keyboard and printers is
carried out by the operating system.
➔ Today most operating systems are plug and play: they automatically recognize and configure
devices with no user interference.
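
As a concrete illustration of the command interpreter function (item 6 above), here is a minimal sketch of a shell loop for a UNIX-like system. It is not the shell of any particular OS; the prompt string and the single-word-command restriction are simplifications made for brevity.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Minimal command interpreter: read a command, fork a child process,
 * exec the requested program in it, and wait for the child to finish. */
int main(void) {
    char line[256];
    for (;;) {
        printf("myshell> ");
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                           /* end of input: leave the shell */
        line[strcspn(line, "\n")] = '\0';    /* strip the trailing newline */
        if (line[0] == '\0')
            continue;                        /* empty command line */
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: replace this process image with the command.
             * Single-word commands only (no arguments), for brevity. */
            execlp(line, line, (char *)NULL);
            perror("exec failed");
            _exit(1);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);           /* parent: wait for the child */
        } else {
            perror("fork failed");
        }
    }
    return 0;
}
```

Typing a command such as `ls` at the prompt forks a child, runs the program, and returns to the prompt; this read–fork–exec–wait cycle is the core structure of every UNIX shell.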



# Evolution of Operating System


| Sr. No. | Decade | Machines | Operating System and Features |
|---|---|---|---|
| 1 | 1970 | Mainframes | MULTICS. Multiuser, Timesharing. |
| 2 | 1980 | Minicomputers | UNIX. Multiuser, Timesharing and Networked. |
| 3 | 1990 | Desktop Computers | DOS, MS-WINDOWS, Windows 95, Windows 98, Windows XP (for desktop users), Windows NT, UNIX/Linux etc. Multiuser, Timesharing, Networked, Clustered. |
| 4 | 2000 | Handheld Computers | Windows XP (for desktop users), Windows 2000, Linux. Multiprocessor, Multiuser and Networked. |

Zeroth Generation – Mechanical Parts

➔ The first digital computer was designed by Charles Babbage (1791-1871), an English
mathematician. It had a mechanical design in which wheels, gears, cogs, etc. were used. As this
computer was slow and unreliable, the design never became very popular. There was
no question of any operating system of any kind for this machine.
First Generation (1945-1955) – Vacuum Tubes
➔ Several decades later, a solution evolved which was electronic rather than mechanical. This
solution emerged out of the concerted research carried out as part of the war effort during the
Second World War. Around 1945, Howard Aiken at Harvard, John von Neumann at Princeton,
J. Presper Eckert and John Mauchly at the University of Pennsylvania, and Konrad Zuse in Germany
succeeded in designing calculating machines with vacuum tubes as the central components.
➔ These machines were huge, and their use generated a great deal of heat. The vacuum
tubes also burnt out very fast. (During one computer run, as many as 10,000-20,000
tubes could be wasted!) Programming was done only in machine language, which could be
termed the "first generation" language. This was neither an assembly language nor any higher
level language. Again, there was no operating system for these machines. They were
single-user machines, which were extremely unfriendly to both users and programmers.
Second Generation (1955-1965) – Transistors
➔ Around 1955, transistors were introduced in the USA at AT&T. The problems associated with
vacuum tubes vanished overnight. The size and the cost of the machine dramatically dwindled.
The reliability improved. For the first time, new categories of professionals called systems
analysts, designers, programmers and operators came into being as distinct entities. Until
then, the functions handled by these categories of people had been managed by a single
individual. Assembly language, as a second generation language, and FORTRAN, as a High Level
Language (a third generation language), emerged, and the programmer's job was greatly
simplified. However, these were batch systems. The IBM-1401 belonged to that era.
Third Generation (1965-1980) – Integrated Circuits
➔ IBM announced the System 360 series of computers in 1964. IBM had designed the various
computers in this series to be mutually compatible, so that the effort of converting
programs from one machine to another in the same family was minimal. This is how the
concept of a 'family of computers' came into being. The IBM-370, 43XX and 30XX systems belong to
the same family of computers.
➔ IBM faced the problem of converting the existing 1401 users to System 360, and there were
many. IBM provided the customers with utilities such as simulators (totally software driven and
therefore a little slow) and emulators (using hardware modifications to enhance the speed at
extra cost) to enable the old 1401-based software to run on the IBM-360 family of computers.
➔ Initially, IBM had plans for delivering only one Operating System for all the computers in the
family. However, this approach proved to be practically difficult and cumbersome. The Operating
System for the larger computers in the family, meant for managing larger resources, was found
to create far more burden and overheads if used on the smaller computers. Similarly, the
Operating System that could run efficiently on a smaller computer would not manage the
resources of a large computer effectively. At least, IBM thought so at that time. Therefore,
IBM was forced to deliver four Operating Systems within the same range of computers. These
were:
➔ CP-67/CMS for the powerful 360/67, using virtual storage.
➔ OS/MVT for the bigger 360 systems.
➔ OS/MFT for the medium 360 systems.
➔ DOS/360 for the small 360 systems.

➔ The major advantages/features and problems of this computer family and its Operating
Systems were as follows:
➔ Integrated Circuits: The System/360 was based on Integrated Circuits (ICs) rather than
transistors. With ICs, the cost and the size of the computer shrank substantially, and the
performance also improved.
➔ Portability: The Operating Systems for the System/360 were written in assembly language. The
routines were therefore complex and time-consuming to write and maintain. Many bugs
persisted for a long time. As these were written for a specific machine and in the assembly
language of that machine, they were tied to the hardware. They were not easily portable to
machines with a different architecture not belonging to the same family.
➔ Job Control Language: Despite the portability problems, users found these Operating Systems
acceptable, because operator intervention (for 'setup' and 'teardown') decreased. A Job Control
Language (JCL) was developed to allow communication between the user/programmer and the
computer and its Operating System. By using the JCL, a user/programmer could instruct the
computer and its Operating System to perform certain tasks in a specific sequence, such as creating
a file, running a job or sorting a file.
➔ Multiprogramming: The Operating System supported mainly batch programs, but it made
'multiprogramming' very popular. This was a major contribution. The physical memory was
divided into many partitions, each holding a separate program. One of these partitions
held the Operating System. However, because there was only one CPU, only one
program could be executed at a time. Therefore, there was a need for a mechanism to switch the CPU
from one program to the next. This is exactly what the Operating System provided. One of the
major advantages of this scheme was the increase in throughput. If the same three
programs were to run one after the other, the total elapsed time would have been much more
than under a scheme which used multiprogramming. The reason was simple. In a
uniprogramming environment, the CPU was idle whenever an I/O operation for a program was
going on (and that was quite a lot!), but in a multiprogramming Operating System, when the
I/O for one program was going on, the CPU was switched to another program. This allowed
the I/O of one program to be overlapped with the processing of another
program by the CPU, thereby increasing the throughput.

Fig. Physical Memory in Multiprogramming
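
The partition scheme in the figure can be modelled in a few lines of code. This is a toy sketch, not IBM's actual allocation scheme; the partition sizes and job names are illustrative.

```c
#include <stdio.h>
#include <string.h>

/* Fixed memory partitions as in early multiprogramming. Partition 0
 * holds the Operating System; each remaining partition holds at most
 * one job. Sizes and job names are made up for illustration. */
struct partition { int size_kb; char job[16]; };

int main(void) {
    struct partition mem[4] = {
        {100, "OS"}, {200, ""}, {200, ""}, {100, ""}
    };
    const char *jobs[] = {"JOB1", "JOB2", "JOB3"};
    for (int j = 0; j < 3; j++)               /* place each job into the */
        for (int p = 1; p < 4; p++)           /* first free partition    */
            if (mem[p].job[0] == '\0') {
                strcpy(mem[p].job, jobs[j]);
                break;
            }
    for (int p = 0; p < 4; p++)
        printf("partition %d (%3d K): %s\n", p, mem[p].size_kb,
               mem[p].job[0] ? mem[p].job : "(free)");
    return 0;
}
```

With only one CPU, the OS then switches the processor among the resident partitions; the partitions themselves stay fixed for the life of the jobs.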


➔ Spooling: The concept of Simultaneous Peripheral Operations Online (spooling) was fully
developed during this period. An immediate advantage of spooling was that one no longer had to carry
tapes to and from the 1401 and 7094 machines. Under the new Operating System, all jobs in the
form of cards could be read onto the disk first (shown as (a) in the figure), and later on, the Operating
System would load as many jobs into the memory, one after the other, as the available memory
could accommodate (shown as (b) in the figure). After many programs were loaded in
different partitions of the memory, the CPU was switched from one program to another to
achieve multiprogramming. We will later see different policies used to achieve this switching.
Similarly, whenever any program printed something, it was not written directly on the printer;
instead, the print image of the report was written onto the disk in the area reserved for spooling
(shown as (c) in the figure). At any convenient time later, the actual printing from this disk file
could be undertaken (shown as (d) in the figure). Spooling had two distinct advantages. Firstly, it
allowed smooth multiprogramming operations. Imagine if two programs, say Stores Ledger and
Payslips Printing, were allowed to issue simultaneous instructions to write directly on the
printer: the result would be a hilarious report with intermingled lines from both
reports on the same page. Instead, the print images of both reports were first written onto
the disk at two different locations of the spool file, and the spooler program subsequently
printed them one by one. Therefore, while printing, the printer was allocated only to the spooler
program. In order to guide this subsequent printing process, the print image copy of the report
on the disk also contained some pre-known special characters, such as one for skipping a page. These
were interpreted by the spooler program at the time of producing the actual report. Secondly,
the I/O of all the jobs was essentially pooled together in the spooling method, and therefore it
could be overlapped with the CPU-bound computations of all the jobs at an appropriate time
chosen by the Operating System to improve the throughput.

Fig. Spooling
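
A minimal sketch of the spooling idea follows, with made-up file names and standard output standing in for the printer: each job writes its print image to its own disk file, and a single despooler later owns the printer and prints the files one by one.

```c
#include <stdio.h>

/* Each job writes its report ("print image") to a private spool file
 * on disk instead of the printer, so output never intermingles. */
static void spool_job_output(const char *spoolfile, const char *report) {
    FILE *f = fopen(spoolfile, "w");
    if (f) { fputs(report, f); fclose(f); }
}

/* The despooler is the only program that ever touches the printer:
 * it copies the spool files to it one at a time. */
static void despool(const char *spoolfiles[], int n) {
    for (int i = 0; i < n; i++) {
        FILE *f = fopen(spoolfiles[i], "r");
        if (!f) continue;
        int c;
        while ((c = fgetc(f)) != EOF)
            fputc(c, stdout);            /* stdout stands in for the printer */
        fclose(f);
    }
}

int main(void) {
    spool_job_output("job1.spool", "STORES LEDGER ...\n");
    spool_job_output("job2.spool", "PAYSLIPS ...\n");
    const char *files[] = {"job1.spool", "job2.spool"};
    despool(files, 2);
    return 0;
}
```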
➔ Time Sharing: One of the first time sharing systems was the Compatible Time Sharing System
(CTSS), developed at the Massachusetts Institute of Technology (M.I.T.). It was used on the
IBM-7094 and it supported a large number of interactive users. Time sharing became popular
at once. Multiplexed Information and Computing Service (MULTICS) was the next one to
follow. It was a joint effort of M.I.T., Bell Labs and General Electric. The aim was to create a
computer utility which could support hundreds of simultaneous time sharing users.
Fourth Generation (1980 – present) – Large Scale Integration
➔ When Large Scale Integration (LSI) circuits came into existence, thousands of transistors
could be packaged on a very small area of a silicon chip. A computer is made up of many units
such as the CPU, memory, I/O interfaces, and so on. Each of these is further made up of
different modules such as registers, adders, multiplexers, decoders and a variety of other digital
circuits. Each of these, in turn, is made up of various gates (e.g. one memory location storing
1 bit is made up of as many as seven gates). These gates are implemented in digital electronics
using transistors. As the size of a chip containing thousands of such transistors shrank,
obviously the size of the whole computer also shrank. But the process of interconnecting these
transistors to form all the logical units became more intricate and complex. It required
tremendous accuracy and reliability. Fortunately, with Computer Aided Design (CAD)
techniques, one could design these circuits easily and accurately, using other computers
themselves. Mass automated production techniques reduced the cost and increased the
reliability of the produced computers. The era of microcomputers and Personal Computers (PCs)
had begun.
➔ Desktop Systems:
➔ With the hardware, you obviously need the software to make it work. Fortunately,
many Operating System designers on the microcomputers had not worked extensively
on the larger systems and therefore, many of them were not biased in any manner.
They started with fresh minds and fresh ideas to design the Operating System and
other software on them.
➔ Control Program for Microcomputers (CP/M) was almost the first Operating
System on the microcomputer platform. It was developed on the Intel 8080 in 1974 as a
File System by Gary Kildall. Intel Corporation decided to use PL/M instead of
assembly language for the development of systems software and badly needed a compiler
for it. Obviously, the compiler needed some support from some kind of utility
(Operating System) to perform all the file-related operations. Therefore, CP/M was
born as a very simple, single user Operating System. It was initially only a File
System to support a resident PL/M compiler. This was done at Digital Research Inc.
(DRI).

➔ After the commercial licensing of CP/M in 1975, other utilities such as editors,
debuggers, etc. were developed, and CP/M became very popular. CP/M went through a
number of versions. Finally, a 16-bit multiuser, time sharing MP/M was designed
with real time capabilities, and a genuine competition with the minicomputers
started. In 1980, CP/NET was released to provide networking capabilities, with
MP/M as the server to serve the requests received from other CP/M machines.
➔ One of the reasons for the popularity of CP/M was its user-friendliness. This had a
lot of impact on all the subsequent Operating Systems on microcomputers.
➔ After the advent of the IBM-PC based on the Intel 8086 and then its subsequent
models, the Disk Operating System (DOS) was written. IBM's own PC-DOS and
MS-DOS by Microsoft are close cousins with very similar features. The development
of PC-DOS again was related to CP/M. A company called "Seattle Computer"
developed an Operating System called QDOS for the Intel 8086. The main goal was to
enable the programs developed under CP/M on the Intel 8080 to run on the Intel 8086
without any change. The Intel 8086 was upward compatible with the Intel 8080. QDOS,
however, had to be faster than CP/M in disk operations. Microsoft Corporation was
quick to realise the potential of this product, given the projected popularity of the Intel
8086. It acquired the rights for QDOS, which later became MS-DOS (the IBM
version is called PC-DOS).
➔ MS-DOS is a single user, user-friendly Operating System. In quick succession, a
number of other products such as database systems (dBASE), word processing
(WordStar), spreadsheets (Lotus 1-2-3) and many others were developed under
MS-DOS, and the popularity of MS-DOS increased tremendously. The subsequent
development of compilers for various High Level Languages such as BASIC, COBOL,
and C added to this popularity, and, in fact, opened the gates to a more serious
software development process. This was to play an important role after the advent of
Local Area Networks (LANs). MS-DOS was later influenced by UNIX and it has
been evolving towards UNIX over the years. Many features such as hierarchical file
systems have been introduced in MS-DOS over a period of time.
➔ With the advent of the Intel 80286, the IBM PC/AT was announced. The hardware
had the power of catering simultaneously to multiple users, despite the name
"Personal Computer". Microsoft quickly adapted UNIX on this platform to announce
"XENIX". IBM joined hands with Microsoft again to produce a new Operating
System called "OS/2". Both of these run on 286- and 386-based machines and are
multi-user systems. While XENIX is almost the same as UNIX, OS/2 is fairly
different, even though influenced by MS-DOS, which runs on the IBM PC/AT as
well as the PS/2.
➔ With the advent of 386 and 486 computers, bit-mapped graphic displays became
faster and therefore more realistic. Therefore, Graphical User Interfaces (GUIs)
became possible and in fact necessary for every application. With the advent of
GUIs, some kind of standardisation was necessary to reduce development and
training time. Microsoft again reacted by producing MS-WINDOWS. MS-WINDOWS
3.x became extremely popular. However, MS-WINDOWS was actually not an
Operating System. Internally it used MS-DOS to execute various system calls. On
top of DOS, however, MS-WINDOWS enabled a very user-friendly GUI (as against
the earlier text-based ones) and also allowed windowing capability.
➔ MS-WINDOWS did not lend a true multitasking capability to the Operating System.
Windows NT, developed a few years later, incorporated this capability in addition to
being windows-based. OS/2 and UNIX provided multitasking, but were not windows-based.
They had to be used along with Presentation Manager or
X-WINDOWS/MOTIF, respectively, to achieve that capability.
➔ In the last few years, the Intel Pentium processor and its successors have offered
tremendous power and speed to the designers of Operating Systems. As a result,
new versions of the existing Operating Systems have emerged, and have actually
become quite popular. Microsoft has released Windows 2000, which is technically
Windows NT Version 5.0. Microsoft had maintained two streams of its Windows
family of Operating Systems: one was targeted at the desktop users, and the other
was targeted at the business users and the server market. For the desktop users,
Microsoft enhanced its popular Windows 3.11 Operating System to Windows 95, then
to Windows 98, Windows ME and Windows XP. For the business users, Windows NT
was developed, and its Version 4.0 had become extensively popular. This meant that
Microsoft had to support two streams of Windows Operating Systems: one was the
stream of Windows 95/98/ME/XP, and the other was the Windows NT stream. To bring
the two streams together, Microsoft developed Windows 2000, and it appears that going
forward, Windows 2000 would be targeted at both the desktop users as well as the
business users.
➔ On the UNIX front, several attempts were made to take its movement forward. Of
them all, the Linux Operating System has emerged as the major success story. Linux
is perhaps the most popular UNIX variant available today. The free software
movement has also made Linux more and more appealing.
➔ Consequently, today, there are two major camps in the Operating System world:
Microsoft Windows 2000 and Linux. It is difficult to predict which one of these
would eventually emerge as the winner. However, a more likely outcome is that both
would continue to be popular, and continue to compete with each other.

➔ Multiprocessor Systems:
➔ The various configurations discussed so far in this chapter are examples of
Uniprocessor Systems. A Uniprocessor System consists of only one CPU, memory
and peripherals. However, in the last few years, Multiprocessor Systems, which
consist of two or more CPUs sharing the memory and peripherals, have become
popular.
➔ Multiprocessor Systems have the potential to provide much greater system
throughput than their uniprocessor counterparts, because multiple programs can run
concurrently on different processors. However, it must be noted that some overhead
is incurred in dividing a task amongst many CPUs, as well as in contention
for shared resources. As a result, the speedup obtained by using Multiprocessor
Systems is not linear, that is, it is not directly proportional to the number of CPUs.
➔ A Multiprocessor System can be implemented in one of two ways: the Master-Slave
architecture and the Symmetric Multiprocessing (SMP) architecture. In the
Master-Slave architecture, one processor, known as the Master Processor, assumes
overall control of the system. It allocates work to all the slave processors. The SMP
architecture, which is the most common architecture for Multiprocessor Systems,
employs a different mechanism. No master-slave relationship exists in the SMP
architecture. Instead, all the processors operate at the same level of hierarchy and
run an identical copy of the underlying operating system. While this sounds very easy
in theory, utmost care needs to be taken while designing the Operating System for
the SMP architecture. It is the job of the Operating System to ensure isolation, fairly
equal work distribution amongst all the processors, and protection from possible
corruption by not allowing two or more programs to write to the same memory
location simultaneously.
➔ Nearly all modern Operating Systems like Windows 2000, Linux, Solaris, and Mac
OS X provide built-in SMP support.
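
To make "protection from possible corruption" concrete, here is a small POSIX-threads sketch. On an SMP machine the OS may schedule the two threads on different processors at the same time; the mutex ensures that no two writers update the shared counter simultaneously. The counter and iteration count are arbitrary (compile with `-pthread`).

```c
#include <stdio.h>
#include <pthread.h>

/* On an SMP machine the OS may run these two threads on different
 * CPUs concurrently. The mutex enforces the protection the text
 * mentions: no two writers touch the shared word at the same time. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                        /* protected shared write */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 with the lock */
    return 0;
}
```

Remove the lock and the final count becomes unpredictable on a multiprocessor, which is exactly the corruption the Operating System (and careful kernel design) must prevent.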

➔ Distributed Processing:
➔ With the era of smaller but powerful computers, Distributed Processing started
becoming a reality. Instead of a centralized large computer, the trend towards
having a number of smaller systems at different work sites, connected through a
network, became stronger.
➔ There were two responses to this development. One was the Network Operating
System (NOS) and the other the Distributed Operating System (DOS). There
is a fundamental difference between the two. In the Network Operating System, the
users are aware that there are several computers connected to each other via a
network. They also know that there are various databases and files on one or more
disks, and also the addresses where they reside. But they want to share the data on
those disks. Similarly, there are one or more printers shared by various users logged on
to different computers. NOVELL's NetWare 286 and the subsequent NetWare 386
Operating Systems fall in this category. In this case, if a user wants to access a
database on some other computer, he has to explicitly state its address.
➔ The Distributed Operating System, on the other hand, represents a leap forward. It
makes the whole network transparent to the users. The databases, files, printers
and other resources are shared amongst a number of users actually working on
different machines, but who are not necessarily aware of such sharing. Distributed
systems appear to be simple, but they actually are not. Quite often, distributed
systems allow parallelism, i.e., they find out whether a program can be segmented
into different tasks which can then be run simultaneously on different machines. On
top of it, the Operating System must hide the hardware differences which exist
between the different computers connected to each other. Normally, distributed systems have
to provide for a high level of fault tolerance, so that if one computer is down, the
Operating System can schedule the tasks on the other computers. This is an area
in which substantial research is still going on. This clearly is the future direction of
Operating System technology.

➔ Clustered Systems:
➔ Clustered Systems combine the best features of both Distributed Operating Systems and
Multiprocessor Systems. Although there is no concrete definition of Clustered
Systems, we will refer to a group of connected computers working together as one unit
as a Clustered System. The connected computer systems can be either Uniprocessor
or Multiprocessor Systems. Clustered Systems were originally developed by DEC in
the late 1980s for the VAX/VMS Operating System.
➔ Clustered Systems provide an excellent price-performance ratio. A system consisting
of hundreds of nodes can easily give a traditional supercomputer a run for its money. An
example of such a system is System X, assembled by Virginia Tech University in 2003
using 1100 Apple Macintosh G5 computers. It is capable of performing a whopping 10
trillion (10,000,000,000,000) floating point operations per second. Yet, the cost of
the system is about $5 million, which is much cheaper than a traditional
supercomputer.
➔ Clustered Systems also provide excellent fault tolerance, which is the ability
to continue operation at an acceptable quality despite an unexpected hardware or
software failure. The cluster software running on top of the clustered nodes monitors
one or more nodes. If a node fails, then a monitoring node acquires the resources of
the failed node and resumes operations.
➔ One of the most popular implementations of Clustered Systems is the Beowulf Cluster,
which is a group of mostly identical PCs running an Open Source Operating System
such as Linux, with cluster management software running on top to implement
parallelism. Apart from this, many other commercial clustering products like Grid
Engine and OpenSSI are available, and extensive research and development is going on in
this area.

➔ Handheld Systems
➔ The quest for smaller Personal Computers has spawned an entirely new type of
system over the years, known as Handheld Systems. The Newton MessagePad
released by Apple Computers in 1993 heralded the era of Handheld Systems. Today,
these systems encompass a vast category of devices like the Personal Digital Assistant
(PDA), Handheld Personal Computer (HPC), Pocket PC and even modern cellular
phones with network connectivity. These systems are small computers with
applications such as word processing, spreadsheets, personal organizers, and
calculators. One major advantage of using these systems is their ability to
synchronize data with desktop computers.
➔ Compared to desktop computers, Handheld Systems have much smaller memory,
smaller display screens and slower processors. As a result, the designers of
Operating Systems for these systems are faced with the often-contradictory requirements
of managing memory and processing power very efficiently, yet providing a rich GUI.

➔ The two most popular Operating Systems for handheld systems today are Palm OS
from PalmSource and Windows CE from Microsoft. Palm OS is shipped with one of
the most commonly used PDAs, the Palm Pilot. Windows CE, accompanying the rival PDAs
and HPCs, offers the ease and familiarity of a typical desktop Windows system like
Windows 95 and includes scaled-down versions of popular Microsoft applications like
Pocket Word, Pocket Excel, Pocket PowerPoint, etc. Recently, Linux has been
successfully ported to handheld devices and it is steadily making inroads, with a
number of handheld device manufacturers announcing support for Linux.

Serial Processing
➔ With the earliest computers, from the late 1940s to the mid-1950s, the programmer interacted
directly with the computer hardware; there was no OS. These computers were run from a
console consisting of display lights, toggle switches, some form of input device, and a printer.
Programs in machine code were loaded via the input device (e.g., a card reader). If an error
halted the program, the error condition was indicated by the lights. If the program proceeded
to a normal completion, the output appeared on the printer.
➔ These early systems presented two main problems:
➔ Scheduling: Most installations used a hardcopy sign-up sheet to reserve computer time.
Typically, a user could sign up for a block of time in multiples of a half hour or so. A user might
sign up for an hour and finish in 45 minutes; this would result in wasted computer processing
time. On the other hand, the user might run into problems, not finish in the allotted time, and
be forced to stop before resolving the problem.
➔ Setup time: A single program, called a job, could involve loading the compiler plus the high-level
language program (source program) into memory, saving the compiled program (object
program), and then loading and linking together the object program and common functions.
Each of these steps could involve mounting or dismounting tapes or setting up card decks. If an
error occurred, the hapless user typically had to go back to the beginning of the setup sequence.
Thus, a considerable amount of time was spent just in setting up the program to run.
➔ This mode of operation could be termed serial processing, reflecting the fact that users have
access to the computer in series. Over time, various system software tools were developed to
attempt to make serial processing more efficient. These include libraries of common functions,
linkers, loaders, debuggers, and I/O driver routines that were available as common software for
all users.

Simple Batch Systems


➔ The central idea behind the simple batch-processing scheme is the use of a piece of software
known as the monitor. With this type of OS, the user no longer has direct access to the
processor. Instead, the user submits the job on cards or tape to a computer operator, who
batches the jobs together sequentially and places the entire batch on an input device, for use
by the monitor. Each program is constructed to branch back to the monitor when it completes
processing, at which point the monitor automatically begins loading the next program.
➔ To understand how this scheme works, let us look at it from two points of view: that of the
monitor and that of the processor.
➔ Monitor point of view: The monitor controls the sequence of events. For this to be so, much of
the monitor must always be in main memory and available for execution. That
portion is referred to as the resident monitor. The rest of the monitor consists of utilities and
common functions that are loaded as subroutines to the user program at the beginning of any
job that requires them. The monitor reads in jobs one at a time from the input device (typically
a card reader or magnetic tape drive). As it is read in, the current job is placed in the user
program area, and control is passed to this job. When the job is completed, it returns control
to the monitor, which immediately reads in the next job. The results of each job are sent to an
output device, such as a printer, for delivery to the user.
➔ Processor point of view: At a certain point, the processor is executing instructions from the
portion of main memory containing the monitor. These instructions cause the next job to be
read into another portion of main memory. Once a job has been read in, the processor will
encounter a branch instruction in the monitor that instructs the processor to continue
execution at the start of the user program. The processor will then execute the instructions in
the user program until it encounters an ending or error condition. Either event causes the
processor to fetch its next instruction from the monitor program. Thus the phrase “control is
passed to a job” simply means that the processor is now fetching and executing instructions in
a user program, and “control is returned to the monitor” means that the processor is now
fetching and executing instructions from the monitor program.
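
The control transfers just described can be caricatured in a few lines of code: jobs become functions, and the monitor becomes a loop that branches to each job and regains control when the job returns. This is only a sketch of the idea, not of any real resident monitor; the job names are made up.

```c
#include <stdio.h>

/* Resident-monitor sketch: jobs are modeled as functions; the monitor
 * "reads in" each job from the batch and branches to it, and each job
 * returns control to the monitor when it completes. */
typedef void (*job_t)(void);

static void job_payroll(void) { puts("payroll job running"); }
static void job_ledger(void)  { puts("ledger job running");  }

int main(void) {
    job_t batch[] = { job_payroll, job_ledger };   /* the queued batch */
    int njobs = 2;
    for (int i = 0; i < njobs; i++) {
        printf("monitor: control passed to job %d\n", i);
        batch[i]();                   /* branch to the user program */
        printf("monitor: control returned from job %d\n", i);
    }
    return 0;
}
```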
➔ The monitor performs a scheduling function: A batch of jobs is queued up, and jobs are executed
as rapidly as possible, with no intervening idle time. The monitor improves job setup time as well.
With each job, instructions are included in a primitive form of job control language (JCL). This
is a special type of programming language used to provide instructions to the monitor. A simple
example is that of a user submitting a program written in the programming language FORTRAN
plus some data to be used by the program. All FORTRAN instructions and data are on a
separate punched card or a separate record on tape. In addition to FORTRAN and data lines,
the job includes job control instructions, which are denoted by a beginning $. The overall
format of the job looks like this:

    $JOB
    $FTN
    FORTRAN instructions
    $LOAD
    $RUN
    DATA
    $END

➔ To execute this job, the monitor reads the $FTN line and loads the appropriate language
compiler from its mass storage (usually tape). The compiler translates the user’s program into
object code, which is stored in memory or mass storage. If it is stored in memory, the operation
is referred to as “compile, load, and go.” If it is stored on tape, then the $LOAD instruction is
required. This instruction is read by the monitor, which regains control after the compile
operation. The monitor invokes the loader, which loads the object program into memory (in
place of the compiler) and transfers control to it. In this manner, a large segment of main
memory can be shared among different subsystems, although only one such subsystem could be
executing at a time. During the execution of the user program, any input instruction causes one
line of data to be read. The input instruction in the user program causes an input routine that
is part of the OS to be invoked. The input routine checks to make sure that the program does
not accidentally read in a JCL line. If this happens, an error occurs and control transfers to
the monitor. At the completion of the user job, the monitor will scan the input lines until it
encounters the next JCL instruction. Thus, the system is protected against a program with too
many or too few data lines.
➔ The monitor, or batch OS, is simply a computer program. It relies on the ability of the
processor to fetch instructions from various portions of main memory to alternately seize and
relinquish control. Certain other hardware features are also desirable:
➔ Memory protection: While the user program is executing, it must not alter the memory area
containing the monitor. If such an attempt is made, the processor hardware should detect an
error and transfer control to the monitor. The monitor would then abort the job, print out an
error message, and load in the next job.
➔ Timer: A timer is used to prevent a single job from monopolizing the system. The timer is set
at the beginning of each job. If the timer expires, the user program is stopped, and control
returns to the monitor.

➔ Privileged instructions: Certain machine level instructions are designated privileged and can be
executed only by the monitor. If the processor encounters such an instruction while executing a
user program, an error occurs causing control to be transferred to the monitor. Among the
privileged instructions are I/O instructions, so that the monitor retains control of all I/O
devices. This prevents, for example, a user program from accidentally reading job control
instructions from the next job. If a user program wishes to perform I/O, it must request that
the monitor perform the operation for it.
➔ Interrupts: Early computer models did not have this capability. This feature gives the OS more
flexibility in relinquishing control to and regaining control from user programs.

[ Considerations of memory protection and privileged instructions lead to the concept of modes of
operation. A user program executes in a user mode, in which certain areas of memory are protected
from the user’s use and in which certain instructions may not be executed. The monitor executes in a
system mode, or what has come to be called kernel mode, in which privileged instructions may be
executed and in which protected areas of memory may be accessed. ]
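
The user-mode/kernel-mode split survives in every modern OS. As a Linux-specific illustration (not part of the original text), the raw syscall() wrapper makes the trap into the kernel explicit, mirroring the rule above that a user program must ask the monitor (today, the kernel) to perform I/O on its behalf.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* A user-mode program cannot execute privileged I/O instructions; it
 * traps into the kernel instead. syscall() makes the trap explicit:
 * the CPU switches to kernel mode, the kernel performs the I/O on the
 * program's behalf, and control then returns to user mode. */
int main(void) {
    const char msg[] = "I/O performed by the kernel on request\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);   /* fd 1 = standard output */
    return 0;
}
```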

Multiprogrammed Batch Systems


➔ Suppose that there is room for the OS and two user programs. When one job needs to wait for
I/O, the processor can switch to the other job, which is likely not waiting for I/O.
Furthermore, we might expand memory to hold three, four, or more programs and switch among
all of them. The approach is known as multiprogramming, or multitasking. It is the central
theme of modern operating systems.
➔ To illustrate the benefit of multiprogramming, we give a simple example. Consider a computer
with 250 Mbytes of available memory (not used by the OS), a disk, a terminal, and a printer.
Three programs, JOB1, JOB2, and JOB3, are submitted for execution at the same time, with the
attributes listed in Table 1 below. We assume minimum processor requirements for JOB2 and
JOB3 and continuous disk and printer use by JOB3. For a simple batch environment, these jobs
will be executed in sequence. Thus, JOB1 completes in 5 minutes. JOB2 must wait until the 5
minutes are over and then complete 15 minutes after that. JOB3 begins after 20 minutes and
completes at 30 minutes from the time it was initially submitted. The average resource
utilization, throughput, and response times are shown in the uniprogramming column of Table 2.
Device-by-device utilization is illustrated in the figure below. It is evident that there is gross
underutilization for all resources when averaged over the required 30-minute time period.

| | JOB1 | JOB2 | JOB3 |
|---|---|---|---|
| Type of job | Heavy compute | Heavy I/O | Heavy I/O |
| Duration | 5 min | 15 min | 10 min |
| Memory required | 50 M | 100 M | 75 M |
| Disk needed? | No | No | Yes |
| Terminal needed? | No | Yes | No |
| Printer needed? | No | No | Yes |

Table 1: Sample Program Execution Attributes

| | Uniprogramming | Multiprogramming |
|---|---|---|
| Processor use | 20% | 40% |
| Memory use | 33% | 67% |
| Disk use | 33% | 67% |
| Printer use | 33% | 67% |
| Elapsed time | 30 min | 15 min |
| Throughput | 6 jobs/hr | 12 jobs/hr |
| Mean response time | 18 min | 10 min |

Table 2: Effects of Multiprogramming on Resource Utilization

Figure: Utilization Histograms

➔ Now suppose that the jobs are run concurrently under a multiprogramming OS. Because there
is little resource contention between the jobs, all three can run in nearly minimum time while
coexisting with the others in the computer (assuming that JOB2 and JOB3 are allotted enough
processor time to keep their input and output operations active). JOB1 will still require 5
minutes to complete, but at the end of that time, JOB2 will be one-third finished and JOB3 half
finished. All three jobs will have finished within 15 minutes.
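
As a quick check on Table 2, the throughput and mean response time entries follow directly from the job durations, assuming response time is measured from the common submission instant to each job's completion:

$$
\text{Throughput}_{\text{uni}} = \frac{3\ \text{jobs}}{30\ \text{min}} = 6\ \text{jobs/hr}, \qquad
\text{Throughput}_{\text{multi}} = \frac{3\ \text{jobs}}{15\ \text{min}} = 12\ \text{jobs/hr}
$$

$$
\bar{R}_{\text{uni}} = \frac{5 + 20 + 30}{3} \approx 18\ \text{min}
$$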
➔ As with a simple batch system, a multiprogramming batch system must rely on certain
computer hardware features. The most notable additional feature that is useful for
multiprogramming is the hardware that supports I/O interrupts and DMA (direct memory
access). With interrupt-driven I/O or DMA, the processor can issue an I/O command for one
job and proceed with the execution of another job while the I/O is carried out by the device
controller. When the I/O operation is complete, the processor is interrupted and control is
passed to an interrupt-handling program in the OS. The OS will then pass control to another
job.
➔ Multiprogramming operating systems are fairly sophisticated compared to single-program, or
uniprogramming, systems. To have several jobs ready to run, they must be kept in main memory,
requiring some form of memory management. In addition, if several jobs are ready to run, the
processor must decide which one to run, and this decision requires an algorithm for scheduling.

Time-Sharing Systems
➔ Just as multiprogramming allows the processor to handle multiple batch jobs at a time,
multiprogramming can also be used to handle multiple interactive jobs. In this latter case, the
technique is referred to as time sharing, because processor time is shared among multiple users.
In a time-sharing system, multiple users simultaneously access the system through terminals,
with the OS interleaving the execution of each user program in a short burst or quantum of
computation. Thus, if there are n users actively requesting service at one time, each user will
only see on the average 1/n of the effective computer capacity, not counting OS overhead.
However, given the relatively slow human reaction time, the response time on a properly designed
system should be similar to that on a dedicated computer.
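
The quantum-by-quantum interleaving can be simulated in a few lines. The sketch below is a toy round-robin model with a quantum of one tick and made-up job lengths; it only illustrates how n active users each receive roughly 1/n of the processor.

```c
#include <stdio.h>

/* Toy time slicing: each loop iteration is one clock interrupt; the
 * quantum is one tick, after which the CPU moves to the next user. */
int main(void) {
    int remaining[4] = {3, 5, 2, 4};      /* ticks of work left per user */
    int n = 4, finished = 0, tick = 0;
    for (int cur = 0; finished < n; cur = (cur + 1) % n) {
        if (remaining[cur] == 0)
            continue;                     /* this user has already finished */
        printf("tick %2d: CPU -> user %d\n", tick++, cur);
        if (--remaining[cur] == 0) {
            printf("         user %d done\n", cur);
            finished++;
        }
    }
    return 0;
}
```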
➔ One of the first time-sharing operating systems to be developed was the Compatible
Time-Sharing System (CTSS) [CORB62], developed at MIT by a group known as Project MAC
(Machine-Aided Cognition, or Multiple-Access Computers). The system was first developed for
the IBM 709 in 1961 and later ported to an IBM 7094.
➔ Compared to later systems, CTSS is primitive. The system ran on a computer with 32,000
36-bit words of main memory, with the resident monitor consuming 5000 of that. When
control was to be assigned to an interactive user, the user’s program and data were loaded into
the remaining 27,000 words of main memory. A program was always loaded to start at the
location of the 5000th word; this simplified both the monitor and memory management. A
system clock generated interrupts at a rate of approximately one every 0.2 seconds. At each
clock interrupt, the OS regained control and could assign the processor to another user. This
technique is known as time slicing. Thus, at regular time intervals, the current user would be
preempted and another user loaded in. To preserve the old user program status for later
resumption, the old user programs and data were written out to disk before the new user
programs and data were read in. Subsequently, the old user program code and data were
restored in main memory when that program was next given a turn.
➔ To minimize disk traffic, user memory was only written out when the incoming program would
overwrite it. This principle is illustrated in the figure below. Assume that there are four interactive
users with the following memory requirements, in words:
➔ JOB1: 15,000
➔ JOB2: 20,000
➔ JOB3: 5000
➔ JOB4: 10,000

➔ Initially, the monitor loads JOB1 and transfers control to it (a). Later, the monitor decides to
transfer control to JOB2. Because JOB2 requires more memory than JOB1, JOB1 must be
written out first, and then JOB2 can be loaded (b). Next, JOB3 is loaded in to be run. However,
because JOB3 is smaller than JOB2, a portion of JOB2 can remain in memory, reducing disk
write time (c). Later, the monitor decides to transfer control back to JOB1. An additional
portion of JOB2 must be written out when JOB1 is loaded back into memory (d). When JOB4 is
loaded, part of JOB1 and the portion of JOB2 remaining in memory are retained (e). At this
point, if either JOB1 or JOB2 is activated, only a partial load will be required. In this example, it
is JOB2 that runs next. This requires that JOB4 and the remaining resident portion of JOB1 be
written out, and that the missing portion of JOB2 be read in (f).

Figure: CTSS Operation
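
The write-out-only-what-would-be-overwritten policy is easy to simulate. The toy model below treats the 27,000-word user area as an ownership array and, for each load in the sequence (a)-(f), counts how many resident words of *other* programs must first be written to disk; reading the incoming program back in is not modelled.

```c
#include <stdio.h>

#define MEM 27000                 /* user area in 36-bit words, per the text */
static char owner[MEM];           /* which job owns each word; 0 = free      */

/* Load a job at the base of the user area, counting only the resident
 * words of other jobs that must be written out before the load. */
static void load(char job, int size) {
    int written_out = 0;
    for (int w = 0; w < size; w++) {
        if (owner[w] != 0 && owner[w] != job)
            written_out++;        /* this word belongs to an old program */
        owner[w] = job;
    }
    printf("load JOB%c (%5d words): %5d words written out first\n",
           job, size, written_out);
}

int main(void) {
    load('1', 15000);   /* (a) into empty memory: nothing written out         */
    load('2', 20000);   /* (b) all 15000 words of JOB1 written out            */
    load('3',  5000);   /* (c) only the first 5000 words of JOB2 go out       */
    load('1', 15000);   /* (d) JOB3 plus another 10000 words of JOB2 go out   */
    load('4', 10000);   /* (e) part of JOB1 goes out; the rest stays resident */
    load('2', 20000);   /* (f) JOB4 and the resident remainder of JOB1 go out */
    return 0;
}
```

The counts printed for steps (b)-(f) match the narrative above: a full write-out only happens when the incoming program is at least as large as everything currently resident.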

➔ The CTSS approach is primitive compared to present-day time sharing, but it was effective. It
was extremely simple, which minimized the size of the monitor. Because a job was always loaded
into the same locations in memory, there was no need for relocation techniques at load time
(discussed subsequently). The technique of only writing out what was necessary minimized disk
activity. Running on the 7094, CTSS supported a maximum of 32 users.
➔ Time sharing and multiprogramming raise a host of new problems for the OS. If multiple jobs
are in memory, then they must be protected from interfering with each other by, for example,
modifying each other’s data. With multiple interactive users, the file system must be protected
so that only authorized users have access to a particular file. The contention for resources,
such as printers and mass storage devices, must be handled.
