CSC 203 Week 4-5
Types of OS
4.1 Types of Operating Systems Based on the Types of Computer they Control and the
Sort of Applications they Support
4.1.1 Real-Time Operating Systems (RTOS)
4.1.2 Single-User, Single-Tasking Operating System
4.1.3 Single-User, Multi-Tasking Operating System
4.1.4 Multi-User Operating Systems
4.2 Types of OS based on the Nature of Interaction that takes place between the
Computer User and His/Her Program during its Processing
4.2.1 Batch Processing OS
4.2.2 Time Sharing OS
4.2.3 Real Time OS
4.4 Conclusion
The OS can be categorized in different ways, based on perspective. Some of the major ways in which operating systems can be classified are explored and introduced below.
4.1 Types of Operating Systems Based on the Types of Computer they Control and
the Sort of Applications they Support
Based on the types of computers they control and the sort of applications they support,
there are generally four types within the broad family of operating systems. The broad
categories are as follows:
between multi-user operating systems and single-user operating systems that support
networking. Windows 2000 and Novell Netware can each support hundreds or
thousands of networked users, but the operating systems themselves are not true multi-
user operating systems. The system administrator is the only user for Windows 2000 or
Netware. The network support, and all the remote user logins the network enables, are, in the overall plan of the operating system, a program being run by the administrative user.
4.2 Types of OS based on the Nature of Interaction that takes place between the
Computer User and His/Her Program during its Processing
Modern computer operating systems may be classified into three groups, which are
distinguished by the nature of interaction that takes place between the computer user and
his or her program during its processing. The three groups are batch, time-shared and real-time operating systems.
a background batch system running in conjunction with one of the other two on the same
computer.
The distributed computing environment and its operating systems, like the networking environment, are designed with more complex functional capabilities. However, a
distributed operating system, in contrast to a network operating system, is one that
appears to its users as a traditional uniprocessor system, even though it is actually
composed of multiple processors. In a true distributed system, users should not be aware
of where their programs are being run or where their files are located; that should all be
handled automatically and efficiently by the operating system.
True distributed operating systems require more than just adding a little code to a
uniprocessor operating system, because distributed and centralized systems differ in
critical ways. Distributed systems, for example, often allow programs to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms in order to optimize the amount of parallelism achieved.
4.4 Conclusion
The earliest operating systems were developed for mainframe computer architectures in
the 1960s, and they were mostly batch-processing operating systems. The enormous investment in software for these systems caused most of the original computer manufacturers to continue to develop hardware and operating systems compatible with those early systems. Those early systems pioneered many of the features of modern operating systems.
Week Five
Types of OS
5.1 Disk Operating System (DOS)
Disk Operating System (specifically) and disk operating system (generically), most often abbreviated as DOS (not to be confused with the DOS family of disk operating systems for the IBM PC compatible platform), refer to operating system software used in most computers that provides the abstraction and management of secondary storage devices and the information on them (e.g., file systems for organizing files of all sorts). Such software is referred to as a disk operating system when the storage devices it manages are made of rotating platters (such as hard disks or floppy disks).
In the early days of microcomputing, memory space was often limited, so the disk
operating system was an extension of the operating system. This component was only
loaded if needed. Otherwise, disk-access would be limited to low-level operations such as
reading and writing disks at the sector-level.
In some cases, the disk operating system component (or even the operating system) was
known as DOS.
Sometimes, a disk operating system can refer to the entire operating system if it is loaded
off a disk and supports the abstraction and management of disk devices. Examples
include DOS/360 and FreeDOS. On the PC compatible platform, an entire family of
operating systems was called DOS.
In the early days of computers, there were no disk drives; delay lines, punched cards,
paper tape, magnetic tape, magnetic drums, were used instead. And in the early days of
microcomputers, paper tape or audio cassette tape (see Kansas City standard) or nothing
were used instead. In the latter case, program and data entry was done at front panel
switches directly into memory or through a computer terminal/keyboard, sometimes
controlled by a ROM BASIC interpreter; when power was turned off after running the
program, the information so entered vanished.
Both hard disks and floppy disk drives require software to manage rapid access to block
storage of sequential and other data. In the days when microcomputers rarely had expensive disk drives of any kind, the software needed to manage such devices (i.e., the 'disks') carried much status. To have one or the other was a mark of distinction and prestige, and so was having the disk sort of an operating system. As prices for both disk hardware and operating system software decreased, many such microcomputer systems appeared.
Mature versions of the Commodore, SWTPC, Atari and Apple home computer systems
all featured a disk operating system (actually called 'DOS' in the case of the Commodore
64 (CBM DOS), Atari 800 (Atari DOS), and Apple II machines (Apple DOS)), as did (at
the other end of the hardware spectrum, and much earlier) IBM's System/360, 370 and
(later) 390 series of mainframes (e.g., DOS/360: Disk Operating System / 360 and
DOS/VSE: Disk Operating System / Virtual Storage Extended). Most home computer DOSes were stored on a floppy disk that had to be booted at start-up, with the notable exception of Commodore, whose DOS resided on ROM chips in the disk drives themselves, available at power-on.
In large machines there were other disk operating systems, such as IBM's VM, DEC's
RSTS / RT-11 / VMS / TOPS-10 / TWENEX, MIT's ITS / CTSS, Control Data's assorted
NOS variants, Harris's Vulcan, Bell Labs' Unix, and so on. In microcomputers, SWTPC's
6800 and 6809 machines used TSC's FLEX disk operating system, Radio Shack's TRS-
80 machines used TRS-DOS, their Color Computer used OS-9, and most of the Intel
8080 based machines from IMSAI, MITS (makers of the legendary Altair 8800),
Cromemco, North Star, etc. used the CP/M-80 disk operating system. See list of operating
systems.
Usually, a disk operating system was loaded from a disk. Only a very few comparable
DOSes were stored elsewhere than floppy disks; among these exceptions were the British
BBC Micro's optional Disc Filing System, DFS, offered as a kit with a disk controller
chip, a ROM chip, and a handful of logic chips, to be installed inside the computer; and
Commodore's CBM DOS, located in a ROM chip in each disk drive.
The history of the Microsoft disk operating system (MS-DOS) is closely linked to the IBM PC and compatibles. Towards the end of the 1970s, a number of PCs appeared on the market, based on 8-bit microprocessor chips such as the Intel 8080. IBM decided to enter this market and wisely opted for a 16-bit microprocessor, the Intel 8088. IBM wanted to introduce the PC to the market as quickly as possible and released it without having enough time to develop its own OS.
At that time, CP/M (by Digital Research) dominated the market. In 1979, a small company, Seattle Computer Products, developed its own OS, 86-DOS, to test some of its Intel-based products (86-DOS was designed to be similar to CP/M). IBM purchased 86-DOS and, in collaboration with Microsoft, developed a commercial product. MS-DOS Version 1.0 was also referred to as PC-DOS. MS-DOS had some similarities to CP/M (such as the one-level file storage system for floppy disks), which was important for market acceptance in those days, although MS-DOS did offer several improvements over CP/M, such as:
A larger disk sector size (512 bytes as opposed to 128 bytes)
A memory-based file allocation table
Both of these improved disk file performance.
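The memory-based file allocation table can be illustrated with a short sketch. This is a deliberately simplified model (the table layout and the end-of-chain value are invented stand-ins, not the real MS-DOS on-disk format): the table is an in-memory array in which entry i names the cluster that follows cluster i in a file, so locating a file's blocks needs no extra disk reads.

```c
#include <assert.h>

/* Simplified end-of-chain marker (real FATs use special values like 0xFFF). */
#define EOC (-1)

/* Count the clusters in the chain starting at `start` by following
 * the in-memory table - the walk DOS performs to find a file's
 * blocks without reading the disk again. */
static int chain_length(const int *fat, int start)
{
    int n = 0;
    for (int c = start; c != EOC; c = fat[c])
        n++;
    return n;
}
```

A real FAT also reserves entry values for free and bad clusters; the single EOC sentinel keeps the sketch minimal.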
Here is a summary of the most significant features of versions of MS-DOS:
5.1.2 Examples of disk operating systems that were extensions to the OS
The DOS operating system for the Apple Computer's Apple II family of computers.
This was the primary operating system for this family from 1979 with the introduction of
the floppy disk drive until 1983 with the introduction of ProDOS; many people continued
using it long after that date. Usually it was called Apple DOS to distinguish it from MS-
DOS.
Commodore DOS, which was used by 8-bit Commodore computers. Unlike most other
DOS systems, it was integrated into the disk drives, not loaded into the computer's own
memory.
Atari DOS, which was used by the Atari 8-bit family of computers. The Atari OS only offered low-level disk access, so an extra layer called DOS was booted off a floppy and offered higher-level functions such as filesystems.
MSX-DOS, for the MSX computer standard. The initial version, released in 1984, was essentially MS-DOS 1.0 ported to the Z80; in 1988 it evolved to version 2, offering facilities such as subdirectories, memory management and environment strings. The MSX-DOS kernel resided in ROM (built into the disk controller), so basic file-access capability was available even without the command interpreter, by using extended BASIC commands.
Disc Filing System (DFS), an optional component for the BBC Micro, offered as a kit with a disk controller chip, a ROM chip, and a handful of logic chips, to be installed inside the computer. See also Advanced Disc Filing System.
AMSDOS, for the Amstrad CPC computers.
GDOS and G+DOS, for the +D and DISCiPLE disk interfaces for the ZX
Spectrum.
5.2 Real-Time Operating System (RTOS)
A real-time operating system (RTOS) is a multitasking operating system intended for
real-time applications. Such applications include embedded systems (programmable
thermostats, household appliance controllers, mobile telephones), industrial robots,
spacecraft, industrial control (see SCADA), and scientific research equipment.
An RTOS facilitates the creation of a real-time system, but does not guarantee the final
result will be real-time; this requires correct development of the software. An RTOS does
not necessarily have high throughput; rather, an RTOS provides facilities which, if used
properly, guarantee deadlines can be met generally (soft real-time) or deterministically
(hard real-time). An RTOS will typically use specialized scheduling algorithms in order
to provide the real-time developer with the tools necessary to produce deterministic
behavior in the final system. An RTOS is valued more for how quickly and/or predictably
it can respond to a particular event than for the given amount of work it can perform over
time. Key factors in an RTOS are therefore minimal interrupt latency and minimal thread-switching latency.
An early example of a large-scale real-time operating system was the so-called "control
program" developed by American Airlines and IBM for the Sabre Airline Reservations
System. Debate exists about what actually constitutes real-time computing.
5.2.2 Scheduling
In typical designs, a task has three states:
1) Running
2) Ready
3) Blocked.
Most tasks are blocked, most of the time. Only one task per CPU is running. In simpler
systems, the ready list is usually short, two or three tasks at most.
The real key is designing the scheduler. Usually the data structure of the ready list in the
scheduler is designed to minimize the worst-case length of time spent in the scheduler's
critical section, during which preemption is inhibited, and, in some cases, all interrupts
are disabled. But, the choice of data structure depends also on the maximum number of
tasks that can be on the ready list (or ready queue).
If there are never more than a few tasks on the ready list, then a simple unsorted
bidirectional linked list of ready tasks is likely optimal. If the ready list usually contains
only a few tasks but occasionally contains more, then the list should be sorted by priority,
so that finding the highest priority task to run does not require iterating through the entire
list. Inserting a task then requires walking the ready list until reaching either the end of
the list, or a task of lower priority than that of the task being inserted. Care must be taken
not to inhibit preemption during this entire search; the otherwise-long critical section
should probably be divided into small pieces, so that if, during the insertion of a low
priority task, an interrupt occurs that makes a high priority task ready, that high priority
task can be inserted and run immediately (before the low priority task is inserted).
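The insertion walk just described can be sketched in C. The names and the task structure are illustrative only (a singly linked list is used for brevity, and lower numbers mean higher priority); a production RTOS would also break the walk into short critical sections, as noted above.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative task control block; lower number = higher priority. */
typedef struct task {
    int priority;
    struct task *next;
} task_t;

/* Insert t into a ready list kept sorted by priority, so the task to
 * dispatch next is always at the head.  Walk until reaching the end
 * of the list or a task of lower priority (larger number) than the
 * one being inserted. */
static void ready_insert(task_t **head, task_t *t)
{
    task_t **pp = head;
    while (*pp != NULL && (*pp)->priority <= t->priority)
        pp = &(*pp)->next;
    t->next = *pp;
    *pp = t;
}
```

With this ordering the scheduler dispatches simply by taking the list head.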
The critical response time, sometimes called the flyback time, is the time it takes to
queue a new ready task and restore the state of the highest priority task. In a well-
designed RTOS, readying a new task will take 3-20 instructions per ready queue entry,
and restoration of the highest-priority ready task will take 5-30 instructions. On a 20 MHz 68000 processor, task-switch times run about 20 microseconds with two tasks ready; 100 MHz ARM CPUs switch in a few microseconds.
In more advanced real-time systems, real-time tasks share computing resources with
many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a
scheduler ready list implemented as a linked list would be inadequate.
greater system call efficiency and also to permit the application to have greater control of
the operating environment without requiring OS intervention.
On single-processor systems, if the application runs in kernel mode and can mask
interrupts, often that is the best (lowest overhead) solution to preventing simultaneous
access to a shared resource. While interrupts are masked, the current task has exclusive
use of the CPU; no other task or interrupt can take control, so the critical section is
effectively protected. When the task exits its critical section, it must unmask interrupts;
pending interrupts, if any, will then execute. Temporarily masking interrupts should only
be done when the longest path through the critical section is shorter than the desired
maximum interrupt latency, or else this method will increase the system's maximum
interrupt latency. Typically this method of protection is used only when the critical
section is just a few source code lines long and contains no loops. This method is ideal
for protecting hardware bitmapped registers when the bits are controlled by different
tasks.
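The pattern looks roughly like this in C. The function and register names are invented, and the CPU's actual enable/disable instructions (e.g. CPSID/CPSIE on ARM) are modelled with a plain flag so the sketch is runnable anywhere:

```c
#include <assert.h>

/* Stand-ins for the CPU's interrupt disable/enable instructions,
 * modelled with a flag so the pattern can run as ordinary code. */
static int interrupts_enabled = 1;

static int irq_save(void)
{
    int were_enabled = interrupts_enabled;
    interrupts_enabled = 0;            /* mask interrupts */
    return were_enabled;
}

static void irq_restore(int were_enabled)
{
    interrupts_enabled = were_enabled; /* pending interrupts would now run */
}

static volatile unsigned shared_reg;   /* e.g. a bitmapped hardware register */

/* Read-modify-write of a register shared between tasks: the critical
 * section is a few instructions with no loops, so masking interrupts
 * around it adds little to worst-case interrupt latency. */
static void set_bit_exclusive(unsigned bit)
{
    int flags = irq_save();
    shared_reg |= (1u << bit);
    irq_restore(flags);
}
```

Saving and restoring the previous mask state (rather than unconditionally unmasking) lets such sections nest safely.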
When the critical section is longer than a few source code lines or involves lengthy
looping, an embedded/real-time programmer must resort to using mechanisms identical
or similar to those available on general-purpose operating systems, such as semaphores
and OS-supervised interprocess messaging. Such mechanisms involve system calls, and
usually invoke the OS's dispatcher code on exit, so they can take many hundreds of CPU
instructions to execute, while masking interrupts may take as few as three instructions on
some processors. But for longer critical sections, there may be no choice; interrupts
cannot be masked for long periods without increasing the system's interrupt latency.
A binary semaphore is either locked or unlocked. When it is locked, a queue of tasks can
wait for the semaphore. Typically a task can set a timeout on its wait for a semaphore.
Problems with semaphore based designs are well known: priority inversion and
deadlocks.
In priority inversion, a high priority task waits because a low priority task has a
semaphore. A typical solution is to have the task that has a semaphore run at (inherit) the
priority of the highest waiting task. But this simplistic approach fails when there are
multiple levels of waiting (A waits for a binary semaphore locked by B, which waits for a binary semaphore locked by C). Handling multiple levels of inheritance without introducing instability in cycles is not straightforward.
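A single level of this inheritance scheme can be sketched as follows; the structures and names are invented for illustration, and lower numbers mean higher priority:

```c
#include <assert.h>

#define MAX_WAITERS 4

/* Illustrative task: base_prio is assigned at design time;
 * eff_prio is what the scheduler actually uses. */
typedef struct {
    int base_prio;
    int eff_prio;
} rt_task_t;

/* A binary semaphore that records its holder and its waiters. */
typedef struct {
    rt_task_t *holder;
    rt_task_t *waiters[MAX_WAITERS];
    int nwaiters;
} pi_sem_t;

/* Recompute the holder's effective priority as the highest
 * (numerically smallest) priority among itself and all waiters.
 * Called whenever a task joins or leaves the wait queue. */
static void inherit_priority(pi_sem_t *s)
{
    int p = s->holder->base_prio;
    for (int i = 0; i < s->nwaiters; i++)
        if (s->waiters[i]->eff_prio < p)
            p = s->waiters[i]->eff_prio;
    s->holder->eff_prio = p;
}
```

The hard part the text alludes to is running this computation transitively (A's priority must propagate through B to reach C) without instability when the wait-for relationships form cycles.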
In a deadlock, two or more tasks lock a number of binary semaphores and then wait
forever (no timeout) for other binary semaphores, creating a cyclic dependency graph.
The simplest deadlock scenario occurs when two tasks lock two semaphores in lockstep,
but in the opposite order. Deadlock is usually prevented by careful design, or by having
floored semaphores (which pass control of a semaphore to the higher priority task on
defined conditions).
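The "careful design" usually means a global lock ordering: every task acquires any pair of semaphores in the same fixed order, so the opposite-order scenario above cannot arise. A minimal runnable model follows (the semaphore type and its ids are invented; a real RTOS semaphore would block the task rather than return failure):

```c
#include <assert.h>

/* Toy non-blocking binary semaphore: 1 = free, 0 = held. */
typedef struct {
    int id;      /* position in the global lock order */
    int free;
} bin_sem_t;

static int sem_acquire(bin_sem_t *s)
{
    if (!s->free)
        return 0;        /* a real RTOS would block the task here */
    s->free = 0;
    return 1;
}

static void sem_release(bin_sem_t *s) { s->free = 1; }

/* Acquire two semaphores deadlock-free: whatever order the caller
 * names them in, always take the lower id first, so no two tasks can
 * each hold one semaphore while waiting for the other. */
static int sem_acquire_pair(bin_sem_t *a, bin_sem_t *b)
{
    if (a->id > b->id) { bin_sem_t *t = a; a = b; b = t; }
    if (!sem_acquire(a))
        return 0;
    if (!sem_acquire(b)) {
        sem_release(a);  /* back out rather than hold-and-wait */
        return 0;
    }
    return 1;
}
```

Backing out on failure also removes the hold-and-wait condition, a second classic deadlock preventive.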
The other approach to resource sharing is for tasks to send messages. In this paradigm,
the resource is managed directly by only one task; when another task wants to interrogate
or manipulate the resource, it sends a message to the managing task. This paradigm
suffers from similar problems as binary semaphores: Priority inversion occurs when a
task is working on a low-priority message, and ignores a higher-priority message (or a
message originating indirectly from a high priority task) in its in-box. Protocol deadlocks
occur when two or more tasks wait for each other to send response messages. Although
their real-time behavior is less crisp than semaphore systems, simple message based
systems usually do not have protocol deadlock hazards, and are generally better behaved
than semaphore systems.
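A toy version of the managing-task pattern, with the in-box form of priority inversion addressed by serving the highest-priority pending message first. All structures and sizes here are invented for illustration:

```c
#include <assert.h>

#define QMAX 8

/* A request message; lower number = higher priority. */
typedef struct {
    int priority;
    int payload;   /* e.g. which operation to perform on the resource */
} msg_t;

/* Mailbox owned by the single task that manages the resource. */
typedef struct {
    msg_t slots[QMAX];
    int count;
} mailbox_t;

static int mbox_post(mailbox_t *m, msg_t msg)
{
    if (m->count == QMAX)
        return 0;                      /* mailbox full: sender must wait */
    m->slots[m->count++] = msg;
    return 1;
}

/* The managing task's receive: scan for the highest-priority pending
 * message so low-priority requests cannot starve urgent ones. */
static int mbox_take(mailbox_t *m, msg_t *out)
{
    if (m->count == 0)
        return 0;
    int best = 0;
    for (int i = 1; i < m->count; i++)
        if (m->slots[i].priority < m->slots[best].priority)
            best = i;
    *out = m->slots[best];
    m->slots[best] = m->slots[--m->count];  /* remove by swapping in the last entry */
    return 1;
}
```

In a real system posting and taking would themselves be protected by one of the mechanisms above; the linear scan is fine for a mailbox this small.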
varies based on the type of object, whose implementation details are hidden from the
caller, and might even use inheritance in their underlying code.
5.3.1 Examples
5.3.1.1 NeXTSTEP
During the late 1980s, Steve Jobs formed the computer company NeXT. One of NeXT's
first tasks was to design an object-oriented operating system, NEXTSTEP. They did this
by adding an object-oriented framework on top of Mach and BSD using the Objective-C
language as a basis.
NEXTSTEP's basis, Mach and BSD, are not object-oriented. Instead, the object-oriented
portions of the system live in userland. Thus, NEXTSTEP cannot be considered an
object-oriented operating system in the strictest terms.
The NeXT hardware and operating system were not successful, and, in search of a new
strategy, the company re-branded its object-oriented technology as a cross-platform
development platform.
Though NeXT's efforts were innovative and novel, they gained only a relatively small
acceptance in the marketplace. NeXT was later acquired by Apple Computer and its
operating system became the basis for Mac OS X, most visibly in the form of the "Cocoa" frameworks.
5.3.1.2 Choices
Choices is an object-oriented operating system that was developed at the University of
Illinois at Urbana-Champaign. It is written in C++ and uses objects to represent core
kernel components like the CPU, Process and so on. Inheritance is used to separate the
kernel into portable machine independent classes and small non-portable dependent
classes. Choices has been ported to and runs on SPARC, x86 and ARM.
5.3.1.3 Athene
Athene is an object based operating system first released in 2000 by Rocklyte Systems.
The user environment is constructed entirely from objects that are linked together at
runtime. Applications for Athene can also be created using this methodology and are commonly scripted using the object scripting language 'DML' (Dynamic Markup
Language). Objects can be shared between processes by creating them in shared memory
and locking them as required for access. Athene's object framework is multi-platform,
allowing it to be used in Windows and Linux environments for the development of object
oriented programs.
5.3.1.4 BeOS
One attempt at creating a truly object-oriented operating system was the BeOS of the mid-1990s, which used objects and the C++ language for the application programming interface (API). But the kernel itself was written in C with C++ wrappers in user space. The system did not become mainstream, though even today it has its fans and benefits from ongoing development.
5.3.1.5 Syllable
Syllable makes heavy use of C++ and for that reason is often compared to BeOS.
5.3.1.6 TAJ
TAJ is India's first object-oriented operating system. It is written in C++ with some parts in assembly. The source code of TAJ OS is highly modularized and divided into different modules, each implemented as a class. Many object-oriented features, such as inheritance, polymorphism and virtual functions, are used extensively in developing the TAJ Operating System. TAJ OS is a multitasking, multithreading and multiuser operating system.
The kernel of the TAJ Operating System is monolithic, i.e., all the device drivers and other important OS modules are embedded in the kernel itself. This increases the speed of execution by reducing context-switching time (the time taken to execute a system call).
TAJ OS is developed by Viral Patel. You can download the image file for TAJ OS at
http://www.viralpatel.net or http://www.geocities.com/taj_os
Examples of attempts at such an operating system include JNode and JOS.
5.3.2 Time-sharing
Time-sharing refers to sharing a computing resource among many users by multitasking.
Because early mainframes and minicomputers were extremely expensive, it was rarely
possible to allow a single user exclusive access to the machine for interactive use. But
because computers in interactive use often spend much of their time idly waiting for user
input, it was suggested that multiple users could share a machine by using one user's idle
time to service other users. Similarly, small slices of time spent waiting for disk, tape, or
network input could be granted to other users.
Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (central computer systems), which in many implementations sequentially polled the terminals to see if there was any additional data or action requested by the computer user. Later interconnection technology was interrupt-driven, and some of it used parallel data-transfer technologies such as the IEEE 488 standard. Generally, computer terminals were used on college campuses in much the same places as desktop computers or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems.
With the rise of microcomputing in the early 1980s, time-sharing faded into the background because individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to his or her needs, even when idle.
The Internet has brought the general concept of time-sharing back into popularity.
Expensive corporate server farms costing millions can host thousands of customers all
sharing the same common resources. As with the early serial terminals, websites operate
primarily in bursts of activity followed by periods of idle time. The bursty nature permits
the service to be used by many website customers at once, and none of them notice any
delays in communications until the servers start to get very busy.
360. Companies providing this service included Tymshare (founded in 1966), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman. By 1968, there were 32 such service bureaus serving the NIH alone.
5.3.2.2 History
The concept was first described publicly in early 1957 by Bob Bemer as part of an article in Automatic Control Magazine. The first project to implement a time-sharing system was initiated by John McCarthy in late 1957, on a modified IBM 704, and later an additionally modified IBM 7090 computer. Although he left to work on Project MAC and other projects, one of the results of the project, known as the Compatible Time-Sharing System, or CTSS, was demonstrated in November 1961. CTSS has a good claim to be the first time-sharing system and remained in use until 1973. The first commercially successful time-sharing system was the Dartmouth Time-Sharing System (DTSS), which was first implemented at Dartmouth College in 1964 and subsequently formed the basis of General Electric's computer bureau services. DTSS influenced the design of other early time-sharing systems developed by Hewlett-Packard, Control Data Corporation, UNIVAC and others (in addition to introducing the BASIC programming language). Other historical time-sharing systems, some of them still in widespread use, include:
IBM CMS (part of VM/CMS)
IBM TSS/360 (never finished; see OS/360)
IBM Time Sharing Option (TSO)
KRONOS (and later NOS) on the CDC 6000 series
Michigan Terminal System
Multics
MUSIC/SP
ORVYL
RSTS/E
RSX-11
TENEX
TOPS-10
TOPS-20
7. A disk operating system can be the operating system itself, or not. Discuss.
8. Distinguish between a DOS that is the OS itself and one that is not.
9. Give two examples each of DOSes that are the OS itself and DOSes that are extensions of the OS.
10. Memory allocation is even more critical in an RTOS than in other operating systems. Discuss.
11. Name some of the environments in which an RTOS can be found.
12. List and explain the two basic design philosophies for the RTOS.
13. Describe how inter-process communication and resource sharing are implemented in an RTOS.