S6 ECE EC 266

REAL TIME OPERATING SYSTEM (EC266)


INTRODUCTION:

Operating System (OS) is system software which acts as an interface
between a user of the computer and the computer hardware. The main
purpose of an Operating System is to provide an environment in which we
can execute programs.

The main goals of the Operating System are:

(i) To make the computer system convenient to use, and

(ii) to use the computer hardware in an efficient way.

An Operating System may be viewed as a collection of software
consisting of procedures for operating the computer and providing an
environment for execution of programs. It is an interface between the
user and the computer. So an Operating System makes everything in the
computer work together smoothly and efficiently.

BCCML

Basically, an Operating System has three main responsibilities:

(a) Perform basic tasks such as recognizing input from the keyboard,
sending output to the display screen, keeping track of files and
directories on the disk, and controlling peripheral devices such as
disk drives and printers.

(b) Ensure that different programs and users running at the same time do
not interfere with each other.

(c) Provide a software platform on top of which other programs can run.

The Operating System is also responsible for security and ensuring that
unauthorized users do not access the system. Figure 1 illustrates the
relationship between application software and system software. The first
two responsibilities address the need for managing the computer hardware
and the application programs that use the hardware. The third
responsibility focuses on providing an interface between application
software and hardware so that application software can be efficiently
developed.
Since the Operating System is already responsible for managing the
hardware, it should provide a programming interface for application
developers. As a user, we normally interact with the Operating System
through a set of commands. The commands are accepted and executed by a
part of the Operating System called the command processor or command
line interpreter.


In order to understand operating systems we must understand the
computer hardware and the development of operating systems from the
beginning. Hardware means the physical machine and its electronic
components including memory chips, input/output devices, storage
devices and the central processing unit. Software consists of the programs written
for these computer systems. Main memory is where the data and
instructions are stored to be processed. Input/output devices are the
peripherals attached to the system, such as keyboard, printers, disk drives,
CD drives, magnetic tape drives, modem, monitor, etc. The central
processing unit is the brain of the computer system; it has circuitry to
control the interpretation and execution of instructions. It controls the
operation of entire computer system. All of the storage references, data
manipulations and I/O operations are performed by the CPU. The entire
computer systems can be divided into four parts or components

(1) The hardware

(2) The Operating System

(3) The application programs and system programs

(4) The users.

The hardware provides the basic computing power. The system and
application programs define the way in which these resources are used
to solve the computing problems
of the users. There may be many different users trying to solve different
problems. The Operating System controls and coordinates the use of the
hardware among the various users and the application programs.


We can view an Operating System as a resource allocator. A computer
system has many resources which are required to solve a computing
problem. These resources are CPU time, memory space, file storage
space, input/output devices and so on. The Operating System acts as a
manager of all of these resources and allocates them to the specific
programs and users as needed by their tasks. Since there can be many
conflicting requests for the resources, the Operating System must decide
which requests are to be allocated resources to operate the computer
system fairly and efficiently. An Operating System can also be viewed as a
control program, used to control the various I/O devices and the users
programs. A control program controls the execution of the user programs
to prevent errors and improper use of the computer resources. It is
especially concerned with the operation and control of I/O devices. As
stated above the fundamental goal of computer system is to execute user
programs and solve user problems. For this goal computer hardware is
constructed. But the bare hardware is not easy to use and for this purpose
application/system programs are developed. These various programs
require some common operations, such as controlling and using
input/output devices and the use of CPU time for execution.

The common functions of controlling and allocating resources between
different users and application programs are brought together into one
piece of software called the operating system. It is easier to define
operating systems by what they do rather than by what they are. The
primary goal of an operating system is convenience for the user:
operating systems make it easier to compute. A secondary goal is
efficient operation of the computer system. Large computer systems are
very expensive, and so it is desirable to make them as efficient as
possible. Operating systems thus make optimal use of computer
resources. In order to understand what operating systems are and what
they do, we have to study how they developed. Operating systems and
computer architecture have a great influence on each other; operating
systems were developed to facilitate the use of the hardware.

First, professional computer operators were used to operate the
computer; the programmers no longer operated the machine. As soon as
one job was finished, an operator could start the next one, and if
errors came up in a program, the operator took a dump of memory and
registers, from which the programmer had to debug the program. The
second major solution to reduce the setup time was to batch together jobs
of similar needs and run through the computer as a group. But there were
still problems. For example, when a job stopped, the operator would
have to notice it by observing the console, determine why the program
stopped, take a dump if necessary, and start the next job. To
overcome this idle time, automatic job sequencing was introduced. But
even with batching technique, the faster computers allowed expensive time
lags between the CPU and the I/O devices. Eventually several factors
helped improve the performance of CPU. First, the speed of I/O devices
became faster. Second, to use more of the available storage area in these
devices, records were blocked before they were retrieved. Third, to reduce
the gap in speed between the I/O devices and the CPU, an interface called
the control unit was placed between them to perform the function of
buffering. A buffer is an interim storage area that works like this: as the
slow input device reads a record, the control unit places each character of
the record into the buffer. When the buffer is full, the entire record is
transmitted to the CPU. The process is reversed for output devices.

Fourth, in addition to buffering, an early form of spooling was developed by
moving the operations of card reading, printing, etc. off-line. SPOOL
is an acronym for simultaneous peripheral operations on-line.
For example, incoming jobs would be transferred from the card decks to
tape/disks off-line. Then they would be read into the CPU from the
tape/disks at a speed much faster than the card reader.
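The buffering scheme described above can be sketched in a few lines of
Python. This is an illustrative model only, not real device code; the
record size and the input string are assumptions made for the sketch.

```python
# Minimal sketch: a control unit accumulating characters from a slow
# input device and handing the CPU one full record at a time.

RECORD_SIZE = 8  # assumed record length for this sketch


def buffered_read(char_stream, record_size=RECORD_SIZE):
    """Accumulate characters into a buffer; yield complete records."""
    buffer = []
    for ch in char_stream:
        buffer.append(ch)            # control unit stores each character
        if len(buffer) == record_size:
            yield "".join(buffer)    # full record transmitted to the CPU
            buffer = []
    if buffer:                       # partial final record, if any
        yield "".join(buffer)


records = list(buffered_read("HELLOWORLDDATA1!"))
```

The CPU thus sees whole records rather than one character per device
transfer, which is the point of placing the buffer in the control unit.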



Moreover, the range and extent of services provided by an Operating
System depend on a number of factors. Among other things, the needs and
characteristics of the target environment that the Operating System is
intended to support largely determine the user-visible functions of an
operating system. For example, an Operating System intended for program
development in an interactive environment may have a quite different
set of system calls and commands than an Operating System designed for
run-time support of a car engine.


Operating System as a Resource Manager

The Operating System is a manager of system resources. A computer
system has many resources, as stated above. Since there can be many
conflicting requests for the resources, the Operating System must
decide which requests are to be allocated resources to operate the
computer system fairly and efficiently.
The Operating System as a resource manager can be classified into the
following three popular views: the primary view, the hierarchical view,
and the extended machine view.
The primary view is that the Operating System is a collection of programs
designed to manage the system’s resources, namely, memory, processors,
peripheral devices, and information. It is the function of Operating System
to see that they are used efficiently and to resolve conflicts arising from
competition among the various users. The Operating System must keep
track of status of each resource; decide which process is to get the resource,
allocate it, and eventually reclaim it.
An operating system performs the following functions:

• Memory management
• Task or process management
• Storage management
• Device or input/output management
• Kernel or scheduling

Memory Management Functions

To execute a program, it must be mapped to absolute addresses and
loaded into memory. As the program executes, it accesses instructions
and data from memory by generating these absolute addresses. In a
multiprogramming environment, multiple programs are maintained in the
memory simultaneously. The Operating System is responsible for the
following memory management functions:
- Keep track of which segment of memory is in use and by whom.
- Decide which processes are to be loaded into memory when space
becomes available. In a multiprogramming environment it decides which
process gets the available memory, when it gets it, where it gets it,
and how much.
- Allocate or de-allocate memory when a process requests it, and
reclaim the memory when the process no longer requires it or has
terminated.
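As a rough illustration of the bookkeeping above, here is a toy memory
table in Python. The class name, the unit-based sizing, and the API are
invented for this sketch; a real allocator is far more involved.

```python
# Illustrative sketch (not a real allocator): an OS-style table that
# keeps track of which memory segments are in use and by which process.

class MemoryManager:
    def __init__(self, total_units):
        self.segments = {}        # start address -> (size, owner)
        self.free = total_units   # units of memory still available

    def allocate(self, start, size, owner):
        if size > self.free:
            return False          # no space: the request is refused
        self.segments[start] = (size, owner)
        self.free -= size
        return True

    def release(self, start):
        size, _ = self.segments.pop(start)  # reclaim on termination
        self.free += size


mm = MemoryManager(total_units=100)
mm.allocate(0, 40, owner="P1")
mm.allocate(40, 50, owner="P2")
ok = mm.allocate(90, 20, owner="P3")  # only 10 units left: refused
mm.release(0)                          # P1 terminates; memory reclaimed
```

The three list items above map onto the three methods: the `segments`
dictionary tracks usage, `allocate` decides and grants, and `release`
reclaims.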

Processor/Process Management Functions


A process is an instance of a program in execution. While a program is
just a passive entity, a process is an active entity performing the
intended functions of its related program. To accomplish its task, a
process needs certain resources like CPU time, memory, files and I/O
devices. In a multiprogramming environment, there will be a number of
simultaneous

processes existing in the system. The Operating System is responsible
for the following processor/process management functions:
- Provide mechanisms for process synchronization and for sharing of
resources amongst concurrent processes.
- Keep track of the processor and the status of processes. The program
that does this has been called the traffic controller.
- Decide which process will have a chance to use the processor; the
job scheduler chooses from all the submitted jobs and decides which
one will be allowed into the system. Under multiprogramming, decide
which process gets the processor, when, and for how much time. The
module that does this is called the process scheduler.
- Allocate the processor to a process by setting up the necessary
hardware registers. This module is widely known as the dispatcher.
- Provide mechanisms for deadlock handling.
- Reclaim the processor when a process ceases to use it, or exceeds
the allowed amount of usage.
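One simple policy a process scheduler and dispatcher might follow is
round-robin time slicing. The Python sketch below shows the idea only;
the process names, burst times, and quantum are invented for
illustration.

```python
# Hedged sketch: a toy round-robin scheduler/dispatcher operating on a
# ready queue of (process name, remaining CPU burst) pairs.

from collections import deque


def round_robin(bursts, quantum):
    """Return the order in which processes finish, given CPU bursts."""
    ready = deque(bursts.items())        # ready queue of processes
    finished = []
    while ready:
        name, remaining = ready.popleft()    # dispatcher picks the next one
        remaining -= quantum                 # process runs one time slice
        if remaining > 0:
            ready.append((name, remaining))  # preempted: back of the queue
        else:
            finished.append(name)            # process terminates
    return finished


order = round_robin({"A": 3, "B": 1, "C": 2}, quantum=1)
```

Short jobs (B, then C) finish before the long one (A), which is the
faster-turnaround effect discussed under multiprogramming later in
these notes.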

I/O Device Management Functions

An Operating System will have device drivers to facilitate I/O
functions involving I/O devices. These device drivers are software
routines that control the respective I/O devices through their
controllers. The Operating System is responsible for the following I/O
device management functions:
- Keep track of the I/O devices, I/O channels, etc. This module is
typically called the I/O traffic controller.
- Decide on an efficient way to allocate the I/O resource. If it is to
be shared, then decide who gets it, how much of it is to be allocated,
and for how long. This is called I/O scheduling.
- Allocate the I/O device and initiate the I/O operation.
- Reclaim the device when its use is finished. In most cases I/O
terminates automatically.

Information Management Functions

- Keep track of the information: its location, usage, status, etc. The
module called the file system provides these facilities.
- Decide who gets hold of the information, enforce protection
mechanisms, and provide information access mechanisms.
- Allocate the information to a requesting process, e.g., open a file.
- De-allocate the resource, e.g., close a file.


Network Management Functions

An Operating System is responsible for computer system networking in a
distributed environment. A distributed system is a collection of
processors which do not share memory, a clock, or peripheral devices.
Instead, each processor has its own clock and RAM, and they communicate
through a network. Access to shared resources permits increased speed,
increased functionality and enhanced reliability. Various networking
protocols are TCP/IP (Transmission Control Protocol/Internet Protocol),
UDP (User Datagram Protocol), FTP (File Transfer Protocol), HTTP (Hyper
Text Transfer Protocol), NFS (Network File System), etc.

EVOLUTION OF OPERATING SYSTEMS

Starting from the bare machine approach to its present forms, the
Operating System has evolved through a number of stages of development,
such as serial processing, batch processing and multiprocessing, as
described below:
Serial Processing
In theory, every computer system may be programmed in its
machine language, with no systems software support. Programming of the
bare machine was customary for early computer systems. A slightly more
advanced version of this mode of operation is common for the simple
evaluation boards that are sometimes used in introductory microprocessor
design and interfacing courses. Programs for the bare machine can be
developed by manually translating sequences of instructions into binary or
some other code whose base is usually an integer power of 2. Instructions
and data are then entered into the computer by means of console switches,
or perhaps through a hexadecimal keyboard. Loading the program counter
with the address of the first instruction starts programs. Results of
execution are obtained by examining the contents of the relevant registers
and memory locations. The executing program, if any, must control
input/output devices directly, say, by reading and writing the related I/O
ports. Evidently, programming of the bare machine results in low
productivity of both users and hardware. The long and tedious process of
program and data entry practically precludes execution of all but very
short programs in such an environment.
The next significant evolutionary step in computer-system usage
came about with the advent of input/output devices, such as punched cards
and paper tape, and of language translators. Programs, now coded in a
programming language are translated into executable form by a computer
program, such as a compiler or an interpreter. Another program, called the
loader, automates the process of loading executable programs into
memory. The user places a program and its input data on an input device,
and the loader transfers information from that input device into memory.
After transferring control to the loader program by manual or automatic
means, execution of the program commences. The executing program reads
its input from the designated input device and may produce some output
on an output device. Once in memory, the program may be rerun with a
different set of input data.
The mechanics of development and preparation of programs in such
environments are quite slow and cumbersome due to serial execution of
programs and to numerous manual operations involved in the process. In a
typical sequence, the editor program is loaded to prepare the source code
of the user program. The next step is to load and execute the language
translator and to provide it with the source code of the user program.
When serial input devices, such as a card reader, are used, multiple-pass
language translators may require the source code to be repositioned for
reading during each pass. If syntax errors are detected, the whole process
must be repeated from the beginning. Eventually, the object code produced
from the syntactically correct source code is loaded and executed. If run-
time errors are detected, the state of the machine can be examined and
modified by means of console switches, or with the assistance of a program
called a debugger.

Batch Processing
With the invention of the hard disk drive, things got much better.
Batch processing relied on punched cards or tape for input: the cards
were assembled into a deck, and the entire deck was run through a card
reader as a batch. Present batch systems are not limited to cards or
tapes, but the jobs are still processed serially, without the
interaction of the user. The efficiency of these systems was measured
by the number of jobs completed in a given amount of time, called the
throughput. Today's operating systems are not limited to batch
programs. Batch processing was the next logical step in the evolution
of operating systems: the aim was to automate the sequencing of
operations involved in program execution and to improve resource
utilization and programmer productivity by reducing or eliminating
component idle times caused by comparatively lengthy manual operations.
Furthermore, even when automated, housekeeping operations such as
mounting of tapes and filling out log forms take a long time relative to
processors and memory speeds. Since there is not much that can be done to
reduce these operations, system performance may be increased by dividing
this overhead among a number of programs. More specifically, if several
programs are batched together on a single input tape for which
housekeeping operations are performed only once, the overhead per
program is reduced accordingly. A related concept, sometimes called
phasing, is to prearrange submitted jobs so that similar ones are
placed in the same batch. For example, by batching several Fortran
compilation jobs together, the Fortran compiler can be loaded only once
to process all of them in a row.
To realize the resource-utilization potential of batch processing, a
mounted batch of jobs must be executed automatically, without slow
human intervention. Generally, Operating System commands are
statements written in Job Control Language (JCL). These commands are
embedded in the job stream, together with user programs and data. A
memory-resident portion of the batch operating system- sometimes called
the batch monitor- reads, interprets, and executes these commands.
Moreover, once the sequencing of program execution was mostly automated
by batch operating systems, the speed discrepancy between fast
processors and comparatively slow I/O devices, such as card readers and
printers, emerged as a major performance bottleneck.
batch processing were mostly along the lines of increasing the throughput
and resource utilization by overlapping input and output operations. These
developments have coincided with the introduction of direct memory
access (DMA) channels, peripheral controllers, and later dedicated
input/output processors. As a result, computers for offline processing
were often replaced by sophisticated input/output programs executed on
the same computer as the batch monitor.
Many single-user operating systems for personal computers basically
provide for serial processing. User programs are commonly loaded into
memory and executed in response to user commands typed on the console.
A file management system is often
provided for program and data storage. A form of batch processing is made
possible by means of files consisting of commands to the Operating System
that are executed in sequence. Command files are primarily used to
automate complicated customization and operational sequences of
frequent operations.
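A command file of this kind can be modelled as a toy batch monitor. In
the Python sketch below the control cards ($JOB, $RUN, $END) are
invented stand-ins for illustration, not real JCL, and the two
"programs" are trivial placeholders.

```python
# Toy sketch of a batch monitor: it reads job-control commands embedded
# in the job stream and executes them in sequence, with no user
# interaction between jobs.

def batch_monitor(job_stream, programs):
    """Interpret a stream of control cards; return a log of actions."""
    log = []
    current = None
    for card in job_stream:
        if card.startswith("$JOB"):
            current = card.split()[1]        # begin a new job
            log.append(f"start {current}")
        elif card == "$RUN":
            log.append(f"ran {current}: {programs[current]()}")
        elif card == "$END":
            log.append(f"end {current}")     # job done; next one follows
            current = None
    return log


programs = {"PAYROLL": lambda: "ok", "STATS": lambda: "done"}
stream = ["$JOB PAYROLL", "$RUN", "$END",
          "$JOB STATS", "$RUN", "$END"]
log = batch_monitor(stream, programs)
```

The monitor is memory-resident in a real batch system; here the point
is only the automatic sequencing of jobs from the command stream.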

Multiprogramming
In multiprogramming, many processes are simultaneously resident in
memory, and execution switches between processes. The advantages of
multiprogramming are the same as the commonsense reasons that in life
you do not always wait until one thing has finished before starting the next
thing. Specifically:
- More efficient use of computer time. If the computer is running a
single process, and the process does a lot of I/O, then the CPU is
idle most of the time. This is a gain as long as some of the jobs are
I/O bound, i.e. they spend most of their time waiting for I/O.
- Faster turnaround if there are jobs of different lengths.
Consideration (1) applies only if some jobs are I/O bound, while
consideration (2) applies even if all jobs are CPU bound.
- For instance, suppose that first job A, which takes an hour, starts
to run, and then immediately afterward job B, which takes 1 minute, is
submitted. If the computer has to wait until it finishes A before it
starts B, then user A must wait an hour and user B must wait 61
minutes, so the average waiting time is 60.5 minutes. If the computer
can switch back and forth between A and B until B is complete, then B
will complete after 2 minutes and A will complete after 61 minutes, so
the average waiting time will be 31.5 minutes. If all jobs are CPU
bound and the same length, then there is no advantage in
multiprogramming; you do better to run a batch system. The
multiprogramming environment is supposed to be invisible to the user
processes; that is, the actions carried out by each process should
proceed in the same way as if the process had the entire machine to
itself.
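The waiting-time arithmetic in the example above is easy to check:

```python
# A quick check of the A/B waiting-time example (times in minutes).

# Serial: B cannot start until A (60 min) finishes.
wait_A_serial = 60          # A runs immediately, done after an hour
wait_B_serial = 60 + 1      # B waits for A, then runs for 1 minute
avg_serial = (wait_A_serial + wait_B_serial) / 2

# Multiprogrammed: the CPU alternates between A and B, so B (1 min of
# work) finishes after about 2 minutes, and A finishes after 61.
wait_A_multi = 61
wait_B_multi = 2
avg_multi = (wait_A_multi + wait_B_multi) / 2
```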
This raises the following issues:

- Process model: The state of an inactive process has to be encoded
and saved in a process table so that the process can be resumed when
made active.
- Context switching: How does one carry out the change from one
process to another?
- Memory translation: Each process treats the computer's memory as
its own private playground. How can we give each process the illusion
that it can reference addresses in memory as it wants, but not have
processes step on each other's toes? The trick is to distinguish
between virtual addresses -- the addresses used in the process code --
and physical addresses -- the actual addresses in memory. Each process
is actually given a fraction of physical memory. The memory management
unit translates the virtual address in the code to a physical address
within the process's space. This translation is invisible to the
process.
- Memory management: How does the Operating System assign
sections of physical memory to each process?
- Scheduling: How does the Operating System choose which process to
run when?
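One simple form of the memory translation listed above is base-and-limit
relocation. The Python sketch below shows the idea only; the base and
limit values are invented, and a real MMU does this in hardware.

```python
# Hedged sketch: base-and-limit address translation, one simple way a
# memory management unit can map virtual to physical addresses.

def translate(virtual_addr, base, limit):
    """Map a virtual address to a physical one, or fault if out of range."""
    if virtual_addr >= limit:
        raise MemoryError("fault: address outside the process's space")
    return base + virtual_addr   # relocation is invisible to the process


# Suppose process P1 occupies physical memory [4096, 8192); its code
# uses virtual addresses starting at 0.
phys = translate(100, base=4096, limit=4096)
```

A reference past the limit raises a fault instead of touching another
process's memory, which is exactly the "stepping on each other's toes"
protection described above.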
Let us briefly review some aspects of program behaviour in order to
motivate the basic idea of multiprogramming. This is illustrated in
Figure 6, where program activity is indicated by dashed boxes.
Idealized serial execution of two programs, with no inter-program idle
times, is depicted in Figure 6(a). For comparison purposes, both
programs are assumed to have identical behaviour with
regard to processor and I/O times and their relative distributions. As
Figure 6(a) suggests, serial execution of programs causes either the
processor or the I/O devices to be idle at some time even if the input
job stream is never empty. One way to attack this problem is to assign
some other work to the processor and I/O devices when they would
otherwise be idling.

Figure 6(b) illustrates a possible scenario of concurrent execution of
the two programs introduced in Figure 6(a). It starts with the
processor executing the first computational sequence of Program 1.
Instead of idling during the subsequent I/O sequence of Program 1, the
processor is assigned to the first computational sequence of Program 2,
which is assumed to be in memory and awaiting execution. When this work
is done, the processor is assigned to Program 1 again, then to Program
2, and so forth.

As Figure 6 suggests, significant performance gains may be achieved by
interleaved execution of programs, or multiprogramming, as this mode of
operation is usually called. With a single processor, parallel execution of
programs is not possible, and at most one program can be in control of the
processor at any time. The example presented in Figure 6(b) achieves
100% processor utilization with only two active programs. The number of
programs actively competing for resources of a multi-programmed
computer system is called the degree of multiprogramming. In principle,
higher degrees of multiprogramming should result in higher resource
utilization. Time-sharing systems found in many university computer
centers provide a typical example of a multiprogramming system.
Interaction of Operating System & Hardware Architecture

Users interact indirectly through a collection of system programs that
make up the operating system interface.
The interface could be:
a) A GUI (Graphical User Interface), with icons and windows, etc.
b) A command-line interface for running processes and scripts,
browsing files in directories, etc.
c) A non-interactive batch system that takes a collection of jobs,
which it proceeds to churn through (e.g. payroll calculations, market
predictions, etc.).
Processes interact by making system calls into the operating system
proper (i.e. the kernel), although such calls are not direct calls to
kernel functions.

System Calls:
- Programming interface to the services provided by the OS (e.g. open
file, read file, etc.)
- Typically written in a high-level language (C or C++)
- Mostly accessed by programs via a high-level Application Program
Interface (API) rather than by direct system calls.
- The three most common APIs are the Win32 API for Windows, the POSIX
API for UNIX-based systems (including virtually all versions of UNIX,
Linux, and Mac OS X), and the Java API for the Java virtual machine.
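As a small illustration, Python's os module exposes thin wrappers over
POSIX-style calls such as open, read, write and close, so the
system-call pattern can be shown without writing C. The file path below
is created by the example itself.

```python
# Hedged illustration: file I/O through POSIX-style system-call
# wrappers (os.open/os.write/os.read/os.close) rather than Python's
# buffered high-level file objects.

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)  # open(2): returns a file descriptor
os.write(fd, b"hello")                               # write(2): raw bytes
os.close(fd)                                         # close(2): release the descriptor

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)                                # read(2): read up to 5 bytes
os.close(fd)
```

Note the pattern: the kernel hands back an integer file descriptor,
every later call names that descriptor, and the process must release it
when done -- the allocate/use/reclaim cycle described throughout this
chapter.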

TYPES OF OPERATING SYSTEMS


Operating systems can be classified into various categories on the
basis of several criteria, viz. the number of simultaneously active
programs, the number of users working simultaneously, the number of
processors in the computer system, etc. Several types of operating
systems are discussed below.

1. Batch Operating System

Batch processing is the most primitive type of operating system. Batch
processing generally requires the program, data, and appropriate system
commands to be submitted together in the form of a job. Batch operating
systems usually allow little or no interaction between users and
executing programs. Batch processing has a greater potential for
resource utilization than simple serial processing in computer systems
serving multiple users. Due to turnaround delays and offline debugging,
batch is not very convenient for program development. Programs that do
not require interaction and programs with long execution times may be
served well by a batch operating system. Examples of such programs
include payroll, forecasting, statistical analysis, and large
scientific number-crunching programs. Serial processing combined with
batch-like command files is also found on many personal computers.
Scheduling in batch systems is very simple: jobs are typically
processed in order of their submission, that is, in first-come
first-served fashion.
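First-come first-served scheduling is easy to sketch; the job names and
run times below are invented for illustration.

```python
# Sketch of first-come first-served batch scheduling: jobs run to
# completion in submission order, with no preemption.

def fcfs(jobs):
    """Return (name, start, finish) tuples for jobs run in arrival order."""
    schedule, clock = [], 0
    for name, runtime in jobs:      # each job runs to the end
        schedule.append((name, clock, clock + runtime))
        clock += runtime
    return schedule


schedule = fcfs([("payroll", 30), ("forecast", 10), ("stats", 5)])
```

Note how the 5-minute job waits behind the 30-minute one, which is the
turnaround-delay drawback mentioned above.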

(Fig) Memory layout for a simple batch system: the resident Operating
System occupies one area of memory, the User Program Area the other.

Memory management in batch systems is also very simple. Memory is
usually divided into two areas. The resident portion of the Operating
System permanently occupies one of them, and the other is used to load
transient programs for execution. When a transient program terminates,
a new program is loaded into the same area of memory. Since at most one
program is in execution at any time, batch systems do not require any
time-critical device management. For this reason, many serial and
ordinary batch operating systems use a simple, program-controlled
method of I/O. The lack of contention for I/O devices makes their
allocation and deallocation trivial.
Batch systems often provide simple forms of file management. Since
access to files is also serial, little protection and no concurrency
control of file access is required.
2. Multiprogramming Operating System
A multiprogramming system permits multiple programs to be loaded into
memory and executed concurrently.
Concurrent execution of programs has a significant potential for improving
system throughput and resource utilization relative to batch and serial
processing. This potential is realized by a class of operating systems that
multiplex resources of a computer system among a multitude of active
programs. Multiprogramming increases CPU utilization by organizing jobs
so that the CPU always has one to execute.


The OS keeps several jobs in memory simultaneously. The set of jobs is
a subset of the jobs kept in the job pool. The OS picks and begins to
execute one of the jobs in memory. The job may have to wait for some
other task to finish, such as an I/O operation.

(Fig) Memory layout for a multiprogrammed system: the Operating System
resident in memory together with Job 1, Job 2, Job 3 and Job 4.

3. Multitasking Operating System (Time-Sharing Systems)

It allows more than one program to run concurrently. The ability to
execute more than one task at the same time is called multitasking. An
instance of a program in execution is called a process or a task.
A multitasking Operating System is distinguished by its ability to
support concurrent execution of two or more active processes.
Multitasking is usually implemented by maintaining code and data of
several processes in memory simultaneously, and by multiplexing the
processor and I/O devices among them.
Multitasking is often coupled with hardware and software support for
memory protection in order to prevent erroneous processes from
corrupting address spaces and behaviour of other resident processes. The
terms multitasking and multiprocessing are often used interchangeably,
although multiprocessing sometimes implies that more than one CPU is
involved.
In multitasking, only one CPU is involved, but it switches from one program
to another so quickly that it gives the appearance of executing all of the
programs at the same time. There are two basic types of multitasking:
preemptive and cooperative. In preemptive multitasking, the Operating
System parcels out CPU time slices to each program. In cooperative
multitasking, each program can control the CPU for as long as it needs it. If
a program is not using the CPU, however, it can allow another program to
use it temporarily. OS/2, Windows 95, Windows NT, and UNIX use
preemptive multitasking, whereas Microsoft Windows 3.x and the
MultiFinder use cooperative multitasking.
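The cooperative model above can be sketched in a few lines. This is a toy illustration, not a real scheduler: each "task" is a generator that voluntarily gives up the CPU at every yield, and the scheduler simply rotates through the ready tasks. A task that never yielded would starve all the others, which is exactly the weakness of cooperative multitasking.

```python
# Toy cooperative multitasking: tasks release the CPU voluntarily.
from collections import deque

def task(name, steps):
    # Each yield is a voluntary CPU release back to the scheduler.
    for i in range(steps):
        yield f"{name}:{i}"

def run(tasks):
    trace = []
    ready = deque(tasks)          # simple round-robin ready queue
    while ready:
        current = ready.popleft()
        try:
            trace.append(next(current))  # run until the task yields
            ready.append(current)        # back of the ready queue
        except StopIteration:
            pass                         # task finished
    return trace

trace = run([task("A", 2), task("B", 2)])
print(trace)   # ['A:0', 'B:0', 'A:1', 'B:1']
```

In preemptive multitasking the `yield` points would instead be forced on the task by a timer interrupt, so the OS, not the program, decides when the CPU changes hands.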
4. Multi-user Operating System
Multiprogramming operating systems usually support multiple users, in
which case they are also called multi-user systems. Multi-user operating
systems provide facilities for maintenance of individual user environments
and therefore require user accounting. In general, multiprogramming
implies multitasking, but multitasking does not imply multi-programming.
In effect, multitasking operation is one of the mechanisms that a
multiprogramming Operating System employs in managing the totality of
computer-system resources, including processor, memory, and I/O devices.
Multitasking operation without multi-user support can be found in
operating systems of some advanced personal computers and in real-time
systems. Multi-access operating systems allow simultaneous access to a
computer system through two or more terminals. In general, multi-access
operation does not necessarily imply multiprogramming. An example is
provided by some dedicated transaction-processing systems, such as
airline ticket reservation systems, that support hundreds of active
terminals under control of a single program.
In general, multiprocessing or multiprocessor operating systems
manage the operation of computer systems that incorporate multiple
processors. Multiprocessor operating systems are multitasking operating
systems by definition because they support simultaneous execution of
multiple tasks (processes) on different processors. Depending on
implementation, multitasking may or may not be allowed on individual
processors. Except for management and scheduling of multiple processors,
multiprocessor operating systems provide the usual complement of other
system services that may qualify them as time-sharing, real-time, or a
combination operating system.

5. Distributed Operating Systems


- A distributed operating system is built on a network, i.e., a
communication path between two or more systems.
- It is a collection of independent, networked, communicating, and
physically separate computational nodes.
- They handle jobs which are serviced by multiple CPUs. A
distributed computer system is a collection of autonomous
computer systems capable of communication and cooperation via
their hardware and software interconnections.
- A distributed Operating System governs the operation of a
distributed computer system and provides a virtual machine
abstraction to its users.


- The key objective of a distributed Operating System is
transparency. Ideally, component and resource distribution
should be hidden from users and application programs unless
they explicitly demand otherwise.
- Distributed operating systems usually provide the means for
system-wide sharing of resources, such as computational capacity,
files, and I/O devices.
- In addition to typical operating-system services provided at each
node for the benefit of local clients, a distributed Operating
System may facilitate access to remote resources, communication
with remote processes, and distribution of computations.
- The added services necessary for pooling of shared system
resources include global naming, distributed file system, and
facilities for distribution.
The advantages of distributed systems are as follows:

- With resource sharing facility, a user at one site may be able to use
the resources available at another.
- Speedup the exchange of data with one another via electronic
mail.
- If one site fails in a distributed system, the remaining sites can
potentially continue operating.
- Better service to the customers.
- Reduction of the load on the host computer.
- Reduction of delays in data processing.

6. Parallel Operating System

Parallel operating systems are the interface between parallel
computers (or computer systems) and the applications (parallel or not)
that are executed on them. They translate the hardware's capabilities into
concepts usable by programming languages.
- They gather together multiple CPUs to accomplish computational
work.
- They are composed of two or more individual systems coupled
together.


7. Real Time Operating System


- Real time systems are used in time critical environments where
data must be processed extremely quickly because the output
influences immediate decisions.
- Real time systems are used for space flights, airport traffic control,
industrial processes, sophisticated medical equipment, telephone
switching etc.
- A real time system must be 100% responsive in time.
- Response time is measured in fractions of seconds.
- In real time systems the correctness of the computations not only
depends upon the logical correctness of the computation but also
upon the time at which the result is produced.
- If the timing constraints of the system are not met, system failure
is said to have occurred.
- Real-time operating systems are used in environments where a
large number of events, mostly external to the computer system,
must be accepted and processed in a short time or within certain
deadlines.
- A primary objective of real-time systems is to provide quick event-
response times, and thus meet the scheduling deadlines.
- User convenience and resource utilization are of secondary
concern to real-time system designers.
- It is not uncommon for a real-time system to be expected to
process bursts of thousands of interrupts per second without
missing a single event.
- Such requirements usually cannot be met by multi-programming
alone, and real-time operating systems usually rely on some
specific policies and techniques for doing their job.
- The Multitasking operation is accomplished by scheduling
processes for execution independently of each other.
- Each process is assigned a certain level of priority that
corresponds to the relative importance of the event that it
services.
- The processor is normally allocated to the highest-priority
process among those that are ready to execute. Higher-priority
processes usually preempt execution of the lower-priority
processes.
- This form of scheduling, called priority-based preemptive
scheduling, is used by a majority of real-time systems.
- Moreover, as already suggested, time-critical device management
is one of the main characteristics of real-time systems.


- In addition to providing sophisticated forms of interrupt
management and I/O buffering, real-time operating systems often
provide system calls to allow user processes to connect
themselves to interrupt vectors and to service events directly.
- Real time systems are of two types: soft and hard.
- A soft real time system does not cause severe issues if the system
fails. Eg. toy car, household water level controller.
- Hard real time systems are very sensitive in giving accurate and
timely results; otherwise they may cause catastrophes.
Eg. ABS in an automobile, aircraft systems.
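The priority-based preemptive scheduling described above can be sketched as follows. This is a simplified toy model with invented process names ("logger", "brake_ctrl"); a real RTOS kernel does this work in interrupt context with fixed-size, constant-time structures.

```python
# Sketch of priority-based preemptive scheduling: the ready queue
# always yields the highest-priority process, and an arriving
# higher-priority process preempts the one that is running.
import heapq

class Scheduler:
    def __init__(self):
        self.ready = []        # min-heap; lower number = higher priority
        self.running = None    # (priority, name) of the current process
        self.log = []

    def arrive(self, priority, name):
        if self.running and priority < self.running[0]:
            # Preempt: the running process goes back to the ready queue.
            self.log.append(f"preempt {self.running[1]}")
            heapq.heappush(self.ready, self.running)
            self.running = (priority, name)
            self.log.append(f"run {name}")
        elif self.running is None:
            self.running = (priority, name)
            self.log.append(f"run {name}")
        else:
            heapq.heappush(self.ready, (priority, name))

    def finish(self):
        # Current process completes; dispatch the best ready process.
        self.log.append(f"done {self.running[1]}")
        self.running = heapq.heappop(self.ready) if self.ready else None
        if self.running:
            self.log.append(f"run {self.running[1]}")

s = Scheduler()
s.arrive(5, "logger")      # low-priority process: runs on the idle CPU
s.arrive(1, "brake_ctrl")  # high-priority event: preempts the logger
s.finish()                 # brake_ctrl done; logger resumes
print(s.log)
```

The log shows the defining behaviour: the high-priority `brake_ctrl` process runs the moment its event arrives, and the preempted `logger` continues only afterwards.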

Virtual Computers
The virtual machine approach makes it possible to run different
operating systems on the same real machine.
System virtual machines (sometimes called hardware virtual machines)
allow the sharing of the underlying physical machine resources between
different virtual machines, each running its own operating system. The
software layer providing the virtualization is called a virtual machine
monitor or hypervisor. A hypervisor can run on bare hardware or on top of
an operating system.
The main advantages of system Virtual Machines are:

• Multiple Operating System environments can co-exist on the same
computer, in strong isolation from each other
• The virtual machine can provide an instruction set architecture (ISA)
that is somewhat different from that of the real machine
• Application provisioning, maintenance, high availability and disaster
recovery.

Multiple Virtual Machines, each running its own operating system
(called a guest operating system), are frequently used in server consolidation,
where different services that used to run on individual machines in order
to avoid interference are instead run in separate Virtual Machines on the
same physical machine. This use is frequently called quality-of-service
isolation (QoS isolation).
The desire to run multiple operating systems was the original motivation
for virtual machines, as it allowed time-sharing a single computer between
several single-tasking Operating Systems. This technique requires a
mechanism to share the CPU resources between guest operating systems and
memory virtualization to share the memory on the host.
The guest Operating Systems do not have to be all the same, making it
possible to run different Operating Systems on the same computer (e.g.,
Microsoft Windows and Linux, or older versions of an Operating System in
order to support software that has not yet been ported to the latest
version). The use of virtual machines to support different guest Operating
Systems is becoming popular in embedded systems; a typical use is to
support a real-time operating system at the same time as a high-level
Operating System such as Linux or Windows.
Another use is to sandbox an Operating System that is not trusted,
possibly because it is a system under development. Virtual machines have
other advantages for Operating System development, including better
debugging access and faster reboots.
Consider the following figure in which OS1, OS2, and OS4 are three
different operating systems and OS3 is operating system under test. All
these operating systems are running on the same real machine but they are
not directly dealing with the real machine, they are dealing with Virtual
Machine Monitor (VMM) which provides each user with the illusion of
running on a separate machine. If the operating system being tested causes
a system to crash, this crash affects only its own virtual machine. The other
users of the real machine can continue their operation without being
disturbed. Actually, the lowest-level routines of each operating system deal
with the VMM instead of the real machine; the VMM provides the same
services and functions as those available on the real machine. Each user of
the virtual machine, i.e. OS1, OS2 etc., runs in user mode, not supervisor
mode, on the real machine.

ARCHITECTURES OF OS
Kernel
The kernel is the core part of the operating system software. It is like a
bridge between the shell and the hardware. It is responsible for running
programs and providing secure access to the machine's hardware. The
kernel also handles scheduling, i.e., it maintains a time table for all
processes.

1. Monolithic Architecture


This structure was prominent in the early days. The structure consists of
no structure: the OS is composed of a single module, and all data and code
use the same memory space. The system is a collection of procedures, and
each procedure can call any other procedure.
A little structure is imposed by exposing a set of system calls to the
outside, and by supporting these system calls through utility procedures
(which check data passed to a system call, move data around, …):

1. a main procedure requesting the services

2. a set of service procedures that carry out system calls

3. a set of utility procedures supporting the system calls

Advantages

a) Easier to design, therefore faster development cycle and more
potential for growth.
b) More efficient due to use of shared kernel memory.
c) Low overheads

Eg: Microsoft Windows NT


2. Microkernel Architecture

This structures the operating system by removing all nonessential
portions of the kernel and implementing them as system- and user-level
programs.

• Generally they provide minimal process and memory
management, and a communications facility.
• Communication between components of the OS is provided by
message passing.

The benefits of the microkernel are as follows:

• Extending the operating system becomes much easier.


• Any changes to the kernel tend to be fewer, since the kernel is
smaller.
• The microkernel also provides more security and reliability.

The main disadvantage is poor performance due to increased system
overhead from message passing.

Eg: Mac OS, Symbian
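The message-passing idea above can be sketched as follows. This is a toy model with invented component names ("file_server", "app"): the "kernel" does nothing except register mailboxes and deliver messages, while file management runs as an ordinary server outside the kernel.

```python
# Toy microkernel: user-level servers exchange request/reply messages
# through a minimal kernel whose only job is message delivery.
from collections import deque

class Kernel:
    def __init__(self):
        self.mailboxes = {}          # one message queue per component

    def register(self, name):
        self.mailboxes[name] = deque()

    def send(self, dest, msg):       # kernel primitive: deliver a message
        self.mailboxes[dest].append(msg)

    def receive(self, name):         # kernel primitive: fetch next message
        return self.mailboxes[name].popleft()

kernel = Kernel()
kernel.register("file_server")       # file management as a user-level server
kernel.register("app")

# The application asks the file server for a file via messages,
# never by calling into a large shared kernel.
kernel.send("file_server",
            {"op": "read", "path": "/etc/motd", "reply_to": "app"})

req = kernel.receive("file_server")
kernel.send(req["reply_to"],
            {"status": "ok", "data": f"contents of {req['path']}"})

reply = kernel.receive("app")
print(reply["status"])   # ok
```

Every request and reply crosses the kernel boundary as a message, which is the source of both the modularity benefit and the performance overhead noted above.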

3. Layered Architecture

This approach breaks up the operating system into a number of layers.
The bottom layer (layer 0) is the hardware layer and the highest layer
(layer n) is the user interface layer, as shown in the figure.

The layers are selected such that each layer uses the functions and
services of only lower-level layers. The first layer can be debugged without
any concern for the rest of the system, since it uses only the basic
hardware to implement its functions. Once the first layer is debugged, its
correct functioning can be assumed while the second layer is debugged,
and so on. If an error is found during the debugging of a particular layer,
the error must be in that layer, because the layers below it have already
been debugged. The design of the system is therefore simplified when the
operating system is broken up into layers.

BCCML
S6 ECE EC 266

- This allows implementers to change the inner workings, and
increases modularity.
- As long as the external interface of the routines doesn't change,
developers have more freedom to change the inner workings of
the routines.
- With the layered approach, the bottom layer is the hardware,
while the highest layer is the user interface.
o The main advantage is simplicity of construction and
debugging.
o The main disadvantage is that the OS tends to be less
efficient than other implementations.
o It requires an appropriate definition of the various layers and
careful planning of the proper placement of each layer.
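The rule that each layer uses only the services of the layer directly below it can be sketched as follows. The layer names (Hardware, DeviceDriver, FileSystem) are illustrative, not from any real OS; the point is that each class holds a reference only to the layer beneath it, so it can be debugged assuming the lower layers are already correct.

```python
# Toy layered structure: each layer is built only on the interface
# of the layer directly below it.

class Hardware:                       # layer 0: bare hardware
    def write_block(self, data):
        return f"hw<{data}>"

class DeviceDriver:                   # layer 1: uses only layer 0
    def __init__(self, hw):
        self.hw = hw
    def write(self, data):
        return self.hw.write_block(data)

class FileSystem:                     # layer 2: uses only layer 1
    def __init__(self, driver):
        self.driver = driver
    def save(self, name, data):
        return self.driver.write(f"{name}:{data}")

# Layers are stacked bottom-up; the top layer never touches Hardware
# directly, which is what makes layer-by-layer debugging possible.
fs = FileSystem(DeviceDriver(Hardware()))
print(fs.save("notes.txt", "hello"))   # hw<notes.txt:hello>
```

The efficiency cost mentioned above is visible even here: a single save passes through every intermediate layer before reaching the hardware.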

4. Exokernel Architecture
The idea behind exokernels is to force as few abstractions as
possible on application developers, enabling them to make as many
decisions as possible about hardware abstractions. Exokernels are tiny,
since functionality is limited to ensuring protection and multiplexing of
resources, which is considerably simpler than conventional microkernels'
implementation of message passing and monolithic kernels'
implementation of high-level abstractions.
Implemented applications are called library operating systems; they
may request specific memory addresses, disk blocks, etc. The kernel only
ensures that the requested resource is free, and the application is allowed
to access it. This low-level hardware access allows the programmer to
implement custom abstractions, and omit unnecessary ones, most

BCCML
S6 ECE EC 266

commonly to improve a program's performance. It also allows
programmers to choose what level of abstraction they want, high or low.
Exokernels can be seen as an application of the end-to-end
principle to operating systems, in that they do not force an application
program to layer its abstractions on top of other abstractions that were
designed with different requirements in mind.
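The exokernel idea, that the kernel only checks whether a requested resource is free, can be sketched as follows. This is a hypothetical toy with invented names: a "library OS" asks for specific disk block numbers, and the kernel does nothing beyond protection (bounds checking) and multiplexing (ownership tracking).

```python
# Toy exokernel: the kernel only tracks which disk blocks are owned;
# library operating systems request specific blocks and build their
# own abstractions (file systems, etc.) on top of them.

class ExoKernel:
    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.owner = {}                  # block number -> owner name

    def alloc_block(self, who, block):
        # The kernel's entire job: protection and multiplexing.
        if not (0 <= block < self.n_blocks) or block in self.owner:
            return False                 # invalid or already owned
        self.owner[block] = who
        return True

kernel = ExoKernel(n_blocks=64)

# A library OS asks for the exact blocks it wants; the kernel only
# verifies that each requested block is free.
assert kernel.alloc_block("libos_a", 7)      # granted
assert not kernel.alloc_block("libos_b", 7)  # denied: block 7 is taken
assert kernel.alloc_block("libos_b", 8)      # granted
print(kernel.owner)   # {7: 'libos_a', 8: 'libos_b'}
```

Notice what the kernel does not do: it imposes no file abstraction at all, so each library OS is free to lay out and cache its blocks however best suits its application.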
5. Hybrid Kernel Architecture

A hybrid kernel is one that combines aspects of both micro and
monolithic kernels, but there is no exact definition. Often, "hybrid kernel"
means that the kernel is highly modular, but all of it runs in the same
address space. This allows the kernel to avoid the overhead of a
complicated message-passing system within the kernel, while still
retaining some microkernel-like features.

● Example – Apple Mac OS X
