RTOS
INTRODUCTION:
The Operating System has three main responsibilities:
(a) Perform basic tasks such as recognizing input from the keyboard,
sending output to the display screen, keeping track of files and directories
on the disk, and controlling peripheral devices such as disk drives and
printers.
(b) Ensure that different programs and users running at the same time do
not interfere with each other.
(c) Provide a software platform on top of which other programs can run.
The Operating System is also responsible for security and ensuring that
unauthorized users do not access the system. Figure 1 illustrates the
relationship between application software and system software. The first
two responsibilities address the need for managing the computer hardware
and the application programs that use the hardware. The third
responsibility focuses on providing an interface between application
software and hardware so that application software can be efficiently
developed.
Since the Operating System is already responsible for managing the
hardware, it should provide a programming interface for application
developers. As users, we normally interact with the Operating System
through a set of commands. The commands are accepted and executed by a
part of the Operating System called the command processor or command
line interpreter.
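The loop below is a minimal sketch of what such a command line interpreter does: read a command from the user and hand it to the Operating System for execution. The prompt, the "exit" command, and the use of the C library's system() call are illustrative choices, not a description of any particular shell.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Minimal sketch of a command line interpreter loop: read a command,
 * then ask the Operating System (here via system()) to carry it out.
 * A real command processor would parse arguments and create processes itself. */
int main(void)
{
    char line[256];

    for (;;) {
        printf("> ");                         /* prompt the user            */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                            /* end of input: leave shell  */
        line[strcspn(line, "\n")] = '\0';     /* strip the trailing newline */
        if (strcmp(line, "exit") == 0)
            break;
        if (line[0] != '\0')
            system(line);                     /* hand the command to the OS */
    }
    return 0;
}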
(3) The application programs and system programs
The hardware provides the basic computing power. The application programs
define the way in which these resources are used to solve the computing problems
of the users. There may be many different users trying to solve different
problems. The Operating System controls and coordinates the use of the
hardware among the various users and the application programs.
input/output devices and the use of CPU time for execution.
registers, and from these the programmers had to debug their programs. The
second major solution to reducing the setup time was to batch together jobs
of similar needs and run them through the computer as a group. But there were
still problems. For example, when a job stopped, the operator would have
to notice it by observing the console, determine why the program
stopped, take a dump if necessary, and start the next job. To
overcome this idle time, automatic job sequencing was introduced. But
even with the batching technique, the faster computers allowed expensive time
lags between the CPU and the I/O devices. Eventually several factors
helped improve the performance of the CPU. First, the speed of I/O devices
became faster. Second, to use more of the available storage area in these
devices, records were blocked before they were retrieved. Third, to reduce
the gap in speed between the I/O devices and the CPU, an interface called
the control unit was placed between them to perform the function of
buffering. A buffer is an interim storage area that works like this: as the
slow input device reads a record, the control unit places each character of
the record into the buffer. When the buffer is full, the entire record is
transmitted to the CPU. The process is reversed for output devices.
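As a rough sketch of the buffering idea just described, the C fragment below fills a fixed-size record buffer one character at a time and hands the complete record over only when the buffer is full. The record length, the use of standard input as the "slow device", and the function names are illustrative assumptions, not part of the original text.

#include <stdio.h>
#include <stddef.h>

#define RECORD_LEN 80     /* one punched-card record, for example */

/* Stands in for the control unit handing a complete record to the CPU. */
static void deliver_record_to_cpu(const char *rec, size_t len)
{
    printf("record of %zu characters delivered\n", len);
    (void)rec;
}

/* Fill the buffer one character at a time from the slow input device (here,
 * stdin stands in for it); when the buffer is full, transmit the whole record. */
int main(void)
{
    char   buffer[RECORD_LEN];
    size_t filled = 0;
    int    ch;

    while ((ch = getchar()) != EOF) {
        buffer[filled++] = (char)ch;
        if (filled == RECORD_LEN) {          /* buffer full: transmit entire record */
            deliver_record_to_cpu(buffer, filled);
            filled = 0;
        }
    }
    if (filled > 0)                          /* flush a final partial record */
        deliver_record_to_cpu(buffer, filled);
    return 0;
}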
Fourth, in addition to buffering, an early form of spooling was developed by
moving the operations of card reading, printing, etc. off-line. SPOOL is an
acronym for Simultaneous Peripheral Operations On-Line.
For example, incoming jobs would be transferred from the card decks to
tapes/disks off-line. Then they would be read into the CPU from the
tapes/disks at a speed much faster than the card reader.
The major functions of an Operating System include:
• Memory management
• Task or process management
• Storage management
• Device or input/output management
• Kernel or scheduling
Memory Management
To execute a program, it must be mapped to absolute addresses and
loaded into memory. As the program executes, it accesses instructions and
data from memory by generating these absolute addresses. In a
multiprogramming environment, multiple programs are maintained in
memory simultaneously. The Operating System is responsible for the
following memory management functions:
- Keep track of which segments of memory are in use and by whom.
- Decide which processes are to be loaded into memory when space
becomes available. In a multiprogramming environment it decides which
process gets the available memory, when it gets it, where it gets it,
and how much.
- Allocate or de-allocate memory when a process requests it, and reclaim
the memory when the process no longer requires it or has terminated
(a minimal allocation sketch follows this list).
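The following C fragment is a minimal sketch of this bookkeeping: a table records which memory segments are in use and by which process, and segments are allocated on request and reclaimed when released. The structure and function names are hypothetical and only illustrate the idea.

#include <stddef.h>

#define MAX_SEGMENTS 64

/* Hypothetical bookkeeping record: one entry per memory segment. */
struct segment {
    unsigned base;       /* start address of the segment   */
    unsigned length;     /* size of the segment            */
    int      owner_pid;  /* process holding it, -1 if free */
};

static struct segment seg_table[MAX_SEGMENTS];
static size_t         seg_count;

/* Record a free segment of physical memory at start-up. */
void add_free_segment(unsigned base, unsigned length)
{
    if (seg_count < MAX_SEGMENTS)
        seg_table[seg_count++] = (struct segment){ base, length, -1 };
}

/* Keep track of who gets memory: hand the first free segment that is
 * large enough to the requesting process. */
int allocate_segment(int pid, unsigned length)
{
    for (size_t i = 0; i < seg_count; i++) {
        if (seg_table[i].owner_pid == -1 && seg_table[i].length >= length) {
            seg_table[i].owner_pid = pid;
            return (int)i;                 /* index identifies the allocation */
        }
    }
    return -1;                             /* no suitable segment available  */
}

/* Reclaim the segment when the process releases it or terminates. */
void free_segment(int index)
{
    seg_table[index].owner_pid = -1;
}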
control respective I/O devices through their controllers. The Operating
System is responsible for the following I/O Device Management Functions:
- Keep track of the I/O devices, I/O channels, etc. This module is
typically called the I/O traffic controller.
- Decide what is an efficient way to allocate the I/O resource. If it is to
be shared, then decide who gets it, how much of it is to be allocated,
and for how long. This is called I/O scheduling.
- Allocate the I/O device and initiate the I/O operation (a minimal
allocation sketch follows this list).
- Reclaim the device when its use is through. In most cases the I/O
terminates automatically.
The Operating System is also responsible for the following information
management functions:
- Keep track of the information, its location, its usage, status, etc. The
module called a file system provides these facilities.
- Decide who gets hold of the information, enforce protection mechanisms,
and provide for information access mechanisms, etc.
- Allocate the information to a requesting process, e.g., open a file.
- De-allocate the resource, e.g., close a file.
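The C sketch below illustrates the I/O traffic controller and allocation ideas above: a small table records which process holds each device, a request either succeeds or must wait, and the device is reclaimed when the operation terminates. The table layout and function names are illustrative assumptions, not from the text.

#include <stdbool.h>

#define NUM_DEVICES 8

/* Hypothetical "I/O traffic controller" table: which process holds each device. */
struct device_entry {
    bool busy;
    int  owner_pid;
};

static struct device_entry dev_table[NUM_DEVICES];

/* Allocate the device to a process if it is free; a return of -1 means the
 * request would have to be queued by the I/O scheduler. */
int allocate_device(int dev, int pid)
{
    if (dev < 0 || dev >= NUM_DEVICES || dev_table[dev].busy)
        return -1;
    dev_table[dev].busy      = true;
    dev_table[dev].owner_pid = pid;
    return dev;                    /* caller may now initiate the I/O operation */
}

/* Reclaim the device when the I/O operation terminates. */
void release_device(int dev)
{
    dev_table[dev].busy      = false;
    dev_table[dev].owner_pid = -1;
}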
An advanced version of this mode of operation is common for the simple
evaluation boards that are sometimes used in introductory microprocessor
design and interfacing courses. Programs for the bare machine can be
developed by manually translating sequences of instructions into binary or
some other code whose base is usually an integer power of 2. Instructions
and data are then entered into the computer by means of console switches,
or perhaps through a hexadecimal keyboard. Loading the program counter
with the address of the first instruction starts programs. Results of
execution are obtained by examining the contents of the relevant registers
and memory locations. The executing program, if any, must control
input/output devices directly, say, by reading and writing the related I/O
ports. Evidently, programming of the bare machine results in low
productivity of both users and hardware. The long and tedious process of
program and data entry practically precludes execution of all but very
short programs in such an environment.
The next significant evolutionary step in computer-system usage
came about with the advent of input/output devices, such as punched cards
and paper tape, and of language translators. Programs, now coded in a
programming language, are translated into executable form by a computer
program, such as a compiler or an interpreter. Another program, called the
loader, automates the process of loading executable programs into
memory. The user places a program and its input data on an input device,
and the loader transfers information from that input device into memory.
After transferring control to the loader program by manual or automatic
means, execution of the program commences. The executing program reads
its input from the designated input device and may produce some output
on an output device. Once in memory, the program may be rerun with a
different set of input data.
The mechanics of development and preparation of programs in such
environments are quite slow and cumbersome due to serial execution of
programs and to numerous manual operations involved in the process. In a
typical sequence, the editor program is loaded to prepare the source code
of the user program. The next step is to load and execute the language
translator and to provide it with the source code of the user program.
When serial input devices, such as a card reader, are used, multiple-pass
language translators may require the source code to be repositioned for
reading during each pass. If syntax errors are detected, the whole process
must be repeated from the beginning. Eventually, the object code produced
from the syntactically correct source code is loaded and executed. If run-
time errors are detected, the state of the machine can be examined and
modified by means of console switches, or with the assistance of a program
called a debugger.
Batch Processing
With the invention of the hard disk drive, things improved considerably.
Batch processing relied on punched cards or tape for input: the cards were
assembled into a deck and the entire deck was run through a card reader as
a batch. Present batch systems are not limited to cards or tapes, but the
jobs are still processed serially, without the interaction of the user. The
efficiency of these systems was measured by the number of jobs completed in
a given amount of time, called throughput. Today’s operating systems are not
limited to batch programs. Batch processing was the next logical step in the
evolution of operating systems: it automated the sequencing of operations
involved in program execution and improved resource utilization and
programmer productivity by reducing or eliminating component idle times
caused by comparatively lengthy manual operations.
Furthermore, even when automated, housekeeping operations such as
mounting of tapes and filling out log forms take a long time relative to
processors and memory speeds. Since there is not much that can be done to
reduce these operations, system performance may be increased by dividing
this overhead among a number of programs. More specifically, if several
programs are batched together on a single input tape for which
housekeeping operations are performed only once, the overhead per
program is reduced accordingly. A related concept, sometimes called
same computer with the batch monitor.
Many single-user operating systems for personal computers basically
provide for serial processing. User programs are commonly loaded into
memory and executed in response to user commands typed on the console. A
file management system is often
provided for program and data storage. A form of batch processing is made
possible by means of files consisting of commands to the Operating System
that are executed in sequence. Command files are primarily used to
automate complicated customization and operational sequences of
frequent operations.
Multiprogramming
In multiprogramming, many processes are simultaneously resident in
memory, and execution switches between processes. The advantages of
multiprogramming are the same as the commonsense reasons that in life
you do not always wait until one thing has finished before starting the next
thing. Specifically:
- More efficient use of computer time. If the computer is running a
single process, and the process does a lot of I/O, then the CPU is idle
most of the time. This is a gain as long as some of the jobs are I/O
bound, i.e., they spend most of their time waiting for I/O.
- Faster turnaround if there are jobs of different lengths.
- Consideration (1) applies only if some jobs are I/O bound.
- Consideration (2) applies even if all jobs are CPU bound.
- For instance, suppose that first job A, which takes an hour, starts to run,
and then immediately afterward job B, which takes 1 minute, is
submitted. If the computer has to wait until it finishes A before it starts
B, then user A must wait an hour; user B must wait 61 minutes; so the
average waiting time is 60-1/2 minutes. If the computer can switch back
and forth between A and B until B is complete, then B will complete after
2 minutes; A will complete after 61 minutes; so the average waiting time
will be 31-1/2 minutes. If all jobs are CPU bound and the same length,
then there is no advantage in multiprogramming; you do better to run a
batch system. The multiprogramming environment is supposed to be
invisible to the user processes; that is, the actions carried out by each
process should proceed in the same way as if the process had the entire
machine to itself. (The waiting-time figures above are recomputed in the
short example below.)
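The short program below simply recomputes the waiting-time figures used in the example above (job A of 60 minutes, job B of 1 minute), under the idealised assumption that switching overhead is negligible.

#include <stdio.h>

/* Recomputes the waiting-time figures from the text: job A takes 60 minutes,
 * job B takes 1 minute and is submitted immediately after A starts. */
int main(void)
{
    double a = 60.0, b = 1.0;

    /* Strict batch: B cannot start until A finishes. */
    double batch_avg = (a + (a + b)) / 2.0;            /* (60 + 61) / 2 = 60.5 */

    /* Multiprogramming: the CPU switches between A and B, so B finishes after
     * about 2 minutes while A still finishes after about 61 minutes. */
    double multi_avg = ((b + 1.0) + (a + b)) / 2.0;    /* (2 + 61) / 2 = 31.5  */

    printf("batch average wait: %.1f min\n", batch_avg);
    printf("multiprogrammed average wait: %.1f min\n", multi_avg);
    return 0;
}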
This raises the following issues:
- Context saving: the state of the process (its registers, program counter,
etc.) must be recorded and saved in a process table so that the process can
be resumed when made active (see the sketch after this list).
- Context switching: How does one carry out the change from one
process to another?
- Memory translation: Each process treats the computer's memory as
its own private playground. How can we give each process the
illusion that it can reference addresses in memory as it wants, but not
have them step on each other's toes? The trick is by distinguishing
between virtual addresses -- the addresses used in the process code -
- and physical addresses -- the actual addresses in memory. Each
process is actually given a fraction of physical memory. The memory
management unit translates the virtual address in the code to a
physical address within the user's space. This translation is invisible
to the process.
- Memory management: How does the Operating System assign
sections of physical memory to each process?
- Scheduling: How does the Operating System choose which process to
run when?
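The C sketch below ties several of these issues together: a hypothetical process-table entry holds the saved context needed for context switching, plus a base/limit pair as one simple way to realise the memory translation described above. The field names and the base/limit scheme are illustrative choices; real systems use richer structures and page tables.

#include <stdint.h>

/* Hypothetical saved context: what must be recorded so a process can resume. */
struct context {
    uint32_t pc;           /* program counter           */
    uint32_t sp;           /* stack pointer             */
    uint32_t regs[8];      /* general-purpose registers */
};

enum proc_state { READY, RUNNING, BLOCKED };

/* One entry of the process table mentioned above. */
struct process {
    int             pid;
    enum proc_state state;
    struct context  ctx;   /* saved on a context switch, restored on resume    */
    uint32_t        base;  /* start of the process's region of physical memory */
    uint32_t        limit; /* size of that region                              */
};

/* Minimal memory translation under a base/limit scheme: the virtual address
 * used in the process code is mapped into the process's own physical region. */
uint32_t translate(const struct process *p, uint32_t vaddr)
{
    if (vaddr >= p->limit)
        return 0xFFFFFFFFu;        /* outside the process's space: a fault */
    return p->base + vaddr;
}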
Let us briefly review some aspects of program behaviour in order to
motivate the basic idea of multiprogramming. This is illustrated in Figure 6,
indicated by dashed boxes. Idealized serial execution of two programs, with
assumed to be in memory and awaiting execution. When this work is done,
the processor is assigned to Program 1 again, then to Program 2, and so
forth.
System Calls:
- Programming interface to the services provided by the OS (e.g. open
file, read file, etc.)
- Typically written in a high-level language (C or C++)
- Mostly accessed by programs via a high-level Application Program
Interface (API) rather than direct system call.
- The three most common APIs are the Win32 API for Windows, the POSIX API
for UNIX-based systems (including virtually all versions of UNIX, Linux,
and Mac OS X), and the Java API for the Java virtual machine (JVM). (A
short POSIX example follows this list.)
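As a small illustration of calling the Operating System through the POSIX API, the C program below opens a file, reads from it, and closes it; each library call ultimately results in a kernel system call. The file name is only a placeholder.

#include <fcntl.h>      /* open()          */
#include <unistd.h>     /* read(), close() */
#include <stdio.h>

/* Reads the first bytes of a file through the POSIX API; each call below is a
 * thin wrapper that ends up invoking the corresponding kernel system call. */
int main(void)
{
    char buf[64];

    int fd = open("example.txt", O_RDONLY);     /* "example.txt" is a placeholder name */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof buf);       /* read() traps into the kernel */
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);                                   /* release the file resource    */
    return 0;
}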
1. Batch Operating System
In such systems, memory is usually divided into two areas. The resident
portion of the Operating System permanently occupies one of them, and the
other is used to load transient programs for execution. When a transient
program terminates, a new program is loaded into the same area of memory.
Since at most one program is in execution at any time, batch systems do not
require any time-critical device management. For this reason, many serial
and ordinary batch operating systems use a simple, program-controlled method
of I/O. The lack of contention for I/O devices makes their allocation and
deallocation trivial.
Batch systems often provide simple forms of file management. Since access
to files is also serial, little protection and no concurrency control of file
access is required.
2. Multiprogramming Operating System
A multiprogramming system permits multiple programs to be loaded
into memory and executed concurrently.
Concurrent execution of programs has a significant potential for improving
system throughput and resource utilization relative to batch and serial
processing. This potential is realized by a class of operating systems that
multiplex resources of a computer system among a multitude of active
programs. Multiprogramming increases CPU utilization by organizing jobs
so that the CPU always has one to execute.
(Fig.) Memory layout for a multiprogrammed system: the Operating System
occupies one partition of memory, with Job 1 through Job 4 resident in
separate partitions above it.
The ability of an Operating System to execute more than one task at the same
time is called multitasking. An instance of a program in execution is called
a process or a task.
A multitasking Operating System is distinguished by its ability to support
concurrent execution of two or more active processes. Multitasking is
usually implemented by maintaining code and data of several processes in
memory simultaneously, and by multiplexing processor and I/O devices
among them.
Multitasking is often coupled with hardware and software support for
memory protection in order to prevent erroneous processes from
corrupting address spaces and behaviour of other resident processes. The
terms multitasking and multiprocessing are often used interchangeably,
although multiprocessing sometimes implies that more than one CPU is
involved.
In multitasking, only one CPU is involved, but it switches from one program
to another so quickly that it gives the appearance of executing all of the
programs at the same time. There are two basic types of multitasking:
preemptive and cooperative. In preemptive multitasking, the Operating
System parcels out CPU time slices to each program. In cooperative
multitasking, each program can control the CPU for as long as it needs it. If
a program is not using the CPU, however, it can allow another program to
use it temporarily. OS/2, Windows 95, Windows NT, and UNIX use preemptive
multitasking.
Multiprocessor operating systems are multitasking operating
systems by definition because they support simultaneous execution of
multiple tasks (processes) on different processors. Depending on
implementation, multitasking may or may not be allowed on individual
processors. Except for management and scheduling of multiple processors,
multiprocessor operating systems provide the usual complement of other
system services that may qualify them as time-sharing, real-time, or a
combination operating system.
Distributed Operating System
The advantages of distributed systems include:
- With resource sharing facility, a user at one site may be able to use
the resources available at another.
- Speed up the exchange of data with one another via electronic mail.
- If one site fails in a distributed system, the remaining sites can
potentially continue operating.
- Better service to the customers.
- Reduction of the load on the host computer.
- Reduction of delays in data processing.
Real-Time Operating System
Quick and predictable response to external events is the primary
concern to real-time system designers.
- It is not uncommon for a real-time system to be expected to
process bursts of thousands of interrupts per second without
missing a single event.
- Such requirements usually cannot be met by multi-programming
alone, and real-time operating systems usually rely on some
specific policies and techniques for doing their job.
- The Multitasking operation is accomplished by scheduling
processes for execution independently of each other.
- Each process is assigned a certain level of priority that
corresponds to the relative importance of the event that it
services.
- The processor is normally allocated to the highest-priority
process among those that are ready to execute. Higher-priority
processes usually preempt execution of the lower-priority
processes.
- This form of scheduling, called priority-based preemptive
scheduling, is used by a majority of real-time systems (a minimal
scheduler sketch follows this list).
- Moreover, as already suggested, time-critical device management
is one of the main characteristics of real-time systems.
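The C fragment below sketches the core rule of priority-based preemptive scheduling: among the tasks that are ready, pick the one with the highest priority. The task table and field names are hypothetical; a real RTOS would invoke such a routine on every interrupt or scheduling event and would also handle ties, blocking, and the context switch itself.

#include <stddef.h>

#define MAX_TASKS 16

/* Hypothetical task record used by the scheduler sketch below. */
struct task {
    int priority;      /* larger value = more important event */
    int ready;         /* 1 if the task is ready to execute   */
};

static struct task tasks[MAX_TASKS];

/* Core rule of priority-based preemptive scheduling: always pick the
 * highest-priority ready task.  Run on every scheduling event, this
 * preempts any lower-priority task that is currently executing. */
int pick_next_task(void)
{
    int best = -1;
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    }
    return best;       /* -1 means no task is ready (idle) */
}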
Virtual Computers
The virtual machine approach makes it possible to run different
operating systems on the same real machine.
System virtual machines (sometimes called hardware virtual machines)
allow the sharing of the underlying physical machine resources between
different virtual machines, each running its own operating system. The
software layer providing the virtualization is called a virtual machine
monitor or hypervisor. A hypervisor can run on bare hardware or on top of
an operating system.
The main advantages of system Virtual Machines are:
• Multiple Operating System environments can co-exist on the same
computer, in strong isolation from each other
• The virtual machine can provide an instruction set architecture (ISA)
that is somewhat different from that of the real machine
• Application provisioning, maintenance, high availability and disaster
recovery.
(For example, an older version of an operating system can be run in
order to support software that has not yet been ported to the latest
version.) The use of virtual machines to support different guest Operating
Systems is becoming popular in embedded systems; a typical use is to
support a real-time operating system at the same time as a high-level
Operating System such as Linux or Windows.
Another use is to sandbox an Operating System that is not trusted,
possibly because it is a system under development. Virtual machines have
other advantages for Operating System development, including better
debugging access and faster reboots.
Consider the following figure in which OS1, OS2, and OS4 are three
different operating systems and OS3 is operating system under test. All
these operating systems are running on the same real machine but they are
not directly dealing with the real machine, they are dealing with Virtual
Machine Monitor (VMM) which provides each user with the illusion of
running on a separate machine. If the operating system being tested causes
a system to crash, this crash affects only its own virtual machine. The other
users of the real machine can continue their operation without being
disturbed. Actually, the lowest-level routines of the operating system deal
with the VMM instead of the real machine; the VMM provides the same services
and functions as those available on the real machine. Each user of the
virtual machine, i.e. OS1, OS2, etc., runs in user mode, not supervisor mode,
on the real machine.
ARCHITECTURES OF OS
Kernel
The kernel is the core part of the operating system software. It acts as a
bridge between the shell and the hardware. It is responsible for running
programs and providing secure access to the machine’s hardware. The kernel
is also responsible for scheduling, i.e., it maintains a time table for all
processes.
1. Monolithic Architecture
This structure was prominent in the early days. It consists, in effect, of no
structure: the OS is composed of a single module, all data and code use the
same memory space, and the system is a collection of procedures, each of
which can call any other procedure.
A little structure is imposed by exposing a set of system calls to the
outside and by supporting these system calls through utility procedures
(which check the data passed to a system call, move data around, …).
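A minimal sketch of that structure in C: a table maps system-call numbers to ordinary kernel procedures that all live in the same address space. The call numbers and function names are made up for illustration.

typedef long (*syscall_fn)(long arg1, long arg2);

/* Ordinary kernel procedures; in a monolithic kernel any of them could
 * call any other directly.  Bodies elided for the sketch. */
static long sys_read_impl(long fd, long buf)  { (void)fd; (void)buf; return 0; }
static long sys_write_impl(long fd, long buf) { (void)fd; (void)buf; return 0; }

static syscall_fn syscall_table[] = {
    sys_read_impl,    /* call number 0 */
    sys_write_impl,   /* call number 1 */
};

/* Entry point reached from the trap handler: validate the number, then
 * dispatch to the procedure that implements the call. */
long syscall_dispatch(unsigned num, long arg1, long arg2)
{
    if (num >= sizeof syscall_table / sizeof syscall_table[0])
        return -1;                       /* unknown call number */
    return syscall_table[num](arg1, arg2);
}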
Advantages
Microkernel Architecture
Layered Architecture
This approach breaks up the operating system into a number of layers. The
bottom layer (layer 0) is the hardware layer and the highest layer (layer n)
is the user interface layer, as shown in the figure.
The layers are selected such that each uses the functions and services of
only lower-level layers. The first layer can be debugged without any concern
for the rest of the system, because it uses only the basic hardware to
implement its functions. Once the first layer is debugged, its correct
functioning can be assumed while the second layer is debugged, and so on.
If an error is found during the debugging of a particular layer, the error
must be in that layer, because the layers below it have already been
debugged. Because of this, the design of the system is simplified when the
operating system is broken up into layers.
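The toy C sketch below illustrates the layering rule: each routine calls only routines of the layer directly beneath it, never a higher layer. The three layers and all names are hypothetical.

#include <stdio.h>

/* Layer 0: hardware access */
static void hw_write_block(int block, const char *data)
{
    printf("writing block %d: %s\n", block, data);   /* stands in for real hardware */
}

/* Layer 1: device management, built only on layer 0 */
static void dev_write(int block, const char *data)
{
    hw_write_block(block, data);
}

/* Layer 2: file system, built only on layer 1 */
void fs_write(const char *name, const char *data)
{
    int block = 0;                     /* placeholder block lookup for "name" */
    (void)name;
    dev_write(block, data);
}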