
Module No 1: Introduction

Concept of Operating System. An operating system acts as an intermediary
between the user of a computer and the computer hardware. The purpose of
an operating system is to provide an environment in which a user can
execute programs in a convenient and efficient manner. An operating system
is software that manages the computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the
computer system and to prevent user programs from interfering with the
proper operation of the system. Internally, operating systems vary greatly in
their makeup, since they are organized along many different lines. The
design of a new operating system is a major task. It is important that the
goals of the system be well defined before the design begins. These goals
form the basis for choices among various algorithms and strategies. Because
an operating system is large and complex, it must be created piece by piece.
Each of these pieces should be a well-delineated portion of the system, with
carefully defined inputs, outputs, and functions.
Operating System objectives and functions.

An OS is a program that controls the execution of application programs and
acts as an interface between applications and the computer hardware. It can
be thought of as having three objectives:

• Convenience: An OS makes a computer more convenient to use.
• Efficiency: An OS allows the computer system resources to be used in an
efficient manner.
• Ability to evolve: An OS should be constructed in such a way as to permit
the effective development, testing, and introduction of new system functions
without interfering with service.

An OS typically provides services in the following areas:

• Program development: The OS provides a variety of facilities and
services, such as editors and debuggers, to assist the programmer in
creating programs. Typically, these services are in the form of utility
programs that, while not strictly part of the core of the OS, are supplied
with the OS and are referred to as application program development
tools.
• Program execution: A number of steps need to be performed to
execute a program. Instructions and data must be loaded into main
memory, I/O devices and files must be initialized, and other resources
must be prepared. The OS handles these scheduling duties for the user.
• Access to I/O devices: Each I/O device requires its own peculiar set of
instructions or control signals for operation. The OS provides a
uniform interface that hides these details so that programmers can
access such devices using simple reads and writes; a short sketch of this
appears after this list.
• Controlled access to files: For file access, the OS must reflect a detailed
understanding of not only the nature of the I/O device (disk drive, tape
drive) but also the structure of the data contained in the files on the
storage medium. In the case of a system with multiple users, the OS
may provide protection mechanisms to control access to the files.
• System access: For shared or public systems, the OS controls access to
the system as a whole and to specific system resources. The access
function must provide protection of resources and data from
unauthorized users and must resolve conflicts for resource contention.
• Error detection and response: A variety of errors can occur while a
computer system is running. These include internal and external
hardware errors, such as a memory error, or a device failure or
malfunction; and various software errors, such as division by zero,
an attempt to access a forbidden memory location, and the inability of the OS
to grant the request of an application. In each case, the OS must
provide a response that clears the error condition with the least impact
on running applications. The response may range from ending the
program that caused the error, to retrying the operation, to simply
reporting the error to the application.
• Accounting: A good OS will collect usage statistics for various
resources and monitor performance parameters such as response time.
On any system, this information is useful in anticipating the need for
future enhancements and in tuning the system to improve
performance. On a multiuser system, the information can be used for
billing purposes.
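
To make the uniform I/O interface concrete, here is a minimal sketch in C,
assuming a Unix-like system where devices appear as files under /dev; the
device path /dev/ttyS0 is only an illustrative choice, not something the notes
above prescribe. The same read() and write() calls drive an ordinary file and
a device:

#include <fcntl.h>
#include <unistd.h>

int copy_to_device(const char *src_path)
{
    char buf[512];
    ssize_t n;

    int src = open(src_path, O_RDONLY);      /* an ordinary file              */
    int dev = open("/dev/ttyS0", O_WRONLY);  /* a device, opened the same way */
    if (src < 0 || dev < 0)
        return -1;

    while ((n = read(src, buf, sizeof buf)) > 0)  /* same read()/write() calls */
        write(dev, buf, n);                       /* work for either target    */

    close(src);
    close(dev);
    return 0;
}

The device-specific control signals are handled inside the OS driver; the
programmer sees only the simple read/write interface.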

Types of Operating System.


Serial Processing: With the earliest computers, from the late 1940s to the
mid-1950s, the programmer interacted directly with the computer hardware;
there was no OS. These computers were run from a console consisting of
display lights, toggle switches, some form of input device, and a printer.
Programs in machine code were loaded via the input device (e.g., a card
reader). If an error halted the program, the error condition was indicated by
the lights. If the program proceeded to a normal completion, the output
appeared on the printer. These early systems presented two main problems:
• Scheduling: Most installations used a hardcopy sign-up sheet to reserve
computer time. Typically, a user could sign up for a block of time in
multiples of a half hour or so. A user might sign up for an hour and finish in
45 minutes; this would result in wasted computer processing time. On the
other hand, the user might run into problems, not finish in the allotted time,
and be forced to stop before resolving the problem.

• Setup time: A single program, called a job, could involve loading the
compiler plus the high-level language program (source program) into
memory, saving the compiled program (object program) and then loading
and linking together the object program and common functions. Each of
these steps could involve mounting or dismounting tapes or setting up card
decks. If an error occurred, the hapless user typically had to go back to the
beginning of the setup sequence. Thus, a considerable amount of time was
spent just in setting up the program to run. This mode of operation could be
termed serial processing, reflecting the fact that users have access to the
computer in series. Over time, various system software tools were
developed to attempt to make serial processing more efficient. These include
libraries of common functions, linkers, loaders, debuggers, and I/O driver
routines that were available as common software for all users.

Simple Batch Systems

Early computers were very expensive, and therefore it was important to
maximize processor utilization. The wasted time due to scheduling and
setup time was unacceptable.

The central idea behind the simple batch-processing scheme is the use of a
piece of software known as the monitor. With this type of OS, the user no
longer has direct access to the processor. Instead, the user submits the job on
cards or tape to a computer operator, who batches the jobs together
sequentially and places the entire batch on an input device, for use by the
monitor. Each program is constructed to branch back to the monitor when it
completes processing, at which point the monitor automatically begins
loading the next program. To understand how this scheme works, let us look
at it from two points of view: that of the monitor and that of the processor.

• Monitor point of view: The monitor controls the sequence of events. For
this to be so, much of the monitor must always be in main memory and
available for execution (Figure 2.3). That portion is referred to as the resident
monitor. The rest of the monitor consists of utilities and common functions
that are loaded as subroutines to the user program at the beginning of any
job that requires them. The monitor reads in jobs one at a time from the input
device (typically a card reader or magnetic tape drive). As it is read in, the
current job is placed in the user program area, and control is passed to this
job. When the job is completed, it returns control to the monitor, which
immediately reads in the next job. The results of each job are sent to an
output device, such as a printer, for delivery to the user.

• Processor point of view: At a certain point, the processor is executing
instructions from the portion of main memory containing the monitor. These
instructions cause the next job to be read into another portion of main
memory.
Once a job has been read in, the processor will encounter a branch instruction
in the monitor that instructs the processor to continue execution at the start
of the user program. The processor will then execute the instructions in the
user program until it encounters an ending or error condition. Either event
causes the processor to fetch its next instruction from the monitor program.
Thus, the phrase “control is passed to a job” simply means that the processor
is now fetching and executing instructions in a user program, and “control
is returned to the monitor” means that the processor is now fetching and
executing instructions from the monitor program.

The monitor performs a scheduling function: A batch of jobs is queued up,
and jobs are executed as rapidly as possible, with no intervening idle time.
The monitor improves job setup time as well. With each job, instructions are
included in a primitive form of job control language (JCL).
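
As a rough illustration (the exact control statements varied from one
installation to another, so these $-prefixed names are only representative),
the deck for a single FORTRAN job might look like this:

$JOB
$FTN
   ...FORTRAN source program...
$LOAD
$RUN
   ...data for the program...
$END

Each line beginning with $ is interpreted by the monitor rather than by the
user program: $FTN invokes the compiler, $LOAD and $RUN load and start the
resulting object program, and $END marks the end of the job.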

Multi-programmed Batch Systems

Even with the automatic job sequencing provided by a simple batch OS, the
processor is often idle. The problem is that I/O devices are slow compared to
the processor. Figure 2.4 details a representative calculation. The calculation
concerns a program that processes a file of records and performs, on average,
100 machine instructions per record. In this example, the computer spends
over 96% of its time waiting for I/O devices to finish transferring data to and
from the file. Figure 2.5a illustrates this situation, where we have a single
program, referred to as uniprogramming. The processor spends a certain
amount of time executing, until it reaches an I/O instruction. It must then
wait until that I/O instruction concludes before proceeding.
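
To see where a figure like 96% comes from, take some illustrative timings
(these numbers are assumptions chosen only to make the arithmetic concrete):
suppose reading one record takes 15 microseconds, executing 100 machine
instructions takes 1 microsecond, and writing one record takes another 15
microseconds. Each record then occupies the system for 31 microseconds, of
which the processor computes for only 1; utilization is 1/31, about 3.2%, so
the processor is idle waiting on I/O nearly 97% of the time.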

This inefficiency is not necessary. We know that there must be enough
memory to hold the OS (resident monitor) and one user program. Suppose
that there is room for the OS and two user programs. When one job needs to
wait for I/O, the processor can switch to the other job, which is likely not
waiting for I/O (Figure 2.5b). Furthermore, we might expand memory to
hold three, four, or more programs and switch among all of them (Figure
2.5c). The approach is known as multiprogramming, or multitasking. It is
the central theme of modern operating systems.

To illustrate the benefit of multiprogramming, we give a simple example.
Consider a computer with 250 Mbytes of available memory (not used by the
OS), a disk, a terminal, and a printer. Three programs, JOB1, JOB2, and JOB3,
are submitted for execution at the same time, with the attributes listed in
Table 2.1. We assume minimal processor requirements for JOB2 and JOB3
and continuous disk and printer use by JOB3. For a simple batch
environment, these jobs will be executed in sequence. Thus, JOB1 completes
in 5 minutes. JOB2 must wait until the 5 minutes are over and then completes
15 minutes after that. JOB3 begins after 20 minutes and completes at 30
minutes from the time it was initially submitted.

The average resource utilization, throughput, and response times are shown
in the uniprogramming column of Table 2.2. Device-by-device utilization is
illustrated in Figure 2.6a. It is evident that there is gross underutilization for
all resources when averaged over the required 30-minute time period. Now
suppose that the jobs are run concurrently under a multiprogramming OS.
Because there is little resource contention between the jobs, all three can run
in nearly minimum time while coexisting with the others in the computer
(assuming that JOB2 and JOB3 are allotted enough processor time to keep
their input and output operations active). JOB1 will still require 5 minutes to
complete, but at the end of that time, JOB2 will be one-third finished and
JOB3 half finished. All three jobs will have finished within 15 minutes. The
improvement is evident when examining the multiprogramming column of
Table 2.2, obtained from the histogram shown in Figure 2.6b. As with a
simple batch system, a multiprogramming batch system must rely on certain
computer hardware features. The most notable additional feature that is
useful for multiprogramming is the hardware that supports I/O interrupts
and DMA (direct memory access). With interrupt-driven I/O or DMA, the
processor can issue an I/O command for one job and proceed with the
execution of another job while the I/O is carried out by the device controller.
When the I/O operation is complete, the processor is interrupted and control
is passed to an interrupt-handling program in the OS. The OS will then pass
control to another job.

Time-Sharing Systems

Just as multiprogramming allows the processor to handle multiple batch
jobs at a time, multiprogramming can also be used to handle multiple
interactive jobs. In this latter case, the technique is referred to as time sharing,
because processor time is shared among multiple users. In a time-sharing
system, multiple users simultaneously access the system through terminals,
with the OS interleaving the execution of each user program in a short burst
or quantum of computation. Thus, if there are n users actively requesting
service at one time, each user will only see on the average 1/n of the effective
computer capacity, not counting OS overhead. However, given the relatively
slow human reaction time, the response time on a properly designed system
should be similar to that on a dedicated computer. Both batch processing
and time-sharing use multiprogramming.

Real Time Operating System. An RTOS, or real-time operating system, is a
special-purpose OS for computers that must accomplish tasks within severe
time limitations. All real-time operating systems are built to complete their
tasks in a specific amount of time, so they must be quick enough to meet
their deadline. Time restrictions in real-time systems refer to the time
interval within which the running program must respond. This deadline
means that the task must be performed within the specified time frame.
Systems with strict timing requirements, such as air traffic control, therefore
employ an RTOS.

Types of RTOS:

1. Hard Real-Time: A hard real-time operating system is used when we
need to complete tasks by a given deadline. If the task is not completed
on time, then the system is considered to have failed.
2. Soft Real-Time: A soft real-time operating system is used where small
delays are acceptable. That is, if a given task takes a few seconds more
than the specified time, no critical damage takes place.
3. Firm Real-Time: A firm real-time operating system lies between the
hard and soft real-time operating systems. A firm real-time system is
one in which a few missed deadlines will not lead to total failure, but
missing more than a few may lead to complete system failure.

OS Services.
An operating system provides an environment for the execution of
programs. It provides certain services to programs and to the users of those
programs. The specific services provided, of course, differ from one
operating system to another, but we can identify common classes. These
operating system services are provided for the convenience of the
programmer, to make the programming task easier. Figure 2.1 shows one
view of the various operating-system services and how they interrelate. One
set of operating system services provides functions that are helpful to the
user.
• User interface. Almost all operating systems have a user interface (UI).
This interface can take several forms. One is a command-line interface
(CLI), which uses text commands and a method for entering them (say,
a keyboard for typing in commands in a specific format with specific
options). Another is a batch interface, in which commands and
directives to control those commands are entered into files, and those
files are executed. Most commonly, a graphical user interface (GUI) is
used. Here, the interface is a window system with a pointing device to
direct I/O, choose from menus, and make selections and a keyboard to
enter text. Some systems provide two or all three of these variations.
• Program execution. The system must be able to load a program into
memory and to run that program. The program must be able to end its
execution, either normally or abnormally (indicating error).
• I/O operations. A running program may require I/O, which may
involve a file or an I/O device. For specific devices, special functions
may be desired (such as recording to a CD or DVD drive or blanking
a display screen). For efficiency and protection, users usually cannot
control I/O devices directly. Therefore, the operating system must
provide a means to do I/O.
• File-system manipulation. The file system is of particular interest.
Obviously, programs need to read and write files and directories. They
also need to create and delete them by name, search for a given file,
and list file information. Finally, some operating systems include
permissions management to allow or deny access to files or directories
based on file ownership. Many operating systems provide a variety of
file systems, sometimes to allow personal choice and sometimes to
provide specific features or performance characteristics.
• Communications. There are many circumstances in which one process
needs to exchange information with another process. Such
communication may occur between processes that are executing on the
same computer or between processes that are executing on different
computer systems tied together by a computer network.
Communications may be implemented via shared memory, in which
two or more processes read and write to a shared section of memory,
or message passing, in which packets of information in predefined
formats are moved between processes by the operating system; a short
message-passing sketch follows this list.
• Error detection. The operating system needs to detect and correct
errors constantly. Errors may occur in the CPU and memory
hardware (such as a memory error or a power failure), in I/O devices
(such as a parity error on disk, a connection failure on a network, or
lack of paper in the printer), and in the user program (such as an
arithmetic overflow, an attempt to access an illegal memory location,
or a too-great use of CPU time). For each type of error, the operating
system should take the appropriate action to ensure correct and
consistent computing. Sometimes, it has no choice but to halt the
system. At other times, it might terminate an error-causing process or
return an error code to a process for the process to detect and possibly
correct.
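
As a small sketch of the message-passing style (assuming a Unix-like system,
where a pipe is one concrete channel the operating system provides), a parent
process can send a short message that a child process reads:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1)              /* OS creates the communication channel */
        return 1;

    if (fork() == 0) {               /* child: the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);                    /* parent: the sender */
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                      /* wait for the child to finish */
    return 0;
}

Shared-memory communication would instead map a common region into both
processes and let them read and write it directly, with the OS involved only
in setting the region up.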

Another set of operating system functions exists not for helping the
user but rather for ensuring the efficient operation of the system itself.
Systems with multiple users can gain efficiency by sharing the
computer resources among the users.

• Resource allocation. When there are multiple users or multiple jobs
running at the same time, resources must be allocated to each of them.
The operating system manages many different types of resources.
Some (such as CPU cycles, main memory, and file storage) may have
special allocation code, whereas others (such as I/O devices) may have
much more general request and release code. For instance, in
determining how best to use the CPU, operating systems have CPU-
scheduling routines that take into account the speed of the CPU, the
jobs that must be executed, the number of registers available, and other
factors. There may also be routines to allocate printers, USB storage
drives, and other peripheral devices.
• Accounting. We want to keep track of which users use how much and
what kinds of computer resources. This record keeping may be used
for accounting (so that users can be billed) or simply for accumulating
usage statistics. Usage statistics may be a valuable tool for researchers
who wish to reconfigure the system to improve computing services.
• Protection and security. The owners of information stored in a
multiuser or networked computer system may want to control use of
that information. When several separate processes execute
concurrently, it should not be possible for one process to interfere with
the others or with the operating system itself. Protection involves
ensuring that all access to system resources is controlled. Security of
the system from outsiders is also important. Such security starts with
requiring each user to authenticate himself or herself to the system,
usually by means of a password, to gain access to system resources. It
extends to defending external I/O devices, including network
adapters, from invalid access attempts and to recording all such
connections for detection of break-ins. If a system is to be protected
and secure, precautions must be instituted throughout it. A chain is
only as strong as its weakest link.

System Calls.
System calls provide an interface to the services made available by an
operating system. These calls are generally available as routines written in C
and C++, although certain low-level tasks (for example, tasks where
hardware must be accessed directly) may have to be written using assembly-
language instructions.

Before we discuss how an operating system makes system calls available,
let’s first use an example to illustrate how system calls are used: writing a
simple program to read data from one file and copy them to
another file. The first input that the program will need is the names of the
two files: the input file and the output file. These names can be specified in
many ways, depending on the operating-system design. One approach is for
the program to ask the user for the names. In an interactive system, this
approach will require a sequence of system calls, first to write a prompting
message on the screen and then to read from the keyboard the characters
that define the two files. On mouse-based and icon-based systems, a menu
of file names is usually displayed in a window. The user can then use the
mouse to select the source name, and a window can be opened for the
destination name to be specified. This sequence requires many I/O system
calls.

Once the two file names have been obtained, the program must open the
input file and create the output file. Each of these operations requires another
system call. Possible error conditions for each operation can require
additional system calls. When the program tries to open the input file, for
example, it may find that there is no file of that name or that the file is
protected against access. In these cases, the program should print a message
on the console (another sequence of system calls) and then terminate
abnormally (another system call). If the input file exists, then we must create
a new output file. We may find that there is already an output file with the
same name. This situation may cause the program to abort (a system call),
or we may delete the existing file (another system call) and create a new one
(yet another system call). Another option, in an interactive system, is to ask
the user (via a sequence of system calls to output the prompting message
and to read the response from the terminal) whether to replace the existing
file or to abort the program.

When both files are set up, we enter a loop that reads from the input file (a
system call) and writes to the output file (another system call). Each read and
write must return status information regarding various possible error
conditions. On input, the program may find that the end of the file has been
reached or that there was a hardware failure in the read (such as a parity
error). The write operation may encounter various errors, depending on the
output device (for example, no more disk space).

Finally, after the entire file is copied, the program may close both files
(another system call), write a message to the console or window (more
system calls), and finally terminate normally (the final system call). This
system-call sequence is shown in Figure 2.5.
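
A bare-bones version of this program, written against the Unix system-call
interface, might look as follows (a sketch only: the file names are taken from
the command line rather than prompted for, and every error simply terminates
the program):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[4096];
    ssize_t n;

    if (argc != 3)
        exit(1);                                 /* need input and output names */

    int in  = open(argv[1], O_RDONLY);           /* open the input file          */
    int out = creat(argv[2], 0644);              /* create the output file       */
    if (in < 0 || out < 0)
        exit(1);                                 /* abnormal termination         */

    while ((n = read(in, buf, sizeof buf)) > 0)  /* read (system call)           */
        if (write(out, buf, n) != n)             /* write (system call)          */
            exit(1);

    close(in);                                   /* close both files             */
    close(out);
    exit(0);                                     /* normal termination           */
}

Each of open, creat, read, write, close, and exit in this loop corresponds to
one of the system calls described in the text above.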

For most programming languages, the run-time support system (a set of
functions built into libraries included with a compiler) provides a system-call
interface that serves as the link to system calls made available by the
operating system. The system-call interface intercepts function calls in the
API and invokes the necessary system calls within the operating system.
Typically, a number is associated with each system call, and the system-call
interface maintains a table indexed according to these numbers. The system
call interface then invokes the intended system call in the operating-system
kernel and returns the status of the system call and any return values.
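
Conceptually, that table can be pictured as an array of handler routines
indexed by system-call number. The following C sketch is purely illustrative;
sys_open, sys_read, and sys_write are hypothetical stand-ins, not the entries
of any real kernel's table:

/* Illustrative only: a dispatch table indexed by system-call number. */
typedef long (*syscall_fn)(long a1, long a2, long a3);

static long sys_open(long a1, long a2, long a3)  { return 0; /* kernel work elided */ }
static long sys_read(long a1, long a2, long a3)  { return 0; /* kernel work elided */ }
static long sys_write(long a1, long a2, long a3) { return 0; /* kernel work elided */ }

static const syscall_fn syscall_table[] = { sys_open, sys_read, sys_write };

/* Called by the system-call interface after the trap into the kernel:
 * look up the handler by number and return its status or return value. */
long dispatch(long number, long a1, long a2, long a3)
{
    long count = sizeof syscall_table / sizeof syscall_table[0];
    if (number < 0 || number >= count)
        return -1;               /* unknown system-call number */
    return syscall_table[number](a1, a2, a3);
}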

The relationship between an API, the system-call interface, and the operating
system is shown in Figure 2.6, which illustrates how the operating system
handles a user application invoking the open() system call.

Three general methods are used to pass parameters to the operating system.
The simplest approach is to pass the parameters in registers. In some cases,
however, there may be more parameters than registers. In these cases, the
parameters are generally stored in a block, or table, in memory, and the
address of the block is passed as a parameter in a register (Figure 2.7). This
is the approach taken by Linux and Solaris. Parameters also can be placed,
or pushed, onto the stack by the program and popped off the stack by the
operating system. Some operating systems prefer the block or stack method
because those approaches do not limit the number or length of parameters
being passed.
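
The block (or table) method can be pictured roughly as follows. Everything
here is a hypothetical sketch: the structure, the example call number, and
trap_to_kernel, which merely stands in for the machine instruction that
switches into kernel mode with only the block's address carried in a register.

struct open_params {
    const char *path;   /* name of the file to open           */
    int         flags;  /* requested access mode              */
    int         mode;   /* permissions if the file is created */
};

/* Stand-in for the trap instruction; a real system would place the block's
 * address in a register and switch to kernel mode at this point. */
static long trap_to_kernel(int syscall_number, void *param_block)
{
    (void)syscall_number;
    (void)param_block;
    return 0;            /* pretend the kernel reported success */
}

long request_open(const char *path, int flags, int mode)
{
    struct open_params block = { path, flags, mode }; /* parameters stored in memory  */
    return trap_to_kernel(5, &block);                 /* 5 is an arbitrary example number */
}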

Types of System Calls

System calls can be grouped roughly into six major categories:

• Process control
• File manipulation
• Device manipulation
• Information maintenance
• Communications
• Protection.
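
Taking the first category as an example, a typical process-control sequence on
a Unix-like system creates a new process, loads a program into it, and waits
for it to finish (running ls here is just an arbitrary choice):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                          /* process control: create a process    */

    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child with a new program */
        _exit(1);                                /* only reached if exec failed          */
    }

    int status;
    waitpid(pid, &status, 0);                    /* process control: wait for the child  */
    printf("child finished with status %d\n", status);
    return 0;
}
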
System Programs. System programs, also known as system utilities,
provide a convenient environment for program development and execution.
Some of them are simply user interfaces to system calls. Others are
considerably more complex. They can be divided into these categories:

• File management. These programs create, delete, copy, rename, print,
dump, list, and generally manipulate files and directories.
• Status information. Some programs simply ask the system for the
date, time, amount of available memory or disk space, number of
users, or similar status information. Others are more complex,
providing detailed performance, logging, and debugging information.
Typically, these programs format and print the output to the terminal
or other output devices or files or display it in a window of the GUI.
Some systems also support a registry, which is used to store and
retrieve configuration information.
• File modification. Several text editors may be available to create and
modify the content of files stored on disk or other storage devices.
There may also be special commands to search contents of files or
perform transformations of the text.
• Programming-language support. Compilers, assemblers, debuggers,
and interpreters for common programming languages (such as C, C++,
Java, and PERL) are often provided with the operating system or
available as a separate download.
• Program loading and execution. Once a program is assembled or
compiled, it must be loaded into memory to be executed. The system
may provide absolute loaders, relocatable loaders, linkage editors, and
overlay loaders. Debugging systems for either higher-level languages
or machine language are needed as well.
• Communications. These programs provide the mechanism for
creating virtual connections among processes, users, and computer
systems. They allow users to send messages to one another’s screens,
to browse Web pages, to send e-mail messages, to log in remotely, or
to transfer files from one machine to another.
• Background services. All general-purpose systems have methods for
launching certain system-program processes at boot time. Some of
these processes terminate after completing their tasks, while others
continue to run until the system is halted. Constantly running system-
program processes are known as services, subsystems, or daemons.

Structure of an OS. Assignment


Concept of Virtual Machine. Virtual machines first appeared
commercially on IBM mainframes in 1972. Virtualization was provided by
the IBM VM operating system. This system has evolved and is still available.
In addition, many of its original concepts are found in other systems, making
it worth exploring.
The fundamental idea behind a virtual machine is to abstract the hardware
of a single computer (the CPU, memory, disk drives, network interface
cards, and so forth) into several different execution environments, thereby
creating the illusion that each separate environment is running on its own
private computer. This concept may seem similar to the layered approach of
operating system implementation, and in some ways it is. In the case of
virtualization, there is a layer that creates a virtual system on which
operating systems or applications can run.

Virtual machine implementations involve several components. At the base
is the host, the underlying hardware system that runs the virtual machines.
The virtual machine manager (VMM) (also known as a hypervisor) creates
and runs virtual machines by providing an interface that is identical to the
host (except in the case of paravirtualization, discussed later). Each guest
process is provided with a virtual copy of the host (Figure 16.1). Usually, the
guest process is in fact an operating system. A single physical machine can
thus run multiple operating systems concurrently, each in its own virtual
machine.
Benefits and Features

Several advantages make virtualization attractive. Most of them are
fundamentally related to the ability to share the same hardware yet run
several different execution environments (that is, different operating
systems) concurrently. One important advantage of virtualization is that the
host system is protected from the virtual machines, just as the virtual
machines are protected from each other. A virus inside a guest operating
system might damage that operating system but is unlikely to affect the host
or the other guests. Because each virtual machine is almost completely
isolated from all other virtual machines, there are almost no protection
problems.

Another advantage of virtual machines for developers is that multiple
operating systems can run concurrently on the developer’s workstation. This
virtualized workstation allows for rapid porting and testing of programs in
varying environments.
Virtualization can improve not only resource utilization but also resource
management.

Case Study of Unix and Windows Operating System. Assignment…
