Operating System 4TH Semester

An operating system (OS) serves as an intermediary between users and computer hardware, managing resources and providing an environment for program execution. Key functions include process management, memory management, file management, I/O management, security, and networking. The OS can be structured in various ways, such as simple, layered, micro-kernel, or modular, and includes components like the kernel and shell that facilitate user interaction and resource allocation.

Uploaded by

Taha Uzair

UNIT-1

Introduction to Operating System


An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a user can
execute programs in a convenient and efficient manner.

An operating system is software that manages the computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system and to
prevent user programs from interfering with the proper operation of the system.

Operating System – Definition:


 An operating system is a program that controls the execution of application programs
and acts as an interface between the user of a computer and the computer hardware.
 A more common definition is that the operating system is the one program running at
all times on the computer (usually called the kernel), with all else being application
programs.
 An operating system is concerned with the allocation of resources and services, such
as memory, processors, devices, and information. The operating system
correspondingly includes programs to manage these resources, such as a traffic
controller, a scheduler, memory management module, I/O programs, and a file
system.

Applications of Operating System


Following are some of the important activities that an operating system performs:

 Security − By means of passwords and similar techniques, it prevents
unauthorized access to programs and data.
 Control over system performance − Recording delays between a request for a service
and the response from the system.
 Job accounting − Keeping track of the time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error-detecting aids.
 Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the
computer system.

Functions of an operating system

The primary function of an operating system is to provide an environment for the execution of
user programs. However, it is divided into a number of small pieces that perform specialized
tasks.

The various functions of operating system are:

1. Process Management
2. Main memory Management
3. Secondary storage Management
4. File Management
5. I/O Management
6. Protection and security
7. Networking
8. Command interpretation

1. Process Management
 Process management refers to the assignment of the processor to the different tasks
being performed by the computer system. The operating system schedules the
various processes of a system for execution.
 The operating system is responsible for the following activities in connection with
process management:
1. Creating and deleting both user and system processes.
2. Suspending and resuming processes.
3. Providing mechanisms for process synchronization such as semaphores.
4. Providing mechanisms for process communication (i.e., communication between
different processes in a system).
5. Providing mechanisms for deadlock handling. Deadlock is a condition in which a set
of processes waits indefinitely for some shared resource.
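For illustration, the semaphore mechanism mentioned in point 3 can be sketched in Python, using threads to stand in for processes (the synchronization pattern is the same): a binary semaphore guards the critical section so that concurrent updates do not interfere.

```python
import threading

counter = 0
sem = threading.Semaphore(1)  # binary semaphore guarding the shared counter

def worker():
    global counter
    for _ in range(10000):
        sem.acquire()        # wait (P) operation: enter the critical section
        counter += 1         # shared-resource update protected by the semaphore
        sem.release()        # signal (V) operation: leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment survived, none were lost
```

Without the semaphore, two workers could read the same value of the counter and overwrite each other's increment; the acquire/release pair prevents that.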

2. Memory Management
 The operating system manages the primary memory, or main memory. Main memory
is made up of a large array of bytes or words, where each byte or word is assigned a
certain address.
 Main memory is fast storage that can be accessed directly by the CPU. For a
program to be executed, it must first be loaded into main memory.
 An Operating System performs the following activities for memory management:
1. It keeps track of primary memory, i.e., which bytes of memory are used by which
user program.
2. Deciding which processes are to be loaded into memory when memory space
becomes available
3. Allocating and deallocating memory spaces as needed
 In multiprogramming, the OS decides the order in which processes are granted access
to memory, and for how long.
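Allocation and deallocation (point 3) can be sketched with a toy first-fit allocator; this is a simplification for illustration only, not how a real memory manager is implemented:

```python
# Toy first-fit allocator over a list of (start, size) free holes.
free_holes = [(0, 100)]  # one 100-unit hole initially

def allocate(size):
    """Return the start address of the first hole big enough, or None."""
    for i, (start, hole) in enumerate(free_holes):
        if hole >= size:
            if hole == size:
                free_holes.pop(i)                            # hole consumed exactly
            else:
                free_holes[i] = (start + size, hole - size)  # shrink the hole
            return start
    return None  # no hole large enough

a = allocate(30)   # first 30 units
b = allocate(50)   # next 50 units
print(a, b, free_holes)  # 0 30 [(80, 20)]
```

Deallocation would add the freed region back to the hole list (and merge adjacent holes), which is exactly the "allocating and deallocating memory spaces as needed" activity described above.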

3. Secondary storage Management


 The main memory has a limited size and cannot store all the user programs at once.
Moreover, when the power is lost, the data it holds is also lost. So the computer system
provides secondary storage devices, such as magnetic disks and tapes, to back up main
memory.
 The secondary storage devices store system programs, such as the compiler, editor and
assembler, and user programs that are not used frequently.
 The operating system performs the following functions in connection with disk
management:
1. Free space management, i.e., keeps track of the free space on the disk and reclaims the
space released by deleted files.
2. Storage allocation i.e., allocates storage area for storing new programs.
3. Disk scheduling.
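Disk scheduling (point 3) can be illustrated by comparing FCFS (service requests in arrival order) with SSTF (shortest seek time first), using the classic textbook request queue; total head movement is the usual cost measure:

```python
def fcfs_seek(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    """Total head movement when the nearest pending request is always next."""
    total, pos, pending = 0, start, list(requests)
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

reqs = [98, 183, 37, 122, 14, 124, 65, 67]  # classic textbook request queue
print(fcfs_seek(53, reqs), sstf_seek(53, reqs))  # 640 236
```

Starting at cylinder 53, FCFS moves the head 640 cylinders while SSTF needs only 236, which is why the operating system bothers to reorder the disk queue at all.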

4. File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.

An Operating System does the following activities for file management −

 Keeps track of information about files: their location, uses, status, etc. These collective
facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
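The bookkeeping above can be mimicked with a toy in-memory directory table (purely illustrative; real file systems keep these structures on disk, and the paths here are made up):

```python
# A toy file-system table: directory path -> list of entries it contains.
fs = {"/": ["home", "etc"], "/home": ["notes.txt"], "/etc": []}

def create_file(directory, name):
    """Record a new file in its directory (allocation bookkeeping)."""
    fs.setdefault(directory, []).append(name)

def delete_file(directory, name):
    """Remove a file's entry from its directory (de-allocation bookkeeping)."""
    fs[directory].remove(name)

create_file("/home", "todo.txt")
delete_file("/home", "notes.txt")
print(fs["/home"])  # ['todo.txt']
```

Tracking which names live in which directory, and updating the table on create and delete, is the essence of the "keeps track / allocates / de-allocates" duties listed above.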

5. I/O Management
 I/O management refers to the coordination and assignment of the different input and
output devices to the various programs that are being executed.
 Thus, an OS is responsible for managing various I/O devices, such as keyboard,
mouse, printer and monitor.
 The I/O subsystem consists of the following components:
1. A memory management component that includes buffering, caching and spooling.
2. A general device-driver interface.
3. Drivers for specific hardware devices.
 The operating system performs the following functions related to I/O management:
1. Issuing commands to the various input and output devices.
2. Capturing interrupts, such as those signalling hardware failure.
3. Handling errors that appear while reading from and writing to devices.
6. Protection and Security
 Security deals with protecting the various resources and information of a computer
system against destruction and unauthorized access.
 A total approach to computer security involves both external and internal security.
 External security deals with securing the computer system against external factors
such as fires, floods, earthquakes, stolen disks/tapes, and the leaking of stored information
by a person who has access to it.
 Internal security deals with users’ authentication, access control and cryptography.

7. Networking
 Networking is used for exchanging information among different computers that are
distributed across various locations.
 Distributed systems consist of multiple processors, and each processor has its own
memory and clock.
 The various processors communicate using communication links, such as telephone
lines or buses.
 The processors in a distributed system vary in size and function. They may include
small microprocessors, workstations, minicomputers and large general-purpose
computer systems.
 Thus, a distributed system enables us to share the various resources of the network.
 This results in computation speedup, increased functionality, increased data
availability and better reliability.
8. Command Interpretation
 The command interpreter is the basic interface between the computer and the user.
 The command interpreter provides a set of commands with which the user can give
instructions to the computer for getting some job done.
 The commands supported by the command interpretation module are carried out by
means of system calls.
 When a user gives instructions to the computer using these commands, the command
interpreter takes care of interpreting them and invoking the system calls that direct the
system resources to handle the requests.
 There are two different user interfaces supported by various operating systems:
1. Command line interface
2. Graphical user interface
 Command line interface (CLI):- It is a textual user interface in which the user gives
instructions to the computer by typing commands.
 Graphical user interface (GUI):- A GUI presents the user with a screen of graphical
icons or menus and allows the user to make a rapid selection from the displayed icons
or menus to give instructions to the computer.
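A minimal sketch of a command-line interpreter in Python (illustrative only; `shlex` and `subprocess` stand in for the shell's own parsing and child-process creation):

```python
import shlex
import subprocess

def interpret(line):
    """Parse one command line and run it as a child process, returning its
    exit status -- the shell's core read-interpret-execute step."""
    args = shlex.split(line)          # split respecting quotes, like a shell
    if not args:
        return 0                      # empty line: nothing to do
    completed = subprocess.run(args)  # create the child and wait for it
    return completed.returncode

# A tiny read-interpret loop would wrap this:
#   while True:
#       try:
#           line = input("$ ")        # print the prompt, read a command
#       except EOFError:
#           break
#       interpret(line)

print(interpret("echo hello"))  # runs echo as a child; prints hello then 0
```

The loop in the comment is the familiar CLI cycle: show the prompt, read a line, interpret it, run the program, and repeat.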

Operating system as a resource manager


 A computer system usually has many hardware and software resources such as
processor, memory, disks, printers, I/O devices etc. The operating system acts as a
manager of these resources.
 The operating system is responsible for controlling and allocating the various hardware
and software resources to different users in an optimal and efficient manner.
 The task of resource management becomes essential in multi-user operating systems,
where different users compete for the same resources.
 As a resource manager, an operating system:
1. Keeps track of who is using which resources.
2. Grants resource requests.
3. Accounts for resource usage.
4. Mediates conflicting requests from different programs and users.
 Operating system manages resources in two ways:
1. Time Multiplexing: – It defines the sharing of resources on the basis of fixed time
slices. For example, the operating system allocates a resource, such as the CPU, to
program A for a fixed time slice. After that time slice is over, the CPU is allocated to
another program B, and so on.
2. Space Multiplexing:- It defines the concurrent sharing of resources among different
programs. For example, sharing of hard disk and main memory is space multiplexing.
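Time multiplexing can be sketched as a round-robin simulation: each program runs for at most one fixed time slice before the CPU moves to the next (a toy model; real schedulers are far more elaborate):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time multiplexing of the CPU: each program runs for at most
    one fixed time slice (quantum) before the CPU is switched to the next.
    Returns the order in which (program, slice_length) pairs ran."""
    ready = deque(burst_times.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining > run:
            ready.append((name, remaining - run))  # back of the ready queue
    return schedule

print(round_robin({"A": 5, "B": 3}, quantum=2))
# [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```

Program A needs 5 units and B needs 3; with a quantum of 2 the CPU alternates between them exactly as the time-multiplexing description above says.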
Structures of Operating Systems
An operating system can be implemented with the help of various structures. The structure of the OS
depends mainly on how the various common components of the operating system are
interconnected and melded into the kernel. Depending on this, we have the following structures of
the operating system:

Simple structure:
Such operating systems do not have a well-defined structure and are small, simple and limited
systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example
of such an operating system. In MS-DOS, application programs are able to access the basic I/O
routines. These types of operating systems cause the entire system to crash if one of the user
programs fails.

Diagram of the structure of MS-DOS is shown below.


Layered structure:
An OS can be broken into pieces while retaining much more control over the system. In this structure the
OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware and the
topmost layer (layer N) is the user interface. The layers are designed so that each layer uses the
functions of the lower-level layers only. This simplifies debugging: if the lower-level layers have
already been debugged and an error occurs, the error must be in the layer currently being tested,
since the layers below it are known to be correct.

The main disadvantage of this structure is that at each layer, the data needs to be modified and
passed on, which adds overhead to the system. Moreover, careful planning of the layers is
necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.

Micro-kernel:
This structure designs the operating system by removing all non-essential components from the
kernel and implementing them as system and user programs. This results in a smaller kernel called
the micro-kernel.
The advantage of this structure is that new services are added in user space and do not
require the kernel to be modified. It is thus more secure and reliable: if a service fails, the rest
of the operating system remains untouched. Mac OS is an example of this type of OS.

Modular structure or approach:


It is considered the best approach for an OS. It involves designing a modular kernel. The
kernel has only a set of core components; other services are added as dynamically loadable
modules, either at boot time or at run time. It resembles the layered structure in that each
kernel module has defined and protected interfaces, but it is more flexible than the layered
structure because a module can call any other module.
For example Solaris OS is organized as shown in the figure.

Role of kernel and Shell

 Each process asks for system resources such as computing power, memory, network
connectivity, etc. The kernel is the bulk of the executable code in charge of handling
such requests.
 The kernel is the main component of most computer operating systems. It is a bridge
between applications and the actual data processing done at the hardware level.
The following are the major roles of the kernel:
Resource allocation
 The kernel’s primary function is to manage the computer’s resources and allow other
programs to run and use these resources. These resources are CPU, memory and I/O
devices.
Process Management
 The kernel is in charge of creating, destroying and handling the input/output of
processes.
 Communication among the different processes is the responsibility of the kernel.
Memory Management
 Memory is a major resource of the computer, and the policy used to manage it is
critical.
 The kernel builds a virtual address space for all processes on top of the limited
physical memory. The different parts of the kernel interact with the memory-management
subsystem through function calls.
File System
 The kernel builds a structured file system on top of the unstructured hardware.
 The kernel also supports multiple file-system types, that is, different ways of managing
data.
Device Control
 Almost every system operation eventually maps onto a physical device.
 All device control operations are performed by code that is specific to the device
being addressed. This code is called a device driver.
Inter- Process communication
 The kernel provides methods for synchronization and communication between processes,
called inter-process communication (IPC).
 There are various approaches to IPC, such as semaphores, shared memory, message
queues, pipes, etc.
Security or protection Management
 The kernel also provides protection from faults (error control) and from malicious
behavior.
 One approach toward this is a language-based protection system, in which the kernel
will only allow code to execute that has been produced by a trusted language
compiler.

Role of Shell
 It gathers input from the user and executes programs based on that input; when a program
finishes executing, it displays that program’s output.
 It is the primary interface between a user sitting at a terminal and the operating system,
unless the user is using a graphical interface.
 A shell is an environment in which the user can run commands, programs and shell
scripts.
 There are various kinds of shells, such as sh (Bourne shell), csh (C shell), ksh (Korn
shell) and bash.
 When any user logs in, a shell is started.
 The shell has the terminal as standard input and standard output.
 It starts out by displaying the prompt, a character such as $, which tells the user that the
shell is waiting to accept a command.
 For example, the user types the date command: $ date
 Tue Feb. 23 06:01:13 IST 2019
 The shell creates a child process and runs the date program as the child.
 While the child process is running, the shell waits for it to terminate.
 When the child finishes, the shell prints the prompt again and tries to read the next
input line.
 The shell works as an interface, a command interpreter and a programming language.
 Shell as an interface:
 The shell is the interface between the user and the computer.
 The user can directly interact with the shell.
 The shell provides a command prompt at which the user executes commands.
 Shell as a command interpreter:
 It reads the commands entered by the user at the prompt.
 It interprets each command so the kernel can act on it.
 Shell as a programming language:
 The shell also works as a programming language.
 It provides the usual features of a programming language, such as variables, control
structures and loop structures.
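The create-child/run/wait cycle described above can be sketched in Python on a Unix system (a sketch only: os.fork is Unix-specific, os.waitstatus_to_exitcode needs Python 3.9+, and /bin/date is assumed to exist):

```python
import os

def run_command(path, args):
    """Fork a child, exec the program in the child, and wait in the parent --
    the same create/exec/wait cycle the shell performs for each command."""
    pid = os.fork()
    if pid == 0:                       # child: replace this process image
        try:
            os.execv(path, args)
        finally:
            os._exit(127)              # reached only if exec failed
    _, status = os.waitpid(pid, 0)     # parent (the shell) waits for the child
    return os.waitstatus_to_exitcode(status)

print(run_command("/bin/date", ["date"]))  # child prints the date; then the exit code
```

Just as described above, the parent is blocked in waitpid while the child runs, and only regains control (to print the next prompt) once the child terminates.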

Views of operating system


An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system by itself does not perform useful work for the user; rather, it
provides an environment in which different applications and programs can do useful work.

The operating system can be observed from the point of view of the user or the system. This is
known as the user view and the system view respectively. More details about these are given as
follows –

User View

The user view depends on the system interface that is used by the users. The different types of
user view experiences can be explained as follows −

 If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the
system, but there is no need for the operating system to worry about resource
utilization. This is because the personal computer uses all the resources available and
there is no sharing.
 If the user is using a system connected to a mainframe or a minicomputer, the
operating system is largely concerned with resource utilization. This is because there
may be multiple terminals connected to the mainframe and the operating system
makes sure that all the resources such as CPU, memory, I/O devices etc. are divided
uniformly between them.
 If the user is sitting at a workstation connected to other workstations through
networks, then the operating system needs to focus on both individual usage of
resources and sharing through the network. This happens because the workstation
exclusively uses its own resources but also needs to share files etc. with other
workstations across the network.
 If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.
There are some devices that have very little or no user view because there is no interaction
with users. Examples are embedded computers in home devices, automobiles etc.

System View
According to the computer system, the operating system is the bridge between applications and
hardware. It is most intimate with the hardware and is used to control it as required.

The different types of system view for operating system can be explained as follows:

 The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system can
run as smoothly as possible.
 The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and there are
no errors. It makes sure that the I/O devices work in a proper manner without creating
problems.
 Operating systems can also be viewed as a way to make using hardware easier.
 Computers were built to solve user problems. However, it is not easy to work
directly with the computer hardware, so operating systems were developed to make
communicating with the hardware easier.
 An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the
application programs. This is the definition of the operating system that is generally
followed.

Evolution of OS
The evolution of the OS since 1950 is described in detail in this article. Here we will discuss six
main operating system types that have evolved over the past 70 years.

Evolution of Operating System

Serial Processing
The history of the operating system starts in 1950. Before 1950, programmers interacted directly
with the hardware; there was no operating system at that time. If a programmer wished to execute
a program in those days, the following serial steps were necessary.

 Type the program onto punched cards.
 Submit the punched cards to a card reader.
 Submit the job to the computing machine; if there were any errors, they were indicated
by lights.
 The programmer examined the registers and main memory to identify the cause of the
error.
 Take the output from the printers.
 Then the programmer was ready for the next program.
Drawback:

This type of processing is difficult for users and time-consuming, and the next program has to wait
for the completion of the previous one. The programs are submitted to the machine one after another;
therefore the method is said to be serial processing.

Batch Processing
Before 1960, it was difficult to execute a program because the computer was spread across
three different rooms: one room for the card reader, one room for executing the program and
another room for printing the result.
The user/machine operator had to run between the three rooms to complete a job. This
problem was solved by batch processing.
In the batch processing technique, jobs of the same type are batched together and executed at a
time. The carrier carries the group of jobs at a time from one room to another.
Therefore, the programmer need not run between these three rooms several times.
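The grouping step can be sketched in a few lines: jobs of the same type are collected into batches so that each batch is carried through the rooms only once (the job names and types here are made up for illustration):

```python
# (type, job) pairs as they arrive at the operator's desk.
jobs = [("compile", "j1"), ("print", "j2"), ("compile", "j3"), ("print", "j4")]

# Batch together jobs of the same type so the operator carries each
# batch through the machine rooms only once.
batches = {}
for kind, job in jobs:
    batches.setdefault(kind, []).append(job)

print(batches)  # {'compile': ['j1', 'j3'], 'print': ['j2', 'j4']}
```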

Multiprogramming
Multiprogramming is a technique to execute a number of programs apparently simultaneously on a
single processor. In multiprogramming, a number of processes reside in main memory at a time. The
OS (operating system) picks and begins to execute one of the jobs in main memory. Consider the
following figure; it depicts the layout of a multiprogramming system. The main memory holds
5 jobs at a time, and the CPU executes them one by one.

Multiprogramming

In a non-multiprogramming system, the CPU can execute only one program at a time; if the
running program is waiting for any I/O device, the CPU becomes idle, which affects the
performance of the CPU.

But in a multiprogramming environment, if any I/O wait happens in a process, then the CPU
switches from that job to another job in the job pool. So, the CPU is not idle at any time.

Advantages:

 Efficient memory utilization is obtained.
 The CPU is never idle, so the performance of the CPU increases.
 The throughput of the CPU may also increase.
 In a non-multiprogramming environment, the user/program has to wait a long time for
the CPU; in multiprogramming, waiting time is much reduced.
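The gain in CPU utilization can be estimated with the standard probabilistic model: if each job spends a fraction p of its time waiting for I/O and n jobs are in memory, the CPU is idle only when all n jobs are waiting at once, so utilization ≈ 1 − p^n:

```python
def cpu_utilization(io_wait_fraction, degree):
    """Approximate CPU utilization with `degree` jobs in memory, each
    spending `io_wait_fraction` of its time waiting for I/O."""
    return 1 - io_wait_fraction ** degree

# With 80% I/O wait, one job keeps the CPU only 20% busy;
# four jobs in memory raise utilization to about 59%.
print(round(cpu_utilization(0.8, 1), 2))  # 0.2
print(round(cpu_utilization(0.8, 4), 2))  # 0.59
```

The model assumes the jobs' I/O waits are independent, which is optimistic, but it captures why loading more jobs into memory raises CPU utilization.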

Time Sharing System


Time-sharing or multitasking is a logical extension of multiprogramming. Multiple jobs are
executed by the CPU switching between them. The CPU scheduler selects a job from the ready
queue and switches the CPU to that job. When the time slot expires, the CPU switches from this
job to another.
In this method, the CPU time is shared by different processes, so it is said to be a "time-sharing
system". Generally, time slots are defined by the operating system.
Advantages:

 The main advantage of the time-sharing system is efficient CPU utilization. It was
developed to provide interactive use of a computer system at a reasonable cost. A
time-shared OS uses CPU scheduling and multiprogramming to provide each user
with a small portion of a time-shared computer.
 Another advantage of the time-sharing system over the batch processing system is
that the user can interact with the job while it is executing, which is not possible in
batch systems.
Parallel System
There is a trend toward multiprocessor systems; such systems have more than one processor in close
communication, sharing the computer bus, the clock, and sometimes memory and peripheral
devices.

These systems are referred to as "tightly coupled" systems, and such a system is called a parallel
system. In a parallel system, a number of processors execute their jobs in parallel.
Advantages:

 It increases throughput.
 By increasing the number of processors (CPUs), more work is done in a shorter
period of time.

Distributed System
In a distributed operating system, the processors do not share memory or a clock; each
processor has its own local memory. The processors communicate with one another through
various communication lines, such as high-speed buses. These systems are referred to as
"loosely coupled" systems.
Advantages:

 If a number of sites are connected by high-speed communication lines, it is possible to
share resources from one site to another. For example, s1 and s2 are two sites
connected by some communication lines. Site s1 has a printer, but site s2 does not;
then a user at s2 can use the printer at s1 without moving from s2 to s1. Therefore,
resource sharing is possible in a distributed operating system.
 A big computation can be partitioned into a number of sub-computations that run
concurrently in a distributed system.
 If a resource or a system fails at one site due to technical problems, we can use other
systems/resources at some other site. So, reliability increases in a distributed
system.
Types of Operating Systems
An operating system performs all the basic tasks like managing files, processes, and memory.
It thus acts as the manager of all the resources, i.e., a resource manager, and becomes an
interface between the user and the machine.
Types of Operating Systems: Some widely used operating systems are as follows-

1. Single user system


 In a single-user operating system, a single user can access the computer at a particular
time.
 Computers based on this operating system have only a single processor and execute
only a single program at any time.
 This system provides all the resources, such as the CPU and I/O devices, to a single
user at all times.
 Single-user operating systems are of two types:
1. Single-user, single-tasking operating system
2. Single-user, multi-tasking operating system
 The single-user, single-tasking operating system allows a single user to execute one
program at a particular time. For example, MS-DOS and Palm OS for Palm handheld
computers are single-user, single-tasking OSs.
 The single-user, multi-tasking operating system allows a single user to execute
multiple programs at the same time. For example, a user can perform different tasks
such as making calculations in an Excel sheet, printing a Word document and
downloading a file from the internet at the same time.
 The main disadvantage of this operating system is that the CPU sits idle for most of
the time and is not utilized to its maximum.

2. Multi user System

 In a multi-user operating system, multiple users can access different resources of a
computer at the same time.
 Access is provided using a network that consists of various personal computers
attached to a mainframe computer system.
 The various personal computers can send information to and receive information from
the mainframe computer system.
 Thus, the mainframe computer acts as the server and the other personal computers act
as clients of that server.
 Examples of multi-user systems are UNIX, Windows 2000 and Novell NetWare.

Advantages of Multi-user system

 It helps in the sharing of data and information among different users.
 It also helps in the sharing of hardware resources such as printers and modems.
Disadvantages of Multi-user system
 It requires expensive hardware to set up a mainframe computer.
 When multiple users log on to or work on the same system, the overall performance
of the system is reduced.

3. Batch Operating System


This type of operating system does not interact with the computer directly. There is an operator
which takes similar jobs having the same requirement and groups them into batches. It is the
responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System:

 Although it is otherwise very difficult to guess or know the time required for any job
to complete, the processors of batch systems know how long a job will take while it is
in the queue.
 Multiple users can share the batch system.
 The idle time of the batch system is very low.
 It is easy to manage large amounts of work repeatedly in batch systems.

Disadvantages of Batch Operating System:

 The computer operators should be well acquainted with batch systems.
 Batch systems are hard to debug.
 Batch systems are sometimes costly.
 The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch based Operating System: Payroll System, Bank Statements, etc.

4. Multi Programming

In a multiprogramming system, multiple programs can be loaded into main memory for
execution. Only one program or process at a time gets the CPU to execute its instructions,
while the other programs wait for their turn. The main goal of a multiprogramming system is
to overcome the under-utilization of the CPU and primary memory.
The main objective of multiprogramming is to manage the entire resources of the system. The
primary components of a multiprogramming system are the command processor, file system,
I/O control system, and transient area.

Requirement of Multiprogramming system


1. Large memory
 For multiprogramming to work satisfactorily, a large main memory is required to
accommodate a good number of user programs along with the operating system.
2. Memory Protection
 Computers designed for multiprogramming must provide some type of memory
protection mechanism to prevent a program in one memory partition from changing
the information or instructions of a program in another memory partition.
3. Job status Preservation
 In multiprogramming, when one running job is blocked for an I/O operation, the CPU is
taken away from that job and given to some other job. Later on, when that job has
finished its I/O operation, its execution needs to be resumed.
 This requires preserving the status information of that job and restoring this
information later.
 For this, an operating system uses a process control block to save the status of each
process.
4. Proper Job mix
 A proper mix of I/O-bound and CPU-bound processes is required so that the operations
of the CPU and the I/O devices are balanced.
 If all the loaded jobs need I/O at the same time, the CPU will be idle.
 Thus, the main memory should contain some CPU-bound and some I/O-bound jobs so
that at least one job is always ready to utilize the CPU.
5. CPU Scheduling
 In a multiprogramming system, there will often be situations in which two or more
jobs are in the ready state, waiting for the CPU to be allocated for execution.
 In such a case, the operating system must decide which process or job the CPU
should be allocated to.
Advantages
 High CPU utilization.
 It appears that many programs are allotted the CPU almost concurrently.
Disadvantages
 CPU scheduling is required.
 To accommodate several jobs in memory, memory management is required.
5. Multitasking System
A multitasking operating system provides an interface for a single user to execute multiple program tasks at the same time on one computer system. For example, an editing task can be performed while other programs execute concurrently; likewise, a user can open Gmail and PowerPoint at the same time.

Types of Multitasking Operating System


True Multitasking
True multitasking is the ability to execute and process multiple tasks concurrently, without
delay, rather than switching tasks from one processor to another. It can perform a
couple of tasks in parallel with the underlying hardware or software.

Preemptive Multitasking

Preemptive multitasking is a technique in which the operating system decides how much time
one task spends on the CPU before another task is assigned to it. Because the operating system
has control over this entire process, it is known as "preemptive".

Cooperative Multitasking
Cooperative multitasking is also known as "non-preemptive multitasking". The main goal of cooperative
multitasking is to let the currently running task voluntarily release the CPU so that another task can run. This is
done by calling taskYIELD(). A context switch is executed when this function is called.
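The cooperative model above can be sketched in a few lines of Python. This is a minimal illustration, not a real OS scheduler: each task is a generator that does one step of work and then yields, which plays the role of taskYIELD(); the scheduler only switches tasks at those voluntary yield points.

```python
# Sketch of cooperative (non-preemptive) multitasking using generators.
# A task keeps the "CPU" until it voluntarily yields, like taskYIELD().
from collections import deque

def task(name, steps, trace):
    """A cooperative task: records one step of work, then yields the CPU."""
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield  # voluntary context switch, analogous to taskYIELD()

def run(tasks):
    """Round-robin over the tasks; switches only when a task yields."""
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)          # resume the task until its next yield
            ready.append(t)  # put it back at the rear of the ready queue
        except StopIteration:
            pass             # task finished; drop it

trace = []
run([task("A", 2, trace), task("B", 2, trace)])
print(trace)  # ['A0', 'B0', 'A1', 'B1'] -- tasks interleave only at yields
```

Note that if a task never yields, no other task ever runs: that is exactly the weakness of cooperative multitasking that preemptive multitasking fixes.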

Advantages of Multitasking Operating System


Time Shareable
All tasks are allocated a specific slice of time, so they do not need to wait long for the
CPU.

Manage Several Users


This operating system is well suited to handling multiple users concurrently, and several
programs can run smoothly without degrading the system's performance.
Secured Memory
A multitasking operating system has well-defined memory management: it does not give
unwanted programs any permission to waste memory.

Great Virtual Memory


A multitasking operating system contains a good virtual memory system. Thanks to virtual memory,
programs do not need a long waiting time to complete their tasks; when main memory runs
short, the affected programs are moved to virtual memory.

Background Processing
A multitasking operating system creates a better environment for executing background
programs. These background programs are not visible to normal users, but they
help other programs run smoothly, such as firewalls, antivirus software, and more.

Good Reliability
A multitasking operating system provides flexibility for multiple users, leaving them
more satisfied. Every user can operate single or multiple programs smoothly.

Use Multiple Programs


Users can operate multiple programs such as internet browser, PowerPoint, MS Excel, games,
and other utilities concurrently.

Optimize Computer Resources


A multitasking operating system can smoothly handle multiple computer resources such as
RAM, input/output devices, the CPU, the hard disk, and more.

Disadvantages of Multitasking Operating System


Memory Limitation
The computer's performance can become slow when running multiple programs at the same time, because main
memory carries a heavier load while multiple programs are loaded. The CPU cannot provide separate
time for every program, so response time increases. The main cause of this
problem is insufficient RAM capacity, so increasing the RAM capacity is a possible
solution.

Processor Limitation
The computer can run programs slowly due to the slow speed of its processor, and response time
can increase while handling multiple programs. Better processing power is needed to overcome this
problem.
CPU Heat-up
The processor stays busier while executing multiple tasks at a time, so the
CPU produces more heat.

Examples of Multitasking Operating System


Some examples of multitasking operating systems are:
 Windows XP
 Windows Vista
 Windows 7
 Windows 8
 Windows 10
 Windows 2000
 IBM’s OS/390
 Linux
 UNIX
Difference between Multiprogramming and multitasking

Sr.no | Multiprogramming | Multi-tasking

1. | Both of these concepts are for a single CPU. | Both of these concepts are for a single CPU.
2. | The concept of context switching is used. | The concepts of context switching and time sharing are used.
3. | In a multiprogrammed system, the operating system simply switches to, and executes, another job when the current job needs to wait. | The processor is typically used in time-sharing mode; switching happens either when the allowed time expires or when the current process needs to wait (e.g., for I/O).
4. | Multiprogramming increases CPU utilization by organising jobs. | Multitasking also increases CPU utilization, and it increases responsiveness as well.
5. | The idea is to reduce CPU idle time as far as possible. | The idea is to further extend CPU utilization by increasing responsiveness through time sharing.

6. Multiprocessing System
 A multiprocessor system contains two or more processors (CPUs) and has the
ability to execute several programs simultaneously; hence the name multiprocessor.
 In such a system, multiple processors share the clock, bus, memory and peripheral
devices.
 A multiprocessor system is also known as a parallel system.
 In such a system, instructions from different and independent programs can be
processed at the same instant of time by different CPUs.
 In this system, the CPUs can also simultaneously execute different instructions from the same
program.

Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows −

Symmetric Multiprocessors
In these types of systems, each processor runs an identical copy of the operating system, and the processors
all communicate with each other. All the processors are in a peer-to-peer relationship, i.e., no
master-slave relationship exists between them.

An example of the symmetric multiprocessing system is the Encore version of UNIX for the
Multimax Computer.

Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor that
gives instruction to all the other processors. Asymmetric multiprocessor system contains a master
slave relationship.

Asymmetric multiprocessor was the only type of multiprocessor available before symmetric
multiprocessors were created. Now also, this is the cheaper option.

Advantages of Multiprocessor Systems


There are multiple advantages to multiprocessor systems. Some of these are −

More reliable Systems


In a multiprocessor system, even if one processor fails, the system will not halt. This ability to
continue working despite hardware failure is known as graceful degradation. For example, if
there are 5 processors in a multiprocessor system and one of them fails, the remaining 4 processors
still work. The system only becomes slower; it does not grind to a halt.

Enhanced Throughput
If multiple processors work in tandem, the throughput of the system increases, i.e., the
number of processes executed per unit of time increases. If there are N processors, the
throughput increases by an amount just under N.

More Economic Systems


Multiprocessor systems are cheaper than single processor systems in the long run because they
share the data storage, peripheral devices, power supplies etc. If there are multiple processes that
share data, it is better to schedule them on multiprocessor systems with shared data than have
different computer systems with multiple copies of the data.

Disadvantages of Multiprocessor Systems


There are some disadvantages as well to multiprocessor systems. Some of these are:

Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple computer
systems, still they are quite expensive. It is much cheaper to buy a simple single processor system
than a multiprocessor system.
Complicated Operating System Required
There are multiple processors in a multiprocessor system that share peripherals, memory, etc. So,
it is much more complicated to schedule processes and allocate resources to processes than in
single-processor systems. Hence, a more complex and complicated operating system is required
in multiprocessor systems.

Large Main Memory Required


All the processors in the multiprocessor system share the memory. So a much larger pool of
memory is required as compared to single processor systems.

Difference between Multiprocessing and multiprogramming

S.No. | Multiprocessing | Multiprogramming

1. | The availability of more than one processor per system, able to execute several sets of instructions in parallel, is known as multiprocessing. | The concurrent residence of more than one program in the main memory is known as multiprogramming.
2. | The number of CPUs is more than one. | The number of CPUs is one.
3. | It takes less time for job processing. | It takes more time to process the jobs.
4. | More than one process can be executed at a time. | One process can be executed at a time.
5. | It is economical. | It is economical.
6. | The number of users can be one or more than one. | The number of users is one at a time.
7. | Throughput is maximum. | Throughput is less.
8. | Its efficiency is maximum. | Its efficiency is less.

7. Time-Sharing Operating Systems –


Each task is given some time to execute so that all the tasks work smoothly. Each
user gets CPU time as if using a single dedicated system. These systems are also known
as multitasking systems. The tasks can come from a single user or from different users.
The time that each task gets to execute is called a quantum. After this time interval is
over, the OS switches over to the next task.

Advantages of Time-Sharing OS:


 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of the security and integrity of user programs and data
 Data communication problem

Examples of Time-Sharing OSs are: Multics, UNIX, etc.
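The quantum-based switching described above can be sketched with a simple round-robin simulation. This is an illustrative model only: the quantum of 2 time units and the burst times are made-up numbers, and a real OS dispatches running processes rather than returning a list.

```python
# Sketch of time sharing: every task runs for at most one quantum,
# then the "OS" switches to the next task in the ready queue.
from collections import deque

QUANTUM = 2  # time units each task may run per turn (arbitrary choice)

def round_robin(burst_times):
    """burst_times maps task name -> total CPU time it needs.
    Returns the order in which tasks are given the CPU."""
    ready = deque(burst_times.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)            # task gets the CPU for one quantum
        remaining -= QUANTUM
        if remaining > 0:
            ready.append((name, remaining))  # not finished: rear of the queue
    return schedule

print(round_robin({"P1": 4, "P2": 2, "P3": 6}))
# ['P1', 'P2', 'P3', 'P1', 'P3', 'P3']
```

Each task gets an equal opportunity at the CPU, which is exactly the fairness property listed among the advantages above.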


8. Distributed Operating System
These types of operating systems are a recent advancement in the world of computer technology
and are being widely accepted all over the world, and that too at a great pace. Various
autonomous, interconnected computers communicate with each other using a shared
communication network. The independent systems possess their own memory unit and CPU. These
are referred to as loosely coupled systems or distributed systems. These systems' processors
differ in size and function. The major benefit of working with these types of operating systems
is that a user can always access files or software that are not actually
present on his own system but on some other system connected to the network, i.e., remote access is
enabled between the devices connected to that network.

Advantages of Distributed Operating System:


 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 To establish distributed systems, the languages used are not well defined yet
 These types of systems are not readily available as they are very expensive. Not only
that the underlying software is highly complex and not understood well yet
Examples of Distributed Operating System are- LOCUS, etc.
9. Network Operating System
These systems run on a server and provide the capability to manage data, users,
groups, security, applications, and other networking functions. These types of
operating systems allow shared access of files, printers, security, applications, and
other networking functions over a small private network. One more important aspect
of Network Operating Systems is that all the users are well aware of the underlying
configuration, of all other users within the network, their individual connections, etc.
and that’s why these computers are popularly known as tightly coupled systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated into the system
 Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:
 Servers are costly
 User has to depend on a central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.
10. Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
Two types of Real-Time Operating System which are as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even
the shortest possible delay is not acceptable. These systems are built for life-saving
purposes, like automatic parachutes or airbags, which must be readily available in case
of any accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.
Advantages of RTOS:
 Maximum Consumption: Maximum utilization of devices and the system, thus more
output from all the resources.
 Task Shifting: The time assigned for shifting tasks in these systems is very short. For
example, in older systems it takes about 10 microseconds to shift from one task to
another, while in the latest systems it takes 3 microseconds.
 Focus on Application: Focus is on running applications, with less importance given to
applications waiting in the queue.
 Real-time operating systems in embedded systems: Since program sizes are
small, an RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are error-free.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and concentration is kept on
very few applications to avoid errors.
 Heavy use of system resources: Sometimes the system resources are not so good, and
they are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer
to write.
 Device drivers and interrupt signals: It needs specific device drivers and interrupt
signals to respond to interrupts as early as possible.
 Thread Priority: It is not good to set thread priority, as these systems are very little
prone to switching tasks.
Examples of Real-Time Operating Systems are: Scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

Difference between Hard real time and Soft real time system :

HARD REAL TIME SYSTEM | SOFT REAL TIME SYSTEM

In a hard real-time system, the size of a data file is small or medium. | In a soft real-time system, the size of a data file is large.
In this system, response time is in milliseconds. | In this system, response times are higher.
Peak load performance should be predictable. | In a soft real-time system, peak load can be tolerated.
In this system, safety is critical. | In this system, safety is not critical.
A hard real-time system is very restrictive. | A soft real-time system is less restrictive.
In case of an error in a hard real-time system, the computation is rolled back. | In case of an error in a soft real-time system, the computation is rolled back to a previously established checkpoint.
Examples: satellite launch, railway signaling system, etc. | Examples: DVD player, telephone switches, electronic games, etc.

11. Multi-threaded Operating System


 Multithreading is a technique in which a process executing an application is divided
into threads that can run concurrently.
 A thread is a dispatchable unit of work. It includes a processor context and its own
data area for a stack.
 A thread executes sequentially and is interruptible, so that the processor can turn to
another thread.
 Thus, a thread represents a lightweight process and is the smallest unit of CPU
utilization. It is like a mini-process.
 A process, on the other hand, is a collection of one or more threads and associated system
resources.
 A thread is not a process by itself. It cannot run on its own; it always runs within a
process.
 Thus, a multithreaded process may have multiple execution flows, different ones
belonging to different threads.
 All the threads of a process share the same private address space of the process and share
all the resources acquired by the process.
 By breaking a single application into multiple threads, the programmer gains great
control over the modularity of the application and the timing of application-related
events.
The various states exhibited by a Windows thread are:
1. Ready: A ready thread may be scheduled for execution. The kernel dispatcher keeps
track of all ready threads and schedules them in priority order.
2. Standby: A thread that has been selected to run next on a particular processor is said
to be in the standby state. The thread waits in this state until the processor is made
available. If the priority of the standby thread is higher than that of the thread currently
running on the processor, then the running thread may be preempted.
3. Running: The thread that is currently utilizing the CPU is in the running state. It keeps the
processor until it is preempted by a higher-priority thread, it gets blocked, or its time
slice expires.
4. Waiting: A thread enters the waiting state when:
 it is blocked on an event,
 it voluntarily waits for synchronization purposes, or
 an environment subsystem directs the thread to suspend itself.
5. Transition: A thread enters this state after waiting if it is ready to run but resources
are not available, e.g. the thread's stack may be paged out of memory. When
resources become available, the thread goes to the ready state.
6. Terminated: A thread can be terminated by itself, by another thread, or when its
parent process terminates.
Difference between Real Time OS and Time sharing OS

S.NO | Time-Sharing Operating System | Real-Time Operating System

1. | In a time-sharing operating system, quick response to a request is emphasized. | In a real-time operating system, completing computation tasks before their deadlines is emphasized.
2. | In this operating system, a switching method/function is available. | In this operating system, a switching method/function is not available.
3. | In this operating system, any modification in the program is possible. | In this, modification does not take place.
4. | In this OS, computer resources are shared externally. | In this OS, computer resources are not shared externally.
5. | It deals with more than one process or application simultaneously. | It deals with only one process or application at a time.
6. | In this OS, the response is provided to the user within a second. | In a real-time OS, the response is provided to the user within a time constraint.

Process & Thread Management


Process vs. Program
1. Program
When we execute a program that was just compiled, the OS will generate a process to execute the
program. Execution of the program starts via GUI mouse clicks, command line entry of its name,
etc. A program is a passive entity as it resides in the secondary memory, such as the contents of a
file stored on disk. One program can have several processes.

2. Process:
The term process (Job) refers to program code that has been loaded into a computer’s memory so
that it can be executed by the central processing unit (CPU). A process can be described as an
instance of a program running on a computer or as an entity that can be assigned to and executed
on a processor. A program becomes a process when loaded into memory and thus is an active
entity.

Difference between Program and Process:

Sr.No. | Program | Process

1. | A program contains a set of instructions designed to complete a specific task. | A process is an instance of an executing program.
2. | A program is a passive entity, as it resides in secondary memory. | A process is an active entity, as it is created during execution and loaded into main memory.
3. | A program exists at a single place and continues to exist until it is deleted. | A process exists for a limited span of time, as it gets terminated after the completion of its task.
4. | A program is a static entity. | A process is a dynamic entity.
5. | A program does not have any resource requirement; it only requires memory space for storing the instructions. | A process has a high resource requirement; it needs resources like CPU, memory address space, and I/O during its lifetime.
6. | A program does not have any control block. | A process has its own control block, called the Process Control Block.

PCB (Process Control Block)


 A process control block is a data structure used by the operating system to store all the
information about a process. It is also known as a process descriptor.
 When a process is created, the operating system creates a corresponding process
control block.
 Information in a process control block is updated during the transition of process
states.
 When a process terminates, its PCB is released to the pool of free cells from which
new PCBs are drawn.
 Each process has a single PCB.
The PCB of a process contains the following information about the process

 Process state: A process can be new, ready, running, waiting, etc.


 Program counter: The program counter gives the address of the next
instruction to be executed for the process.
 CPU registers: This component includes accumulators, index and general-purpose
registers, and condition code information.
 CPU scheduling information: This component includes the process priority, pointers
to scheduling queues, and various other scheduling parameters.
 Accounting and business information: This includes the amount of CPU time and
real time used, job or process numbers, etc.
 Memory-management information: This includes the values of the base
and limit registers and the page or segment tables, depending on the memory system
used by the operating system.
 I/O status information: This block includes the list of open files, the list of I/O
devices allocated to the process, etc.
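The fields above can be gathered into a small data structure. This is only an illustrative sketch: a real PCB lives in kernel memory and holds many more fields, and the names below are chosen to mirror the list, not taken from any actual OS.

```python
# Sketch of a Process Control Block as a Python dataclass.
# Field names follow the list above; values here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0              # CPU scheduling information
    memory_limits: tuple = (0, 0)  # base and limit register values
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"        # updated on each state transition
print(pcb.pid, pcb.state)  # 42 ready
```

When the process is context-switched out, the OS saves the CPU context (program counter, registers) into exactly these kinds of fields, and restores them when the process is dispatched again.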

Process State Transition


 A state transition is a change from one state to another. A state transition is caused by
the occurrence of some event in the system.
 A process has to go through various states for performing its task.
 The transition of a process from one state to another occurs depending on the flow of
the execution of the process.
 A new process is added to a data structure called the ready queue, also known as the ready
pool or pool of executable processes. This queue stores all processes in a first-in,
first-out (FIFO) manner. A new process is added at the rear of the ready queue,
and the process at the front of the ready queue is sent for execution.
 Each process is assigned a time slice for execution. A time slice is a very short
period of time, and its duration varies between systems.
 If the process does not voluntarily release the CPU before its time slice expires, the
interrupting clock generates an interrupt, causing the operating system to regain
control.
 The CPU executes the process at the front of the ready queue, and that process
makes a state transition from the ready to the running state. The assignment of the CPU
to the first process on the ready queue is called dispatching. This transition is
indicated as:
dispatch(processname): ready → running

 When the time slice expires, the operating system adds the previously running process to the rear of the
ready queue and allocates the CPU to the first process on the ready queue. This state
transition is indicated as:
timerunout(processname): running → ready
 If a running process initiates an input/output operation before its time slice expires,
the running process voluntarily releases the CPU. It is sent to the waiting queue and
the process state is marked as waiting/blocked. This state transition is indicated as:
block(processname): running → blocked

 After the completion of the I/O task, the blocked or waiting process is restored and
placed back in the ready queue, and its state is marked as ready.
 When the execution of a process ends, its state is marked as terminated and the
operating system reclaims all the resources allocated to the process.
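The legal transitions above can be captured in a small table-driven sketch. This is an illustration of the state machine only, not real dispatcher code; the event names mirror the dispatch/timerunout/block notation used above, and "wakeup" is the assumed name for the I/O-completion transition.

```python
# Sketch of process state transitions as a table of allowed moves;
# any transition not listed here is illegal and raises an error.
TRANSITIONS = {
    ("ready", "running"): "dispatch",      # CPU allocated
    ("running", "ready"): "timerunout",    # time slice expired
    ("running", "blocked"): "block",       # process started I/O
    ("blocked", "ready"): "wakeup",        # I/O completed (assumed name)
    ("running", "terminated"): "exit",     # execution finished
}

class Process:
    def __init__(self, name):
        self.name, self.state = name, "ready"

    def move(self, new_state):
        event = TRANSITIONS.get((self.state, new_state))
        if event is None:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        return f"{event}({self.name}): -> {new_state}"

p = Process("P1")
print(p.move("running"))  # dispatch(P1): -> running
print(p.move("blocked"))  # block(P1): -> blocked
print(p.move("ready"))    # wakeup(P1): -> ready
```

Notice that a blocked process can only go back to ready, never directly to running: it must wait in the ready queue for the dispatcher like everyone else.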

Scheduling Queues
 In multiprogramming when several processes are in waiting for I/O operation, they
form queues.
 The various queues maintained by operating system are:
1. Job Queue
 As processes enter the system, they are put into the job queue. This queue consists of all
the processes in the system.
2. Ready queue
 It is a doubly linked list of processes that are residing in the main memory and are
ready to run.
 The various processes in ready queue are placed according to their priority i.e. higher
priority process is at the front of the queue.
 The header of ready queue contains two pointers. The first pointer points to the PCB
of first process and the second pointer points to the PCB of last process in the queue.

3. Device Queue
 Device queue contains all those processes that are waiting for a particular I/O device.
 Each device has its own device queue.

Types of schedulers
Scheduler
 A scheduler is an operating system module that selects the next job or process to be
admitted into the system.
 Thus, a scheduler selects one of the processes from among the processes in the
memory that are ready to execute and allocates CPU to it.

 In complex operating system three different types of schedulers may exist.


1) Long term scheduler
2) Medium term scheduler

3) Short term scheduler

1. Long term scheduler


 The job scheduler or long-term scheduler selects processes from the storage pool in
the secondary memory and loads them into the ready queue in the main memory for
execution.
 The long-term scheduler controls the degree of multiprogramming. It must select a
careful mixture of I/O bound and CPU bound processes to yield optimum system
throughput. If it selects too many CPU bound processes then the I/O devices are idle
and if it selects too many I/O bound processes then the processor has nothing to do.
 The job of the long-term scheduler is very important and directly affects the system
for a long time.
2) Medium term scheduler
 Medium-term scheduling is an important part of swapping. It handles the
swapped-out processes.
 A running process can become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. To remove the
process from memory and make space for other processes, the suspended process
is moved to secondary storage.
3) Short term scheduler
 Short-term scheduling is also known as CPU scheduling. The main goal of this
scheduler is to boost system performance according to a set of criteria. It selects
from among the processes that are ready to execute and allocates the CPU to one
of them. The dispatcher gives control of the CPU to the process selected by the short-
term scheduler.
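A priority-based short-term scheduler can be sketched with a heap-backed ready queue. This is a toy model: the process names and priorities are invented, and lower numbers are assumed to mean higher priority, as in many (but not all) real schedulers.

```python
# Sketch of a short-term (CPU) scheduler: pick the highest-priority
# process from the ready queue. Lower number = higher priority (assumed).
import heapq

ready_queue = []  # min-heap of (priority, name) pairs

def admit(name, priority):
    """Add a process to the ready queue (the long-term scheduler's role)."""
    heapq.heappush(ready_queue, (priority, name))

def dispatch():
    """Choose the next process for the CPU (the short-term scheduler's role)."""
    priority, name = heapq.heappop(ready_queue)
    return name

admit("editor", 2)
admit("compiler", 3)
admit("interrupt_handler", 1)
print(dispatch())  # interrupt_handler -- the highest priority runs first
print(dispatch())  # editor
```

A heap keeps the highest-priority process at the front, which matches the earlier note that processes in the ready queue are placed according to their priority.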

Concept of Thread
 A thread is a single sequential flow of execution of the tasks of a process.
 A thread is a lightweight process and the smallest unit of CPU utilization. Thus a
thread is like a little miniprocess.
 Each thread has a thread id, a program counter, a register set and a stack.
 A thread undergoes different states such as new, ready, running, waiting and
terminated similar to that of a process.
 However, a thread is not a program as it cannot run on its own. It runs within a
program.
Why Multithreading?
A thread is also known as lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc.
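The shared-address-space point can be seen directly in a short example. This is a minimal sketch using Python's standard threading module: several threads of one process write into the same list, which they can all see because threads share the process's memory (a lock is used to make the shared update safe).

```python
# Sketch of multithreading: one process spawns several threads that run
# concurrently and share the same address space (the shared `results` list).
import threading

results = []               # lives in the process's memory, visible to all threads
lock = threading.Lock()

def worker(n):
    value = n * n          # each thread does its own piece of work...
    with lock:             # ...but updates memory shared by the whole process
        results.append(value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()              # threads run concurrently
for t in threads:
    t.join()               # wait for all threads to finish

print(sorted(results))     # [0, 1, 4, 9]
```

The completion order of the threads is not deterministic, which is why the output is sorted; two processes, by contrast, would each get their own copy of `results` and would need explicit inter-process communication to share it.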

Types of Threads:
1. User Level thread (ULT)
Is implemented in the user level library, they are not created using the system calls.
Thread switching does not need to call OS and to cause interrupt to Kernel. Kernel
doesn’t know about the user level thread and manages them as if they were single-
threaded processes.
Advantages of ULT
 Can be implemented on an OS that doesn’t support multithreading.
 Simple representation since thread has only program counter, register set, stack space.
 Simple to create since no intervention of kernel.
 Thread switching is fast since no OS calls need to be made.
Disadvantages of ULT –
 Little or no coordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.

2. Kernel Level Thread (KLT)


The kernel knows about and manages the threads. Instead of a thread table in each process, the
kernel itself has a (master) thread table that keeps track of all the threads in the
system. In addition, the kernel maintains the traditional process table to keep track of
the processes. The OS kernel provides system calls to create and manage threads.
Advantages of KLT
 Since kernel has full knowledge about the threads in the system, scheduler may
decide to give more time to processes having large number of threads.
 Good for applications that frequently block.
Disadvantages of KLT
 Slower and less efficient than user-level threads, since thread operations require kernel involvement.
 Each thread requires a kernel thread control block, which is an overhead.

Difference between Kernel level thread and User level thread

1. User-level threads are implemented by a user-level library; kernel-level threads are implemented by the OS.
2. The OS does not recognize user-level threads; kernel-level threads are recognized by the OS.
3. Implementation of user-level threads is easy; implementation of kernel-level threads is complicated.
4. Context switch time is less for user-level threads and more for kernel-level threads.
5. User-level thread switching requires no hardware support; kernel-level thread switching needs hardware support.
6. If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel-level thread blocks, another thread of the same process can continue execution.
7. User-level threads are designed as dependent threads; kernel-level threads are designed as independent threads.
8. Examples: Java threads and POSIX threads (user level); Windows and Solaris threads (kernel level).
Benefits of Threads
 Enhanced throughput of the system: When a process is split into many threads, and each thread is treated as a job, the number of jobs completed per unit time increases, so the throughput of the system also increases.
 Effective utilization of a multiprocessor system: When a process has more than one thread, the threads can be scheduled on more than one processor.
 Faster context switch: Context switching between threads takes less time than context switching between processes, which means less overhead for the CPU.
 Responsiveness: When a process is split into several threads, the process can respond as soon as one of its threads finishes its work.
 Communication: Communication between multiple threads is simple because the threads share the same address space, while communication between two processes requires special inter-process communication mechanisms.
 Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared between threads; each thread has its own stack and register set.
Process Synchronization
 A co-operating process is one that can affect or be affected by other processes
executing in the system.
 Such co-operating processes may either directly share a logical address space or be
allowed to share data only through files.
 When co-operating processes concurrently share the data, it may result in data
inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of
co-operating processes.
 Thus, process synchronization ensures proper coordination among the processes.
 Process synchronization can be provided by using several different tools like
semaphore, mutex and monitors.
 Synchronization is important for both user applications and implementation of
operating system.
Concept of Race Condition
 When several processes access and manipulate the same data at the same time, they
may enter into a race condition.
 A race condition is a flaw in a system of processes whereby the output of a process depends on the particular order or timing in which the processes execute.
 Race conditions occur among processes that share common storage and each process
can read and write on this shared common storage.
 Thus, a race condition occurs due to improper synchronization of shared memory
access.
 Race conditions can occur in poorly designed systems.
 If the race condition is allowed to happen in the system the output of the processes
cannot be ascertained.
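The lost-update problem described above can be sketched in Python. This is a hypothetical illustration: whether the unsynchronized version actually loses updates depends on how the interpreter switches threads, so its result is not guaranteed; the lock-protected version, however, always produces the correct count.

```python
import threading

N, THREADS = 50_000, 4
counter = 0
lock = threading.Lock()

def unsafe():
    # read-modify-write on shared data with no synchronization:
    # two threads may read the same old value, so one update can be lost
    global counter
    for _ in range(N):
        counter += 1

def safe():
    # the lock (a mutex) makes the read-modify-write atomic
    global counter
    for _ in range(N):
        with lock:
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe))   # may be less than 200000 if the race manifests
print(run(safe))     # always 200000
```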
CPU Scheduling
 Scheduling is a fundamental operating system function.
 Scheduling refers to set of policies and mechanisms built into the operating system
that governs the order in which the work to be done by a computer system is
completed.
 CPU scheduling is the basis of multiprogrammed operating system.
Why do we need Scheduling?
 In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
 If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system. Hence, the OS needs to schedule the jobs so as to get optimal utilization of the CPU and to avoid the possibility of deadlock.

CPU Scheduling: Dispatcher


Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler.
This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where
it left last time.
The dispatcher should be as fast as possible, given that it is invoked during every process switch.
The time taken by the dispatcher to stop one process and start another process is known as the Dispatch Latency.

Types of CPU Scheduling


CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, an I/O request, or an invocation of wait for the termination of a child process).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.

When Scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.

Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.

This scheduling method is used by the Microsoft Windows 3.1 and by the Apple Macintosh
operating systems.

It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it waits until the process completes its CPU burst, and only then can it allocate the CPU to another process.

Some algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its basic non-preemptive form) and Priority Scheduling (non-preemptive version), etc.

Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a task with a higher priority before another task, even though that other task is still running. The running task is interrupted for some time and resumed later, when the higher-priority task has finished its execution.

Thus this type of scheduling is used mainly when a process switches either from running state to
ready state or from waiting state to ready state. The resources (that is CPU cycles) are mainly
allocated to the process for a limited amount of time and then are taken away, and after that, the
process is again placed back in the ready queue in the case if that process still has a CPU burst
time remaining. That process stays in the ready queue until it gets the next chance to execute.

Some Algorithms that are based on preemptive scheduling are Round Robin Scheduling (RR),
Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.

CPU Scheduling: Scheduling Criteria


There are many different criteria to check when considering the “best” scheduling algorithm,
they are:

CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).

Throughput
It is the total number of processes completed per unit of time; in other words, the total amount of work done in a unit of time. This may range from 10 processes per second to 1 process per hour, depending on the specific processes.

Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from the time of
submission of the process to the time of completion of the process(Wall clock time).

Waiting Time
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.

Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.

Response Time
Amount of time it takes from when a request was submitted until the first response is produced.
Remember, it is the time till the first response and not the completion of process execution(final
response).

In general CPU utilization and Throughput are maximized and other factors are reduced for
proper optimization.

CPU-I/O Burst Cycle


 The success of CPU scheduling depends on an observed property of processes:
o Process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states.
o Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst,
and so on.
 Eventually, the final CPU burst ends with a system request to terminate execution

Figure 10: Alternating sequence of CPU and I/O bursts.


 The durations of CPU bursts have been measured extensively. They tend to have a
frequency curve similar to that shown in Fig. 11.

Figure 11: Histogram of CPU-burst durations.


 The curve is generally characterized as exponential or hyperexponential, with a large number of short CPU bursts and a small number of long CPU bursts.
o An I/O-boundprogram typically has many short CPU bursts.
o A CPU-bound program might have a few long CPU bursts.
 This distribution can be important in the selection of an appropriate CPU-scheduling algorithm.
 Nearly all processes alternate bursts of computing with (disk) I/O requests, as shown
in Fig. 12.

Figure 12: Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound process. (b)
An I/O-bound process.
 Some processes, such as the one in Fig. 12(a), spend most of their time computing (CPU-bound), while others, such as the one in Fig. 12(b), spend most of their time waiting for I/O (I/O-bound).
 Keeping some CPU-bound processes and some I/O-bound processes in memory together is a better idea than first loading and running all the CPU-bound jobs and then, when they are finished, loading and running all the I/O-bound jobs; a careful mix of processes keeps both the CPU and the I/O devices busy.

Scheduling Algorithms
To decide which process to execute first and which process to execute last to achieve maximum
CPU utilization, computer scientists have defined some algorithms, they are:

1. First Come First Serve(FCFS) Scheduling


2. Shortest-Job-First(SJF) Scheduling
3. Priority Scheduling
4. Round Robin(RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
7. Shortest Remaining Time First (SRTF)
8. Longest Remaining Time First (LRTF)
9. Highest Response Ratio Next (HRRN)
FCFS Scheduling
First come first serve (FCFS) scheduling algorithm simply schedules the jobs according to their
arrival time. The job which comes first in the ready queue will get the CPU first. The lesser the
arrival time of the job, the sooner the job gets the CPU. FCFS scheduling may cause the convoy effect if the burst time of the first process is the longest among all the jobs, since every shorter job behind it must wait.

Advantages of FCFS
 Simple
 Easy
 First come, First serve

Disadvantages of FCFS
1. The scheduling method is non-preemptive; a process runs to completion.
2. Due to the non-preemptive nature of the algorithm, a long job at the head of the queue makes all the jobs behind it wait (the convoy effect).
3. Although it is easy to implement, it is poor in performance, since the average waiting time is higher compared to other scheduling algorithms.

Example
Let’s take an example of The FCFS scheduling algorithm. In the Following schedule, there are 5
processes with process ID P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2,
P3 arrives at time 3 and Process P4 arrives at time 4 in the ready queue. The processes and their
respective Arrival and Burst time are given in the following table.
The Turnaround time and the waiting time are calculated by using the following formula.

1. Turn Around Time= Completion Time – Arrival Time


2. Waiting Time = Turnaround time – Burst Time
The average waiting Time is determined by summing the respective waiting time of all the
processes and divided the sum by the total number of processes.

Process ID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time
0 0 2 2 2 0
1 1 6 8 7 1
2 2 4 12 10 6
3 3 9 21 18 9
4 4 12 33 29 17
Avg Waiting Time = (0+1+6+9+17)/5 = 33/5 = 6.6 units
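The table above can be reproduced with a small Python sketch (the function name `fcfs` is illustrative): each job runs to completion in arrival order, and turnaround and waiting times follow the formulas given earlier.

```python
def fcfs(processes):
    """Simulate FCFS. processes: list of (pid, arrival, burst).
    Returns rows of (pid, arrival, burst, completion, turnaround, waiting)."""
    time, rows = 0, []
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst       # run each job to completion in arrival order
        turnaround = time - arrival             # turnaround = completion - arrival
        rows.append((pid, arrival, burst, time, turnaround, turnaround - burst))
    return rows

table = fcfs([(0, 0, 2), (1, 1, 6), (2, 2, 4), (3, 3, 9), (4, 4, 12)])
for row in table:
    print(row)
avg_wait = sum(r[5] for r in table) / len(table)
print(avg_wait)   # 6.6
```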

Shortest Job First (SJF) Scheduling


Till now, we were scheduling the processes according to their arrival time (in FCFS scheduling).
However, SJF scheduling algorithm, schedules the processes according to their burst time.
In SJF scheduling, the process with the lowest burst time, among the list of available processes in
the ready queue, is going to be scheduled next.

However, it is very difficult to predict the burst time a process will need, hence this algorithm is difficult to implement in practice.

Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time

Example
In the following example, there are five jobs named as P1, P2, P3, P4 and P5. Their arrival time
and burst time are given in the table below.

PID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time
1 1 7 8 7 0
2 3 3 13 10 7
3 6 2 10 4 2
4 7 10 31 24 14
5 9 8 21 12 4
Avg Waiting Time = 27/5
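The example above can be reproduced with a non-preemptive SJF sketch in Python (the function name `sjf` is illustrative): among the processes that have already arrived, the one with the shortest burst is picked next.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (pid, arrival, burst).
    Returns rows of (pid, arrival, burst, completion, turnaround, waiting)."""
    pending = sorted(processes, key=lambda p: p[1])
    time, rows = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # CPU idle: jump to the next arrival
            time = pending[0][1]
            ready = [p for p in pending if p[1] <= time]
        job = min(ready, key=lambda p: p[2])   # shortest burst among arrived jobs
        pending.remove(job)
        pid, arrival, burst = job
        time += burst                       # run the chosen job to completion
        turnaround = time - arrival
        rows.append((pid, arrival, burst, time, turnaround, turnaround - burst))
    return rows

table = sjf([(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)])
avg_wait = sum(r[5] for r in table) / len(table)
print(avg_wait)   # 5.4
```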

Round Robin Scheduling Algorithm


Round Robin is one of the most popular scheduling algorithms and one that can actually be implemented in most operating systems. It is the preemptive version of first come first serve scheduling. The algorithm focuses on time sharing. In this algorithm, every process gets executed in a cyclic way. A certain time slice, called the time quantum, is defined in the system. Each process present in the ready queue is assigned the CPU for that time quantum; if the execution of the process completes during that time, the process terminates, otherwise the process goes back to the ready queue and waits for its next turn to complete its execution.

Advantages
1. It is actually implementable in a real system, because it does not depend on knowing the burst time in advance.
2. It doesn’t suffer from the problem of starvation or convoy effect.
3. All the jobs get a fair allocation of CPU.

Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the system.
3. Deciding a perfect time quantum is really a very difficult task in the system.
RR Scheduling Example
In the following example, there are six processes named as P1, P2, P3, P4, P5 and P6. Their
arrival time and burst time are given below in the table. The time quantum of the system is 4
units.

Process ID Arrival Time Burst Time


1 0 5
2 1 6
3 2 3
4 3 1
5 4 5
6 6 4
According to the algorithm, we have to maintain the ready queue and the Gantt chart. The
structure of both the data structures will be changed after every scheduling.

Ready Queue:
Initially, at time 0, process P1 arrives which will be scheduled for the time slice 4 units. Hence in
the ready queue, there will be only one process P1 at starting with CPU burst time 5 units.

P1
5

GANTT chart
The P1 will be executed for 4 units first.

Ready Queue
During the execution of P1, four more processes P2, P3, P4 and P5 arrive in the ready queue. P1 has not completed yet; it needs another 1 unit of time, so it is also added back to the ready queue.

P2 P3 P4 P5 P1
6 3 1 5 1

GANTT chart
After P1, P2 will be executed for 4 units of time which is shown in the Gantt chart.

Ready Queue
During the execution of P2, one more process P6 arrives in the ready queue. Since P2 has not completed yet, P2 is also added back to the ready queue with a remaining burst time of 2 units.

P3 P4 P5 P1 P6 P2
3 1 5 1 4 2

GANTT chart
After P1 and P2, P3 will be executed for 3 units of time, since its CPU burst time is only 3 units.

Ready Queue
Since P3 has completed, it is terminated and not added back to the ready queue. The next process to be executed is P4.

P4 P5 P1 P6 P2
1 5 1 4 2

GANTT chart
After P1, P2 and P3, P4 will get executed. Its burst time is only 1 unit, which is less than the time quantum, so it will run to completion.

Ready Queue
The next process in the ready queue is P5, with 5 units of burst time. Since P4 has completed, it is not added back to the queue.

P5 P1 P6 P2
5 1 4 2

GANTT chart
P5 will be executed for the whole time slice, because its remaining burst time of 5 units is greater than the time slice.

Ready Queue
P5 has not been completed yet; it will be added back to the queue with the remaining burst time
of 1 unit.

P1 P6 P2 P5
1 4 2 1

GANTT Chart
The process P1 will be given the next turn to complete its execution. Since it only requires 1 unit
of burst time hence it will be completed.
Ready Queue
P1 is completed and will not be added back to the ready queue. The next process P6 requires only
4 units of burst time and it will be executed next.

P6 P2 P5
4 2 1

GANTT chart
P6 will be executed for 4 units of time till completion.

Ready Queue
Since P6 is completed, hence it will not be added again to the queue. There are only two
processes present in the ready queue. The Next process P2 requires only 2 units of time.

P2 P5
2 1

GANTT Chart
P2 will get executed again; since it requires only 2 units of time, it will be completed.

Ready Queue
Now, the only process remaining in the queue is P5, which requires 1 unit of burst time. Since the time slice is 4 units, it will be completed in its next turn.

P5
1

GANTT chart
P5 will get executed till completion.

The completion time, Turnaround time and waiting time will be calculated as shown in the table
below.

As, we know,

1. Turn Around Time= Completion Time – Arrival Time


2. Waiting Time= Turn Around Time – Burst Time

Process ID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time
1 0 5 17 17 12
2 1 6 23 22 16
3 2 3 11 9 6
4 3 1 12 9 8
5 4 5 24 20 15
6 6 4 21 15 11
Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 ≈ 11.33 units
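The whole Round Robin trace above can be reproduced with a Python sketch (the function name `round_robin` is illustrative). As in the worked example, processes that arrive during a time slice join the ready queue before the preempted process does.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate RR. processes: list of (pid, arrival, burst).
    Returns {pid: (completion, turnaround, waiting)}."""
    procs = sorted(processes, key=lambda p: p[1])
    remaining = {pid: b for pid, _, b in procs}
    burst = dict(remaining)
    arrival = {pid: a for pid, a, _ in procs}
    queue, time, i, done = deque(), 0, 0, {}
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:                                   # CPU idle: jump to next arrival
            time = procs[i][1]
            continue
        pid = queue.popleft()
        run = min(quantum, remaining[pid])              # run one time slice (or less)
        time += run
        remaining[pid] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during the slice queue first
            queue.append(procs[i][0]); i += 1
        if remaining[pid]:
            queue.append(pid)                           # preempted: back of the queue
        else:
            tat = time - arrival[pid]
            done[pid] = (time, tat, tat - burst[pid])
    return done

result = round_robin([(1, 0, 5), (2, 1, 6), (3, 2, 3), (4, 3, 1), (5, 4, 5), (6, 6, 4)], 4)
avg_wait = sum(w for _, _, w in result.values()) / len(result)
print(round(avg_wait, 2))   # 11.33
```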

Advantages of Round Robin Scheduling Algorithm


Some advantages of the Round Robin scheduling algorithm are as follows:
 While performing this scheduling algorithm, a particular time quantum is allocated to
different jobs.
 In terms of average response time, this algorithm gives the best performance.
 With the help of this algorithm, all the jobs get a fair allocation of CPU.
 In this algorithm, there are no issues of starvation or convoy effect.
 This algorithm deals with all processes without any priority.
 This algorithm is cyclic in nature.
 In this, the newly created process is added to the end of the ready queue.
 Also, in this, a round-robin scheduler generally employs time-sharing which means
providing each job a time slot or quantum.
 In this scheduling algorithm, each process gets a chance to reschedule after a
particular quantum time.
Disadvantages of Round Robin Scheduling Algorithm
Some disadvantages of the Round Robin scheduling algorithm are as follows:

 This algorithm spends more time on context switches.


 For a small quantum, the scheduling itself becomes time-consuming.
 Average waiting time and turnaround time can be larger than with SJF.
 Throughput is low when the quantum is small, because time is lost to context switches.
 If the time quantum is small, the Gantt chart becomes very long.
Multilevel Queue (MLQ) CPU Scheduling
It may happen that the processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. For example, a common division is foreground (interactive) processes and background (batch) processes. These two classes have different scheduling needs. For this kind of situation, Multilevel Queue Scheduling is used. Now, let us see how it works.
The ready queue is divided into separate queues, one for each class of processes. For example, let us take three different types of processes: system processes, interactive processes and batch processes. All three types of processes have their own queue.

Each queue has its own scheduling algorithm. For example, queue 1 and queue 2 may use Round Robin while queue 3 uses FCFS to schedule its processes.

Scheduling among the queues: What happens if all the queues have some processes? Which process should get the CPU? To determine this, scheduling among the queues is necessary. There are two ways to do so:
1. Fixed priority preemptive scheduling method – Each queue has absolute priority over the lower-priority queues. Consider the priority order queue 1 > queue 2 > queue 3. According to this scheme, no process in the batch queue (queue 3) can run unless queues 1 and 2 are empty. If a batch process (queue 3) is running and a system process (queue 1) or interactive process (queue 2) enters the ready queue, the batch process is preempted.
2. Time slicing – In this method, each queue gets a certain portion of CPU time and can use it to schedule its own processes. For instance, queue 1 takes 50 percent of the CPU time, queue 2 takes 30 percent and queue 3 gets 20 percent.
Example Problem:
Consider the table below of four processes under multilevel queue scheduling. The queue number denotes the queue of the process.

Priority of queue 1 is greater than queue 2. queue 1 uses Round Robin (Time Quantum = 2) and
queue 2 uses FCFS.

Below is the Gantt chart of the problem:

At the start, both queues have processes, so the processes in queue 1 (P1, P2) run first (because of their higher priority) in round-robin fashion and complete after 7 units. Then the process in queue 2 (P3) starts running (as there is no process in queue 1). While P3 is running, P4 arrives in queue 1, interrupts P3, and runs for 5 units. After P4 completes, P3 takes the CPU again and completes its execution.

Advantages:
 The processes are permanently assigned to their queues, so this scheme has the advantage of low scheduling overhead.
Disadvantages:
 Some processes may starve for the CPU if the higher-priority queues never become empty.
 It is inflexible in nature.
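The fixed-priority preemptive method can be sketched in Python. Since the process table for the worked example above is not reproduced in these notes, the workload below is hypothetical, and for simplicity the sketch uses FCFS within each queue (rather than RR with a quantum); it illustrates only the preemption between queues, re-evaluated every time unit.

```python
def mlq(processes):
    """Fixed-priority preemptive multilevel queue, simulated 1 time unit per tick.
    processes: list of (pid, arrival, burst, queue); a lower queue number means
    higher priority. FCFS within each queue. Returns {pid: completion_time}."""
    remaining = {pid: b for pid, _, b, _ in processes}
    done, time = {}, 0
    while len(done) < len(processes):
        ready = [p for p in processes if p[1] <= time and p[0] not in done]
        if not ready:
            time += 1                              # CPU idle this tick
            continue
        # highest-priority queue wins; earliest arrival within that queue
        pid = min(ready, key=lambda p: (p[3], p[1]))[0]
        remaining[pid] -= 1                        # run one tick, then re-evaluate,
        time += 1                                  # so a queue-1 arrival preempts queue 2
        if remaining[pid] == 0:
            done[pid] = time
    return done

# hypothetical workload: P1, P2 in queue 1; P3 in queue 2, preempted when P4 arrives
print(mlq([(1, 0, 3, 1), (2, 0, 4, 1), (3, 0, 6, 2), (4, 10, 5, 1)]))
```

In this run, P3 starts only after P1 and P2 finish, and is preempted at time 10 when P4 enters the higher-priority queue.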

UNIT-2
Memory Management
Introduction
In a multiprogramming computer, the operating system resides in a part of memory and the rest is
used by multiple processes. The task of subdividing the memory among different processes is
called memory management.

1.Logical and Physical Address Space:


 Logical Address space: An address generated by the CPU is known as “Logical
Address”. It is also known as a Virtual address. Logical address space can be defined
as the size of the process. A logical address can be changed.
 Physical Address space: An address seen by the memory unit (i.e., the one loaded into the memory address register) is commonly known as a “Physical Address”.

2.Static and Dynamic Loading or Binding:


Loading a process into main memory is done by a loader. There are two different types of loading:

 Static loading: the entire program is loaded into memory at a fixed address. It requires more memory space.
 Dynamic loading: without dynamic loading, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To gain proper memory utilization, dynamic loading is used: a routine is not loaded until it is called.

Static and Dynamic linking:


To perform a linking task a linker is used. A linker is a program that takes one or more object
files generated by a compiler and combines them into a single executable file.

 Static linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some operating
systems support only static linking, in which system language libraries are treated like
any other object module.
 Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a “stub” is included for each appropriate library-routine reference. A stub is a small piece of code that indicates how to locate the library routine in memory, or how to load the library if the routine is not already present.

Memory Protection

Memory protection prevents a process from accessing memory that has not been allocated to it. It stops software from seizing control of an excessive amount of memory and causing damage that would impact other software currently in use, or causing a loss of saved data.

Methods of memory protection:

1. Memory Protection using Keys: The concept of memory protection using keys is found in many computers with paged memory organization: blocks of memory are tagged with a key value, and access is permitted only to a process holding a matching key. This supports dynamic distribution of memory among parallel running programs.

2. Memory Protection using Rings: In OS, the domains related to ordered protection
are called Protection Rings. This method helps in improving fault tolerance and
provides security. These rings are arranged in a hierarchy from most privileged to
least privileged.
3. Capability-based addressing: A method of protecting memory that is rarely seen in modern commercial computers. Here, pointers (objects consisting of a memory address) are replaced by capability objects that can only be created with protected instructions and may only be used by the kernel, or by another process that is authorized to use them. This gives the advantage of preventing unauthorized processes from creating additional separate address spaces in memory.

4. Memory Protection using Masks: Masks are used to protect memory when paging is organized. In this method, before execution, each program is assigned the page numbers reserved for the placement of its instructions and data, and a mask indicates which pages the program may access.

5. Memory Protection using Segmentation: A method of dividing the system memory into different segments. The data structures of the x86 architecture, such as the local descriptor table and the global descriptor table, are used in the protection of memory.

6. Memory Protection using Simulated Segmentation: With this technique, a simulator interprets the machine-code instructions of the system architecture. The simulator can protect memory by applying a segmentation scheme and validating the target address of every instruction in real time.
7. Memory Protection using Dynamic Tainting: Dynamic tainting is a technique that marks and tracks certain data in a program at runtime, protecting the process from illegal memory accesses.

Memory sharing
Dividing memory among the different processes is managed by the OS and is known as memory sharing. It is done through two main schemes: paging and segmentation.

Paging
Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be non-
contiguous.

 Logical Address or Virtual Address (represented in bits): An address generated by the CPU.
 Logical Address Space or Virtual Address Space (represented in words or bytes): The
set of all logical addresses generated by a program
 Physical Address (represented in bits): An address actually available on a memory
unit
 Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses
The address generated by the CPU is divided into

 Page number(p): Number of bits required to represent the pages in Logical Address
Space or Page number
 Page offset(d): Number of bits required to represent a particular word in a page or
page size of Logical Address Space or word number of a page or page offset.
Physical Address is divided into

 Frame number (f): Number of bits required to represent a frame of the Physical Address Space.
 Frame offset (d): Number of bits required to represent a particular word in a frame, i.e. the frame size of the Physical Address Space.
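The address split described above can be sketched in Python, assuming a hypothetical 4 KiB page size (so the low 12 bits are the offset d and the remaining high bits are the page number p) and a made-up page table:

```python
PAGE_SIZE = 4096                               # assumed page size: 4 KiB
OFFSET_BITS = PAGE_SIZE.bit_length() - 1       # 12 offset bits for a 4 KiB page

def translate(logical_addr, page_table):
    """Split a logical address into (p, d) and map it to a physical address."""
    page = logical_addr >> OFFSET_BITS         # high bits: page number p
    offset = logical_addr & (PAGE_SIZE - 1)    # low bits: offset d
    frame = page_table[page]                   # the page table maps page -> frame
    return (frame << OFFSET_BITS) | offset     # same offset within the frame

page_table = {0: 5, 1: 2, 2: 7}                # hypothetical page table
print(hex(translate(0x1ABC, page_table)))      # page 1 maps to frame 2: 0x2abc
```

Because pages and frames are the same size, only the page-number bits change during translation; the offset passes through unchanged.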

Segmentation
In Operating Systems, Segmentation is a memory management technique in which the memory is
divided into the variable size parts. Each part is known as a segment which can be allocated to a
process.

The details about each segment are stored in a table called a segment table.

Segment table contains mainly two information about segment:

1. Base: It is the base address of the segment


2. Limit: It is the length of the segment.
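The base/limit lookup described above can be sketched in Python with a hypothetical segment table (the values 1400/1000 and 6300/400 are made up for illustration): the limit check is what protects one segment from out-of-range accesses.

```python
def seg_translate(segment, offset, segment_table):
    """Translate a (segment, offset) pair using the segment's base and limit."""
    base, limit = segment_table[segment]
    if offset >= limit:                      # the limit check is the protection mechanism
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset                     # physical address = base + offset

table = {0: (1400, 1000), 1: (6300, 400)}    # hypothetical segment table: base, limit
print(seg_translate(0, 53, table))           # 1453
```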

Virtual Memory in Operating System

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as
though it were part of the main memory. The addresses a program may use to reference memory
are distinguished from the addresses the memory system uses to identify physical storage sites,
and program-generated addresses are translated automatically to the corresponding machine
addresses. The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.

Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is known
as demand paging.
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory,
it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed
the OS must bring the required page into the memory.
3. The OS will search for the required page on the backing store (secondary memory).
4. The required page will be brought from the backing store into physical memory. Page replacement algorithms are used to decide which page in physical memory to replace when no free frame is available.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place
the process back into the ready state.

Page replacement algorithms

Page Replacement Algorithms :


1. First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating
system keeps track of all pages in the memory in a queue, the oldest page is in the
front of the queue. When a page needs to be replaced page in the front of the queue is
selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6 with 3 page frames. Find the number of page faults.

2. Optimal Page replacement –


In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
3. Least Recently Used –
In this algorithm, the page that has been least recently used is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
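All three algorithms can be simulated in a few lines of Python (function names are illustrative); running the sketch on the reference strings above answers Examples 1, 2 and 3.

```python
def fifo(ref, frames):
    mem, faults = [], 0
    for page in ref:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the oldest loaded page
            mem.append(page)
    return faults

def lru(ref, frames):
    mem, faults = [], 0                  # front = least recently used, end = most recent
    for page in ref:
        if page in mem:
            mem.remove(page)             # hit: refresh recency by moving to the end
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the least recently used page
        mem.append(page)
    return faults

def optimal(ref, frames):
    mem, faults = [], 0
    for i, page in enumerate(ref):
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                future = ref[i + 1:]     # evict the page used farthest in the future
                victim = max(mem, key=lambda p: future.index(p) if p in future
                             else len(future))
                mem.remove(victim)
            mem.append(page)
    return faults

print(fifo([1, 3, 0, 3, 5, 6], 3))                          # 5 page faults
print(optimal([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6 page faults
print(lru([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))      # 6 page faults
```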

UNIT-3
I/O Device Management
I/O Device and Controllers
Device Controllers: A device controller is a hardware unit which is attached with the
input/output bus of the computer and provides a hardware interface between the computer and the
input/output devices. On one side it knows how to communicate with input/output devices and on
the other side it knows how to communicate with the computer system through the input/output bus.
A device controller usually can control several input/output devices.

DMA Direct memory access (DMA) is a method that allows an input/output (I/O) device to send
or receive data directly to or from the main memory, bypassing the CPU to speed up memory
operations.
The process is managed by a chip known as a DMA controller (DMAC).

Memory-mapped Input/Output: Each controller has a few registers that are used for
communicating with the CPU. By writing into these registers, the operating system can command
the device to deliver data, accept data, switch itself on or off, or otherwise perform some action.
Port-mapped I/O: Each control register is assigned an I/O port number, an 8- or 16-bit integer.
Using a special I/O instruction such as IN REG,PORT the CPU can read in control register PORT
and store the result in CPU register REG. Similarly, using OUT PORT,REG the CPU can write
the contents of REG to a control register.

Device Drivers
Device drivers are essential for a computer system to work properly, because without a
device driver the corresponding hardware fails to perform the particular function/action
for which it was created.
Very commonly a device driver is called simply a driver; the term hardware driver also
refers to a device driver.

Types of Device Driver:


For almost every device associated with the computer system there exists a device driver for
that particular hardware. Device drivers can be broadly classified into two types:
1. Kernel-mode Device Driver –
Kernel-mode device drivers support generic hardware that loads with the
operating system as part of the OS, such as the BIOS, motherboard, and processor.
They constitute the minimum set of device drivers each operating system requires.
2. User-mode Device Driver –
Beyond the devices the kernel manages for the basic working of the system, the user
may also attach devices while using the system; the drivers those devices need fall
under user-mode device drivers. For example, any plug-and-play device the user adds
comes under this category.

Disk Drive (HDD) Secondary memory


A hard disk is a memory storage device that looks like this:

The disk is divided into tracks. Each track is further divided into sectors. The point to be noted
here is that outer tracks are bigger in size than the inner tracks but they contain the same number
of sectors and have equal storage capacity. This is because the storage density is high in sectors
of the inner tracks whereas the bits are sparsely arranged in sectors of the outer tracks. Some
space of every sector is used for formatting. So, the actual capacity of a sector is less than the
given capacity.
The Read-Write (R-W) head moves over the rotating hard disk. It is this Read-Write head that
performs all read and write operations on the disk, and hence the position of the R-W head is
a major concern. To perform a read or write operation on a memory location, we need to place
the R-W head over that position. Some important terms must be noted here:
1. Seek time – The time taken by the R-W head to reach the desired track from its
current position.
2. Rotational latency – The time taken by the desired sector to come under the R-W head.
3. Data transfer time – The time taken to transfer the required amount of data. It depends
upon the rotational speed.
4. Controller time – The processing time taken by the controller.
5. Average access time – Seek time + average rotational latency + data transfer time +
controller time.
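The average access time formula can be checked with a small worked example. The drive figures below are assumed for illustration, not taken from the text; average rotational latency is half a full rotation, i.e. 0.5 × 60000 / RPM milliseconds:

```python
def avg_access_time(seek_ms, rpm, transfer_ms, controller_ms):
    """Average access time = seek time + average rotational latency
    + data transfer time + controller time (all in ms)."""
    avg_rot_latency = 0.5 * 60_000 / rpm   # half a rotation, in ms
    return seek_ms + avg_rot_latency + transfer_ms + controller_ms

# Assumed figures: 9 ms seek, 7200 RPM disk, 0.5 ms transfer, 0.1 ms controller.
print(avg_access_time(9, 7200, 0.5, 0.1))  # 9 + 4.17 + 0.5 + 0.1 ≈ 13.77 ms
```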
File management
A file management system is used for file maintenance (or management) operations. It is a type
of software that manages data files in a computer system.

A file management system has limited capabilities and is designed to manage individual or group
files, such as special office documents and records.

concepts:
1. File Attributes
It specifies the characteristics of a file such as its type, date of last modification,
size, location on disk, etc. File attributes help the user understand the value and
location of files. They are one of the most important features, used to describe all the
information regarding a particular file.
2. File Operations
It specifies the task that can be performed on a file such as opening and closing of
file.
3. File Access permission
It specifies the access permissions related to a file such as read and write.
4. File Systems
It specifies the logical method of file storage in a computer system. Some of the
commonly used file systems include FAT and NTFS.
5. Creating a file. Two steps are necessary to create a file.
1. Space in the file system must be found for the file.
2. An entry for the new file must be made in the directory.
6. Writing a file. To write a file, we make a system call specifying both the name of the
file and the information to be written to the file. The system must keep a write
pointer to the location in the file where the next write is to take place. The write
pointer must be updated whenever a write occurs.
7. Reading a file. To read from a file, we use a system call that specifies the name of
the file and where (in memory) the next block of the file should be put. The system
needs to keep a read pointer to the location in the file where the next read is to take
place.
8. Repositioning within a file. The directory is searched for the appropriate entry, and
the current-file-position pointer is repositioned to a given value. Repositioning within
a file need not involve any actual I/O. This file operation is also known as a file seek.
9. Deleting a file. To delete a file, we search the directory for the named file. Having
found the associated directory entry, we release all file space, so that it can be reused
by other files, and erase the directory entry.
10. Truncating a file. The user may want to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it, this
function allows all attributes to remain unchanged (except for file length) but lets the
file be reset to length zero and its file space released
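The operations above map directly onto an ordinary file API. The Python sketch below (the temporary path is illustrative) walks through creating, writing, repositioning, reading, truncating, and deleting a file:

```python
import os
import tempfile

# Creating: space is found for the file and a directory entry is made.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Writing: each write advances the write pointer automatically.
with open(path, "w") as f:
    f.write("hello ")
    f.write("world")

# Reading and repositioning: seek() moves the current-file-position
# pointer without any actual I/O (a "file seek").
with open(path, "r+") as f:
    f.seek(6)
    print(f.read())          # the bytes after position 6: "world"
    f.truncate(5)            # truncating: keep attributes, cut length to 5

print(os.path.getsize(path))  # 5

# Deleting: the file's space and its directory entry are released.
os.remove(path)
```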
File Access Methods in Operating System
There are three ways to access a file in a computer system: sequential access, direct access,
and the indexed sequential method.

1. Sequential Access –
It is the simplest access method. Information in the file is processed in order, one
record after the other. This mode of access is by far the most common; for example,
editors and compilers usually access files in this fashion.
2. Direct Access –
Another method is the direct access method, also known as the relative access method. It
uses fixed-length logical records that allow the program to read and write records rapidly,
in no particular order. Direct access is based on the disk model of a file, since a disk
allows random access to any file block. For direct access, the file is viewed as a
numbered sequence of blocks or records. Thus, we may read block 14, then block 59,
and then write block 17. There is no restriction on the order of reading and
writing for a direct access file.
3. Indexed sequential method –
This method of accessing a file is built on top of the sequential
access method. It constructs an index for the file. The index, like an
index at the back of a book, contains pointers to the various blocks. To find a
record in the file, we first search the index and then, with the help of the pointer,
access the file directly.
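Direct access can be illustrated by treating a file as a numbered sequence of fixed-length blocks and using seek to jump straight to block n. The 4-byte block size and the values written are assumptions for the sketch:

```python
import os
import tempfile

BLOCK = 4                                    # assumed block size in bytes
path = os.path.join(tempfile.mkdtemp(), "blocks.bin")

# Lay out 100 fixed-length "blocks", block n holding the integer n.
with open(path, "wb") as f:
    for n in range(100):
        f.write(n.to_bytes(BLOCK, "little"))

# Read block 14, then block 59, then write block 17 — in no particular order.
with open(path, "r+b") as f:
    f.seek(14 * BLOCK); b14 = int.from_bytes(f.read(BLOCK), "little")
    f.seek(59 * BLOCK); b59 = int.from_bytes(f.read(BLOCK), "little")
    f.seek(17 * BLOCK); f.write((999).to_bytes(BLOCK, "little"))
    f.seek(17 * BLOCK); b17 = int.from_bytes(f.read(BLOCK), "little")

print(b14, b59, b17)  # 14 59 999
```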

Structures of Directory in Operating System


A directory is a container that is used to contain folders and files. It organizes files and folders in
a hierarchical manner.
1.Single-level directory –
The single-level directory is the simplest directory structure. In it, all files are contained in the
same directory which makes it easy to support and understand.
2.Two-level directory – In the two-level directory structure, each user has their own user file
directory (UFD). The UFDs have similar structures, but each lists only the files of a single user.
The system's master file directory (MFD) is searched whenever a user logs in.
3.Tree-structured directory
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to
extend the directory structure to a tree of arbitrary height.
This generalization allows the user to create their own subdirectories and to organize their files
accordingly.
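A tree-structured directory can be sketched with ordinary directory-creation calls. The user names and paths below are hypothetical, chosen only to show that each user can build an arbitrarily deep subtree and that the same file name can exist in different subtrees:

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())          # stands in for the root of the tree

# Each user creates their own subdirectories of arbitrary depth.
(root / "alice" / "projects" / "os").mkdir(parents=True)
(root / "bob" / "notes").mkdir(parents=True)

# The same file name in two different subtrees does not conflict.
(root / "alice" / "projects" / "os" / "report.txt").touch()
(root / "bob" / "notes" / "report.txt").touch()

for p in sorted(root.rglob("report.txt")):
    print(p.relative_to(root))
```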

Remote file System

Files can be shared across the network via a variety of methods –


 Using FTP (File Transfer Protocol), which transfers files from one computer to
another.
 Using a distributed file system (DFS), in which remote directories are visible from the
local machine.
 Using a remote file system (RFS), in which networking allows
communication between remote computers.
Remote file sharing (RFS) is a type of distributed file system technology. It enables file and/or
data access for multiple remote users over the Internet or a network connection. It is also the
general process of providing remote users access to locally stored files and/or data.
Client-Server Model in RFS :
RFS allows a computer to support one or more file systems from one or more remote machines.
In this case, the machine containing the files is server and the machine wanting access to the files
is the client.
For example, a user sends a file-open request to the server along with its ID. The server then
checks the file access permissions to determine whether the user may access the file in the
requested mode. The request is either allowed or denied. If it is allowed, a file handle is
returned to the client application, and the application may then perform read, write, and other
operations on the file. After the required operations are performed, the client closes the file.
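The open-request/response exchange can be mimicked with a toy client and server over a local socket. This is only a sketch: the file name and the in-memory "file system" are illustrative, and real RFS/DFS protocols (access checks, handles, per-operation messages) are far more involved:

```python
import socket
import threading

FILES = {"notes.txt": b"remote file contents"}   # files "stored" on the server

def serve_once(server_sock):
    """Accept one client, read its file-open request, return the file."""
    conn, _ = server_sock.accept()
    with conn:
        name = conn.recv(1024).decode()           # client's file-open request
        conn.sendall(FILES.get(name, b""))        # empty reply = request denied

server = socket.socket()
server.bind(("127.0.0.1", 0))                     # OS picks a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
client.sendall(b"notes.txt")                      # "open" request with file name
client.shutdown(socket.SHUT_WR)
data = client.recv(4096)                          # the server's reply
client.close()
print(data.decode())
```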

Protection in File System

Users want to protect the information stored in the file system from improper access and physical
damage.

To protect our information, one can make duplicate copies of the files; some systems
automatically copy files so that the user does not lose important information if the original
files are accidentally destroyed.

Access Control
There are numerous ways to access any file; one of the prominent ones is to associate
identity-dependent access with all files and directories. A list is created, called the
access-control list, which enlists the names of users and the type of access granted to them.

Since such lists can become lengthy, the following classifications are used to condense
the access-control list:

1. Owner: The user who created the file.
2. Group: The set of users sharing the same file.
3. Universe: All the other users.
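On Unix-like systems this three-way classification shows up directly in a file's permission bits: owner, group, and universe ("others") each get their own read/write/execute bits. A sketch using mode 0o750 as an example:

```python
import stat

mode = 0o750   # rwx for owner, r-x for group, --- for universe (others)

def granted(mode):
    """List which owner/group/universe permission bits are set in a mode."""
    bits = [
        (stat.S_IRUSR, "owner:read"), (stat.S_IWUSR, "owner:write"), (stat.S_IXUSR, "owner:exec"),
        (stat.S_IRGRP, "group:read"), (stat.S_IWGRP, "group:write"), (stat.S_IXGRP, "group:exec"),
        (stat.S_IROTH, "universe:read"), (stat.S_IWOTH, "universe:write"), (stat.S_IXOTH, "universe:exec"),
    ]
    return [name for bit, name in bits if mode & bit]

print(granted(mode))
# ['owner:read', 'owner:write', 'owner:exec', 'group:read', 'group:exec']
```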

Other ways of protection:


Another approach is to use passwords to enable access to the file systems.
However, this method has certain disadvantages:

1. If one password is used for all the files, then in a situation where the password
happens to be known by the other users, all the files will be accessible.
2. It can be difficult to remember a lengthy and large number of passwords.

UNIT-4
Introduction to Distributed Operating System
A distributed operating system is one of the important types of operating system.
Multiple central processors are used by Distributed systems to serve multiple real-time
applications and multiple users.

Characteristics
 With the resource sharing facility, a user at one site may be able to use the resources
available at another.
 Speeds up the exchange of data between sites, for example via electronic mail.
 Failure of one site in a distributed system does not affect the others; the remaining sites
can potentially continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.
Architecture of Distributed Operating System
A distributed operating system runs on a number of independent sites connected
through a communication network, yet users perceive it as a single virtual machine,
even though each site runs its own operating system.

The figure below gives the architecture of a distributed system. It shows workstations,
terminals, and different servers connected to a communication network, sharing services
with one another. Each computer node has its own memory. Real-life examples of distributed
systems include the Internet, intranets, and mobile computing.

Issues in Distributed Systems


 Lack of global knowledge
 Naming
 Scalability
 Compatibility
 Process synchronization (requires global knowledge)
 Resource management (requires global knowledge)
 Security
 Fault tolerance and error recovery
Multiprocessor Operating System
A multiprocessor is a computer system in which two or more central processing units (CPUs)
exist, each sharing the common main memory (RAM) as well as the peripherals. This makes
the simultaneous processing of programs possible.

The basic organization of the multiprocessor system is given below.

Fig. Basic Organisation of Multiprocessing System

In a multiprocessing system, the symmetric multiprocessing model is commonly used: each
processor runs the same copy of the operating system, and these copies communicate with each
other. Alternatively, each processor may be assigned a specific task, with a master processor
controlling the system; this scheme is referred to as a master-slave relationship. A
multiprocessor system is economically beneficial compared to single-processor systems because
the processors can share peripherals, power supplies, and other devices.

A multiprocessor system can be divided into the following basic architectures:


1. Symmetric Multiprocessor System (SMP): Systems operating under a single OS
(operating system) with two or more homogeneous processors and with a centralized
shared main memory.
2. UMA (Uniform Memory Access): Systems in which every processor accesses all of
main memory with uniform latency; symmetric multiprocessing systems are typically
UMA systems.
3. NUMA (Non-Uniform Memory Access): A cc-NUMA system is a cluster of SMP
systems – each called a "node", which can have a single processor, a multi-core
processor, or a mix of the two, of one or other kinds of architecture – connected via a
high-speed "connection network".
Approaches to Multiple Processor Scheduling
There are two approaches to multiple processor scheduling in the operating system: Symmetric
Multiprocessing and Asymmetric Multiprocessing.

1. Symmetric Multiprocessing: Each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may have its private
queue of ready processes. Scheduling proceeds by having the scheduler
for each processor examine the ready queue and select a process to execute.
2. Asymmetric Multiprocessing: All scheduling decisions and I/O
processing are handled by a single processor called the master server; the other
processors execute only user code. This is simple and reduces the need for data
sharing.
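The common-ready-queue form of symmetric multiprocessing can be mimicked with threads standing in for processors. This is a toy sketch (a real scheduler lives in the kernel and preempts tasks); it only shows each "processor" pulling its next process from a shared queue:

```python
import queue
import threading

ready = queue.Queue()                 # common ready queue shared by all processors
for pid in range(6):
    ready.put(pid)                    # six processes waiting to run

done = []
lock = threading.Lock()

def processor(cpu_id):
    # Self-scheduling: each processor picks its next ready process itself.
    while True:
        try:
            pid = ready.get_nowait()
        except queue.Empty:
            return                    # no ready processes left
        with lock:
            done.append((cpu_id, pid))

cpus = [threading.Thread(target=processor, args=(i,)) for i in range(2)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()

print(sorted(pid for _, pid in done))  # every process ran exactly once
```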

A real-time operating system (RTOS) is an operating system with two key features:
predictability and determinism. In an RTOS, repeated tasks are performed within a tight time
boundary, while in a general-purpose operating system, this is not necessarily so. Predictability
and determinism, in this case, go hand in hand: We know how long a task will take, and that it
will always produce the same result.
RTOSes are subdivided into "soft" real-time and "hard" real-time systems. Soft real-time
systems operate within a few hundred milliseconds, at the scale of a human reaction.
Hard real-time systems, however, provide responses that are predictable within tens of
milliseconds or less.
Characteristics of Real-time System:
Following are the some of the characteristics of Real-time System:
1. Time Constraints:
Time constraints in real-time systems refer to the time interval
allotted for the response of the ongoing program. The deadline means that the task
should be completed within this time interval. A real-time system is responsible for the
completion of all tasks within their time intervals.
2. Correctness:
Correctness is one of the prominent parts of real-time systems. Real-time systems
must produce the correct result within the given time interval; a result obtained
outside that interval is not considered correct, even if its value is right. In real-time
systems, correctness means obtaining the correct result within the time constraint.
3. Embedded:
All the real-time systems are embedded now-a-days. Embedded system means that
combination of hardware and software designed for a specific purpose. Real-time
systems collect the data from the environment and passes to other components of the
system for processing.
4. Safety:
Safety is necessary for any system, but real-time systems provide critical safety. Real-
time systems can also run for a long time without failures, and they recover very
quickly when a failure occurs in the system, without causing any harm to data and
information.
5. Concurrency:
Real-time systems are concurrent, meaning they can respond to several
processes at a time. There are several different tasks going on within the system, and it
responds to every task within short intervals. This makes real-time
systems concurrent systems.
6. Distributed:
In various real-time systems, all the components of the systems are connected in a
distributed way. The real-time systems are connected in such a way that different
components are at different geographical locations. Thus all the operations of real-
time systems are operated in distributed ways.
7. Stability:
Even when the load is very heavy, real-time systems respond within the time constraint,
i.e. they do not delay the results of tasks even when several tasks are going on at the
same time.

Architecture – Task Management

In an RTOS, the application is decomposed into small, schedulable, sequential program units
known as "tasks". A task is the basic unit of execution and is governed by three time-critical
properties: release time, deadline, and execution time. Release time refers to the point in time
from which the task can be executed. Deadline is the point in time by which the task must
complete. Execution time denotes the time the task takes to execute.
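The task model can be made concrete with a small scheduler sketch. Earliest-deadline-first (EDF) is used here purely as an illustration — the text does not name a specific policy — and the sketch assumes a single processor, non-preemptive execution, and times in milliseconds. Tasks are (release_time, deadline, execution_time) tuples:

```python
def edf_schedule(tasks):
    """Tasks: (release_time, deadline, execution_time). Returns (met, missed)."""
    pending = sorted(tasks)                       # tuples sort by release time first
    clock, met, missed = 0, [], []
    while pending:
        ready = [t for t in pending if t[0] <= clock]
        if not ready:
            clock = min(t[0] for t in pending)    # idle until the next release
            continue
        task = min(ready, key=lambda t: t[1])     # earliest deadline first
        pending.remove(task)
        clock += task[2]                          # run the task to completion
        (met if clock <= task[1] else missed).append(task)
    return met, missed

met, missed = edf_schedule([(0, 7, 3), (1, 5, 2), (3, 12, 2)])
print(len(met), len(missed))                      # 3 0: all deadlines are met
```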

Fig. 5: A Diagram Illustrating Use of RTOS for Time Management Application

Scheduling
The scheduler keeps a record of the state of each task, selects from among those that are ready
to execute, and allocates the CPU to one of them. Various scheduling algorithms are used in an
RTOS.

Non-preemptive scheduling, or cooperative multitasking: the highest-priority task executes for
some time, then relinquishes control and re-enters the ready state.

Fig. 10: A Figure Illustrating Non-Preemptive Scheduling or Cooperative Multitasking


 Preemptive scheduling, or priority multitasking: the current task is immediately suspended,
and control is given to the task of the highest priority at all times.
Fig. 11: A Diagram Representing Preemptive Scheduling or Priority Multitasking
Case study on Linux
Evolution of UNIX
 UNIX development was started in 1969 at Bell Laboratories in New Jersey.
 Bell Laboratories was (1964–1968) involved in the development of a multi-user,
timesharing operating system called Multics (Multiplexed Information and Computing System).



 Unix V6, released in 1975 became very popular. Unix V6 was free and was
distributed with its source code.
 In 1983, AT&T released Unix System V which was a commercial version.
 Meanwhile, the University of California at Berkeley started the development of its
own version of Unix. Berkeley was also involved in the inclusion of Transmission
Control Protocol/Internet Protocol (TCP/IP) networking protocol.
 The following were the major milestones in UNIX history in the early 1980s.
 AT&T was developing its System V Unix.
 Berkeley took the initiative on its own Unix: BSD (Berkeley Software Distribution) Unix.
 Sun Microsystems developed its own BSD-based Unix called SunOS, which was later
renamed Sun Solaris.
 Microsoft and the Santa Cruz operation (SCO) were involved in another version of
UNIX called XENIX.
 Hewlett-Packard developed HP-UX for its workstations.
 DEC released ULTRIX.
 In 1986, IBM developed AIX (Advanced Interactive eXecutive).
