Operating System 4TH Semester
An operating system is software that manages the computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system and to
prevent user programs from interfering with the proper operation of the system.
The primary function of an operating system is to provide an environment for the execution of
user programs. However, it is divided into a number of small modules, each of which performs a
specialized task:
1. Process Management
2. Main memory Management
3. Secondary storage Management
4. File Management
5. I/O Management
6. Protection and security
7. Networking
8. Command interpretation
1. Process Management
Process management refers to the assignment of the processor to the different tasks
being performed by the computer system. The process management module schedules the
various processes of a system for execution.
The operating system is responsible for the following activities in connection with
process management:
1. Creating and deleting both user and system processes.
2. Suspending and resuming processes.
3. Providing mechanisms for process synchronization such as semaphores.
4. Providing mechanisms for process communication (i.e. communication between
different processes in a system).
5. Providing mechanisms for deadlock handling. Deadlock is a condition in which a
number of processes wait indefinitely for some shared resource.
2. Memory Management
The operating system manages the Primary Memory or Main Memory. Main memory
is made up of a large array of bytes or words where each byte or word is assigned a
certain address.
Main memory is fast storage that can be accessed directly by the CPU. For a
program to be executed, it must first be loaded into main memory.
An Operating System performs the following activities for memory management:
1. It keeps track of primary memory, i.e., which bytes of memory are used by which
user program.
2. Deciding which processes are to be loaded into memory when memory space
becomes available
3. Allocating and deallocating memory space as needed.
In multiprogramming, the OS decides the order in which processes are granted access
to memory, and for how long.
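One simple allocation policy, first-fit, can be sketched as follows. This is a toy illustration only; real memory managers also handle alignment, coalescing of freed blocks, and much more:

```python
# First-fit allocation over a list of free (start, size) holes.

def first_fit(holes, request):
    """Allocate `request` bytes from the first hole big enough.
    Returns (start_address, updated_holes) or (None, holes)."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            new_holes = list(holes)
            if size == request:
                del new_holes[i]                         # hole consumed exactly
            else:
                new_holes[i] = (start + request, size - request)
            return start, new_holes
    return None, holes                                   # no hole fits

holes = [(0, 100), (200, 50), (300, 400)]
addr, holes = first_fit(holes, 120)   # skips the 100- and 50-byte holes
print(addr, holes)
```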
4. File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
The OS keeps track of information such as the location, usage and status of each file;
these collective facilities are often known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
5. I/O Management
I/O management refers to the coordination and assignment of the different input and
output devices to the various programs that are being executed.
Thus, an OS is responsible for managing various I/O devices, such as keyboard,
mouse, printer and monitor.
The I/O subsystem consists of the following components:
1. A memory management component that includes buffering, caching and spooling.
2. A general device-driver interface.
3. Drivers for specific hardware devices.
The operating system performs the following functions related to I/O management:
1. Issuing commands to various input and output devices.
2. Capturing interrupts, such as those signalling hardware failure.
3. Handling errors that occur while reading from or writing to devices.
6. Security
Security deals with protecting the various resources and information of a computer
system against destruction and unauthorized access.
A total approach to computer security involves both external and internal security.
External security deals with securing the computer system against external factors
such as fires, floods, earthquakes, theft of disks/tapes, and leakage of stored
information by a person who has access to it.
Internal security deals with users’ authentication, access control and cryptography.
7. Networking
Networking is used for exchanging information among different computers that are
distributed across various locations.
Distributed systems consist of multiple processors, each of which has its own
memory and clock.
The various processors communicate using communication links, such as telephone
lines or buses.
The processors in a distributed system vary in size and function. They may include
small microprocessors, workstations, minicomputers and large general-purpose
computer systems.
Thus, a distributed system enables us to share the various resources of the network.
This results in computation speedup, increased functionality, increased data
availability and better reliability.
8. Command Interpretation
The command interpreter is the basic interface between the computer and the user.
The command interpreter provides a set of commands with which the user can
instruct the computer to get some job done.
The various commands supported by the command interpretation module are known as
system calls.
When a user gives instructions to the computer by using these system calls, the
command interpreter takes care of interpreting these commands and directing the
system resources to handle the requests.
There are two different user interfaces supported by various operating systems:
1. Command line interface
2. Graphical user interface
Command line interface (CLI):- It is a textual user interface in which the user gives
instructions to the computer by typing commands.
Graphical user interface (GUI):- The GUI presents the user with a screen of graphical
icons or menus and allows the user to make a rapid selection from them to give
instructions to the computer.
Simple structure:
Such operating systems do not have a well-defined structure and are small, simple and limited
systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example
of such an operating system. In MS-DOS, application programs are able to access the basic I/O
routines directly. In this type of operating system, the failure of one user program can cause the
entire system to crash.
Layered structure:
The main disadvantage of this structure is that at each layer, the data needs to be modified and
passed on, which adds overhead to the system. Moreover, careful planning of the layers is
necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.
Micro-kernel:
This structure designs the operating system by removing all non-essential components from the
kernel and implementing them as system and user programs. This results in a smaller kernel called
the micro-kernel.
The advantage of this structure is that new services are added in user space and do not
require the kernel to be modified. It is thus more secure and reliable: if a service fails, the rest
of the operating system remains untouched. Mac OS is an example of this type of OS.
Each process asks for system resources like computing power, memory, network
connectivity etc. The kernel is the bulk of the executable code in charge of
handling such requests.
The kernel is the main component of most computer operating systems. It is a bridge
between applications and the actual data processing done at the hardware level.
The following are the major roles of the kernel:
Resource allocation
The kernel’s primary function is to manage the computer’s resources and allow other
programs to run and use these resources. These resources are CPU, memory and I/O
devices.
Process Management
The kernel is in charge of creating, destroying and handling the input output of the
process.
Communications amongst the different processes is the responsibility of the kernel.
Memory Management
Memory is a major resource of the computer, and the policy used to manage it is
a critical one.
The kernel builds a virtual address space for all processes on top of the physical
resource.
The different parts of the kernel interact with the memory management subsystem
using function calls.
File System
The kernel builds a structured file system on top of the unstructured hardware.
The kernel also supports multiple file system types, that is, different ways of
managing data.
Device Control
Almost every system operation eventually maps onto a physical device.
All device control operations are performed by code that is specific to the
device being addressed. This code is called a device driver.
Inter- Process communication
The kernel provides methods for synchronization and communication between processes,
called inter-process communication (IPC).
There are various approaches to IPC, such as semaphores, shared memory, message
queues and pipes.
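The pipe approach can be sketched with a Unix-only Python example: the parent writes a message into an anonymous pipe, and a forked child reads and checks it. The message content is illustrative:

```python
import os

# Parent and child processes communicate over an anonymous pipe,
# the classic Unix IPC mechanism (Unix-only: uses os.fork).

r, w = os.pipe()
pid = os.fork()
if pid == 0:                       # child: read the message, then exit
    os.close(w)
    data = os.read(r, 1024).decode()
    os._exit(0 if data == "ping" else 1)
else:                              # parent: write, then wait for the child
    os.close(r)
    os.write(w, b"ping")
    os.close(w)
    _, status = os.waitpid(pid, 0)
    child_ok = os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
    print(child_ok)
```

The exit status of the child is the only channel back to the parent here; a second pipe would allow two-way communication.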
Security or protection Management
The kernel also provides protection from faults (error control) and from malicious
behaviour.
One approach toward this is a language-based protection system, in which the
kernel only allows code that has been produced by a trusted language compiler
to execute.
Role of Shell
It gathers input from the user and executes programs based on that input; when a
program finishes executing, it displays that program's output.
It is the primary interface between a user sitting at a terminal and the operating
system, unless the user is using a graphical interface.
A shell is an environment in which a user can run commands, programs and shell
scripts.
There are various kinds of shells, such as sh (Bourne shell), csh (C shell),
ksh (Korn shell) and bash.
When any user logs in, a shell is started.
The shell has the terminal as standard input and standard output.
It starts out by typing the prompt, a character such as $, which tells the user that
the shell is waiting to accept a command.
For example, if the user types the date command:
$ date
Tue Feb 23 06:01:13 IST 2019
The shell creates a child process and runs date program as child.
While child process is running, the shell waits for it to terminate.
When child finishes, the shell types the prompt again and tries to read the next input
line.
The shell works as an interface, a command interpreter and a programming language.
Shell as interface:
The shell is the interface between the user and the computer.
The user can interact directly with the shell.
The shell provides a command prompt at which the user executes commands.
Shell as command interpreter:
It reads the commands entered by the user at the prompt.
It interprets each command so that the kernel can understand it.
Shell as programming language:
The shell also works as a programming language.
It provides the features of a programming language, such as variables, control
structures and loop structures.
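The read-interpret-execute-wait cycle described above can be sketched in Python. This toy loop runs scripted command lines through a child process rather than reading from a real terminal, and it is not a real shell (no pipes, redirection or job control):

```python
import shlex
import subprocess

# A toy shell loop: take a command line, run it as a child process,
# wait for it to finish, and collect its output. `exit` quits the loop.

def run_shell(commands):
    """Run a scripted list of command lines; returns their outputs."""
    outputs = []
    for line in commands:          # a real shell would read input("$ ")
        if line.strip() == "exit":
            break
        args = shlex.split(line)
        # The shell "creates a child process and waits for it to terminate":
        result = subprocess.run(args, capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs

print(run_shell(["echo hello", "exit"]))
```

`subprocess.run` wraps the underlying fork-and-exec mechanism and blocks until the child terminates, mirroring the wait step described above.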
The operating system can be observed from the point of view of the user or the system. This is
known as the user view and the system view respectively. More details about these are given as
follows –
User View
The user view depends on the system interface that is used by the users. The different types of
user view experiences can be explained as follows −
If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the
system, but there is no need for the operating system to worry about resource
utilization. This is because the personal computer uses all the resources available and
there is no sharing.
If the user is using a system connected to a mainframe or a minicomputer, the
operating system is largely concerned with resource utilization. This is because there
may be multiple terminals connected to the mainframe and the operating system
makes sure that all the resources such as CPU, memory, I/O devices etc. are divided
uniformly between them.
If the user is sitting at a workstation connected to other workstations through
networks, then the operating system needs to focus both on individual usage of
resources and on sharing through the network. This happens because the workstation
exclusively uses its own resources, but it also needs to share files etc. with other
workstations across the network.
If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.
There are some devices that have little or no user view because there is no interaction
with users. Examples are embedded computers in home devices, automobiles etc.
System View
From the computer system's point of view, the operating system is the bridge between
applications and hardware. It is most intimate with the hardware and is used to control it as
required.
The different types of system view for operating system can be explained as follows:
The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system can
run as smoothly as possible.
The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and there are
no errors. It makes sure that the I/O devices work in a proper manner without creating
problems.
Operating systems can also be viewed as a way to make using hardware easier.
Computers were built to solve user problems easily. However, it is not easy to
work directly with the computer hardware, so operating systems were developed to
communicate with the hardware more easily.
An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the
application programs. This is the definition of the operating system that is generally
followed.
Evolution of OS
The evolution of the OS since 1950 is described in detail in this article. Here we will discuss
six main operating system types that have evolved over the past 70 years.
Serial Processing
The history of the operating system started in 1950. Before 1950, programmers interacted
directly with the hardware; there was no operating system at that time. If a programmer wished
to execute a program in those days, a number of serial steps were necessary.
This type of processing was difficult for users: it took much time, and the next program had
to wait for the completion of the previous one. Programs were submitted to the machine one
after another, so the method is said to be serial processing.
Batch Processing
Before 1960, it was difficult to execute a program because the computer occupied three
different rooms: one room for the card reader, one for executing the program and another
for printing the result.
The user or machine operator had to run between the three rooms to complete a job. This
problem was solved by batch processing.
In the batch processing technique, jobs of the same type are batched together and executed
at one time. The operator carries the whole group of jobs from one room to another at once.
Therefore, the programmer need not run between these three rooms several times.
Multiprogramming
Multiprogramming is a technique to execute a number of programs simultaneously on a single
processor. In multiprogramming, a number of processes reside in main memory at a time. The
OS (operating system) picks and begins to execute one of the jobs in main memory. Consider the
following figure, which depicts the layout of a multiprogramming system: the main memory
holds 5 jobs at a time, and the CPU executes them one by one.
Multiprogramming
In a non-multiprogramming system, the CPU can execute only one program at a time; if the
running program is waiting for an I/O device, the CPU becomes idle, which affects CPU
performance.
But in a multiprogramming environment, if a process has to wait for I/O, the CPU
switches from that job to another job in the job pool, so the CPU is never left idle.
Advantages:
The main advantage of the time-sharing system is efficient CPU utilization. It was
developed to provide interactive use of a computer system at a reasonable cost. A
time-shared OS uses CPU scheduling and multiprogramming to provide each user
with a small portion of a time-shared computer.
Another advantage of the time-sharing system over the batch processing system is
that the user can interact with a job while it is executing, which is not possible in
batch systems.
Parallel System
There is a trend toward multiprocessor systems; such systems have more than one processor in
close communication, sharing the computer bus, the clock, and sometimes memory and
peripheral devices.
These systems are referred to as "tightly coupled" systems, and such a system is called a
parallel system. In a parallel system, a number of processors execute their jobs in parallel.
Advantages:
Distributed System
In a distributed operating system, the processors do not share memory or a clock; each
processor has its own local memory. The processors communicate with one another through
various communication lines, such as high-speed buses. These systems are referred to as
"loosely coupled" systems.
Advantages:
It is very difficult to guess or know the time required for any job to complete.
Processors of batch systems know how long a job will take when it is in the queue.
Multiple users can share batch systems.
The idle time for batch systems is very low.
It is easy to manage large amounts of repeated work in batch systems.
4. Multi Programming
In a multiprogramming system, one or more programs are loaded into main memory for
execution. Only one program or process can hold the CPU to execute its instructions at a time;
the other programs wait for their turn. The main goal of a multiprogramming system is to
overcome the under-utilization of the CPU and primary memory.
The main objective of multiprogramming is to manage all the resources of the system. The
primary components of a multiprogramming system are the command processor, the file system,
the I/O control system, and the transient area.
Preemptive Multitasking
Preemptive multitasking is a technique in which the operating system decides how much time
one task may spend on the CPU before another task is assigned to it. Because the operating
system controls this entire process, the approach is known as "preemptive".
Cooperative Multitasking
Cooperative multitasking is also known as "non-preemptive multitasking". The main goal of
cooperative multitasking is to let the current task run and then release the CPU to allow another
task to run. This is performed by calling taskYIELD(); a context switch is executed when this
function is called.
Background Processing
A multitasking operating system creates a better environment for executing background
programs. These background programs are not visible to normal users, but they help other
programs run smoothly, for example firewalls, antivirus software, and more.
Good Reliability
A multitasking operating system provides flexibility for multiple users, who are more
satisfied as a result. Every user can run one or more programs smoothly.
Processor Boundation
A computer with a slow processor runs programs slowly, and its response time increases when
it handles multiple programs. Better processing power is needed to overcome this problem.
CPU Heat up
Because the processor is kept busier executing multiple tasks at a time, the CPU produces
more heat.
6. Multiprocessing System
A multiprocessor system is a system that contains two or more processors (CPUs)
and has the ability to execute several programs simultaneously; hence the name
multiprocessor.
In such a system, multiple processors share the clock, bus, memory and peripheral
devices.
A multiprocessor system is also known as parallel system.
In such a system, instructions from different and independent programs can be
processed at the same instant of time by different CPUs.
The CPUs can also simultaneously execute different instructions from the same
program.
Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows −
Symmetric Multiprocessors
In these types of systems, each processor contains a similar copy of the operating system and they
all communicate with each other. All the processors are in a peer to peer relationship i.e. no
master – slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of UNIX for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instructions to all the other processors. An asymmetric multiprocessor system thus
contains a master-slave relationship.
Asymmetric multiprocessing was the only type of multiprocessing available before symmetric
multiprocessors were created. Even now, it is the cheaper option.
Enhanced Throughput
If multiple processors work in tandem, the throughput of the system increases, i.e. the
number of processes executed per unit of time increases. If there are N processors, the
throughput increases by a factor just under N.
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple computer
systems, still they are quite expensive. It is much cheaper to buy a simple single processor system
than a multiprocessor system.
Complicated Operating System Required
There are multiple processors in a multiprocessor system that share peripherals, memory etc.,
so it is much more complicated to schedule processes and allocate resources to them than in
single-processor systems. Hence, a more complex operating system is required in
multiprocessor systems.
Difference between hard real-time and soft real-time systems:
2. Process:
The term process (Job) refers to program code that has been loaded into a computer’s memory so
that it can be executed by the central processing unit (CPU). A process can be described as an
instance of a program running on a computer or as an entity that can be assigned to and executed
on a processor. A program becomes a process when loaded into memory and thus is an active
entity.
The operating system then adds the previously running process to the rear of the
ready queue and allocates the CPU to the first process on the ready queue. This state
transition is indicated as:
Timerunout(processname): running → ready
If a running process initiates an input/output operation before its time slice expires,
it voluntarily releases the CPU. It is sent to the waiting queue and the process state
is marked as blocked. This state transition is indicated as:
Block(processname): running → blocked
After the completion of the I/O task, the blocked or waiting process is restored and
placed back in the ready queue, and its state is marked as ready.
When the execution of a process ends, its state is marked as terminated and the
operating system reclaims all the resources allocated to the process.
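The state transitions above can be modelled with a small sketch; the state and transition names follow the text, while the code itself is purely illustrative:

```python
from enum import Enum, auto

# A toy model of process state transitions. Only the transitions
# described in the text are allowed.

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

ALLOWED = {
    (State.NEW, State.READY),           # admitted to the ready queue
    (State.READY, State.RUNNING),       # dispatched
    (State.RUNNING, State.READY),       # Timerunout: time slice expired
    (State.RUNNING, State.BLOCKED),     # Block: I/O request issued
    (State.BLOCKED, State.READY),       # I/O completed
    (State.RUNNING, State.TERMINATED),  # execution ends
}

def transition(current, new):
    if (current, new) not in ALLOWED:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

# Walk one process through: admitted, dispatched, blocked on I/O, restored.
s = State.NEW
for nxt in (State.READY, State.RUNNING, State.BLOCKED, State.READY):
    s = transition(s, nxt)
print(s)
```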
Scheduling Queues
In multiprogramming, when several processes are waiting for I/O operations, they
form queues.
The various queues maintained by operating system are:
1. Job Queue
As a process enters the system, it is put into the job queue. This queue consists of
all the processes in the system.
2. Ready queue
It is a doubly linked list of processes that are residing in the main memory and are
ready to run.
The various processes in ready queue are placed according to their priority i.e. higher
priority process is at the front of the queue.
The header of ready queue contains two pointers. The first pointer points to the PCB
of first process and the second pointer points to the PCB of last process in the queue.
3. Device Queue
Device queue contains all those processes that are waiting for a particular I/O device.
Each device has its own device queue.
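The priority-ordered ready queue described above can be sketched as follows. A real OS links PCBs into a doubly linked list as the text describes; this illustration uses a heap purely for brevity:

```python
import heapq

# A toy ready queue: the highest-priority process is always dequeued
# first, and processes of equal priority leave in FIFO order.

class ReadyQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0        # tie-breaker keeps FIFO order within a priority

    def enqueue(self, pid, priority):
        # Smaller number = higher priority in this sketch.
        heapq.heappush(self._heap, (priority, self._seq, pid))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

rq = ReadyQueue()
rq.enqueue("P1", priority=3)
rq.enqueue("P2", priority=1)
rq.enqueue("P3", priority=2)
print(rq.dequeue())   # the highest-priority process leaves first
```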
Types of schedulers
Scheduler
A scheduler is an operating system module that selects the next job or process to be
admitted into the system.
Thus, a scheduler selects one of the processes from among the processes in the
memory that are ready to execute and allocates CPU to it.
Concept of Thread
A thread is a single sequential flow of execution of the tasks of a process.
A thread is a lightweight process and the smallest unit of CPU utilization. Thus a
thread is like a little miniprocess.
Each thread has a thread id, a program counter, a register set and a stack.
A thread undergoes different states such as new, ready, running, waiting and
terminated similar to that of a process.
However, a thread is not a program as it cannot run on its own. It runs within a
program.
Why Multithreading?
A thread is also known as lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc.
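A minimal sketch of two threads sharing their process's data; the "format" and "input" roles are illustrative stand-ins for the MS Word example above:

```python
import threading

# Two threads of the same process share its address space: both
# append to the same list. A lock keeps the appends orderly.

shared_log = []
lock = threading.Lock()

def format_text():
    with lock:
        shared_log.append("formatted text")

def process_input():
    with lock:
        shared_log.append("processed input")

t1 = threading.Thread(target=format_text)
t2 = threading.Thread(target=process_input)
t1.start(); t2.start()
t1.join(); t2.join()       # wait for both threads to finish

print(sorted(shared_log))
```

Because both threads write to the same `shared_log`, no copying or message passing is needed; this shared address space is exactly what distinguishes threads from separate processes.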
Types of Threads:
1. User Level thread (ULT)
User-level threads are implemented in user-level libraries; they are not created using
system calls. Thread switching does not need to call the OS or cause an interrupt to
the kernel. The kernel does not know about user-level threads and manages them as
if they were single-threaded processes.
Advantages of ULT
They can be implemented on an OS that does not support multithreading.
Representation is simple, since a thread has only a program counter, a register set and stack space.
Threads are simple to create, since no kernel intervention is needed.
Thread switching is fast, since no OS calls need to be made.
Disadvantages of ULT –
There is little or no coordination between the threads and the kernel.
If one thread causes a page fault, the entire process blocks.
1. When a process switches from the running state to the waiting state (for an I/O
request, or while waiting for the termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
on completion of I/O).
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one
exists in the ready queue) must be selected for execution. There is a choice, however, in
circumstances 2 and 3.
When Scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.
Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
This scheduling method is used by the Microsoft Windows 3.1 and by the Apple Macintosh
operating systems.
It is the only method that can be used on certain hardware platforms, because it does not
require the special hardware (for example, a timer) needed for preemptive scheduling.
In non-preemptive scheduling, the scheduler does not interrupt a process running on the CPU
in the middle of its execution. Instead, it waits until the process completes its CPU burst, and
only then can it allocate the CPU to another process.
Some algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its basic
non-preemptive form) and Priority (non-preemptive version) scheduling.
Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary
to run a task with a higher priority before another task, even though that task is currently
running. The running task is therefore interrupted for some time and resumed later, when the
higher-priority task has finished its execution.
Thus this type of scheduling is used mainly when a process switches either from running state to
ready state or from waiting state to ready state. The resources (that is CPU cycles) are mainly
allocated to the process for a limited amount of time and then are taken away, and after that, the
process is again placed back in the ready queue in the case if that process still has a CPU burst
time remaining. That process stays in the ready queue until it gets the next chance to execute.
Some Algorithms that are based on preemptive scheduling are Round Robin Scheduling (RR),
Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.
CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be kept working
most of the time (ideally, 100% of the time). In a real system, CPU usage should range
from 40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit of time or, in other words, the total
amount of work done in a unit of time. This may range from 10 per second to 1 per hour,
depending on the specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. the interval from the time
of submission of the process to the time of its completion (wall-clock time).
Waiting Time
It is the sum of the periods a process spends waiting in the ready queue to acquire control
of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.
Response Time
It is the amount of time from when a request is submitted until the first response is produced.
Note that it is the time until the first response, not until the completion of process
execution (the final response).
In general, CPU utilization and throughput are maximized and the other factors are minimized
for proper optimization.
Figure 12: Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound process. (b)
An I/O-bound process.
Some processes, such as the one in Figure 12(a), spend most of their time computing (CPU-
bound), while others, such as the one in Figure 12(b), spend most of their time waiting for I/O
(I/O-bound).
Keeping a careful mix of CPU-bound and I/O-bound processes in memory together is a better
idea than first loading and running all the CPU-bound jobs and then, when they are finished,
loading and running all the I/O-bound jobs.
Scheduling Algorithms
To decide which process to execute first and which last so as to achieve maximum CPU
utilization, computer scientists have defined the following algorithms:
Advantages of FCFS
Simple
Easy
First come, First serve
Disadvantages of FCFS
1. The scheduling method is non-preemptive; each process runs to completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation may
occur.
3. Although it is easy to implement, it is poor in performance, since the average
waiting time is higher compared to other scheduling algorithms.
Example
Let's take an example of the FCFS scheduling algorithm. In the following schedule, there are 5
processes, with process IDs P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at
time 2, P3 at time 3 and P4 at time 4 in the ready queue. The processes and their respective
arrival and burst times are given in the following table.
The turnaround time and the waiting time are calculated using the following formulae:

Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time

Process ID  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
P0          0             2           2                2                 0
P1          1             6           8                7                 1
P2          2             4           12               10                6
P3          3             9           21               18                9
P4          4             12          33               29                17

Avg Waiting Time = 33/5 = 6.6
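The FCFS values can be recomputed with a short sketch. The arrival and burst times are taken from the table above; the code assumes the CPU is never idle, which holds for this data:

```python
# FCFS: serve processes in arrival order; each process waits for all
# earlier bursts to finish before starting its own.

def fcfs(procs):
    """procs: list of (pid, arrival, burst), sorted by arrival time."""
    time, rows = 0, []
    for pid, arrival, burst in procs:
        time = max(time, arrival) + burst           # completion time
        tat = time - arrival                        # turnaround time
        rows.append((pid, time, tat, tat - burst))  # waiting = TAT - burst
    return rows

procs = [(0, 0, 2), (1, 1, 6), (2, 2, 4), (3, 3, 9), (4, 4, 12)]
rows = fcfs(procs)
avg_wait = sum(r[3] for r in rows) / len(rows)
print(rows, avg_wait)
```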
However, it is very difficult to predict the burst time a process needs, hence this algorithm is
very difficult to implement in the system.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Example
In the following example, there are five jobs named P1, P2, P3, P4 and P5. Their arrival times
and burst times are given in the table below.
PID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
1 | 1 | 7 | 8 | 7 | 0
2 | 3 | 3 | 13 | 10 | 7
3 | 6 | 2 | 10 | 4 | 2
4 | 7 | 10 | 31 | 24 | 14
5 | 9 | 8 | 21 | 12 | 4

Avg Waiting Time = (0+7+2+14+4)/5 = 27/5 = 5.4 units
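The SJF schedule above can be reproduced with a short sketch (data taken from the table): at every completion, the arrived job with the shortest burst time is picked next.

```python
# Non-preemptive SJF sketch: when the CPU becomes free, pick the job
# that has already arrived and has the shortest burst time.
jobs = {1: (1, 7), 2: (3, 3), 3: (6, 2), 4: (7, 10), 5: (9, 8)}  # pid: (arrival, burst)

time, waiting = 0, {}
pending = dict(jobs)
while pending:
    ready = {p: ab for p, ab in pending.items() if ab[0] <= time}
    if not ready:                       # CPU idle until the next arrival
        time = min(ab[0] for ab in pending.values())
        continue
    pid = min(ready, key=lambda p: ready[p][1])   # shortest burst first
    arrival, burst = pending.pop(pid)
    time += burst                       # run to completion
    waiting[pid] = time - arrival - burst

print(waiting)                                 # {1: 0, 3: 2, 2: 7, 5: 4, 4: 14}
print(sum(waiting.values()) / len(waiting))    # 27/5 = 5.4
```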
Round Robin (RR)
In Round Robin scheduling, each process gets the CPU for a fixed time quantum; when the
quantum expires, the process is preempted and moved to the back of the ready queue.
Advantages
1. It is actually implementable in a system because it does not depend on knowing the
burst time in advance.
2. It does not suffer from starvation or the convoy effect.
3. All jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding on a suitable time quantum is a genuinely difficult task.
RR Scheduling Example
In the following example, there are six processes named P1, P2, P3, P4, P5 and P6. Their
arrival times and burst times are given below in the table. The time quantum of the system is 4
units.
Ready Queue:
Initially, at time 0, process P1 arrives and is scheduled for a time slice of 4 units. Hence, at the
start, the ready queue holds only process P1, with a CPU burst time of 5 units.
Process:         P1
Remaining burst: 5
GANTT chart
The P1 will be executed for 4 units first.
Ready Queue
During the execution of P1, four more processes P2, P3, P4 and P5 arrive in the ready
queue. P1 has not completed yet; it needs another 1 unit of time, hence it is added back to
the ready queue.
Process:         P2  P3  P4  P5  P1
Remaining burst: 6   3   1   5   1
GANTT chart
After P1, P2 will be executed for 4 units of time which is shown in the Gantt chart.
Ready Queue
During the execution of P2, one more process, P6, arrives in the ready queue. Since P2 has not
completed yet, it is added back to the ready queue with a remaining burst time of 2 units.
Process:         P3  P4  P5  P1  P6  P2
Remaining burst: 3   1   5   1   4   2
GANTT chart
After P1 and P2, P3 is executed for 3 units of time, since its CPU burst time is only 3 units; it
therefore completes within its time slice.
Ready Queue
Since P3 has completed, it is terminated and not added back to the ready queue. The next
process to be executed is P4.
Process:         P4  P5  P1  P6  P2
Remaining burst: 1   5   1   4   2
GANTT chart
After P1, P2 and P3, P4 gets executed. Its burst time is only 1 unit, which is less than the
time quantum, hence it completes.
Ready Queue
The next process in the ready queue is P5, with 5 units of burst time. Since P4 has completed,
it is not added back to the queue.
Process:         P5  P1  P6  P2
Remaining burst: 5   1   4   2
GANTT chart
P5 is executed for the whole time slice because it requires 5 units of burst time, which is
more than the time slice.
Ready Queue
P5 has not completed yet; it is added back to the queue with a remaining burst time
of 1 unit.
Process:         P1  P6  P2  P5
Remaining burst: 1   4   2   1
GANTT Chart
Process P1 gets the next turn to complete its execution. Since it requires only 1 unit of
burst time, it completes.
Ready Queue
P1 has completed and is not added back to the ready queue. The next process, P6, requires
only 4 units of burst time, and it is executed next.
Process:         P6  P2  P5
Remaining burst: 4   2   1
GANTT chart
P6 will be executed for 4 units of time till completion.
Ready Queue
Since P6 has completed, it is not added back to the queue. Only two processes remain in the
ready queue. The next process, P2, requires only 2 units of time.
Process:         P2  P5
Remaining burst: 2   1
GANTT Chart
P2 gets executed again; since it requires only 2 units of time, it completes.
Ready Queue
Now the only process available in the queue is P5, which requires 1 unit of burst time. Since
the time slice is 4 units, it completes in the next burst.
Process:         P5
Remaining burst: 1
GANTT chart
P5 will get executed till completion.
The completion time, turnaround time and waiting time are calculated as shown in the table
below, using Turnaround Time = Completion Time - Arrival Time and
Waiting Time = Turnaround Time - Burst Time.
Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
1 | 0 | 5 | 17 | 17 | 12
2 | 1 | 6 | 23 | 22 | 16
3 | 2 | 3 | 11 | 9 | 6
4 | 3 | 1 | 12 | 9 | 8
5 | 4 | 5 | 24 | 20 | 15
6 | 6 | 4 | 21 | 15 | 11

Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 ≈ 11.33 units
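The whole Round Robin walk-through above can be reproduced with a short simulation (data taken from the table). As in the walk-through, a process that arrives during a time slice joins the ready queue before the preempted process re-joins:

```python
from collections import deque

# Round Robin sketch for the example above (time quantum = 4 units).
procs = {1: (0, 5), 2: (1, 6), 3: (2, 3), 4: (3, 1), 5: (4, 5), 6: (6, 4)}
quantum = 4

remaining = {p: b for p, (a, b) in procs.items()}
arrivals = sorted(procs, key=lambda p: procs[p][0])
ready, time, completion = deque(), 0, {}

def admit(upto):
    # move newly-arrived processes into the ready queue
    while arrivals and procs[arrivals[0]][0] <= upto:
        ready.append(arrivals.pop(0))

admit(0)
while ready:
    pid = ready.popleft()
    run = min(quantum, remaining[pid])
    time += run
    remaining[pid] -= run
    admit(time)                      # arrivals during this slice queue first
    if remaining[pid]:
        ready.append(pid)            # preempted: back of the queue
    else:
        completion[pid] = time

for pid, (arrival, burst) in procs.items():
    tat = completion[pid] - arrival
    print(f"P{pid}: completion={completion[pid]} turnaround={tat} waiting={tat - burst}")
```

The printed values match the table: completion times 17, 23, 11, 12, 24, 21 and waiting times 12, 16, 6, 8, 15, 11, giving an average of 68/6 units.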
Multilevel Queue Scheduling
In multilevel queue scheduling, all three different types of processes have their own queue,
and each queue has its own scheduling algorithm. For example, queue 1 and queue 2 may use
Round Robin while queue 3 uses FCFS to schedule their processes.
Scheduling among the queues: What happens if all the queues contain processes? Which
process should get the CPU? To determine this, scheduling among the queues is necessary.
There are two ways to do so:
1. Fixed priority preemptive scheduling - Each queue has absolute priority over
lower-priority queues. Consider the priority order queue 1 > queue 2 > queue 3.
According to this algorithm, no process in the batch queue (queue 3) can run unless
queues 1 and 2 are empty. If a batch process (queue 3) is running and a system
(queue 1) or interactive (queue 2) process enters the ready queue, the batch
process is preempted.
2. Time slicing - In this method, each queue gets a certain portion of CPU time and can
use it to schedule its own processes. For instance, queue 1 takes 50 percent of the CPU
time, queue 2 takes 30 percent, and queue 3 gets 20 percent.
Example Problem:
Consider the following four processes under multilevel queue scheduling, where the queue
number denotes the queue of the process.
The priority of queue 1 is greater than that of queue 2; queue 1 uses Round Robin (time
quantum = 2) and queue 2 uses FCFS.
At the start, both queues contain processes, so the processes in queue 1 (P1, P2) run first
(because of the higher priority) in round-robin fashion and complete after 7 units. Then the
process in queue 2 (P3) starts running (as there is no process in queue 1), but while it is
running, P4 arrives in queue 1, preempts P3, and runs for 5 units; after its completion, P3
takes the CPU and completes its execution.
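The schedule just described can be checked with a small simulation. The original table is not reproduced above, so the process data below (arrival times, burst times, and queue assignments) are hypothetical values chosen to be consistent with the narrative:

```python
from collections import deque

# Multilevel-queue sketch: queue 1 (RR, quantum 2) preempts queue 2 (FCFS).
# Hypothetical data consistent with the narrative above.
procs = {1: (0, 4, 1), 2: (0, 3, 1), 3: (0, 8, 2), 4: (10, 5, 1)}  # pid: (arrival, burst, queue)

remaining = {p: b for p, (a, b, q) in procs.items()}
q1, q2 = deque(), deque()
gantt, time, slice_used = [], 0, 0

while any(remaining.values()):
    for p, (a, b, q) in procs.items():        # admit new arrivals
        if a == time:
            (q1 if q == 1 else q2).append(p)
    if q1:                                    # queue 1 has absolute priority
        pid = q1[0]
        remaining[pid] -= 1
        slice_used += 1
        gantt.append(pid)
        if remaining[pid] == 0:               # finished: leave the queue
            q1.popleft(); slice_used = 0
        elif slice_used == 2:                 # RR quantum expired: rotate
            q1.append(q1.popleft()); slice_used = 0
    elif q2:                                  # queue 2 runs only when queue 1 is empty
        pid = q2[0]
        remaining[pid] -= 1
        gantt.append(pid)
        if remaining[pid] == 0:
            q2.popleft()
    time += 1

print(gantt)
```

The resulting Gantt list shows P1 and P2 alternating until time 7, P3 running until P4 preempts it at time 10, and P3 finishing last, exactly as in the narrative.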
Advantages:
The processes are permanently assigned to their queues, so the scheme has the
advantage of low scheduling overhead.
Disadvantages:
Some processes may starve for the CPU if the higher-priority queues never
become empty.
It is inflexible in nature.
UNIT-2
Memory Management
Introduction
In a multiprogramming computer, the operating system resides in a part of memory and the rest is
used by multiple processes. The task of subdividing the memory among different processes is
called memory management.
Static loading: the entire program is loaded into memory at a fixed address before
execution starts. It requires more memory space.
Dynamic loading: if the entire program and all data of a process must be in physical
memory for the process to execute, the size of a process is limited to the size of
physical memory. To gain proper memory utilization, dynamic loading is used: in
dynamic loading, a routine is not loaded until it is called.
Static linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some operating
systems support only static linking, in which system language libraries are treated like
any other object module.
Dynamic linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, a “stub” is included for each appropriate library-routine
reference. A stub is a small piece of code that indicates how to locate the memory-resident
library routine, or how to load the library if the routine is not already present.
Memory Protection
Memory protection prevents a process from accessing memory that has not been allocated to
it. It stops software from seizing control of an excessive amount of memory, which could
damage other software currently in use or cause a loss of saved data.
1. Memory Protection using Keys: The concept of memory protection with keys is found in
most modern computers; it supports paged memory organization and the dynamic
distribution of memory between parallel running programs.
2. Memory Protection using Rings: In OS, the domains related to ordered protection
are called Protection Rings. This method helps in improving fault tolerance and
provides security. These rings are arranged in a hierarchy from most privileged to
least privileged.
3. Capability-based addressing: It is a method of protecting memory that is rarely
seen in modern commercial computers. Here, pointers (objects containing a
memory address) are replaced by capability objects that can be created only with
protected instructions and may be executed only by the kernel, or by another process
authorized to do so. This gives the advantage of preventing unauthorized processes
from creating additional separate address spaces in memory.
4. Memory Protection using Masks: Masks are used to protect memory during the
organization of paging. In this method, before execution, each program is assigned the
page numbers reserved for the placement of its instructions and data.
5. Memory Protection using Segmentation: It is a method of dividing the system memory into
different segments. Data structures of the x86 architecture, such as the local descriptor table
and the global descriptor table, are used in the protection of memory.
6. Memory Protection using Simulated Segmentation: With this technique, a simulator
interprets the machine-code instructions of the target architecture. The simulator can
protect memory by applying a segmentation scheme and validating the target address of
every instruction in real time.
7. Memory Protection using Dynamic Tainting: Dynamic tainting is a technique that marks
and tracks certain data in a program at runtime, protecting the process from illegal
memory accesses.
Memory sharing
The way the OS divides memory between different processes is known as memory sharing.
Two common schemes are paging and segmentation.
Paging
Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be non-
contiguous.
A logical address is divided into:
Page number (p): the number of bits required to represent the pages in the logical
address space.
Page offset (d): the number of bits required to represent a particular word in a page,
i.e. the page size of the logical address space.
A physical address is divided into:
Frame number (f): the number of bits required to represent a frame of physical memory.
Frame offset (d): the number of bits required to represent a particular word in a frame;
the frame size equals the page size.
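The address split can be sketched in a few lines, assuming (hypothetically) 4 KB pages and a tiny page table:

```python
# Sketch: splitting a logical address, assuming 4 KB (2^12) pages.
PAGE_SIZE = 4096          # bytes per page
OFFSET_BITS = 12          # log2(PAGE_SIZE)

def split(logical_address):
    page_number = logical_address >> OFFSET_BITS   # p
    offset = logical_address & (PAGE_SIZE - 1)     # d
    return page_number, offset

# A tiny page table mapping page numbers to frame numbers (hypothetical).
page_table = {0: 5, 1: 2, 2: 7}

p, d = split(0x1A34)                 # page 1, offset 0xA34
frame = page_table[p]
physical = (frame << OFFSET_BITS) | d
print(hex(physical))                 # frame 2 -> 0x2a34
```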
Segmentation
In Operating Systems, Segmentation is a memory management technique in which the memory is
divided into the variable size parts. Each part is known as a segment which can be allocated to a
process.
The details about each segment are stored in a table called a segment table.
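A segment-table lookup can be sketched as follows; each entry holds a base and a limit, and an offset beyond the limit is an addressing error (the base and limit values are hypothetical):

```python
# Segment-table translation sketch (hypothetical base/limit values).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}  # seg: (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # offset beyond the segment's limit: illegal reference
        raise MemoryError("segmentation fault: offset outside segment")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
```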
Virtual Memory
Virtual memory is a storage allocation scheme in which secondary memory can be addressed as
though it were part of the main memory. The addresses a program may use to reference memory
are distinguished from the addresses the memory system uses to identify physical storage sites,
and program-generated addresses are translated automatically to the corresponding machine
addresses. The size of virtual storage is limited by the addressing scheme of the computer system
and by the amount of secondary memory available, not by the actual number of main-storage
locations.
Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is known
as demand paging.
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory,
it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed
the OS must bring the required page into the memory.
3. The OS will search for the required page in secondary storage (the backing store).
4. The required page will be brought from secondary storage into the physical address
space. If no free frame is available, a page replacement algorithm is used to decide
which page in the physical address space to replace.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place
the process back into the ready state.
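The steps above can be sketched with a toy simulation: a page is loaded only when it is first referenced (a page fault). FIFO replacement and a three-frame memory are assumptions for this sketch, not part of the text above:

```python
from collections import deque

NUM_FRAMES = 3
frames = deque()          # pages currently in memory, oldest first
faults = 0

def access(page):
    global faults
    if page in frames:
        return                          # page hit: no I/O needed
    faults += 1                         # page fault: bring the page in
    if len(frames) == NUM_FRAMES:
        frames.popleft()                # replace the oldest page (FIFO)
    frames.append(page)                 # update the resident set

for page in [1, 2, 3, 1, 4, 2]:
    access(page)

print("page faults:", faults)           # 4 faults; frames end as [2, 3, 4]
```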
UNIT-3
I/O Device Management
I/O Device and Controllers
Device Controllers: A device controller is a hardware unit which is attached to the
input/output bus of the computer and provides a hardware interface between the computer and the
input/output devices. On one side it knows how to communicate with the input/output devices,
and on the other side it knows how to communicate with the computer system through the
input/output bus. A device controller can usually control several input/output devices.
DMA Direct memory access (DMA) is a method that allows an input/output (I/O) device to send
or receive data directly to or from the main memory, bypassing the CPU to speed up memory
operations.
The process is managed by a chip known as a DMA controller (DMAC).
Memory-mapped Input/Output: Each controller has a few registers that are used for
communicating with the CPU. By writing into these registers, the operating system can command
the device to deliver data, accept data, switch itself on or off, or otherwise perform some action.
Port-mapped I/O: each control register is assigned an I/O port number, an 8- or 16-bit integer.
Using a special I/O instruction such as IN REG,PORT, the CPU can read control register PORT
and store the result in CPU register REG. Similarly, using OUT PORT,REG, the CPU can write
the contents of REG to a control register.
Device Drivers
Device drivers are essential for a computer system to work properly: without its device
driver, a particular piece of hardware fails to perform the function for which it was designed.
Most people commonly call it simply a driver; when someone says hardware driver, that also
refers to a device driver.
Disk Structure
The disk is divided into tracks. Each track is further divided into sectors. The point to be noted
here is that the outer tracks are bigger in size than the inner tracks, but they contain the same
number of sectors and have equal storage capacity. This is because the storage density is high in
the sectors of the inner tracks, whereas the bits are sparsely arranged in the sectors of the outer
tracks. Some space in every sector is used for formatting, so the actual capacity of a sector is
less than its stated capacity.
The Read-Write (R-W) head moves over the rotating hard disk. It is this head that performs all
read and write operations on the disk, and hence the position of the R-W head is a major
concern. To perform a read or write operation on a memory location, we need to place the
R-W head over that position. Some important terms must be noted here:
1. Seek time - the time taken by the R-W head to reach the desired track from its
current position.
2. Rotational latency - the time taken by the sector to come under the R-W head.
3. Data transfer time - the time taken to transfer the required amount of data. It depends
upon the rotational speed.
4. Controller time - the processing time taken by the controller.
5. Average access time - seek time + average rotational latency + data transfer time +
controller time.
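Putting the terms together, the access-time calculation can be sketched as follows (all figures are hypothetical, in milliseconds):

```python
# Average access time = seek + average rotational latency + transfer + controller.
seek_time = 8.0                   # ms to reach the desired track
rotational_latency = 4.17         # ms: about half a rotation at 7200 RPM
data_transfer_time = 0.5          # ms for the requested data
controller_time = 0.1             # ms of controller processing

average_access_time = (seek_time + rotational_latency
                       + data_transfer_time + controller_time)
print(f"{average_access_time:.2f} ms")   # 12.77 ms
```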
File management
A file management system is used for file maintenance (or management) operations. It is a type
of software that manages data files in a computer system.
A file management system has limited capabilities and is designed to manage individual or group
files, such as special office documents and records.
Key concepts:
1. File Attributes
They specify the characteristics of a file, such as type, date of last modification, size,
location on disk, etc. File attributes help the user to understand the value and location
of files. File attributes are one of the most important features, used to describe all the
information regarding a particular file.
2. File Operations
It specifies the task that can be performed on a file such as opening and closing of
file.
3. File Access permission
It specifies the access permissions related to a file such as read and write.
4. File Systems
It specifies the logical method of file storage in a computer system. Some of the
commonly used files systems include FAT and NTFS.
5. Creating a file. Two steps are necessary to create a file.
1. Space in the file system must be found for the file.
2. An entry for the new file must be made in the directory.
6. Writing a file. To write a file, we make a system call specifying both the name of the
file and the information to be written to the file. The system must keep a write
pointer to the location in the file where the next write is to take place. The write
pointer must be updated whenever a write occurs.
7. Reading a file. To read from a file, we use a system call that specifies the name of
the file and where (in memory) the next block of the file should be put. The system
needs to keep a read pointer to the location in the file where the next read is to take
place.
8. Repositioning within a file. The directory is searched for the appropriate entry, and
the current-file-position pointer is repositioned to a given value. Repositioning within
a file need not involve any actual I/O. This file operation is also known as a file seek.
9. Deleting a file. To delete a file, we search the directory for the named file. Having
found the associated directory entry, we release all file space, so that it can be reused
by other files, and erase the directory entry.
10. Truncating a file. The user may want to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it, this
function allows all attributes to remain unchanged (except for file length) but lets the
file be reset to length zero and its file space released.
File Access Methods in Operating System
There are three ways to access a file into a computer system: Sequential-Access, Direct Access,
Index sequential Method.
1. Sequential Access –
It is the simplest access method. Information in the file is processed in order, one
record after the other. This mode of access is by far the most common; for example,
editors and compilers usually access files in this fashion.
2. Direct Access –
Another method is the direct access method, also known as the relative access method. A
fixed-length logical record allows the program to read and write records rapidly in no
particular order. Direct access is based on the disk model of a file, since a disk allows
random access to any file block. For direct access, the file is viewed as a numbered
sequence of blocks or records. Thus, we may read block 14, then block 59, and then
write block 17. There is no restriction on the order of reading and writing for a direct
access file.
3. Index sequential method –
It is the other method of accessing a file, built on top of the sequential access method.
This method constructs an index for the file. The index, like an index in the back of a
book, contains pointers to the various blocks. To find a record in the file, we first
search the index and then use the pointer to access the file directly.
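The index idea can be sketched with an in-memory toy: a small index maps the first key of each block to that block's number, so a lookup is one index search plus one block read (all block contents and keys here are hypothetical):

```python
# Indexed-sequential sketch: records are stored in sorted blocks, and the
# index holds the first key of each block (assumes the sought key is not
# smaller than the very first key).
blocks = {0: ["alice", "bob"], 1: ["carol", "dave"], 2: ["erin", "frank"]}
index = {"alice": 0, "carol": 1, "erin": 2}   # first key in each block

def find(record):
    # pick the block whose first key is the largest one <= record
    candidates = [k for k in index if k <= record]
    block_no = index[max(candidates)]
    return record in blocks[block_no]

print(find("dave"))    # True: the index points at block 1
print(find("zoe"))     # False: block 2 is searched, record absent
```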
File Protection
Users want to protect the information stored in the file system from improper access and
physical damage. To protect this information, one can make duplicate copies of files; some
systems automatically copy files so that the user does not lose important information if the
original files are accidentally destroyed.
Access Control
There are numerous ways to access any file, one of the prominent ones is to associate identity-
dependent access with all files and directories. A list is created called the access-control list
which enlists the names of users and the type of access granted to them.
To condense the length of the access-control list, many systems recognize three
classifications of users: owner, group, and universe. Another approach is to associate a
password with each file, but this has drawbacks:
1. If one password is used for all the files, then once the password becomes known
to other users, all the files are accessible.
2. It can be difficult to remember a large number of lengthy passwords.
UNIT-4
Introduction to Distributed Operating System
A distributed operating system is one of the important types of operating system.
Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users.
Characteristics
With the resource sharing facility, a user at one site may be able to use the resources
available at another.
Sites can speed up the exchange of data with one another via electronic mail.
Failure of one site in a distributed system doesn't affect the others; the remaining sites
can potentially continue operating.
Better service to the customers.
Reduction of the load on the host computer.
Reduction of delays in data processing.
Architecture of Distributed Operating System
A distributed operating system runs on a number of independent sites that are connected
through a communication network, yet to its users it feels like a single virtual machine,
even though each site runs its own operating system.
The figure below gives the architecture of a distributed system. It shows workstations,
terminals and different servers connected to a communication network and sharing
services. Each computer node has its own memory. Real-life examples of distributed
systems are the Internet, intranets, mobile computing, etc.
In a multiprocessing system, either the symmetric or the asymmetric model is used. In
symmetric multiprocessing, each processor runs the same copy of the operating system, and
these copies communicate with one another. In asymmetric multiprocessing, each processor
is assigned a specific task, and a master processor controls the system; this scheme is
referred to as a master-slave relationship. Multiprocessor systems are economically
beneficial compared to single-processor systems because the processors can share
peripherals, power supplies, and other devices.
Real-Time Operating System
A real-time operating system (RTOS) is an operating system with two key features:
predictability and determinism. In an RTOS, repeated tasks are performed within a tight time
boundary, while in a general-purpose operating system, this is not necessarily so. Predictability
and determinism, in this case, go hand in hand: We know how long a task will take, and that it
will always produce the same result.
RTOSes are subdivided into “soft” real-time and “hard” real- time systems. Soft real-time
systems operate within a few hundred milliseconds, at the scale of a human reaction.
Hard real-time systems, however, provide responses that are predictable within tens of
milliseconds or less.
Characteristics of Real-time System:
Following are the some of the characteristics of Real-time System:
1. Time Constraints:
Time constraints in real-time systems refer to the time interval allotted for the
response of the ongoing program. This deadline means that the task must be
completed within that time interval. The real-time system is responsible for
completing all tasks within their time intervals.
2. Correctness:
Correctness is one of the prominent parts of real-time systems. A real-time system
must produce the correct result within the given time interval; even a correct result
delivered after the deadline is not considered correct. In real-time systems,
correctness means obtaining the correct result within the time constraint.
3. Embedded:
Nearly all real-time systems are embedded nowadays. An embedded system is a
combination of hardware and software designed for a specific purpose. Real-time
systems collect data from the environment and pass it to other components of the
system for processing.
4. Safety:
Safety is necessary for any system, but real-time systems provide critical safety.
Real-time systems can also run for a long time without failures, and they recover
quickly when a failure occurs in the system, without causing any harm to data and
information.
5. Concurrency:
Real-time systems are concurrent, meaning they can respond to several processes
at a time. Several different tasks go on within the system, and it responds to every
task in short intervals. This makes real-time systems concurrent systems.
6. Distributed:
In various real-time systems, the components of the system are connected in a
distributed way, with different components at different geographical locations.
Thus all the operations of real-time systems are carried out in a distributed manner.
7. Stability:
Even when the load is very heavy, real-time systems respond within the time
constraint, i.e. they do not delay the results of tasks even when several tasks are
going on at the same time.
In an RTOS, the application is decomposed into small, schedulable, sequential program
units known as tasks. A task is the basic unit of execution and is governed by three
time-critical properties: release time, deadline and execution time. Release time refers to
the point in time from which the task can be executed. Deadline is the point in time by
which the task must complete. Execution time denotes the time the task takes to execute.
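The three task properties can be illustrated with a minimal feasibility check: a task can meet its deadline only if release time plus execution time does not exceed the deadline (task names and values below are hypothetical):

```python
# Minimal sketch of the three time-critical task properties.
tasks = [  # (name, release, execution, deadline) - hypothetical values
    ("sensor_read", 0, 2, 5),
    ("control_loop", 1, 4, 10),
    ("logger", 2, 6, 7),
]

for name, release, execution, deadline in tasks:
    # a task is feasible only if it can finish by its deadline
    feasible = release + execution <= deadline
    print(f"{name}: {'feasible' if feasible else 'cannot meet deadline'}")
```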
Scheduling
The scheduler keeps a record of the state of each task, selects from among the tasks that
are ready to execute, and allocates the CPU to one of them. Various scheduling algorithms
are used in an RTOS.
Non-pre-emptive scheduling (cooperative multitasking): the highest-priority task executes
for some time, then relinquishes control and re-enters the ready state.