
OPERATING SYSTEM CONCEPTS

UNIT 1 Introduction to OS & Process management


1 Definition of Operating System
 An operating system is a program that acts as an intermediary between a
user of a computer and the computer hardware
 An operating system is software that manages the computer hardware, as
well as providing an environment for application programs to run
 An operating system is the interface between the computer system and the user; it is a program that manages the computer hardware
2 Need of Operating System
i) OS is a resource allocator
 It manages all resources such as CPU time, memory space, file-storage
space, I/O devices, and so on
 When there are conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users
ii) OS is a control program
 It controls the execution of user programs to prevent errors and improper
use of the computer
 It is especially concerned with the operation and control of I/O devices
 The purpose of an operating system is to provide an environment in
which a user can execute programs in a convenient and efficient manner
iii) OS as an interface between the user and the computer
 An OS provides a very easy way to interact with the computer
 It provides different features and GUI so that we can easily work on a
computer
 We can interact simply by clicking the mouse or typing on the keyboard. Thus, we can say that an OS makes working with the computer easy and efficient
iv) Managing the input/output devices
 The OS helps to operate the different input/output devices
 The OS decides which program or process can use which device
 Moreover, it decides the time for usage
 In addition to this, it controls the allocation and deallocation of devices
v) Multitasking
 The OS helps to run more than one application at a time on the computer
 It plays an important role in the multitasking, since it manages memory
and other devices during multitasking
 Therefore, it provides smooth multitasking on the system
vi) Platform for other application software
 Users require different application programs to perform specific tasks on
the system
 The OS manages and controls these applications so that they can work
efficiently
 i.e. it acts as an interface between the user and the applications
vii) Booting
 Booting is basically the process of starting the computer
 When the computer is first switched on, main memory is empty
 So, to start the computer, we load the operating system into the main memory
 Therefore, loading the OS into main memory to start the computer is called booting
 Hence, the OS helps to start the computer when the power is switched on
viii) Some other needs for an OS are:
a. Manages the memory
 It helps in managing the main memory of the computer. Moreover, it
allocates and deallocates memory to all the applications/tasks
b. Manages the system files
 It helps to manage files on the system
 As we know, all the data on the system is in the form of files
 It makes interaction with the files easy
c. Provides Security
 It keeps the system and applications safe through authorization
 Thus, the OS provides security to the system
d. Acts as an Interface
 It is an interface between computer hardware and software. Moreover, it
is an interface between the user and the computer
3 Early systems
1. Batch systems

 This type of operating system does not interact with the computer directly
 There is an operator who takes similar jobs having the same requirements and groups them into batches
 It is the responsibility of the operator to sort jobs with similar needs
 Each user prepares his job on an off-line device like punch cards and
submits it to the computer operator
 To speed up processing, jobs with similar needs are batched together and
run as a group
 The programmers leave their programs with the operator and the operator
then sorts the programs with similar requirements into batches
Advantages
 The processor of a batch system knows how long a job will take while it is in the queue
 Multiple users can share the batch systems
 The idle time for the batch system is very low
 It is easy to manage large work repeatedly in batch systems
Disadvantages
 Lack of interaction between the user and the job
 It is very difficult to know the time required for any job to complete
 CPU is often idle, because the speed of the mechanical I/O devices is
slower than the CPU
 Difficult to provide the desired priority
 The computer operators must be familiar with batch systems
 Batch systems are hard to debug
 It is sometimes costly
 The other jobs will have to wait for an unknown time if any job fails
2. Multiprogramming systems

 To overcome the problem of underutilization of the CPU and main memory, multiprogramming was introduced
 Multiprogramming is the interleaved execution of multiple jobs by the same computer
 Sharing the processor, when two or more programs reside in memory at the same time, is referred to as multiprogramming
 In a multiprogramming system there are one or more programs loaded in
main memory which are ready to execute
 The main idea of multiprogramming is to maximize the use of CPU time
 In multi-programming system, when one program is waiting for I/O
transfer; there is another program ready to utilize the CPU. So it is
possible for several jobs to share the time of the CPU
 A simple multiprogramming scenario is shown in the figure
 As shown in the figure, at a particular moment job 'A' is not using the CPU because it is busy with I/O operations, so the CPU executes job 'B'. Another job, C, is waiting for its turn on the CPU. In this way the CPU is never idle and its time is utilized to the maximum
 There are mainly two types of multiprogramming operating systems.
These are as follows:
a. Multitasking Operating System
b. Multiuser Operating System
 A multitasking operating system enables the execution of two or more
programs at the same time
A multiuser operating system allows many users to share processing time
on a powerful central computer from different terminals
Advantages
1. It provides a shorter response time.
2. It may help to run various jobs in a single application simultaneously.
3. It helps to optimize the total job throughput of the computer.
4. Various users may use the multiprogramming system at once.
5. Short-time jobs are done quickly in comparison to long-time jobs.
6. It may help to improve turnaround time for short-time tasks.
7. It helps in improving CPU utilization and CPU never gets idle.
8. The resources are utilized smartly.
Disadvantages
1. It is highly complicated and sophisticated.
2. The CPU scheduling is required.
3. Memory management is needed in the operating system because all types
of tasks are stored in the main memory.
4. Handling all the processes and tasks is harder.
5. If it has a large number of jobs, then long-term jobs will require a long
wait
3. Time Sharing systems

 Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time
 Time-sharing or multitasking is a logical extension of multiprogramming
 Processor's time which is shared among multiple users simultaneously is
termed as time-sharing
 A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a shared computer at once
 The time-sharing system provides direct access to a large number of users, where CPU time is divided among all the users on a scheduled basis
 The OS allocates a slice of time (a time quantum) to each user. When this time expires, it passes control to the next user on the system
 The time allowed is extremely small, so the users are given the impression that each of them is the sole owner of the CPU
 The short period of time during which a user gets the attention of the CPU is known as a time slice or a quantum
 The concept of time sharing system is shown in figure
 In above figure the user 5 is active but user 1, user 2, user 3, and user 4
are in waiting state whereas user 6 is in ready status
 As soon as the time slice of user 5 is completed, the control moves on to
the next ready user i.e. user 6
 In this state user 2, user 3, user 4, and user 5 are in the waiting state and user 1 is in the ready state. The process continues in the same way and so on
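As a rough illustration of how a fixed time quantum is rotated among users, the following C sketch simply hands a 100 ms slice to six users in turn; the quantum length, user count, and number of slices are arbitrary values chosen for illustration, not figures from the text.

    #include <stdio.h>

    int main(void)
    {
        const int quantum_ms = 100;              /* the time slice given to each user */
        const char *users[] = { "user 1", "user 2", "user 3",
                                "user 4", "user 5", "user 6" };
        const int n = sizeof users / sizeof users[0];

        /* the CPU is handed to each user in turn for one quantum */
        for (int slice = 0; slice < 12; slice++) {
            printf("t = %4d ms: %s has the CPU\n",
                   slice * quantum_ms, users[slice % n]);
        }
        return 0;
    }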
Advantages
1. It provides the advantage of quick response
2. This type of operating system avoids duplication of software.
3. It reduces CPU idle time
Disadvantages
1. Time sharing has problem of reliability
2. Question of security and integrity of user programs and data can be raised
3. Problem of data communication occurs
4. Distributed systems

Figure. A distributed system


 A distributed system is a collection of physically separate, possibly
heterogeneous, computer systems that are networked to provide the
users with access to the various resources that the system maintains
 Access to a shared resource increases computation speed,
functionality, data availability, and reliability
 A distributed system is a collection of processors that do not share
memory or a clock. Instead, each processor has its own local memory,
and the processors communicate with one another through
communication lines such as local-area networks or wide-area
networks
 Distributed systems allow users to share resources on geographically
dispersed hosts connected via a computer network
 LANs and WANs are the two basic types of networks. LANs enable
processors distributed over a small geographical area to communicate,
whereas WANs allow processors distributed over a larger area to
communicate. LANs typically are faster than WANs
 The processors communicate with one another through various
communication networks, such as high-speed buses or telephone lines
 The processors in a distributed system vary in size and function. Such
systems may include small handheld or real-time devices, personal
computers, workstations, and large mainframe computer systems
 A distributed operating system runs on a number of independent sites that are connected through a communication network, but users see the collection as a single virtual machine running one operating system
 There are four major reasons for building distributed systems:
resource sharing, computation speedup, reliability, and
communication
 Resource sharing in a distributed system provides mechanisms for sharing files at remote sites, processing information in a distributed database, and printing files at remote sites
 If a particular computation can be partitioned into sub-computations
that can run concurrently, then a distributed system allows us to
distribute the sub-computations among the various sites and thus
provide computation speedup
 If one site fails in a distributed system, the remaining sites can continue operating, giving the system better reliability
 When several sites are connected to one another by a communication
network, users at the various sites have the opportunity to exchange
information
Advantages
 Failure of one system will not affect communication among the others, as all systems are independent of each other.
 Electronic mail increases the data exchange speed.
 Since resources are being shared, computation is very fast and durable.
 Load on host computer reduces
Disadvantages
 It is difficult to provide adequate security in distributed systems
 Messages and data can be lost in the network while moving from one
node to another
5. Special Purpose Systems
1. Real Time systems

Figure. Real-time system


 A real-time system is a computer system that requires not only that
computed results be correct but also that the results be produced
within a specified deadline period
 A real-time system is defined as a data processing system in which the
time interval required to process and respond to inputs is so small that
it controls the environment
 Real-time systems are used when there are rigid time requirements on
the operation of a processor or the flow of data and real-time systems
can be used as a control device in a dedicated application
 Real-time operating systems have well-defined, fixed-time constraints.
Processing must be done within the defined constraints, otherwise the
system will fail
 Many real-time systems are embedded in consumer and industrial
devices
 For example, scientific experiments, medical imaging systems,
industrial control systems, weapon control systems, robots, air traffic
control systems, etc.
 There are two types of real-time operating systems
i. Hard real-time systems
 Hard real-time systems guarantee that critical tasks complete on time
 i.e. they must guarantee that real-time tasks are serviced within their
deadline periods
 In hard real-time systems, secondary storage is limited or missing and
the data is stored in ROM
 In these systems, virtual memory is almost never found
 Example: Air traffic control systems, weapon control systems
ii.Soft real-time systems
 Soft real-time systems are less restrictive; they assign real-time tasks a higher scheduling priority than other tasks
 A critical real-time task gets priority over other tasks and retains the
priority until it completes
 Soft real-time systems have more limited utility than hard real-time systems
 Example: airline reservation systems, multimedia, etc.
 A real-time system changes its state as a function of physical time
 Based on this a real-time system can be decomposed into a set of
subsystems i.e., the controlled object, the real-time computer system
and the human operator. A real-time computer system must react to
stimuli from the controlled object (or the operator) within time
intervals dictated by its environment (as shown in figure1)
Advantages
 Maximum utilization of devices and systems
 The time assigned for shifting between tasks in these systems is very small
 The focus is on running applications, with less importance given to applications waiting in the queue
 Real-Time systems are used in embedded systems
Disadvantages:
 Limited Tasks
 Use of Heavy System Resources
 Complex Algorithms are used
 Needs specific device drivers
 Minimum task switching
2. Handheld Systems
 Handheld computer is a computer device that can be held in the palm of
one's hand
 A handheld system is a computer system that can conveniently be stored
in a pocket
 Handheld systems include Personal Digital Assistants(PDAs), such as
Palm-Pilots or Cellular Telephones with connectivity to a network such
as the Internet, many of which use special-purpose embedded operating
systems
 Most handheld PCs use an operating system specifically designed for
mobile use
 A handheld computer or PDA (Personal Digital Assistant) is a small computer that fits in a shirt pocket and carries out a small number of functions, such as an electronic address book and memo pad
 Because of their small size, most handheld devices have small amounts of
memory, slow processors, and small display screens
 The amount of physical memory in a handheld depends on the device, typically ranging from 1 MB to 1 GB
 The operating system and applications must manage memory efficiently
in handheld devices
 Processors for most handheld devices run at a fraction of the speed of a
processor in a PC
 The operating systems that run on these handhelds are more
sophisticated, with the ability to handle telephony, digital photography,
and other functions
 The most basic handheld systems are designed for personal information
management (PIM) applications, enabling users to keep calendars, task
lists and addresses handy
 Wireless technologies are increasing the ways in which handheld
computers can be used
 Some handheld devices use wireless technology, such as Bluetooth or 802.11, allowing remote access to e-mail and Web browsing
 Cellular telephones with connectivity to the Internet fall into this category
 Handheld PDAs can also connect to a company's Web portal
 A wide variety of applications are available for handheld devices
 Most handheld devices can also be equipped with Wi-Fi, Bluetooth, and GPS capabilities that allow connections to the Internet and other Bluetooth-capable devices, such as an automobile or a microphone headset
 Examples of Handheld PC devices are the NEC MobilePro 900c, HP
320LX, Sharp Telios, HP Jornada 720, IBM WorkPad Z50, and Vadem
Clio
 The handheld computer is less expensive than a normal laptop or desktop
computer
 Handheld devices are lightweight
 One advantage of handheld computers is their portability
 It is easier to move with it. It can be tucked in a briefcase or pocket
 Within the classroom, handheld systems can replace notebooks and allow
students to access course material faster
 They also provide a new and innovative way to educate
 For business people, they can be easily brought onto airplanes and into
meetings
 Because of its small size, a handheld device can sometimes be easily forgotten or stolen
 Furthermore, the smaller something is, the more likely it is to be lost or
misplaced by the owner
 Since handheld computers can be stowed in bags, purses and backpacks,
the probability of them being stolen increases
 Currently, many handheld devices do not use virtual memory techniques
 HP has recently introduced the first handheld computer with a color
display
 Handheld devices mostly use firmware
6. Open Source Operating Systems
 An open-source operating system is an operating system in which the source code is publicly visible and editable
 The term "open source" refers to computer software or applications where
the owners or copyright holders enable the users or third parties to use,
see, and edit the product's source code
 The open-source operating system allows the use of code that is freely
distributed and available to anyone
 Open-source operating systems are those made available in source-code format rather than as compiled binary code
 Linux is the most famous open-source operating system and Microsoft Windows is a well-known example of a closed-source operating system
 An open-source operating system refers to software in which the source code is available to the general public for use
 Open source code is typically created as a collaborative effort in which
programmers improve upon the code and share the changes within the
community
 Open-source code is more secure than closed-source code because many
more eyes are viewing the code
 In Open Source- Open means collaboration is open to all and Source
means source code is freely shared

 The different open source operating system available in the market are:
1. COSMOS
 This is an open source operating system written in programming
language C#
 Full form of COSMOS is C# Open Source Managed Operating
System
2. FreeDOS
 This was a free operating system developed for systems
compatible with IBM PC computers
 FreeDOS provides a complete environment to run software and other embedded systems
 It can be booted from a floppy disk or USB flash drive as required
3. Genode
 Genode is free as well as open source
 It contains a microkernel layer and different user components
 It is one of the few open source operating systems not derived
from a licensed operating system such as Unix
 Genode can be used as an operating system for computers,
tablets etc. as required
4. Ghost OS
 This is a free, open source operating system developed for PCs
 It started as a research project and developed to contain various
advanced features like graphical user interface, C library etc.
 The Ghost operating system features multiprocessing and
multitasking and is based on the Ghost Kernel
5. ITS
 The Incompatible Timesharing System (ITS) was developed by the MIT Artificial Intelligence Laboratory
 It is principally a time-sharing system
 There was a remote login facility which allowed guest users to informally try out the operating system and its features over the ARPANET
6. GNU /Linux
 It is developed by Linus Torvalds in Finland in 1991 as the first
full operating system developed by GNU
 Many different distributions of Linux have evolved from Linus's
original, including RedHat, SUSE, Fedora, Debian, Slackware,
and Ubuntu
7. BSD UNIX
 UNIX was originally developed at AT&T Bell labs, and the
source code made available to computer science students at
many universities, including the University of California at
Berkeley, UCB
 UCB students developed UNIX further, and released their
product as BSD UNIX in both binary and source-code format
8. Solaris
 Solaris is the UNIX operating system for computers from Sun
Microsystems
 Solaris was originally based on BSD UNIX
 Parts of Solaris are now open-source
 It is possible to change the open-source components of Solaris
Advantages of using open source software include our ability to:
1. View source code
2. Change and redistribute source code
3. Buy from different vendors and adopt new platforms
4. Avoid proprietary information formats
5. Allow integration between products
6. Reduce software licensing cost and effort
7. Develop and deploy effectively internationally
Process Management
1. Process concept- meaning of process
 A program in execution is called a process
 A process is the unit of work in a system
 A process (or job) is the fundamental unit of work in an
operating system
 A process is the basic unit of execution in an operating system

Figure. Process in memory (the address space runs from 0 to max; the text section is at the bottom, followed by the data section, the heap growing upward, and the stack growing downward from the top)

 A process in memory is divided into four sections (components), as shown in the figure
 When a program is loaded into memory, it becomes a process
 A process can be divided into four sections: stack, heap, text, and data
 Stack section contains temporary data (such as function
parameters, return addresses, and local variables)
 A data section contains global and static variables, allocated and
initialized prior to executing main.
 A heap is memory that is dynamically allocated during process
run time
 The text section contains the program code; the current activity is represented by the value of the program counter and the contents of the processor's registers
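A minimal C sketch of where each kind of variable lives in these four sections; the variable names are purely illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    int initialized = 42;         /* data section: global, initialized before main runs */
    static int counter;           /* data (BSS) section: static variable                */

    int main(void)                /* the compiled instructions live in the text section */
    {
        int local = 7;            /* stack: local variable, freed when main returns     */
        int *dynamic = malloc(sizeof *dynamic);   /* heap: allocated at run time        */

        *dynamic = local + initialized + counter;
        printf("%d\n", *dynamic);

        free(dynamic);            /* heap memory must be released explicitly            */
        return 0;
    }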
2. Process State/ Process Life Cycle/State transition diagram
of process

Figure. Diagram of process state/Process Life Cycle/State transition diagram of process

 As a process executes, it changes state. The state of a process is defined in part by the current activity of that process
 A process may be in one of the following states at a time:
1. New state
2. Ready state
3. Running state
4. Waiting state
5. Terminated state
1. New state
 The process is being created
 When a process is first created, it occupies the "created" or "new" state
 In this state, the process awaits admission to the "ready" state
 The new state indicates that a process has just been created but has not yet been admitted to the pool of executable processes by the operating system
 Typically, a new process has not yet been loaded into main memory
 When a new process is to be added, the operating system builds
the data structures that are used to manage the process and
allocates space in main memory to the process
2. Ready state
 The process is waiting to be assigned to a processor
 A "ready" process has been loaded into main memory and is
awaiting execution on a CPU
 Processes that are ready for the CPU are kept in a ready queue
 The process is prepared to execute when given the opportunity
3. Running state
 Instructions are currently being executed
 A process moves into the running state when it is chosen for
execution
 A process is running if the process is assigned to a CPU
 The CPU is working on this process's instructions
 Once the process has been assigned to a processor by the OS
scheduler, the process state is set to running and the processor
executes its instructions
 A preemptive scheduler will force a transition from running to
ready
4. Waiting state
 The process is waiting for some event to occur (such as an I/O completion or reception of a signal)
 While waiting, the process may even be swapped out to secondary memory until the event occurs
 Process moves into the waiting state if it needs to wait for a
resource, such as waiting for user input, or waiting for a file to
become available
5. Terminated state
 The process has finished execution
 A process may be terminated either from the "running" state by
completing its execution or by explicitly being killed
 In either of these cases, the process moves to the "terminated"
state
 Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it
waits to be removed from main memory
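The state transitions described above can be summarized in a small C sketch; the enum values and the helper function legal_transition() are illustrative only and not part of any real operating system API.

    #include <stdbool.h>
    #include <stdio.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* returns true if moving from 'from' to 'to' is allowed by the state diagram */
    bool legal_transition(enum proc_state from, enum proc_state to)
    {
        switch (from) {
        case NEW:     return to == READY;                         /* admitted          */
        case READY:   return to == RUNNING;                       /* dispatched        */
        case RUNNING: return to == READY       /* preempted by the scheduler */
                          || to == WAITING     /* waits for I/O or an event  */
                          || to == TERMINATED; /* finishes execution         */
        case WAITING: return to == READY;                         /* event completed   */
        default:      return false;                               /* terminated: final */
        }
    }

    int main(void)
    {
        printf("RUNNING -> WAITING legal? %d\n", legal_transition(RUNNING, WAITING));
        printf("WAITING -> RUNNING legal? %d\n", legal_transition(WAITING, RUNNING));
        return 0;
    }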
3.Process Control Block (PCB)

Figure. Process control block (PCB)


 A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor
 A Process Control Block is a data structure maintained by the
Operating System for every process. The PCB is identified by an
integer process ID (PID)
 Each process in OS is represented by process control block
(PCB) or task control block (TCB)
 A PCB is shown in Figure
 A PCB contains the following information about the process:
1. Pointer
A pointer to the parent process
2. Process State
The state may be new, ready, running, waiting, halted, and so on
3. Process ID / Process number
A unique identification for each process in the operating system
4. Program Counter
The counter indicates the address of the next instruction to be executed for this process
5. CPU registers
The contents of the various CPU registers, which must be saved when the process is interrupted so that it can later continue correctly
6. CPU Scheduling Information
This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters
7. Memory management information
This information may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system
8. Accounting information
This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on
9. I/O status information
This information includes the list of I/O devices allocated to the process, a list of open files, and so on
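A rough C sketch of what a PCB might look like as a data structure; all field names and sizes here are hypothetical and much simpler than a real kernel's PCB (for example, Linux's task_struct).

    #include <stdio.h>

    /* Illustrative process control block; fields mirror the list above. */
    struct pcb {
        struct pcb   *parent;            /* pointer to the parent process              */
        int           state;             /* new, ready, running, waiting, terminated   */
        int           pid;               /* unique process identifier                  */
        unsigned long program_counter;   /* address of the next instruction to execute */
        unsigned long registers[16];     /* saved CPU register contents                */
        int           priority;          /* CPU-scheduling information                 */
        void         *page_table;        /* memory-management information              */
        long          cpu_time_used;     /* accounting information                     */
        int           open_files[16];    /* I/O status: descriptors of open files      */
    };

    int main(void)
    {
        printf("size of this toy PCB: %zu bytes\n", sizeof(struct pcb));
        return 0;
    }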
4.Process scheduling / Queueing-diagram representation of
process scheduling / Representation of Process Scheduling

Figure. Queuing-diagram representation of process scheduling


 The act of determining which process in the ready state should be moved to the running state, that is, deciding which process should run on the CPU next, is known as process scheduling
 The objective of multiprogramming is to maximize CPU utilization and increase throughput
 To meet these objectives, the process scheduler selects a process from a
set of several processes for execution on the CPU
 For a single-processor system, there will never be more than one running
process. If there are more processes, the rest will have to wait until the CPU
is free and can be rescheduled
 A common representation of process scheduling is a queuing diagram as
shown in Figure
 Each rectangular box represents a queue. Two types of queues are
present: the ready queue and a set of device queues
 The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system
 A new process is initially put in the ready queue. It waits there until it is
selected for execution, or is dispatched
 Once the process is allocated the CPU and is executing, one of several
events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue
2. The process could create a new sub process and wait for the sub
process's termination
3. The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue
 In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue
 Process continues this cycle until it terminates, at which time it is
removed from all queues and has its PCB and resources deallocated
5. Scheduling queues /Queues scheduling

Figure. The ready queue and various I/O device queues


 When processes enter the system, they are put into a queue called the job queue
 The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called ready queue
 Ready queue is generally stored as a linked list
 A ready-queue header contains pointers to the first and final PCBs in the
list
 Each PCB includes a pointer field that points to the next PCB in the ready
queue
 The system also includes other queues
 The list of processes waiting for a particular I/O device is called a device
queue
 Each device has its own device queue (as shown in Figure)
 When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request
 Suppose the process makes I/O request to a shared device, such as a disk.
Since there are many processes in the system, the disk may be busy with the
I/O request of some other process. The process therefore may have to wait
for the disk
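Since the ready queue is described above as a linked list of PCBs with pointers to the first and final entries, the following C sketch shows one possible implementation; the structure and function names are illustrative only.

    #include <stdio.h>

    struct pcb {
        int pid;
        struct pcb *next;          /* pointer to the next PCB in the ready queue */
    };

    struct ready_queue {
        struct pcb *head, *tail;   /* pointers to the first and final PCBs       */
    };

    void enqueue(struct ready_queue *q, struct pcb *p)   /* process becomes ready */
    {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    struct pcb *dequeue(struct ready_queue *q)           /* dispatcher picks next */
    {
        struct pcb *p = q->head;
        if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
        return p;
    }

    int main(void)
    {
        struct ready_queue rq = {0};
        struct pcb a = { .pid = 1 }, b = { .pid = 2 };
        enqueue(&rq, &a);
        enqueue(&rq, &b);
        printf("next to run: pid %d\n", dequeue(&rq)->pid);
        return 0;
    }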
6.Schedulers
 A scheduler is a program module that selects a process from the ready queue, among several processes that are ready to execute, and allocates the CPU to the selected process
 There are three types of scheduler namely:
1. Short Term Scheduler / CPU scheduler
2. Long Term Scheduler / Job scheduler
3. Medium Term Scheduler
1. Short Term Scheduler/ CPU scheduler
 Short-term scheduler is also called as CPU scheduler
 The short-term scheduler is a program module that selects a process from the ready queue, among several processes that are ready to execute, and allocates the CPU to the selected process
 The short-term scheduler must select a new process for the CPU more frequently
 Short-term scheduler is invoked very frequently
 The primary aim of this scheduler is to enhance CPU performance and
increase process execution rate
 It selects processes from ready queue
 It allocates CPU to the selected process
 It dispatches the process
 In this scheme, a process may execute for only a few milliseconds (about 10 ms) before waiting for an I/O request
 Often, the short-term scheduler executes at least once every 100 milliseconds
 Because of the short time between executions, the short-term scheduler must be fast
 If it takes 10 milliseconds to decide to execute a process for 100 milliseconds, then 10/(100+10) = about 9% of the CPU time is used simply for scheduling the work
 Short-term schedulers, also known as dispatchers, make the decision of
which process to execute next. Short-term schedulers are faster than long-
term schedulers
2. Long Term Scheduler/ job scheduler
 Long-term scheduler is also called as job scheduler
 Long-term scheduler is a program module that selects a process from the
pool of processes on a mass-storage device (disk) and loads them into
memory for execution
 Long term scheduler runs less frequently
 Long-term scheduler is invoked very infrequently
 Primary aim of the Job Scheduler is to maintain a good degree of
multiprogramming
 It controls the degree of multiprogramming (the number of processes in
the main memory)
 If the degree of multiprogramming is stable, then the average rate of
process creation must be equal to the average departure rate of processes
leaving the system
 Thus, long-term scheduler is invoked only when a process leaves the
system
 Because of the longer interval between the executions, the long-term
scheduler can afford to take more time to decide which process should be
selected for execution
 The long-term scheduler must select a good mix of I/O-bound and CPU-bound processes
 The long-term scheduler is not executed as frequently as the short-term scheduler
 The long-term scheduler may not be present in time-sharing systems such as UNIX and Microsoft Windows
 A diagram that demonstrates scheduling using long-term and short-term
schedulers is as shown above
3. Medium Term Scheduler

 The medium-term scheduler is a scheduler that removes processes from memory and reduces the degree of multiprogramming
 Medium Term Scheduler is also called as swapping scheduler
 Medium-term scheduling involves swapping out a process from main
memory. The process can be swapped in later from the point it stopped
executing. This can also be called as suspending and resuming the process
and is done by the medium-term scheduler
 Swapping is helpful in reducing the degree of multiprogramming
 Swapping is also useful to improve the mix of I/O-bound and CPU-bound processes in memory
 The process is swapped out and is later swapped in by the medium-term scheduler
 During extra load, medium term scheduler picks out big processes from
the ready queue for some time, to allow smaller processes to execute,
thereby reducing the number of processes in the ready queue
 A diagram that demonstrates scheduling using medium-term scheduler is
as shown in above figure
 The key idea behind a medium-term scheduler is that sometimes it can be
advantageous to remove processes from memory and thus reduce the degree
of multiprogramming.
 Later, the process can be reintroduced into memory, and its execution can
be continued where it left off. This scheme is called swapping.
 The process is swapped out, and is later swapped in, by the medium-term
scheduler.
 Swapping may be necessary to improve the process mix or because a
change in memory requirements has overcommitted available memory,
requiring memory to be freed up

Distinguish between short-term, long-term, and medium-term schedulers

1. Role: the short-term scheduler is the CPU scheduler; the long-term scheduler is the job scheduler; the medium-term scheduler is the process-swapping scheduler
2. Speed: the short-term scheduler is the fastest of the three; the long-term scheduler is slower than the short-term scheduler; the medium-term scheduler's speed lies between the two
3. Degree of multiprogramming: the short-term scheduler provides lesser control over it; the long-term scheduler controls it; the medium-term scheduler reduces it
4. Time-sharing systems: the short-term scheduler plays a minimal role; the long-term scheduler is absent; the medium-term scheduler is part of such systems
5. Selection: the short-term scheduler selects processes that are ready to execute; the long-term scheduler selects processes from the pool and loads them into memory for execution; the medium-term scheduler can reintroduce a process into memory so that its execution can be continued
7.Context-switch/ Context-switching

 When CPU switches to another process, the system must save the state of
the old process and load the saved state for the new process. This task is
known as a context-switch/ context-switching
 When the currently executing process P0 is interrupted, the OS saves its state into PCB0 and loads PCB1 of process P1; P0 then becomes idle while P1 executes
 When P1's execution is completed, the OS saves its state into PCB1 and reloads the state from PCB0; P1 becomes idle and P0 resumes execution
 Switching the CPU to another process requires performing a state save of
the current process and a state restore of a different process. This task is
known as a Context-switch /context-switching
 Interrupts cause the OS to change a CPU from its current task and to run
a kernel routine
 When an interrupt occurs, the system needs to save the current context of
the process running on the CPU so that it can restore that context when its
processing is done, essentially suspending the process and then resuming it
 The context is represented in the PCB of the process; it includes the values of the CPU registers, the process state, and memory-management information
 When a context switch occurs, the kernel saves the context of the old
process in its PCB and loads the saved context of the new process scheduled
to run
 Context-switch time is pure overhead, because the system does no useful
work while switching
 Context-switch times are highly dependent on hardware support
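A highly simplified C sketch of the idea of a context switch: save the old process's context into its PCB and restore the new one's. Real kernels perform this in architecture-specific assembly; every name and field below is illustrative.

    #include <stdio.h>

    struct context { unsigned long pc, sp, regs[8]; };   /* simplified CPU context */
    struct pcb     { int pid; struct context ctx; };     /* simplified PCB         */

    /* save the running process's context into its PCB,
     * then load the next process's context from its PCB */
    void context_switch(struct pcb *running, struct pcb *next, struct context *cpu)
    {
        running->ctx = *cpu;      /* state save of the old process    */
        *cpu = next->ctx;         /* state restore of the new process */
    }

    int main(void)
    {
        struct context cpu = { .pc = 0x1000 };
        struct pcb p0 = { .pid = 0 }, p1 = { .pid = 1, .ctx = { .pc = 0x2000 } };

        context_switch(&p0, &p1, &cpu);          /* P0 is interrupted, P1 runs */
        printf("now executing pid %d at pc %#lx\n", p1.pid, cpu.pc);
        return 0;
    }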
8.Operations on Processes
 Operations on Processes are:
1 Process creation
2 Process termination
1 Process creation
 A process may create several new processes, via create() system call,
during the course of execution
 The creating process is called a parent process, and the new processes are
called the children of that process
 Each of these new processes may in turn create other processes, forming a tree of processes
 Each process is given an integer identifier, termed as process identifier, or
PID
 The parent PID (PPID) is also stored for each process
 When a process creates a sub process, that sub process may be able to
obtain its resources directly from the operating system, or it may be
constrained to a subset of the resources of the parent process
 The parent may have to partition its resources among its children, or it
may be able to share some resources (such as memory or files) among
several of its children
 When a process is created, it obtains, in addition to the various physical
and logical resources, initialization data (or input) that may be passed along
from the parent process to the child process
 When a process creates a new process, two possibilities exist in terms of
execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated
 There are also two possibilities in terms of the address space of the new
process:
1 The child process is a duplicate of the parent process (it has the same
program and data as the parent)
2 The child process has a new program loaded into it
2 Process termination
 A process terminates when it finishes executing its final statement and
asks the operating system to delete it by using the exit ( ) system call
 At that point, the process may return a status value (an integer) to its
parent process via the wait( ) system call
 All the resources of the process (including physical and virtual memory, open files, and I/O buffers) are deallocated by the operating system
 Termination occurs under additional circumstances
 A process can cause the termination of another process via an appropriate
system call abort( )
 Usually, only the parent of the process that is to be terminated can invoke
such a system call
 A parent may terminate the execution of one of its children for a variety
of reasons:
1. The child has exceeded its usage of some of the resources that it has been
allocated
2. The task assigned to the child is no longer required
3. The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates
 The parent may wait for its children to terminate before proceeding, or
the parent and children may execute concurrently
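On UNIX-like systems, process creation and termination are typically carried out with the fork(), exec(), wait(), and exit() system calls; a minimal sketch (assuming a POSIX environment) is shown below. It illustrates both possibilities discussed above: the child starts as a duplicate of the parent and then has a new program (here ls, an arbitrary choice) loaded into it, while the parent waits for the child to terminate.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                  /* create a child process             */

        if (pid < 0) {                       /* fork failed                        */
            perror("fork");
            exit(1);
        } else if (pid == 0) {               /* child: a duplicate of the parent   */
            execlp("ls", "ls", "-l", (char *)NULL);  /* load a new program         */
            perror("execlp");                /* reached only if exec fails         */
            exit(1);
        } else {                             /* parent: wait for the child         */
            int status;
            wait(&status);                   /* child's exit status via wait()     */
            printf("child %d terminated\n", (int)pid);
        }
        return 0;
    }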
9.Interprocess communication (IPC)

Figure. Communications models. (a) Message passing. (b) Shared memory


 Cooperating processes require an interprocess communication (IPC)
mechanism that will allow them to exchange data and information
 There are two fundamental communication models of interprocess
communication (IPC):
1. Shared memory
2. Message passing
 In the shared-memory model, a region of memory that is shared by
cooperating processes is established
 Processes can then exchange information by reading and writing data to
the shared region
 In the message passing model, communication takes place by means of
messages exchanged between the cooperating processes
 The two communications models are compared in Figure
 Message passing is useful for exchanging smaller amounts of data,
because no conflicts need be avoided
 Shared memory allows maximum speed and convenience of
communication
 Shared memory is faster than Message passing
 In shared memory systems, system calls are required only to establish
shared- memory regions
1.Shared memory systems
 Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory
 Typically, a shared-memory region resides in the address space of the
process creating the shared memory segment
 Other processes that wish to communicate using this shared memory
segment must attach it to their address space
 Normally, the OS tries to prevent one process from accessing another
process's memory
 Shared memory requires that two or more processes agree to remove this
restriction
 They can then exchange information by reading and writing data in the
shared areas
 The processes are also responsible for ensuring that they are not writing
to the same location simultaneously
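A minimal sketch of the producer side of shared-memory IPC using the POSIX shared-memory API (shm_open() and mmap()); the segment name "/osdemo" and its size are arbitrary choices for illustration, and error checking is omitted for brevity. On some systems this must be linked with -lrt.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "/osdemo";
        const size_t size = 4096;

        int fd = shm_open(name, O_CREAT | O_RDWR, 0666);  /* create the shared region */
        ftruncate(fd, size);                               /* set its size             */

        /* attach the region to this process's address space */
        char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(ptr, "hello from the producer");            /* write into shared memory */

        /* a consumer process would shm_open(name, O_RDWR, 0666), mmap it, and read
         * the same bytes; both processes must coordinate their accesses themselves */
        munmap(ptr, size);
        close(fd);
        return 0;
    }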
2.Message passing systems
 Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same
address space and is particularly useful in a distributed environment, where
the communicating processes may reside on different computers connected
by a network
 For example, a chat program used on the World Wide Web could be
designed so that chat participants communicate with one another by
exchanging messages
 A message-passing facility provides at least two operations: send( ) and
receive( )
 Messages sent by a process can be of either fixed or variable size
 If processes P and Q want to communicate, they must send messages to
and receive messages from each other; a communication link must exist
between them
 Methods for logically implementing a link and the send/receive
operations:
 Direct or indirect communication
 Symmetric or asymmetric communication
 Automatic or explicit buffering
 Send by copy or send by reference
 Fixed-sized or variable-sized messages
i.Naming
 Processes that want to communicate must have a way to refer to
each other. They can use either direct or indirect communication
ii.Direct Communication
 With direct communication, each process that wants to
communicate must explicitly name the recipient or sender of the
communication
 In this scheme, the send and receive primitives are defined as:
send(P, message) - Send a message to process P
receive(Q, message) - Receive a message from process Q
iii.Indirect Communication
 With indirect communication, the messages are sent to and
received from mailboxes, or ports
 In this scheme, a process can communicate with some other
process via a number of different mailboxes
 Two processes can communicate only if they share a mailbox
 The send and receive primitives are defined as follows:
send(A, message) - Send a message to mailbox A
receive(A, message) - Receive a message from mailbox A
iv.Synchronization
 Communication between processes takes place by calls to send()
and receive() primitives
 Message passing may be either blocking or non-blocking, also
known as synchronous and asynchronous
 Blocking send: The sending process is blocked until the
message is received by the receiving process or by the mailbox.
 Non-blocking send: The sending process sends the message and
resumes operation
 Blocking receive: The receiver blocks until a message is
available.
 Non-blocking receive: The receiver retrieves either a valid
message or a null
v.Buffering
 Whether the communication is direct or indirect, messages
exchanged by communicating processes reside in a temporary
queue, these queues can be implemented in three ways:
1. Zero capacity: The queue has maximum length 0. Thus, the
link cannot have any messages waiting in it
2. Bounded capacity: The queue has finite length 'n'. Thus, at
most 'n' messages can reside in it
3. Unbounded capacity: The queue has potentially infinite length.
Thus, any number of messages can wait in it. The sender never
blocks
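Message passing can be realized in many ways (mailboxes, sockets, POSIX message queues); as one concrete, minimal sketch, the following POSIX program uses an ordinary pipe between a parent and its child, where write() acts as a blocking send and read() as a blocking receive.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                       /* fd[0] = read end, fd[1] = write end  */

        if (fork() == 0) {              /* child: the receiver                  */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* blocking receive */
            buf[n] = '\0';
            printf("received: %s\n", buf);
            close(fd[0]);
        } else {                        /* parent: the sender                   */
            const char *msg = "hello";
            close(fd[0]);
            write(fd[1], msg, strlen(msg));                 /* send the message */
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }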
10.Independent and co-operating processes
 The processes executing in the operating system may be either
independent processes or cooperating processes
Independent processes
 A process is independent if it cannot affect or be affected by the
other processes executing in the system
 Any process that does not share data with any other process is
independent
Co-operating processes
 A process is cooperating if it can affect or be affected by the
other processes executing in the system
 Any process that shares data with other processes is a
cooperating process
 There are several reasons for providing an environment that
allows process cooperation:
1. Information sharing
 Since several users may need the same information (e.g. a shared file), we must provide an environment to allow concurrent access to such information
2.Computation speedup
 If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the
others
3.Modularity
 We may want to construct the system in a modular fashion,
dividing the system functions into separate processes or threads
4. Convenience
 Even an individual user may work on many tasks at the same time. For example, a user may be editing, printing, and compiling in parallel