
Process Concept

A process can be thought of as a program in execution.

• A process is the unit of work in a modern time-sharing system.
• A process generally includes the process stack, which contains temporary data (such as method parameters, return addresses, and local variables), and a data section, which contains global variables.

Difference between program and process
• A program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

Process States:

• As a process executes, it changes state.
• The state of a process is defined in part by the current activity of that process.
• Each process may be in one of the following states:

• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.

Process Control Block, PCB:-

Each process is represented in the operating system by a process control block (PCB), also called a task control block.
It contains many pieces of information associated with a specific process, including these:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated
to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may vary from
process to process.
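As an illustration, the fields above can be pictured as a C struct. The layout below is a rough sketch with illustrative field names, not the definition used by any particular operating system.

    /* Sketch of a PCB; field names are illustrative only. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        enum proc_state state;            /* process state */
        unsigned long   program_counter;  /* address of next instruction */
        unsigned long   registers[32];    /* CPU registers saved on interrupt */
        int             priority;         /* CPU-scheduling information */
        unsigned long   base, limit;      /* memory-management information */
        long            cpu_time_used;    /* accounting information */
        int             open_files[16];   /* I/O status: open file descriptors */
    };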
Process Scheduling:

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU. On a single-processor system, there will never be more than one running process; if there are more processes, the rest must wait until the CPU is free and can be rescheduled.

The system also includes other queues. When a process is allocated the CPU, it executes
for a while and eventually quits, is interrupted, or waits for the occurrence of a particular
event, such as the completion of an I/O request. Suppose the process makes an I/O request
to a shared device, such as a disk. Since there are many processes in the system, the disk
may be busy with the I/O request of some other process. The process therefore may have to
wait for the disk. The list of processes waiting for a particular I/O device is called a device
queue. Each device has its own device queue.

A common representation for a discussion of process scheduling is a queueing diagram, such as that in Figure 3.7. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

A new process is initially put in the ready queue. It waits there until it is selected for
execution, or is dispatched. Once the process is allocated the CPU and is executing, one of
several events could occur:

• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new subprocess and wait for the subprocess's termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

Schedulers:-

A process migrates among the various scheduling queues throughout its lifetime. The
operating system must select, for scheduling purposes, processes from these queues in some
fashion. The selection process is carried out by the appropriate scheduler. Often, in a
batch system, more processes are submitted than can be executed immediately. These
processes are spooled to a mass-storage device (typically a disk), where they are kept for
later execution. The long-term scheduler, or job scheduler, selects processes from this
pool and loads them into memory for execution. The short-term scheduler, or CPU
scheduler, selects from among the processes that are ready to execute and allocates the
CPU to one of them.

The primary distinction between these two schedulers lies in frequency of execution. The
short-term scheduler must select a new process for the CPU frequently. A process may
execute for only a few milliseconds before waiting for an I/O request.

The long-term scheduler executes much less frequently; minutes may separate the creation
of one new process and the next. The long-term scheduler controls the degree of
multiprogramming (the number of processes in memory). The long-term scheduler may
need to be invoked only when a process leaves the system. Because of the longer interval
between executions, the long-term scheduler can afford to take more time to decide which
process should be selected for execution.

It is important that the long-term scheduler make a careful selection. In general, most processes can be described as either I/O bound or CPU bound. An I/O-bound process is
one that spends more of its time doing I/O than it spends doing computations. A CPU-
bound process, in contrast, generates I/O requests infrequently, using more of its time
doing computations. It is important that the long-term scheduler select a good process
mix of I/O-bound and CPU-bound processes. If all processes are I/O bound, the ready queue
will almost always be empty, and the short-term scheduler will have little to do. If all
processes are CPU bound, the I/O waiting queue will almost always be empty, devices will
go unused, and again the system will be unbalanced. The system with the best performance
will thus have a combination of CPU-bound and I/O-bound processes.

On some systems, the long-term scheduler may be absent or minimal. For example, time-sharing systems such as UNIX and Microsoft Windows often have no long-term scheduler but simply put every new process in memory for the short-term scheduler. The stability of these systems depends either on a physical limitation (such as the number of available terminals) or on the self-adjusting nature of human users.

Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling. This medium-term scheduler is diagrammed in Figure 3.8. The key idea behind a medium-term scheduler is that it can sometimes be advantageous to remove processes from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler. Swapping may be necessary to improve the process mix or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.
Figure 3.8. Medium-term scheduler

Context Switch:-

When an interrupt occurs, the system needs to save the current context of the process
currently running on the CPU so that it can restore that context when its processing is
done, essentially suspending the process and then resuming it. The context is represented
in the PCB of the process; it includes the value of the CPU registers, the process state, and
memory-management information. Generically, we perform a state save of the current
state of the CPU, be it in kernel or user mode, and then a state restore to resume operations.
Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB
and loads the saved context of the new process scheduled to run.
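The sequence can be sketched in C as follows. The helper names (save_cpu_state, restore_cpu_state) are hypothetical placeholders; a real kernel performs the register save and restore in architecture-specific assembly, not portable C.

    /* Conceptual sketch of a context switch; helper names are hypothetical. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        enum proc_state state;
        unsigned long   program_counter;
        unsigned long   registers[32];   /* CPU register snapshot */
        /* memory-management info, accounting info, I/O status, ... */
    };

    /* Placeholders: real implementations copy the hardware registers. */
    static void save_cpu_state(struct pcb *p)    { (void)p; /* arch-specific */ }
    static void restore_cpu_state(struct pcb *p) { (void)p; /* arch-specific */ }

    /* Save the old process's context into its PCB, then load the new one. */
    void context_switch(struct pcb *old, struct pcb *new_p) {
        save_cpu_state(old);        /* state save: registers and PC into PCB */
        old->state = READY;         /* or WAITING, depending on why we switched */
        restore_cpu_state(new_p);   /* state restore from the new PCB */
        new_p->state = RUNNING;     /* new process resumes where it left off */
    }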
Comparison between Schedulers:-

• Long-term (job) scheduler: selects processes from the job pool and loads them into memory; executes infrequently (minutes may separate invocations); controls the degree of multiprogramming.
• Short-term (CPU) scheduler: selects from the processes that are ready to execute and allocates the CPU; executes very frequently (every few milliseconds), so it must be fast.
• Medium-term scheduler: swaps processes out of memory and later back in, to improve the process mix or to free overcommitted memory.

Threads

• A thread is the basic unit of CPU utilization.
• It is sometimes called a lightweight process.
• It consists of a thread ID, a program counter, a register set, and a stack.
• It shares with other threads belonging to the same process its code section, data section, and resources such as open files and signals.
• A traditional or heavyweight process has a single thread of control.
• If a process has multiple threads of control, it can do more than one task at a time.
Benefits of multithreaded programming:

• Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.

• Resource sharing: By default, threads share the memory and the resources of
the process to which they belong. The benefit of sharing code and data is that it
allows an application to have several different threads of activity within the same
address space.

• Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
• Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors.

User thread and Kernel threads:-

User threads:

• Supported above the kernel and implemented by a thread library at the user level.
• Thread creation, management and scheduling are done in user space.
• Fast to create and manage
• When a user thread performs a blocking system call, it will cause the entire
process to block even if other threads are available to run within the application.
• Example: POSIX Pthreads, Mach C-threads and Solaris 2 UI-threads.
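As a minimal illustration of the POSIX Pthreads library mentioned above, the following sketch creates one thread and waits for it to finish:

    #include <pthread.h>
    #include <stdio.h>

    static void *say_hello(void *arg) {
        printf("hello from thread %d\n", *(int *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int id = 1;
        pthread_create(&tid, NULL, say_hello, &id);  /* create the thread */
        pthread_join(tid, NULL);                     /* wait for it to finish */
        return 0;
    }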

Kernel threads:-

• Supported directly by the OS.
• Thread creation, management, and scheduling are done in kernel space.
• Slow to create and manage.
• When a kernel thread performs a blocking system call, the kernel schedules
another thread in the application for execution.
• Example: Windows NT, Windows 2000, Solaris 2, BeOS and Tru64 UNIX support
kernel threads.
Multithreading models:-
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread.

Advantage:-
• Thread management is done by the thread library in user space, so it is efficient;
Disadvantages:-
• The entire process will block if a thread makes a blocking system call.
• Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
Examples:-
• Green threads—a thread library available for Solaris—uses this model, as does
GNU Portable Threads.
One-to-One Model
The one-to-one model maps each user thread to a kernel thread.

Advantage:-
• It provides more concurrency than the many-to-one model by allowing another
thread to run when a thread makes a blocking system call;
• It also allows multiple threads to run in parallel on multiprocessors.
Disadvantages:-
• The only drawback to this model is that creating a user thread requires creating the
corresponding kernel thread. Because the overhead of creating kernel threads can
burden the performance of an application, most implementations of this model
restrict the number of threads supported by the system.
Examples:-
• Linux, along with the family of Windows operating systems (including Windows 95, 98, NT, 2000, and XP), implements the one-to-one model.
Many-to-Many Model:
The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine.

Advantages:

The many-to-many model suffers from neither of the shortcomings above:


• Developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor.
• Also, when a thread performs a blocking system call, the kernel can schedule
another thread for execution.

Examples: - Solaris 2, Windows NT/2000

Threading Issues:

1. fork() and exec() system calls.

• A fork() system call may duplicate all threads or duplicate only the thread that invoked fork().
• If a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process.
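A minimal sketch of the fork()/exec() pattern is shown below; the child's entire image (including all threads) is replaced by the new program:

    #include <unistd.h>
    #include <sys/wait.h>
    #include <stdio.h>

    int main(void) {
        pid_t pid = fork();                          /* duplicate the calling process */
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
            perror("execlp");                        /* reached only if exec fails */
            return 1;
        }
        waitpid(pid, NULL, 0);                       /* parent waits for the child */
        return 0;
    }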
2. Thread cancellation.
It is the task of terminating a thread before it has completed. A thread that is to be cancelled is called a target thread. There are two types of cancellation, namely:
• Asynchronous Cancellation – One thread immediately terminates the target thread.
• Deferred Cancellation – The target thread can periodically check if it should
terminate, and does so in an orderly fashion.
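A minimal sketch of deferred cancellation with POSIX Pthreads might look like this; the target thread polls pthread_testcancel() at points where it is safe to terminate:

    #include <pthread.h>
    #include <unistd.h>

    static void *target(void *unused) {
        (void)unused;
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
        for (;;) {
            /* ... do one unit of work ... */
            pthread_testcancel();   /* cancellation point: exit here if cancelled */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, target, NULL);
        sleep(1);
        pthread_cancel(tid);        /* request cancellation of the target thread */
        pthread_join(tid, NULL);    /* target terminates at its next test point */
        return 0;
    }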

3. Signal handling

• A signal is used to notify a process that a particular event has occurred.
• A generated signal is delivered to the process. In a multithreaded program, the signal may be delivered in one of the following ways:
o Deliver the signal to the thread to which the signal applies.
o Deliver the signal to every thread in the process.
o Deliver the signal to certain threads in the process.
o Assign a specific thread to receive all signals for the process.
• Once delivered, the signal must be handled.
• A signal is handled by either
o a default signal handler, or
o a user-defined signal handler.
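As a minimal sketch, the standard sigaction() interface can install a user-defined handler in place of the default one (here for SIGINT; the handler body is illustrative):

    #include <signal.h>
    #include <unistd.h>

    static void on_sigint(int sig) {
        (void)sig;
        /* only async-signal-safe work here; write() is safe, printf() is not */
        write(1, "caught SIGINT\n", 14);
    }

    int main(void) {
        struct sigaction sa;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sa.sa_handler = on_sigint;       /* user-defined handler */
        sigaction(SIGINT, &sa, NULL);    /* replaces the default handler */
        pause();                         /* wait for a signal to be delivered */
        return 0;
    }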

4. Thread pools
• Creation of unlimited threads exhausts system resources such as CPU time or
memory. Hence we use a thread pool.
• In a thread pool, a number of threads are created at process startup and placed in
the pool.
• When there is a need for a thread the process will pick a thread from the pool and
assign it a task.
• After completion of the task, the thread is returned to the pool.
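A minimal thread-pool sketch using POSIX threads is shown below; the pool size, queue capacity, and function names are illustrative choices, not a production design:

    #include <pthread.h>

    #define POOL_SIZE  4
    #define QUEUE_CAP  16

    typedef void (*task_fn)(int arg);

    static struct { task_fn fn; int arg; } queue[QUEUE_CAP];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    /* Submit a task to the pool (this sketch drops the task if the queue is full). */
    void pool_submit(task_fn fn, int arg) {
        pthread_mutex_lock(&lock);
        if (count < QUEUE_CAP) {
            queue[tail].fn = fn; queue[tail].arg = arg;
            tail = (tail + 1) % QUEUE_CAP; count++;
            pthread_cond_signal(&nonempty);
        }
        pthread_mutex_unlock(&lock);
    }

    /* Each worker repeatedly waits for a task, runs it, then returns to the pool. */
    static void *worker(void *unused) {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&nonempty, &lock);
            task_fn fn = queue[head].fn; int arg = queue[head].arg;
            head = (head + 1) % QUEUE_CAP; count--;
            pthread_mutex_unlock(&lock);
            fn(arg);               /* execute the task outside the lock */
        }
        return NULL;
    }

    /* Create the worker threads once, at process startup. */
    void pool_init(void) {
        for (int i = 0; i < POOL_SIZE; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, worker, NULL);
        }
    }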

5. Thread specific data

• Threads belonging to a process share the data of the process. However, each thread might need its own copy of certain data, known as thread-specific data.
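POSIX exposes thread-specific data through keys; the sketch below gives each thread its own private counter (the function name thread_counter is illustrative). Calling (*thread_counter())++ increments only the calling thread's copy.

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t key;
    static pthread_once_t once = PTHREAD_ONCE_INIT;

    static void make_key(void) { pthread_key_create(&key, free); }

    /* Returns this thread's private counter, creating it on first use. */
    int *thread_counter(void) {
        pthread_once(&once, make_key);          /* create the key exactly once */
        int *p = pthread_getspecific(key);
        if (p == NULL) {
            p = calloc(1, sizeof *p);           /* per-thread storage */
            pthread_setspecific(key, p);
        }
        return p;
    }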
6. Scheduler Activations
• A final issue to be considered with multithreaded programs concerns communication
between the kernel and the thread library, which may be required by the many-to-
many and two-level models.
• Many systems implementing either the many-to-many or two-level model place an intermediate data structure between the user and kernel threads. This data structure is typically known as a lightweight process (LWP).
• Each LWP is attached to a kernel thread, and it is kernel threads that the operating
system schedules to run on physical processors. If a kernel thread blocks (such as
while waiting for an I/O operation to complete), the LWP blocks as well.
• One scheme for communication between the user-thread library and the kernel is known as scheduler activation. It works as follows: the kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor. Furthermore, the kernel must inform the application about certain events. This procedure is known as an upcall. Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual processor.
CPU Scheduling

• CPU scheduling is the basis of multiprogrammed operating systems.
• The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization.
• Scheduling is a fundamental operating-system function.
• Almost all computer resources are scheduled before use.

CPU-I/O Burst Cycle

• Process execution consists of a cycle of CPU execution and I/O wait.
• Processes alternate between these two states.
• Process execution begins with a CPU burst.
• That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on.
• Eventually, the last CPU burst ends with a system request to terminate execution, rather than with another I/O burst.

CPU Scheduler
• Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed.

• The selection process is carried out by the short-term scheduler (or CPU
scheduler).

• The ready queue is not necessarily a first-in, first-out (FIFO) queue. It may be a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
Preemptive Scheduling

• CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
• Under circumstances 1 and 4, the scheduling scheme is non-preemptive.
• Otherwise, the scheduling scheme is preemptive.
Non-preemptive Scheduling

• In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• This scheduling method was used by early Microsoft Windows environments.

Dispatcher

• The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler.
• This function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program
Scheduling Criteria

1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a
lightly loaded system) to 90 percent (for a heavily used system).

2. Throughput: It is the number of processes completed per time unit. For long
processes, this rate may be 1 process per hour; for short transactions, throughput might
be 10 processes per second.

3. Turnaround time: The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.

4. Waiting time: Waiting time is the sum of the periods spent waiting in the ready
queue.

5. Response time: It is the amount of time it takes to start responding, but not the
time that it takes to output that response.

CPU Scheduling Algorithms

1. First-Come, First-Served Scheduling
2. Shortest Job First Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
First-Come, First-Served Scheduling

• The process that requests the CPU first is allocated the CPU first.
• It is a non-preemptive Scheduling technique.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
Example:
Process Burst Time
P1 24
P2 3
P3 3

• If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:
Gantt chart: | P1 (0–24) | P2 (24–27) | P3 (27–30) |

Average waiting time = (0 + 24 + 27) / 3 = 17 ms
Average turnaround time = (24 + 27 + 30) / 3 = 27 ms

• The FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
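A minimal sketch that reproduces the FCFS numbers above: each process's waiting time is the sum of the bursts that precede it.

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};                /* P1, P2, P3 in arrival order */
        int n = 3, wait = 0, total_wait = 0, total_tat = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;                  /* waiting time of process i */
            total_tat  += wait + burst[i];       /* turnaround = wait + burst */
            wait += burst[i];
        }
        printf("avg wait = %.2f ms, avg turnaround = %.2f ms\n",
               (double)total_wait / n, (double)total_tat / n);  /* 17.00, 27.00 */
        return 0;
    }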
Shortest Job First Scheduling

• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have the same length next CPU burst, FCFS scheduling is used to
break the tie.
Example:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms
• Average turnaround time = (3 + 9 + 16 + 24) / 4 = 13 ms
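A minimal sketch of non-preemptive SJF for this example: sort the bursts, then compute waits exactly as in FCFS on the sorted order.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int burst[] = {6, 8, 7, 3};              /* P1..P4, all arriving at time 0 */
        int n = 4, wait = 0, total_wait = 0, total_tat = 0;
        qsort(burst, n, sizeof burst[0], cmp);   /* shortest burst first */
        for (int i = 0; i < n; i++) {
            total_wait += wait;
            total_tat  += wait + burst[i];
            wait += burst[i];
        }
        printf("avg wait = %.2f ms, avg turnaround = %.2f ms\n",
               (double)total_wait / n, (double)total_tat / n);  /* 7.00, 13.00 */
        return 0;
    }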
SJF scheduling can be either preemptive or non-preemptive, as the next example shows.

Example:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive Scheduling

Gantt chart: | P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |

Waiting times:
P1: 10 – 1 = 9
P2: 1 – 1 = 0
P3: 17 – 2 = 15
P4: 5 – 3 = 2
AWT = (9 + 0 + 15 + 2) / 4 = 6.5 ms

• Preemptive SJF is also known as shortest-remaining-time-first scheduling.

Non-preemptive Scheduling

Gantt chart: | P1 (0–8) | P2 (8–12) | P4 (12–17) | P3 (17–26) |

AWT = [0 + (8 – 1) + (12 – 3) + (17 – 2)] / 4 = 7.75 ms

Priority Scheduling

• The SJF algorithm is a special case of the general priority-scheduling algorithm.

• A priority is associated with each process, and the CPU is allocated to the process with the highest priority (here, the smallest integer denotes the highest priority).
Example:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Gantt chart: | P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |

AWT = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

• SJF is a priority-scheduling algorithm in which the priority is the predicted next CPU burst time.
• Priority scheduling can be preemptive or non-preemptive.
• Drawback: Starvation – low-priority processes may never execute.
• Solution: Aging – a technique of gradually increasing the priority of processes that wait in the system for a long time.

Round-Robin Scheduling

• The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems.
• It is similar to FCFS scheduling, but preemption is added to switch between processes.
• A small unit of time, called a time quantum (or time slice), is defined.
• The ready queue is treated as a circular queue.

Example:
Process Burst Time
P1 24
P2 3
P3 3
Time Quantum = 4 ms.

Gantt chart: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–30) |

Waiting time:
P1 = 30 – 24 = 6 (finish time – burst time)
P2 = 4
P3 = 7

• The average waiting time is (6 + 4 + 7) / 3 = 17/3 ≈ 5.66 milliseconds.

• The performance of the RR algorithm depends heavily on the size of the time quantum.
• If the time quantum is very large (effectively infinite), the RR policy is the same as the FCFS policy.
• If the time quantum is very small, the RR approach is called processor sharing and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.
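A minimal sketch that simulates the RR example above with a 4 ms quantum. The circular scan stands in for the circular ready queue, which is adequate here because all processes arrive at time 0.

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};       /* P1, P2, P3, all ready at t = 0 */
        int rem[]   = {24, 3, 3};
        int n = 3, q = 4, t = 0, done = 0;
        int finish[3] = {0};

        while (done < n) {
            for (int i = 0; i < n; i++) {           /* circular scan of the queue */
                if (rem[i] == 0) continue;
                int run = rem[i] < q ? rem[i] : q;  /* run at most one quantum */
                t += run;
                rem[i] -= run;
                if (rem[i] == 0) { finish[i] = t; done++; }
            }
        }
        double total_wait = 0;
        for (int i = 0; i < n; i++)
            total_wait += finish[i] - burst[i];     /* wait = finish - burst (arrival 0) */
        printf("avg wait = %.2f ms\n", total_wait / n);   /* 5.67 */
        return 0;
    }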
Multilevel Queue Scheduling

• It partitions the ready queue into several separate queues.
• The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
• There must be scheduling between the queues, which is commonly implemented as fixed-priority preemptive scheduling.
• For example, the foreground queue may have absolute priority over the background queue.

Example of a multilevel queue scheduling algorithm with five queues

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues.


Multilevel Feedback Queue Scheduling

• It allows a process to move between queues.


• The idea is to separate processes with different CPU-burst characteristics.
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
• This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
• Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.

• This form of aging prevents starvation.

Example:

• Consider a multilevel feedback queue scheduler with three queues, numbered from
0 to 2.
• The scheduler first executes all processes in queue 0.

• Only when queue 0 is empty will it execute processes in queue 1.

• Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.

• A process that arrives for queue 1 will preempt a process in queue 2.

• A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
A multilevel feedback queue scheduler is defined by the following parameters:

1. The number of queues

2. The scheduling algorithm for each queue

3. The method used to determine when to upgrade a process to a higher-priority queue

4. The method used to determine when to demote a process to a lower-priority queue

5. The method used to determine which queue a process will enter when that process
needs service
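The five parameters can be pictured as a configuration struct; the sketch below uses hypothetical names, since each operating system embeds these decisions differently.

    #include <stddef.h>

    struct task;                                 /* opaque process/thread handle */
    typedef int (*pick_next_fn)(void);           /* per-queue scheduling algorithm */
    typedef int (*placement_fn)(struct task *);  /* initial queue for a new task */

    struct mlfq_config {
        int          nqueues;                /* 1. number of queues */
        pick_next_fn algorithm[8];           /* 2. algorithm for each queue */
        int          upgrade_wait_ms;        /* 3. aging: promote after this wait */
        int          demote_after_quanta;    /* 4. demote after using this many quanta */
        placement_fn initial_queue;          /* 5. queue for a newly arriving task */
    };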
