Operating Systems ch-2 Part-1
PROCESS MANAGEMENT
Process Concept:
A process is a program in execution, which forms the basis of all computation. A system consists of a collection of processes in memory:
1. Operating-system processes, which execute system code
2. User processes, which execute user code
All these processes can execute concurrently by multiplexing the CPU among them.
A program is a passive entity, such as a file containing a list of instructions stored on disk.
A process is an active entity, with a program counter (PC) value and a set of associated resources.
A program becomes a process when an executable file is loaded into memory.
A process consists of:
1. Text section: the program code, together with the current activity (program counter and registers).
2. Stack: temporary data such as function parameters, local variables, and return addresses.
3. Data section: global variables.
4. Heap: memory allocated dynamically during process runtime.
[Figure: process layout in memory, top to bottom: stack, heap, data, text.]
Two or more processes may be associated with the same program; e.g., a user may invoke many copies of the web browser program.
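As an illustration, the following C sketch prints one address from each region; the exact addresses and their ordering are system-dependent:

#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                 /* data section (global variable) */

int main(void)
{
    int local_var = 7;               /* stack (local variable) */
    int *heap_var = malloc(sizeof *heap_var);   /* heap (dynamic allocation) */

    printf("text  (code):   %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local):  %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}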
Process States:
As a process executes, it changes state. The state may be new, ready, running, waiting, or terminated.
Process Control Block (PCB):
Each process is represented in the operating system by a process control block, which contains information associated with that process:
[Figure: PCB fields, top to bottom: process state, process number, program counter, registers.]
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be executed for this
process.
CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory-management information: This may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
Accounting information: This includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O status information: This includes the list of I/O devices allocated to the process, a list of open files, and so on.
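As a rough sketch, the PCB can be pictured as a C structure; all field names below are illustrative simplifications (a real kernel's PCB, such as Linux's struct task_struct, holds far more state):

/* Simplified, illustrative PCB; field names are hypothetical. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process number */
    enum proc_state state;           /* new, ready, running, ... */
    unsigned long   program_counter; /* address of next instruction */
    unsigned long   registers[16];   /* saved CPU registers */
    int             priority;        /* CPU-scheduling information */
    unsigned long   base, limit;     /* memory-management information */
    unsigned long   cpu_time_used;   /* accounting information */
    int             open_files[16];  /* I/O status information */
    struct pcb     *next;            /* link for scheduling queues */
};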
Schedulers
A scheduler is a decision maker that moves processes between scheduling queues or allocates the CPU for execution. The operating system has three types of scheduler:
1. Long-term scheduler or Job scheduler
2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler
The long-term scheduler is also called the job scheduler. It determines which programs are admitted to the system for processing, selecting processes from the job queue and loading them into memory for execution. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must equal the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems typically have no long-term scheduler. The long-term scheduler is invoked when a process changes state from new to ready.
Medium-term scheduling is part of swapping. It removes processes from memory, reducing the degree of multiprogramming, and is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Context Switch
A context switch is the mechanism to store and restore the state or context of the CPU in the process control block, so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system. When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process control block, and restores the saved registers of the process receiving the CPU.
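Saving and restoring register state is kernel work, but the (obsolescent) POSIX <ucontext.h> interface exposes a user-level analogue of the same save/restore idea. A minimal sketch, assuming a Linux/glibc system that still provides it:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void)
{
    printf("task: running after context switch\n");
    /* returning resumes uc_link, i.e. main_ctx */
}

int main(void)
{
    char stack[64 * 1024];

    getcontext(&task_ctx);                  /* initialize the context */
    task_ctx.uc_stack.ss_sp   = stack;      /* give it its own stack */
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link          = &main_ctx;  /* resume main when task returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching context\n");
    swapcontext(&main_ctx, &task_ctx);      /* save main's registers, load task's */
    printf("main: resumed from saved context\n");
    return 0;
}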
CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates
the CPU to one of them
CPU-scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
Scheduling only under circumstances 1 and 4 is nonpreemptive; otherwise, the scheduling scheme is preemptive.
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start
another running
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted
until the first response is produced, not output (for time-sharing environment)
Optimization Criteria
Maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.
Scheduling algorithms
There are four major scheduling algorithms, as follows:
First Come First Serve (FCFS) Scheduling
Shortest-Job-First (SJF) Scheduling
Priority Scheduling
Round Robin(RR) Scheduling
First Come First Serve (FCFS) Scheduling
With FCFS, the process that requests the CPU first is allocated the CPU first. As an example, consider three processes that arrive at time 0 in the order P1, P2, P3, with CPU-burst times of 24, 3, and 3 milliseconds respectively:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.
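A minimal C sketch that reproduces the arithmetic of the first ordering above (the burst times 24, 3, 3 are taken from the example):

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};       /* P1, P2, P3 in FCFS (arrival) order */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];           /* later arrivals also wait for this burst */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}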
Shortest-Job-First Scheduling
The SJF algorithm associates with each process the length of its next CPU burst, and allocates the CPU to the process with the smallest next burst. As an example, consider the following four processes, with the length of the CPU-burst time given in milliseconds: P1: 6, P2: 8, P3: 7, P4: 3. Using SJF scheduling, the processes run in the order P4, P1, P3, P2:
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
The waiting times are 3 ms for P1, 16 ms for P2, 9 ms for P3, and 0 ms for P4, so the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
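The same waiting-time calculation works for SJF after sorting the bursts; a minimal sketch using the four bursts above:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int burst[] = {6, 8, 7, 3};     /* CPU bursts of P1..P4 */
    int n = 4, wait = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], cmp);  /* SJF: shortest burst first */
    for (int i = 0; i < n; i++) {
        total_wait += wait;
        wait += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}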
Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. As an example, consider five processes arriving at time 0 with CPU-burst times of 10, 1, 2, 1, and 5 milliseconds and priorities 3, 1, 4, 5, and 2 respectively (here a low number means high priority). They run in the order P2, P5, P1, P3, P4, and the average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.
Priorities can be defined either internally or externally.
Internally defined priorities use some measurable quantity or quantities to compute the priority of a process.
Internally defined priorities use some measurable quantity or quantities to compute
the priority of a process.
For example, time limits, memory requirements, the number of open files, and the
ratio of average I/O burst to average CPU burst have been used in computing
priorities. External priorities are set by criteria that are external to the operating
system, such as the importance of the process, the type and amount of funds being
paid for computer use, the department sponsoring the work, and other, often political,
factors.
Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority-scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority-scheduling algorithm will simply put the new process at the head of the ready queue.
A major problem with priority-scheduling algorithms is indefinite blocking (or starvation). A process that is ready to run but lacks the CPU can be considered blocked, waiting for the CPU. A priority-scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities range from 127 (low) to 0 (high), we could decrement the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would be executed. In fact, it would take no more than 32 hours (127 steps x 15 minutes) for a priority-127 process to age to a priority-0 process.
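A minimal sketch of aging under the numbering above (0 high, 127 low); the function and array names are hypothetical, and the function would be called from a timer every 15 minutes:

#define HIGHEST_PRIORITY 0

/* Raise (numerically lower) the priority of every waiting process by 1. */
void age_waiting_processes(int priority[], int waiting[], int n)
{
    for (int i = 0; i < n; i++)
        if (waiting[i] && priority[i] > HIGHEST_PRIORITY)
            priority[i]--;   /* one step closer to priority 0 */
}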
Round-Robin Scheduling
The round-robin (RR) algorithm is designed especially for time-sharing systems. It is similar to FCFS, but a small unit of time, called a time quantum (or time slice), is defined, and the ready queue is treated as a circular queue. As an example, consider the same three processes P1, P2, P3 with burst times of 24, 3, and 3 milliseconds and a time quantum of 4 milliseconds:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
P1 waits 6 ms (from 4 to 10), P2 waits 4 ms, and P3 waits 7 ms, so the average waiting time is 17/3 = 5.66 milliseconds.
In the RR scheduling algorithm, no process is allocated the CPU for more than one time quantum in a row. If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue. The RR scheduling algorithm is thus preemptive.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time units until its next time quantum. For example, if there are five processes with a time quantum of 20 milliseconds, then each process will get up to 20 milliseconds every 100 milliseconds.
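A small C simulation of the RR example above (quantum 4, bursts 24, 3, 3) that reproduces the 17/3 ms average:

#include <stdio.h>

int main(void)
{
    int rem[]  = {24, 3, 3};    /* remaining bursts for P1, P2, P3 */
    int wait[] = {0, 0, 0};
    int n = 3, q = 4, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q;  /* run one quantum at most */
            for (int j = 0; j < n; j++)         /* others wait meanwhile */
                if (j != i && rem[j] > 0) wait[j] += run;
            rem[i] -= run;
            if (rem[i] == 0) done++;
        }
    }
    printf("waits: P1=%d P2=%d P3=%d, average=%.2f ms\n",
           wait[0], wait[1], wait[2],
           (double)(wait[0] + wait[1] + wait[2]) / n);
    return 0;
}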
The performance of the RR algorithm depends heavily on the size of the
time quantum. At one extreme, if the time quantum is very large (infinite), the
RR policy is the same as the FCFS policy. If the time quantum is very small (say, 1 microsecond), the RR approach is called processor sharing, and appears (in theory) to the users as though each of n processes has its own processor running at 1/n the speed of the real processor. This approach was used in Control Data Corporation (CDC) hardware to implement 10 peripheral processors with only one set of hardware and 10 sets of registers. The hardware executes one instruction for one set of registers, then goes on to the next. This cycle continues, resulting in 10 slow processors rather than one fast processor.
Thread
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set, and a stack. It
shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can do more than one task at a time.
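As a minimal illustration using POSIX threads (pthreads), two threads of the same process update one variable in the shared data section; compile with -lpthread:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                       /* data section, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);        /* shared data needs synchronization */
    shared++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* prints 2: both threads saw it */
    return 0;
}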
Motivation
Many software packages that run on modern desktop PCs are multithreaded. An application typically is implemented as a separate process with several threads of control.
[Figure: single-threaded and multithreaded processes.]
Ex: A web browser might have one thread display images or text while another thread retrieves data from the network. A word processor may have a thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
In certain situations a single application may be required to perform
several similar tasks. For example, a web server accepts client requests for web
pages, images, sound, and so forth. A busy web server may have several (perhaps hundreds of) clients concurrently accessing it. If the web server ran as a
traditional single-threaded process, it would be able to service only one client at a
time.
One solution is to have the server run as a single process that accepts
requests. When the server receives a request, it creates a separate process to
service that request. In fact, this process-creation method was in common use
before threads became popular. Process creation is very heavyweight, as was
shown in the previous chapter. If the new process will perform the same tasks as
the existing process, why incur all that overhead? It is generally more
efficient for one process that contains multiple threads to serve the same purpose. This
approach would multithread the web-server process. The server would create a
separate thread that would listen for client requests; when a request was made,
rather than creating another process, it would create another thread to service the
request.
Threads also play a vital role in remote procedure call (RPC) systems.
RPCs allow inter-process communication by providing a communication
mechanism similar to ordinary function or procedure calls. Typically, RPC
servers are multithreaded. When a server receives a message, it services the
message using a separate thread. This allows the server to service several
concurrent requests.
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness: Multithreading an interactive application may allow a
program to continue running even if part of it is blocked or is performing a
lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
2. Resource sharing: By default, threads share the memory and the resources of
the process to which they belong. The benefit of code sharing is that it allows an
application to have several different threads of activity all within the same
address space.
3. Economy: Allocating memory and resources for process creation is costly.
Alternatively, because threads share resources of the process to which they
belong, it is more economical to create and context switch threads. It can be
difficult to gauge empirically the difference in overhead for creating and
maintaining a process rather than a thread, but in general it is much more time
consuming to create and manage processes than threads. In Solaris 2, creating a
process is about 30 times slower than is creating a thread, and context switching
is about five times slower.
4. Utilization of multiprocessor architectures: The benefits of multithreading
can be greatly increased in a multiprocessor architecture, where each thread may
be running in parallel on a different processor. A single-threaded process can
only run on one CPU, no matter how many are available.
The OS can support threads at the following two levels:
User-Level Threads
User-level threads are implemented in user-level libraries rather than via system calls, so thread switching does not need to call into the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages:
User-level threads do not require modification to the operating system.
Simple representation: Each thread is represented simply by a PC, registers, a stack, and a small control block, all stored in the user process's address space.
Simple management: Creating a thread, switching between threads, and synchronizing between threads can all be done without intervention of the kernel.
Fast and efficient: Thread switching is not much more expensive than a procedure call.
Disadvantages:
There is a lack of coordination between threads and the operating system kernel.
User-level threads require non-blocking system calls, i.e., a multithreaded kernel.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. The operating-system kernel provides system calls to create and manage threads.
Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
The kernel-level threads are slow and inefficient; thread operations are hundreds of times slower than those of user-level threads.
Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result, there is significant overhead and increased kernel complexity.
Multi-threading Models
Many systems provide support for both user and kernel threads, resulting in
different multithreading models. We look at three common types of threading
implementation.
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread. Thread management is done in user space, so it is efficient, but the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
One-to-one Model
The one-to-one model maps each user thread to a kernel thread. It
provides more concurrency than the many-to-one model by allowing another
thread to run when a thread makes a blocking system call; it also allows multiple
threads to run in parallel on multiprocessors. The only drawback to this model is
that creating a user thread requires creating the corresponding kernel thread.
Because the overhead of creating kernel threads can burden the performance of
an application, most implementations of this model restrict the number of threads
supported by the system. Windows NT, Windows 2000, and OS/2 implement the one-to-one model.
Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a
smaller or equal number of kernel threads. The number of kernel threads may be
specific to either a particular application or a particular machine (an application may be allocated more kernel threads on a multiprocessor than on a uniprocessor). Whereas the many-to-one model allows the developer to create as
many user threads as she wishes, true concurrency is not gained because the
kernel can schedule only one thread at a time. The one-to-one model allows for
greater concurrency, but the developer has to be careful not to create too many
threads within an application (and in some instances may be limited in the
number of threads she can create). The many-to-many model suffers from neither of these shortcomings: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Process vs. Thread

   Process                                        Thread
1. Processes cannot share the same memory      1. Threads can share memory and files.
   area (address space).
2. It takes more time to create a process.     2. It takes less time to create a thread.
3. It takes more time to complete execution    3. It takes less time to terminate.
   and terminate.
4. Execution is very slow.                     4. Execution is very fast.
5. It takes more time to switch between two    5. It takes less time to switch between
   processes.                                     two threads.
6. System calls are required to communicate    6. System calls are not required.
   with each other.
7. It requires more resources to execute.      7. It requires fewer resources.
8. Implementing communication between          8. Communication between two threads is
   processes is more difficult.                   easy to implement, because threads
                                                  share memory.
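A small C sketch illustrating row 1 of the table: a child created with fork() updates its own copy of the address space, so the parent never sees the change, while a pthread updates the one shared copy; compile with -lpthread:

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0;

void *thread_fn(void *arg) { counter++; return NULL; }

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {               /* child: separate copy of the address space */
        counter++;
        return 0;
    }
    wait(NULL);
    printf("after fork:   counter = %d\n", counter);  /* still 0 in the parent */

    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);        /* same address space */
    pthread_join(t, NULL);
    printf("after thread: counter = %d\n", counter);  /* now 1 */
    return 0;
}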