OS Chapter 3
Introduction
Process State: The state may be new, ready, running, waiting, halted, and so on.
Program Counter: The counter indicates the address of the next instruction to be
executed for this process.
CPU Registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program
counter, the state information must be saved when an interrupt occurs.
CPU-Scheduling Information: This information includes a process priority, pointers
to scheduling queues, and any other scheduling parameters.
Memory-Management Information: This information may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
Accounting Information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O Status Information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
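The fields above together make up a process's control block (PCB). As an illustrative sketch, with hypothetical field names rather than those of any real kernel, a PCB could be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block holding the fields listed above."""
    pid: int
    state: str = "new"                 # process state: new, ready, running, waiting, ...
    program_counter: int = 0           # address of the next instruction to execute
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                  # CPU-scheduling information
    base: int = 0                      # memory management: base register value
    limit: int = 0                     # memory management: limit register value
    cpu_time_used: float = 0.0         # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

p = PCB(pid=7)
p.state = "ready"                      # e.g. after the process is admitted
```

The point of the sketch is that everything the kernel must know about a process lives in this one record, which is what gets saved and restored on a context switch.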
4. Threads
A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes.
Threads allow multiple streams of execution within one process.
A thread can be in any of several states (Running, Blocked, Ready, or Terminated).
Each thread has its own stack.
A thread consists of a program counter (PC), a register set, and a stack space.
Threads are not independent of one another the way processes are. As a result, a thread shares with its peer threads its code section, data section, and OS resources such as open files and signals; this collection is also known as a task.
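The sharing described above can be seen directly in code. In this small sketch, four threads run on their own stacks but all update one shared variable from the process's data section (the lock only serializes the updates):

```python
import threading

counter = 0                      # data section shared by all threads of the process
lock = threading.Lock()

def worker(increments):
    """Each thread has its own stack, but all update the same shared counter."""
    global counter
    for _ in range(increments):
        with lock:               # serialize access to the shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 4000: all four threads updated the same variable
```

Had these been four separate processes instead of threads, each would have had its own copy of `counter` and the final value in each copy would be 1000.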
Process Scheduling
The act of determining which process in the ready state should be moved to the running state is known as Process Scheduling.
Process Scheduling is an essential part of multiprogramming operating systems.
The prime aim of the process scheduling system is to keep the CPU busy all the time
and to deliver minimum response time for all programs.
A uniprocessor system can have only one running process. If more processes exist, the
rest must wait until the CPU is free and can be rescheduled.
1. Scheduling Queues
The scheduling queues in the system are: (1) Job Queue, (2) Ready Queue, and (3) Device Queue.
When processes enter the system, they are put into a Job Queue. This queue consists of all processes in the system.
The processes that are residing in main memory and are ready and waiting to execute
are kept on a list called the Ready Queue. This queue is generally stored as a linked
list. A ready-queue header contains pointers to the first and final PCBs in the list.
The list of processes waiting for a particular I/O device is called a Device Queue.
A common representation of process scheduling is a Queuing Diagram, shown in
Figure 3.3.
In the figure, each rectangular box represents a queue. The circles represent the
resources that serve the queues. The arrows indicate the flow of processes in the
system.
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of several events could occur:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
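The flow through the queuing diagram can be mimicked with two FIFO queues. This is only a toy simulation of the events listed above, with made-up process names, not a scheduler implementation:

```python
from collections import deque

ready_queue = deque(["P1", "P2"])   # processes ready and waiting for the CPU
io_queue = deque()                  # processes waiting on an I/O device

running = ready_queue.popleft()     # dispatch: P1 gets the CPU
io_queue.append(running)            # P1 issues an I/O request -> I/O queue
running = ready_queue.popleft()     # dispatch the next ready process, P2
ready_queue.append(running)         # an interrupt preempts P2 -> back to ready
finished = io_queue.popleft()       # P1's I/O completes...
ready_queue.append(finished)        # ...and P1 rejoins the ready queue
```

After these events the ready queue holds P2 followed by P1, matching the arrows in the diagram: processes circulate between the CPU, the I/O queues, and the ready queue until they terminate.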
2. Schedulers
A Scheduler is an operating system module that selects the next job to be
admitted into the system and the next process to run.
Schedulers are of three types.
i) Long-Term Scheduler
ii) Short-Term Scheduler
iii) Medium-Term Scheduler
i) Long-Term Scheduler
The Long-Term Scheduler, or Job Scheduler, selects processes from the pool
and loads them into memory for execution.
The long-term scheduler executes much less frequently than the short-term scheduler.
The long-term scheduler controls the degree of Multiprogramming – the number
of processes in memory.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and CPU bound.
An I/O-bound process spends more of its time doing I/O than computation.
A CPU-bound process, on the other hand, generates I/O requests infrequently, using more of its time doing computation than an I/O-bound process does.
On some systems, the long-term scheduler may be absent or minimal. For
example, time-sharing systems such as UNIX often have no long-term scheduler.
The long-term scheduler acts when a process changes state from new to ready.
ii) Short-Term Scheduler
The Short-Term Scheduler, or CPU Scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.
The short-term scheduler must select a new process for the CPU frequently. A
process may execute for only a few milliseconds before waiting for an I/O
request.
The short-term scheduler executes at least once every 100 milliseconds.
The short-term scheduler must be fast.
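The short-term scheduler's job reduces to one fast decision: pick one ready process and dispatch it. As a minimal sketch, assuming a priority-based policy where a lower number means higher priority (the policy and process names are illustrative, not prescribed by the text):

```python
# Each entry is (process name, priority); lower number = higher priority.
ready = [("P1", 3), ("P2", 1), ("P3", 2)]

chosen, _ = min(ready, key=lambda entry: entry[1])   # CPU scheduler's choice
ready = [p for p in ready if p[0] != chosen]         # dispatched off the ready list
```

Here P2 is dispatched and the other two processes remain in the ready list. Real short-term schedulers use many policies (round robin, multilevel queues, and so on), but each invocation performs essentially this select-and-remove step.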
3. Context Switch
Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context
Switch.
The context of a process is represented in the PCB of a process; it includes the value of
the CPU registers, the process state, and the memory-management information.
When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
Context switching can significantly affect performance, since modern computers have many general-purpose and status registers to be saved.
Context-switch times are highly dependent on hardware support.
On hardware that provides multiple register sets, a context switch simply requires changing the pointer to the current register set. Also, the more complex the operating system, the more work must be done during a context switch.
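The save-then-load sequence can be sketched in a few lines. The register names and dictionary representation are purely illustrative stand-ins for real hardware state:

```python
cpu = {"pc": 104, "acc": 7}        # hypothetical register set of the running process

pcb_old = {}                       # PCB of the process losing the CPU
pcb_new = {"pc": 200, "acc": 0}    # saved context of the process scheduled to run

def context_switch(old_pcb, new_pcb):
    old_pcb.update(cpu)            # save the old context into its PCB
    cpu.update(new_pcb)            # load the new process's saved context

context_switch(pcb_old, pcb_new)
```

After the switch, the CPU holds the new process's program counter (200) while the old process's context (pc 104, acc 7) sits safely in its PCB, ready to be reloaded when that process is scheduled again.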
Operations on Processes
The operations on processes carried out by an operating system are primarily of two types:
1. Process Creation
2. Process Termination
1) Process Creation
Process Creation is the task of creating new processes. There are different ways to create a new process.
A new process can be created at the time of operating-system initialization or when another process issues a system call such as fork().
The process that creates a new process using a system call is called the parent process, while the newly created process is called the child process. Child processes can in turn create new processes using system calls.
A new process can also be created by an operating system based on the request
received from the user.
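On POSIX systems, the fork() call mentioned above is exposed in Python as os.fork(), which makes the parent/child relationship easy to demonstrate (this sketch assumes a Unix-like system, since fork() is not available on Windows):

```python
import os

pid = os.fork()                    # the fork() system call creates a child process
if pid == 0:
    # Child: fork() returns 0 here; the child runs a copy of the parent's program.
    os._exit(0)
else:
    # Parent: fork() returns the child's process id, so the parent can wait for it.
    reaped, _ = os.waitpid(pid, 0)
```

One call thus returns twice, once in each process, and the return value tells each process which role it plays.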
2) Process Termination
Process Termination is an operation in which a process is ended after the execution of its last instruction.
When a process is terminated, the resources that were being utilized by the
process are released by the operating system.
When a child process terminates, it sends the status information back to the parent
process before terminating. The child process can also be terminated by the parent
process if the task performed by the child process is no longer needed.
When a parent process terminates, it has to terminate its child processes as well, because a child process cannot run when its parent process has been terminated.
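Both termination paths described above, a child reporting status back and a parent killing a child whose task is no longer needed, can be demonstrated on a Unix-like system (this sketch assumes POSIX signals and will not run on Windows):

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    time.sleep(60)                   # child: would keep running if left alone
    os._exit(0)
else:
    os.kill(pid, signal.SIGTERM)     # parent: the child's task is no longer needed
    _, status = os.waitpid(pid, 0)   # reap the child and collect its status
    was_signaled = os.WIFSIGNALED(status)
```

The status word the parent collects records how the child ended; here WIFSIGNALED reports that the child was terminated by a signal rather than by running to completion, and the OS releases the child's resources once it has been reaped.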
Figure 3.5: Communications models. (a) Message passing. (b) Shared memory.
If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them. This link
can be implemented in a variety of ways.
There are several methods for logically implementing a link and the send/receive
operations:
Direct or indirect communication
Symmetric or asymmetric communication
Automatic or explicit buffering
Send by copy or send by reference
Fixed-sized or variable-sized messages
2. Naming
The various schemes for specifying processes in send and receive primitives are of
two types:
Direct Communication
Indirect Communication
1. Direct Communication
With direct communication, each process that wants to communicate must explicitly
name the recipient or sender of the communication.
The send and receive primitives are defined as:
send(P, message) – Send a message to process P
receive(Q, message) – Receive a message from process Q
A communication link in this scheme has the following properties:
A link is established automatically between every pair of processes that want
to communicate.
A link is associated with exactly two processes.
This scheme exhibits symmetry in addressing; that is, both the sender and the receiver must name each other to communicate.
A variant of this scheme employs asymmetry in addressing: only the sender names the recipient; the recipient is not required to name the sender.
The send and receive primitives in this scheme are defined as follows:
send(P, message) – Send a message to process P.
receive(id, message) – Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.
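The symmetric primitives can be sketched in a few lines. The dictionary of per-pair links and the process names are hypothetical scaffolding for illustration, not an OS API:

```python
import queue

# One link per ordered (sender, recipient) pair of process names.
links = {("Q", "P"): queue.Queue()}

def send(sender, dest, message):
    links[(sender, dest)].put(message)      # send(P, message): sender names recipient

def receive(sender, dest):
    return links[(sender, dest)].get()      # receive(Q, message): receiver names sender

send("Q", "P", "hello")                     # Q sends to P
msg = receive("Q", "P")                     # P receives, explicitly naming Q
```

Note how both operations must name the pair, which is exactly the symmetry in addressing described above; the asymmetric variant would drop the sender name from receive.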
2. Indirect Communication
With indirect communication, messages are sent to and received from mailboxes, or ports.
A mailbox may be owned either by a process or by the operating system.
A process can communicate with some other process via a number of different
mailboxes.
The send and receive primitives are defined as follows:
send(A, message) – Send a message to mailbox A.
receive(A, message) – Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
A link is established between a pair of processes only if both members of the pair have
a shared mailbox.
A link may be associated with more than two processes.
A number of different links may exist between each pair of communicating processes,
with each link corresponding to one mailbox.
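A mailbox shared between two threads of execution gives a minimal working picture of indirect communication. This sketch uses a thread-safe queue as the mailbox; real systems would use OS-level ports, but the send(A, ...)/receive(A, ...) pattern is the same:

```python
import queue
import threading

mailbox_A = queue.Queue()                 # mailbox (port) A, shared via the link

def sender():
    mailbox_A.put("report ready")         # send(A, message)

t = threading.Thread(target=sender)
t.start()
msg = mailbox_A.get()                     # receive(A, message)
t.join()
```

Neither side names the other; both name only the mailbox A, which is why a mailbox link can be shared by more than two processes.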
3. Synchronization
In message passing, synchronization refers to how the send and receive primitives interact with the execution of the communicating processes.
Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous.
Blocking send: The sending process is blocked until the message is received by
the receiving process or by the mailbox.
Nonblocking send: The sending process sends the message and resumes operation.
Blocking receive: The receiver blocks until a message is available.
Nonblocking receive: The receiver retrieves either a valid message or a null.
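The difference between the two receive variants can be shown with Python's queue module, standing in for a mailbox; raising queue.Empty plays the role of returning a null:

```python
import queue

q = queue.Queue()
q.put("m1")

msg = q.get()              # blocking receive: waits until a message is available
try:
    q.get_nowait()         # nonblocking receive: returns immediately...
    received = True
except queue.Empty:
    received = False       # ...with a "null" result when no message is waiting
```

The first call succeeds because a message is already queued; the second finds the queue empty and, rather than blocking, reports that no message was available.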
4. Buffering
The buffer is an area in the main memory that is used to store or hold the
data temporarily. The act of storing data temporarily in the buffer is called
buffering.
Buffering is used in both direct and indirect communication: messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
1. Zero Capacity
2. Bounded Capacity
3. Unbounded Capacity
Zero Capacity: Maximum length of the queue is 0 (zero). Communication
link cannot have any messages waiting in it. In this case, the sender must
block until the recipient receives the message.
Bounded Capacity: The queue has finite length n; thus at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. If the link is full, the sender must block until space is available in the queue.
Unbounded Capacity: The queue has potentially infinite length; thus, any
number of messages can wait in it. The sender never blocks.
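The bounded and unbounded cases can be contrasted with Python's queue module (zero capacity has no direct analogue here, since it requires a rendezvous in which sender and receiver meet). The capacity n = 2 is an arbitrary choice for illustration:

```python
import queue

bounded = queue.Queue(maxsize=2)    # bounded capacity: at most n = 2 messages
bounded.put("m1")
bounded.put("m2")
try:
    bounded.put_nowait("m3")        # queue full: a blocking send would wait here
    overflowed = False
except queue.Full:
    overflowed = True               # the link refused the third message

unbounded = queue.Queue()           # unbounded capacity: the sender never blocks
for i in range(100):
    unbounded.put(i)                # every put succeeds immediately
```

With the bounded queue the third send cannot proceed until a receiver makes room; with the unbounded queue any number of messages can wait, so the sender never blocks.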
Exercises:
10. What are Schedulers? Discuss the different types of Schedulers in OS.
11. What do you mean by Long-Term Scheduler? What are its functions?
12. Write short notes on Short-Term Scheduler.
13. Discuss briefly about Medium-Term Scheduler.
14. What is meant by Context Switch?
15. What are the various operations that can be performed on a Process?
16. What are Cooperating Processes? What are its advantages?
17. Explain about Inter-Process Communication.
18. Differentiate between Direct and Indirect Communication.
19. What is meant by Synchronization?
20. What do you mean by Buffering? Discuss its types.
*******************************