MODULE 2
PROCESS CONCEPT
A question that arises in discussing operating systems is what to call all the CPU activities.
A batch system executes jobs, whereas a time-shared system has user programs, or tasks. Even
on a single-user system such as Microsoft Windows, a user may be able to run several programs
at one time: a word processor, a web browser, and an e-mail package. Even if the user can
execute only one program at a time, the operating system may need to support its own internal
programmed activities, such as memory management. In many respects, all these activities are
similar, so we call all of them processes.
***What is a Process?
A process is a program in execution.
A process must progress in a sequential fashion. It has multiple parts, such as the text
section, program counter, stack, data section, and heap.
PROCESS IN MEMORY
Explain the process in memory
A process includes:
1. Program counter, which indicates the address of the next instruction to be executed for this
process.
2. Registers: the contents of the processor's registers.
3. Process stack, which contains temporary data (such as function parameters, return addresses,
and local variables).
4. Data section, which contains global variables.
5. Heap, which is memory that is dynamically allocated during process runtime.
A program by itself is not a process.
1) A process is an active-entity.
2) A program is a passive-entity such as an executable-file stored on disk.
A program becomes a process when an executable-file is loaded into memory. If you run many
copies of a program, each is a separate process. The text-sections are equivalent, but the data-
sections vary.
[Figure: Process in Memory]
PROCESS STATE
*****Explain the process state with suitable transition diagram
As a process executes, it changes state. Each process may be in one of the following states
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as I/O completions).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
Only one process can be running on any processor at any instant.
PROCESS CONTROL BLOCK (PCB)
Each process is represented in the OS by a process control block (PCB), which contains:
Process state
Program counter
CPU registers
CPU-scheduling information
Memory-management information
Accounting information
I/O status information
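As an illustration, a PCB can be pictured as a C structure. The sketch below is hypothetical: the field names and sizes are illustrative and are not taken from any particular kernel.

#include <stddef.h>

/* A hypothetical, simplified PCB -- field names are illustrative. */
struct pcb {
    int pid;                                             /* process identifier */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;  /* process state */
    unsigned long program_counter;   /* saved program counter */
    unsigned long registers[16];     /* saved CPU registers */
    int priority;                    /* CPU-scheduling information */
    void *page_table;                /* memory-management information */
    long cpu_time_used;              /* accounting information */
    int open_files[16];              /* I/O status information */
    struct pcb *next;                /* link to the next PCB in a scheduling queue */
};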
PROCESS SCHEDULING
The main Objective of multiprogramming is to have some process running at all times to
maximize CPU utilization.
The main objective of time-sharing is to switch the CPU between processes so frequently that
users can interact with each program while it is running.
To meet these two objectives, the process scheduler selects an available process for
execution on the CPU.
SCHEDULING QUEUES
There are three types of scheduling-queues:
JOB QUEUE
This queue consists of all processes in the system.
As processes enter the system, they are put into a job-queue.
READY QUEUE
This queue consists of the processes that are
Residing in main-memory and
Ready & waiting to execute
This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB has a pointer to the next PCB in the ready-queue.
DEVICE QUEUE
This queue consists of the processes that are waiting for an I/O device.
Each device has its own device-queue.
A common representation of process scheduling is a queueing diagram. Each rectangular box
represents a queue. Two types of queues are present: the ready queue and a set of device
queues. The circles represent the resources that serve the queues, and the arrows indicate
the flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected for execution,
or dispatched. Once the process is allocated the CPU, it begins executing. While it is executing,
one of the following events could occur:
1) The process could issue an I/O request and then be placed in an I/O queue.
2) The process could create a new sub-process and wait for the sub-process's termination.
3) The process could be interrupted and put back in the ready-queue.
In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue. A process continues this cycle until it terminates, at
which time it is removed from all queues and has its PCB and resources de-allocated.
SCHEDULERS
A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. This
selection process is carried out by the appropriate scheduler.
LONG-TERM SCHEDULER
Also called job scheduler.
Selects processes from the job pool on disk and loads them into memory for execution.
Needs to be invoked only when a process leaves the system and therefore executes
much less frequently.
Controls the degree of multiprogramming (the number of processes in memory).
Should select a good mix of I/O-bound and CPU-bound processes:
i) If all processes are I/O-bound, the ready queue will almost always be empty.
ii) If all processes are CPU-bound, the I/O waiting queue will almost always be empty
(devices will go unused) and the system will be unbalanced.
SHORT-TERM SCHEDULER
Also called CPU scheduler.
Selects which process should be executed next and allocates CPU.
Needs to be invoked whenever a new process must be selected for the CPU and therefore
executes much more frequently.
Must be fast, since a process may execute for only a few milliseconds.
MEDIUM-TERM SCHEDULER
Some time-sharing systems have medium-term scheduler
The scheduler removes processes from memory and thus reduces the degree of
multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be
continued where it left off. This scheme is called swapping.
The process is swapped out, and is later swapped in, by the scheduler.
Swapping may be necessary to improve the process mix or to free up memory.
CONTEXT SWITCH
***Define context switch. What is the need for a context switch?
Context-switch means saving the state of the old process and switching the CPU to another
process.
In general-purpose systems, interrupts cause the OS to switch the CPU from its current task
to run a kernel routine. When an interrupt occurs, the system needs to save the current context
of the process running on the CPU so that it can restore that context when the processing is
done, essentially suspending the process and then resuming it.
The context of a process is represented in the PCB of the process; it includes
value of CPU registers
process-state and
memory-management information.
Disadvantages:
Context-switch time is pure overhead, because the system does no useful
work while switching.
Context-switch times are highly dependent on hardware support
OPERATIONS ON PROCESSES
1) Process Creation and
2) Process Termination
Process Creation
• A process may create a new process via a create-process system-call.
• The creating process is called the parent-process. The new process created by the parent is called
the child-process (Sub-process).
• OS identifies processes by pid (process identifier), which is typically an integer-number.
• A process needs following resources to accomplish the task:
→ CPU time
→ memory and
→ I/O devices.
• Child-process may
→ get resources directly from the OS or
→ get resources of parent-process. This prevents any process from overloading the
system.
• Two options exist when a process creates a new process:
1) The parent & the children execute concurrently.
2) The parent waits until all the children have terminated.
• Two options exist in terms of the address-space of the new process:
1) The child-process is a duplicate of the parent-process (it has the same
program and data as the parent).
2) The child-process has a new program loaded into it.
PROCESS CREATION IN UNIX
In UNIX, each process is identified by its process identifier (pid), which is a unique integer. A
new process is created by the fork() system-call. The new process consists of a copy of the
address-space of the original process.
Both the parent and the child continue execution with one difference:
1) The return value of fork() is zero for the new (child) process.
2) The return value of fork() is the nonzero pid of the child for the parent process.
Typically, the exec() system-call is used after a fork() system-call by one of the two
processes to replace the process's memory-space with a new program. The parent can
issue wait() system-call to move itself off the ready-queue.
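The following C sketch shows this typical fork()/exec()/wait() pattern. The child runs /bin/ls via execlp(); the choice of program and the minimal error handling are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {              /* child: fork() returned 0 */
        execlp("/bin/ls", "ls", (char *)NULL);  /* replace memory-space with a new program */
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent: fork() returned the child's pid */
        wait(NULL);                     /* parent moves off the ready-queue until the child exits */
        printf("Child %d complete\n", (int)pid);
    }
    return 0;
}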
PROCESS TERMINATION
A process terminates when it executes the last statement (in the program). Then, the OS deletes
the process by using exit() system-call. Then, the OS de-allocates all the resources of the process.
The resources include memory, open files and I/O buffers.
Process termination can occur in the following cases:
A process can cause the termination of another process via the TerminateProcess() system-call
(in Windows).
Users could arbitrarily kill the processes.
A parent terminates the execution of children for following reasons:
The child has exceeded its usage of some resources.
The task assigned to the child is no longer required.
The parent is exiting, and the OS does not allow a child to continue.
In some systems, if a process terminates, then all its children must also be terminated.
This phenomenon is referred to as cascading termination.
COOPERATING PROCESSES
Reasons for providing an environment that allows process cooperation:
1) Information sharing
Several users may be interested in the same piece of information (for instance, a shared file).
2) Computation speedup
A task can be broken into subtasks, each of which executes in parallel with the others.
The speed can be improved only if the computer has multiple processing elements such as
CPUs or I/O channels.
3) Modularity
Divide the system-functions into separate processes or threads.
4) Convenience
An individual user may work on many tasks at the same time.
For example, a user may be editing, printing, and compiling in parallel.
What is Inter-process communication? Briefly explain its types
Inter-process communication (IPC) is a set of programming interfaces that allow a programmer
to coordinate activities among different program processes that can run concurrently in
an operating system.
Cooperating processes require an IPC mechanism that will allow them to exchange data and
information.
Two basic models of IPC:
1. Shared-memory and
2. Message passing.
SHARED MEMORY SYSTEMS
The processes can then exchange information by reading and writing data in the shared-
memory. The processes are also responsible for ensuring that they are not writing
to the same location simultaneously
Let us illustrate the concept of cooperating processes with the
PRODUCER-CONSUMER PROBLEM
Producer-process produces information that is consumed by a consumer-process.
Example (client- server, compiler-assembler, loader)
To allow producer and consumer to run concurrently, have a buffer of items to be filled by the
producer and emptied by the consumer. The buffer resides in a memory shared by producer and
consumer. Producer and consumer must be synchronized. Two types of buffer that can be used
are:
1. Unbounded-buffer: places no practical limit on the size of the buffer.
2. Bounded-buffer: assumes a fixed buffer size.
Advantage of shared memory: it allows maximum speed and convenience of communication.
Explain the implementation of producer-consumer processes using bounded buffer in
shared memory systems.
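A minimal sketch of the classic bounded-buffer solution in C: a circular buffer resides in shared memory, and one slot is deliberately left empty so that a full buffer can be distinguished from an empty one (at most BUFFER_SIZE - 1 items are held at once). The item type and function names are illustrative.

#define BUFFER_SIZE 10

typedef struct {
    int value;                 /* item payload (illustrative) */
} item;

item buffer[BUFFER_SIZE];      /* circular buffer in shared memory */
int in = 0;                    /* next free slot */
int out = 0;                   /* next full slot */

/* Producer: busy-waits while the buffer is full, then inserts one item. */
void produce(item next_produced)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: busy-waits while the buffer is empty, then removes one item. */
item consume(void)
{
    while (in == out)
        ;   /* do nothing -- buffer is empty */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next_consumed;
}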
MESSAGE-PASSING SYSTEMS
Under direct communication (symmetric addressing), both sender and receiver processes must
name the other to communicate.
Under indirect communication, messages are sent to and received from mailboxes (or ports).
Properties of a communication link:
A link is established between a pair of processes only if both members have a shared
mailbox.
A link may be associated with more than two processes.
A number of different links may exist between each pair of communicating processes.
Mailbox owned by a process:
The owner can only receive, and the user can only send.
The mailbox disappears when its owner process terminates.
Mailbox owned by the OS:
The OS allows a process to:
1. Create a new mailbox
2. Send & receive messages via it
3. Delete a mailbox.
2. SYNCHRONIZATION
Communication takes place through send() and receive() primitives. Message passing may
be either blocking or non-blocking (also known as synchronous and asynchronous).
Blocking send: the sender blocks until the message is received.
Non-blocking send: the sender sends the message and resumes operation.
Blocking receive: the receiver blocks until a message is available.
Non-blocking receive: the receiver retrieves either a valid message or a null.
MULTITHREADED PROGRAMMING
What is a thread?
A thread is a basic unit of CPU utilization. It comprises thread id, program counter, register set,
and a stack.
A thread shares its code section, data section, and other OS resources (such as open files)
with the other threads belonging to the same process. Threads run within an application.
A traditional (heavyweight) process has a single thread of control.
If a process has multiple threads of control, it can perform more than one task at a time.
Multiple tasks within an application can be implemented by separate threads, such as updating
the display, fetching data, spell checking, and answering a network request.
What is the difference between process and thread?
S. No. | Process | Thread
1 | A process is a heavyweight unit of execution. | A thread is a lightweight unit of CPU utilization.
2 | Each process has its own address space. | Threads of the same process share the code section, data section, and OS resources.
3 | Process creation and context switching are relatively costly. | Thread creation and switching between threads are cheaper.
4 | Processes communicate through IPC mechanisms. | Threads can communicate directly through shared data.
MULTI-THREADING BENEFITS
******Discuss the benefits of multi-threaded programming.
The benefits of multi-threaded programming are:
1. Responsiveness
2. Resource Sharing
3. Economy
4. Scalability
MULTITHREADING MODELS
*********Explain multi-threading models in detail OR
******Discuss the three common ways of establishing relationship between user and kernel
threads.
Three ways of establishing relationship between user-threads & kernel-threads (Multi-threading
model):
1) Many-to-one model
2) One-to-one model and
3) Many-to-many model.
Many-to-One model:
• Many user-level threads are mapped to a single kernel thread.
• Thread management is done by the thread library in user space, so it is efficient.
• Disadvantages:
1) The entire process will block if a thread makes a blocking system-call.
2) Multiple threads cannot run in parallel on multiprocessors, because only one thread
can access the kernel at a time.
One-to-One model:
• Each user-level thread is mapped to a separate kernel thread.
• Provides more concurrency: another thread can run when one thread makes a blocking
system-call, and threads can run in parallel on multiprocessors.
• Disadvantage: creating a user thread requires creating the corresponding kernel thread,
which is an overhead.
Many-to-Many model:
• Many user-level threads are multiplexed to a smaller number of kernel threads
Advantages:
1) Developers can create as many user threads as necessary
2) The kernel threads can run in parallel on a multiprocessor.
3) When a thread performs a blocking system-call, kernel can schedule another thread
for execution.
[Figure: Many-to-many model]
THREAD LIBRARIES
It provides the programmer with an API for the creation and management of threads.
Two ways of implementation:
1) First Approach
Provides a library entirely in user space with no kernel support.
All code and data structures for the library exist in the user space.
2) Second Approach
Implements a kernel-level library supported directly by the OS.
Code and data structures for the library exist in kernel space.
Three main thread libraries:
1) POSIX Pthreads
2) Win32 and
3) Java.
Pthreads
• This is a POSIX standard API for thread creation and synchronization.
• This is a specification for thread-behavior, not an implementation.
• OS designers may implement the specification in any way they wish.
• Commonly used in: UNIX and Solaris.
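A minimal Pthreads sketch: a single worker thread sums the integers from 1 to N passed on the command line. The runner() function name and the global sum are illustrative.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

long sum = 0;   /* shared between the two threads */

/* Thread start routine: sums 1..upper */
void *runner(void *param)
{
    long upper = atol(param);
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <integer>\n", argv[0]);
        return 1;
    }

    pthread_t tid;              /* thread identifier */
    pthread_attr_t attr;        /* thread attributes */
    pthread_attr_init(&attr);   /* use default attributes */

    pthread_create(&tid, &attr, runner, argv[1]);  /* create the worker thread */
    pthread_join(tid, NULL);                       /* wait for it to finish */

    printf("sum = %ld\n", sum);
    return 0;
}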
Java Threads
• Threads are the fundamental model of program execution in a Java program, and the Java language supports them directly.
• The API provides a rich set of features for the creation and management of threads.
• All Java programs comprise at least a single thread of control.
THREADING ISSUES
***Discuss any 3 threading issues that come with multi-threaded programs
1. The fork() and exec() system-calls
2. Thread cancellation
3. Signal handling
4. Thread pools
The fork() and exec() System-calls: The fork() system-call is used to create a separate,
duplicate process. If one thread in a program calls fork(), some systems duplicate all
threads, while others duplicate only the thread that invoked fork(). If a thread invokes
exec(), the program specified in the parameter to exec() will replace the entire process,
including all threads.
Thread Cancellation: This is the task of terminating a thread before it has completed. Target
thread is the thread that is to be canceled. Thread cancellation occurs in two different cases:
1) Asynchronous cancellation: One thread immediately terminates the target thread.
2) Deferred cancellation: The target thread periodically checks whether it should be
terminated.
Signal Handling: In UNIX, a signal is used to notify a process that a particular event has
occurred. All signals follow this pattern:
1. A signal is generated by the occurrence of a certain event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
A signal handler is used to process signals. A signal may be received either synchronously or
asynchronously, depending on the source.
1) Synchronous signals
Delivered to the same process that performed the operation causing the signal.
E.g. illegal memory access and division by 0.
2) Asynchronous signals
Generated by an event external to a running process.
E.g. user terminating a process with specific keystrokes <ctrl><c>.
Every signal can be handled by one of two possible handlers:
1) A Default Signal Handler
Run by the kernel when handling the signal.
2) A User-defined Signal Handler
Overrides the default signal handler.
In single-threaded programs, delivering signals is simple.
In multithreaded programs, delivering signals is more complex. Then, the following options exist:
1) Deliver the signal to the thread to which the signal applies.
2) Deliver the signal to every thread in the process.
3) Deliver the signal to certain threads in the process.
4) Assign a specific thread to receive all signals for the process.
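A minimal sketch of installing a user-defined handler for SIGINT in C. Because few operations are async-signal-safe, the handler only sets a flag; the flag name and program structure are illustrative.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_sigint = 0;

/* User-defined handler: overrides the default action for SIGINT. */
static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;   /* only async-signal-safe work belongs here */
}

int main(void)
{
    signal(SIGINT, on_sigint);   /* install the user-defined handler */

    while (!got_sigint)
        pause();                 /* sleep until a signal is delivered */

    printf("caught SIGINT (<ctrl><c>), exiting cleanly\n");
    return 0;
}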
Thread Pools: The basic idea is to create a number of threads at process-startup and place
them into a pool, where they sit and wait for work.
Procedure (a code sketch follows this section):
1. When a server receives a request, it awakens a thread from the pool.
2. If a thread is available, the request is passed to it for service.
3. Once the service is completed, the thread returns to the pool.
Advantages:
1) Servicing a request with an existing thread is usually faster than waiting to create a
thread.
2) The pool limits the no. of threads that exist at any one point.
No. of threads in the pool can be based on factors such as: no. of CPUs, amount of
memory and expected no. of concurrent client-requests.
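A compact sketch of this idea using Pthreads: a fixed-size task queue guarded by a mutex, with worker threads sleeping on a condition variable until work arrives. All names and sizes are illustrative, and the pool runs until the process is killed (a real pool would add a shutdown path).

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_MAX 16

typedef void (*task_fn)(int);

static task_fn tasks[QUEUE_MAX];   /* circular task queue */
static int args[QUEUE_MAX];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

/* Worker: loops forever, picking tasks off the queue */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);   /* sit in the pool */
        task_fn fn = tasks[head];
        int a = args[head];
        head = (head + 1) % QUEUE_MAX;
        count--;
        pthread_mutex_unlock(&lock);
        fn(a);                                      /* service the request */
    }
    return NULL;
}

/* Submit a request to the pool (silently drops it if the queue is full) */
static void submit(task_fn fn, int arg)
{
    pthread_mutex_lock(&lock);
    if (count < QUEUE_MAX) {
        tasks[tail] = fn;
        args[tail] = arg;
        tail = (tail + 1) % QUEUE_MAX;
        count++;
        pthread_cond_signal(&not_empty);   /* awaken one waiting thread */
    }
    pthread_mutex_unlock(&lock);
}

static void handle_request(int id)
{
    printf("servicing request %d\n", id);
}

int main(void)
{
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)      /* create the threads at startup */
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        submit(handle_request, i);
    pthread_exit(NULL);   /* main thread exits; workers keep running */
}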
PROCESS SCHEDULING
Basic Concepts: In a single-processor system, only one process may run at a time; other
processes must wait until the CPU is free and can be rescheduled. The main objective of multiprogramming is to
have some process running at all times, in order to maximize CPU utilization.
CPU-I/O Burst Cycle: Process execution consists of a cycle of CPU execution and I/O wait.
Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst,
and so on. Finally, the last CPU burst ends with a system request to terminate execution. An
I/O-bound program typically has many short CPU bursts. A CPU-bound program might have a few
long CPU bursts.
CPU SCHEDULER
The CPU scheduler selects a process from the ready-queue and allocates the CPU to it. The
ready-queue can be implemented as a FIFO queue, a priority queue, a tree, or an unordered
linked list. The records in the queues are generally process control blocks (PCBs) of the processes.
CPU SCHEDULING
Four situations under which CPU scheduling decisions take place:
1. When a process switches from the running state to the waiting state. For ex; I/O request.
2. When a process switches from the running state to the ready state. For ex: when an
interrupt occurs.
3. When a process switches from the waiting state to the ready state. For ex: completion of
I/O.
4. When a process terminates.
For situations 1 and 4, there is no choice: a new process must be selected, so scheduling
here is non-preemptive. When scheduling takes place in situations 2 and 3 as well, the
scheme is preemptive.
PRIORITY SCHEDULING
A priority is associated with each process. The CPU is allocated to the process with the highest
priority. Equal-priority processes are scheduled in FCFS order. Priorities can be defined either
internally or externally. Internally-defined priorities use some measurable quantity to compute the
priority of a process.
For example: time limits, memory requirements, no. of open files.
Externally-defined priorities set by criteria that are external to the OS
For example: importance of the process, political factors
Priority scheduling can be either preemptive or non-preemptive.
Preemptive: The CPU is preempted if the priority of the newly arrived process is higher
than the priority of the currently running process.
Non-preemptive: The new process is simply put at the head of the ready-queue.
Advantage: Higher priority processes can be executed first.
Disadvantage: Indefinite blocking, where low-priority processes are left waiting indefinitely for
CPU.
Solution: Aging is a technique of increasing priority of processes that wait in system for a long
time.
Example: Consider the following set of processes, assumed to have arrived at time 0 in
the order P1, P2, ..., P5, with the length of the CPU-burst time given in milliseconds.
MULTILEVEL QUEUE SCHEDULING
Processes are classified into different groups, and the ready-queue is partitioned into
several separate queues (for example, a foreground queue for interactive processes and a
background queue for batch processes). There must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling.
For example, the foreground queue may have absolute priority over the background queue.
Time slice: each queue gets a certain amount of CPU time which it can schedule among
its processes; for instance, 80% to the foreground queue in RR and 20% to the background
queue in FCFS.
THREAD SCHEDULING
On most operating systems, it is kernel-level threads, not processes, that are scheduled by
the OS. User-level threads are managed by a thread library, and the kernel is unaware of
them. To run on a CPU, user-level threads must be mapped to an associated kernel-level thread.
Contention Scope
Two approaches:
1) Process-Contention Scope (PCS)
On systems implementing the many-to-one and many-to-many models, the thread library
schedules user-level threads to run on an available lightweight process (LWP).
Competition for the CPU takes place among threads belonging to the same process.
2) System-Contention Scope (SCS)
The kernel uses SCS to decide which kernel thread to schedule onto the CPU.
Competition for the CPU takes place among all threads in the system.
Systems using the one-to-one model schedule threads using only SCS.
Pthread Scheduling
The Pthread API allows specifying either PCS or SCS during thread creation.
Pthreads identifies the following contention scope values:
1. PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
2. PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
The Pthread API provides the following two functions for getting and setting the contention-scope policy:
1) pthread_attr_setscope(pthread_attr_t *attr, int scope)
2) pthread_attr_getscope(pthread_attr_t *attr, int *scope)
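A short sketch of querying and setting the contention scope with these functions. Note that some systems allow only one of the two scope values, so the set call may fail and should be checked.

#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);

    /* Query the default contention scope */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "unable to get scheduling scope\n");
    else
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_PROCESS ? "PCS" : "SCS");

    /* Request system contention scope (SCS) for threads
       created with this attribute object */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "unable to set scheduling scope\n");

    pthread_attr_destroy(&attr);
    return 0;
}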
Solution steps:
1. Draw Gantt chart for the given problem.
2. From Gantt chart find the completion time of each process
3. Determine the Turnaround time using the formula:
Turnaround time = Completion time – Arrival time
4. Determine the waiting time using the formula:
Waiting time = Turnaround time – Burst time
5. Determine the Response time using the formula:
Response time = First time process scheduled – Arrival time
1. Consider the following set of processes with CPU burst time (in ms)
Process Arrival time Burst Time
P0 0 6
P1 1 3
P2 2 1
P3 3 4
Compute the waiting time and average turnaround time for the above process using
FCFS, SRT and RR (time quantum = 2ms) scheduling algorithm.
Solution:
i.) The Gantt chart for the FCFS schedule is as follows:
| P0 (0-6) | P1 (6-9) | P2 (9-10) | P3 (10-14) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P0 | 0 | 6 | 6 | 6 | 0
P1 | 1 | 3 | 9 | 8 | 5
P2 | 2 | 1 | 10 | 8 | 7
P3 | 3 | 4 | 14 | 11 | 7
Average turnaround time = 33/4 = 8.25ms; Average waiting time = 19/4 = 4.75ms
ii.) The Gantt chart for the SRT schedule is as follows:
| P0 (0-1) | P1 (1-2) | P2 (2-3) | P1 (3-5) | P3 (5-9) | P0 (9-14) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P0 | 0 | 6 | 14 | 14 | 8
P1 | 1 | 3 | 5 | 4 | 1
P2 | 2 | 1 | 3 | 1 | 0
P3 | 3 | 4 | 9 | 6 | 2
Average turnaround time = 25/4 = 6.25ms; Average waiting time = 11/4 = 2.75ms
iii.) The Gantt chart for the RR (time quantum = 2ms) schedule is as follows:
| P0 (0-2) | P1 (2-4) | P2 (4-5) | P0 (5-7) | P3 (7-9) | P1 (9-10) | P0 (10-12) | P3 (12-14) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P0 | 0 | 6 | 12 | 12 | 6
P1 | 1 | 3 | 10 | 9 | 6
P2 | 2 | 1 | 5 | 3 | 2
P3 | 3 | 4 | 14 | 11 | 7
Average turnaround time = 35/4 = 8.75ms; Average waiting time = 21/4 = 5.25ms
2. Consider the following set of processes (the rows for P1 and P2 are missing from the source):
Process Arrival time Burst Time Priority
P3 3 6 6
P4 5 4 3
Consider the large number as highest priority. Calculate the average waiting time and
turnaround time and draw Gantt chart for preemptive priority scheduling and preemptive
SJF scheduling.
Solution:
i.) The Gantt chart for the preemptive priority schedule is as follows (priority high = larger
number):
Process | Arrival | Burst | Priority | Completion | Turnaround | Waiting
P3 | 3 | 6 | 6 | 9 | 6 | 0
P4 | 5 | 4 | 3 | 20 | 15 | 11
ii.) For the preemptive SJF schedule:
Process | Arrival | Burst | Completion | Turnaround | Waiting
P2 | 3 | 5 | 8 | 5 | 0
P3 | 3 | 6 | 18 | 15 | 9
P4 | 5 | 4 | 12 | 7 | 3
3. For the following example calculate average waiting time and average turnaround time
using FCFS, preemptive SJF and RR ( 1 time unit) CPU scheduling algorithms
Process Arrival time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Solution:
i.) The Gantt chart for the FCFS schedule is as follows:
| P1 (0-8) | P2 (8-12) | P3 (12-21) | P4 (21-26) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 8 | 8 | 8 | 0
P2 | 1 | 4 | 12 | 11 | 7
P3 | 2 | 9 | 21 | 19 | 10
P4 | 3 | 5 | 26 | 23 | 18
Average turnaround time = 61/4 = 15.25ms; Average waiting time = 35/4 = 8.75ms
ii.) The Gantt chart for the preemptive SJF schedule is as follows:
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 8 | 17 | 17 | 9
P2 | 1 | 4 | 5 | 4 | 0
P3 | 2 | 9 | 26 | 24 | 15
P4 | 3 | 5 | 10 | 7 | 2
Average turnaround time = 52/4 = 13ms; Average waiting time = 26/4 = 6.5ms
iii.) The Gantt chart for the RR (1 time unit) schedule is as follows:
| P1(0-1) | P2(1-2) | P1(2-3) | P3(3-4) | P2(4-5) | P4(5-6) | P1(6-7) | P3(7-8) | P2(8-9) |
P4(9-10) | P1(10-11) | P3(11-12) | P2(12-13) | P4(13-14) | P1(14-15) | P3(15-16) | P4(16-17) |
P1(17-18) | P3(18-19) | P4(19-20) | P1(20-21) | P3(21-22) | P1(22-23) | P3(23-26) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 8 | 23 | 23 | 15
P2 | 1 | 4 | 13 | 12 | 8
P3 | 2 | 9 | 26 | 24 | 15
P4 | 3 | 5 | 20 | 17 | 12
Average turnaround time = 76/4 = 19ms; Average waiting time = 50/4 = 12.5ms
4. Consider the following set of processes:
Process Arrival time Burst Time Priority
P1 0 6 4
P2 3 5 2
P3 3 3 6
P4 5 5 3
Draw Gantt charts and calculate average waiting time and average turnaround time using
preemptive and non-preemptive priority scheduling (assume smaller number = higher priority).
i.) The Gantt chart for the preemptive priority schedule is as follows:
| P1 (0-3) | P2 (3-8) | P4 (8-13) | P1 (13-16) | P3 (16-19) |
Process | Arrival | Burst | Priority | Completion | Turnaround | Waiting
P1 | 0 | 6 | 4 | 16 | 16 | 10
P2 | 3 | 5 | 2 | 8 | 5 | 0
P3 | 3 | 3 | 6 | 19 | 16 | 13
P4 | 5 | 5 | 3 | 13 | 8 | 3
Average turnaround time = 45/4 = 11.25ms; Average waiting time = 26/4 = 6.5ms
ii.) The Gantt chart for the non-preemptive priority schedule is as follows:
| P1 (0-6) | P2 (6-11) | P4 (11-16) | P3 (16-19) |
Process | Arrival | Burst | Priority | Completion | Turnaround | Waiting
P1 | 0 | 6 | 4 | 6 | 6 | 0
P2 | 3 | 5 | 2 | 11 | 8 | 3
P3 | 3 | 3 | 6 | 19 | 16 | 13
P4 | 5 | 5 | 3 | 16 | 11 | 6
Average Turnaround time = Sum of turnaround time/ no. of processes
= 41/4 = 10.25ms
Average waiting time = Sum waiting time/ no. of processes
= 22/4 = 5.5ms
5. Consider the following set of processes (the rows for P1 and P2 are missing from the source):
Process Arrival time Burst Time Priority
P3 3 2 0
P4 5 20 3
Draw Gantt charts and calculate average waiting time, average turnaround time using
following CPU scheduling algorithm
i. Preemptive shortest job
ii. Non preemptive priority (0 = high priority)
i.) The Gantt chart for the preemptive shortest-job schedule is as follows:
Process | Arrival | Burst | Completion | Turnaround | Waiting
P3 | 3 | 2 | 5 | 2 | 0
P4 | 5 | 20 | 37 | 32 | 12
ii.) The Gantt chart for the non-preemptive priority schedule (0 = high priority) is as follows:
Process | Arrival | Burst | Priority | Completion | Turnaround | Waiting
P3 | 3 | 2 | 0 | 12 | 9 | 7
P4 | 5 | 20 | 3 | 37 | 32 | 12
6. Consider the following set of processes with CPU burst time (in ms):
Process Arrival time Burst Time
P1 0 7
P2 3 2
P3 4 3
P4 5 5
Draw Gantt charts and calculate average waiting time, average turnaround time using
following CPU scheduling algorithm
i. FCFS
ii. SRTF
iii. RR (quantum = 1msec)
i.) The Gantt chart for the FCFS schedule is as follows:
| P1 (0-7) | P2 (7-9) | P3 (9-12) | P4 (12-17) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 7 | 7 | 7 | 0
P2 | 3 | 2 | 9 | 6 | 4
P3 | 4 | 3 | 12 | 8 | 5
P4 | 5 | 5 | 17 | 12 | 7
Average turnaround time = 33/4 = 8.25ms; Average waiting time = 16/4 = 4ms
ii.) The Gantt chart for the SRTF schedule is as follows:
| P1 (0-3) | P2 (3-5) | P3 (5-8) | P1 (8-12) | P4 (12-17) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 7 | 12 | 12 | 5
P2 | 3 | 2 | 5 | 2 | 0
P3 | 4 | 3 | 8 | 4 | 1
P4 | 5 | 5 | 17 | 12 | 7
Average turnaround time = 30/4 = 7.5ms; Average waiting time = 13/4 = 3.25ms
iii.) The Gantt chart for the RR (time quantum = 1ms) schedule is as follows:
| P1(0-3) | P2(3-4) | P1(4-5) | P3(5-6) | P2(6-7) | P4(7-8) | P1(8-9) | P3(9-10) |
P4(10-11) | P1(11-12) | P3(12-13) | P4(13-14) | P1(14-15) | P4(15-17) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 7 | 15 | 15 | 8
P2 | 3 | 2 | 7 | 4 | 2
P3 | 4 | 3 | 13 | 9 | 6
P4 | 5 | 5 | 17 | 12 | 7
Average turnaround time = 40/4 = 10ms; Average waiting time = 23/4 = 5.75ms
7. Consider the following set of processes:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5 all at time 0. Draw
Gantt charts and calculate average waiting time, average turnaround time using following
CPU scheduling algorithm
i. FCFS
ii. SJF
iii. RR (quantum = 1msec)
i) The Gantt chart for the FCFS schedule is as follows:
| P1 (0-10) | P2 (10-11) | P3 (11-13) | P4 (13-14) | P5 (14-19) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 10 | 10 | 10 | 0
P2 | 0 | 1 | 11 | 11 | 10
P3 | 0 | 2 | 13 | 13 | 11
P4 | 0 | 1 | 14 | 14 | 13
P5 | 0 | 5 | 19 | 19 | 14
Average turnaround time = 67/5 = 13.4ms; Average waiting time = 48/5 = 9.6ms
ii) The Gantt chart for the SJF schedule is as follows:
| P2 (0-1) | P4 (1-2) | P3 (2-4) | P5 (4-9) | P1 (9-19) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 10 | 19 | 19 | 9
P2 | 0 | 1 | 1 | 1 | 0
P3 | 0 | 2 | 4 | 4 | 2
P4 | 0 | 1 | 2 | 2 | 1
P5 | 0 | 5 | 9 | 9 | 4
Average turnaround time = 35/5 = 7ms; Average waiting time = 16/5 = 3.2ms
iii) The Gantt chart for the RR (quantum = 1ms) schedule is as follows:
| P1(0-1) | P2(1-2) | P3(2-3) | P4(3-4) | P5(4-5) | P1(5-6) | P3(6-7) | P5(7-8) | P1(8-9) |
P5(9-10) | P1(10-11) | P5(11-12) | P1(12-13) | P5(13-14) | P1(14-19) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 10 | 19 | 19 | 9
P2 | 0 | 1 | 2 | 2 | 1
P3 | 0 | 2 | 7 | 7 | 5
P4 | 0 | 1 | 4 | 4 | 3
P5 | 0 | 5 | 14 | 14 | 9
Average turnaround time = 46/5 = 9.2ms; Average waiting time = 27/5 = 5.4ms
8. Consider the following set of processes (the rows for P1 and P2 are missing from the source):
Process Arrival time Burst Time Priority
P3 3 2 3
P4 5 1 4
P5 10 5 2
Draw Gantt charts and calculate average waiting time, average turnaround time using
preemptive priority scheduling algorithm. Assume highest priority = 1 and lowest priority =
4
The Gantt chart for the preemptive priority schedule is as follows:
Process | Arrival | Burst | Priority | Completion | Turnaround | Waiting
P3 | 3 | 2 | 3 | 18 | 15 | 13
P4 | 5 | 1 | 4 | 19 | 14 | 13
P5 | 10 | 5 | 2 | 16 | 6 | 1
Average Turnaround time = Sum of turnaround time/ no. of processes = 47/5 = 9.4ms
Average waiting time = Sum waiting time/ no. of processes = 28/5 = 5.6ms
9. Consider the following set of processes with CPU burst time (in ms):
Process Arrival time Burst Time
P0 0 6
P1 1 3
P2 2 1
P3 3 4
Draw Gantt charts and calculate average waiting time, average turnaround time using
SRTF and non preemptive SJF
i). The Gantt chart for the SRTF schedule is as follows:
| P0 (0-1) | P1 (1-2) | P2 (2-3) | P1 (3-5) | P3 (5-9) | P0 (9-14) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P0 | 0 | 6 | 14 | 14 | 8
P1 | 1 | 3 | 5 | 4 | 1
P2 | 2 | 1 | 3 | 1 | 0
P3 | 3 | 4 | 9 | 6 | 2
Average Turnaround time = Sum of turnaround time/ no. of processes = 25/4 = 6.25ms
Average waiting time = Sum waiting time/ no. of processes = 11/4 = 2.75ms
ii). The Gantt chart for the non-preemptive SJF schedule is as follows:
| P0 (0-6) | P2 (6-7) | P1 (7-10) | P3 (10-14) |
Process | Arrival | Burst | Completion | Turnaround | Waiting
P0 | 0 | 6 | 6 | 6 | 0
P1 | 1 | 3 | 10 | 9 | 6
P2 | 2 | 1 | 7 | 5 | 4
P3 | 3 | 4 | 14 | 11 | 7
Average Turnaround time = Sum of turnaround time/ no. of processes = 31/4 = 7.75ms
Average waiting time = Sum waiting time/ no. of processes = 17/4 = 4.25ms
SYNCHRONIZATION
What is synchronization?
Synchronization is a mechanism that ensures the orderly execution of cooperating processes
that share a logical address space (i.e., code and data) or share data through files or
messages, so that data consistency is maintained.
Concurrent access to shared data may result in data inconsistency. To maintain data
consistency, the orderly execution of cooperating processes is necessary.
Suppose that we wanted to provide a solution to the producer-consumer problem that fills
all the buffers. We can do so by having an integer variable counter that keeps track of the
number of full buffers.
Initially, counter=0.
counter is incremented by the producer after it produces a new item to buffer.
counter is decremented by the consumer after it consumes an item from buffer.
Shared-data:
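A sketch of the shared data and the producer/consumer loops (textbook-style pseudocode in C syntax, not a complete program); the unsynchronized counter++ and counter-- are what later cause the race condition:

#define BUFFER_SIZE 10

/* Shared data */
item buffer[BUFFER_SIZE];
int in = 0, out = 0;
int counter = 0;                 /* number of full buffers */

/* Producer */
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;                        /* do nothing -- all buffers full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;                   /* NOT atomic: source of the race */
}

/* Consumer */
while (true) {
    while (counter == 0)
        ;                        /* do nothing -- all buffers empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;                   /* NOT atomic: source of the race */
}

Note that counter++ is typically compiled into three machine instructions (load counter into a register, increment the register, store it back), so an interleaving of the producer's increment and the consumer's decrement can leave counter with an incorrect value.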