OS Module II Part 1
Process Management
Process Concept
Process Scheduling
Operations on Processes
Interprocess Communication
Examples of IPC Systems
2
Objectives
To introduce the notion of a process -- a program in execution, which forms
the basis of all computation
To describe the various features of processes, including scheduling,
creation and termination, and communication
To explore interprocess communication using shared memory and message
passing
To introduce the notion of a thread—a fundamental unit of CPU utilization
that forms the basis of multithreaded computer systems.
To introduce the critical-section problem, whose solutions can be used to
ensure the consistency of shared data
To present both software and hardware solutions to the critical-section problem
To examine several classical process-synchronization problems
3
Objectives
To introduce CPU scheduling, which is the basis for multiprogrammed
operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a
particular system
To develop a description of deadlocks, which prevent sets of concurrent
processes from completing their tasks.
To present a number of different methods for preventing or avoiding
deadlocks in a computer system
4
Process Concept
An operating system executes a variety of programs: batch jobs, time-shared user programs or tasks, and system processes. A process is a program in execution; to accomplish its task it needs resources such as CPU time, memory, files, and I/O devices.
5
Process in Memory
6
Program vs Process?
7
Process State
As a process executes, it changes state: new, ready, running, waiting, and terminated
8
Process States – Real-Time Scenario?
Online Food Ordering System?
9
Process Control Block (PCB)
Information associated with each process
(also called task control block)
Process state – running, waiting, etc
Program counter – location of instruction to
execute next
CPU registers – contents of all process-centric
registers
CPU scheduling information- priorities,
scheduling queue pointers
Memory-management information – memory
allocated to the process
Accounting information – CPU used, clock time
elapsed since start, time limits
I/O status information – I/O devices allocated to
process, list of open files
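As a rough sketch of how this information could be grouped in code, the C struct below mirrors the fields listed above. All field names and sizes are illustrative assumptions; a real kernel's PCB (for example, Linux's task_struct) holds far more.

#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;                        /* process identifier */
    enum proc_state state;                      /* running, waiting, etc. */
    unsigned long   program_counter;            /* next instruction to execute */
    unsigned long   registers[16];              /* saved CPU registers */
    int             priority;                   /* CPU-scheduling information */
    struct pcb     *next;                       /* scheduling-queue pointer */
    void           *page_table;                 /* memory-management information */
    unsigned long   cpu_time_used;              /* accounting information */
    int             open_files[MAX_OPEN_FILES]; /* I/O status information */
};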
10
Does the PCB Follow a Specific Order?
The field layout varies with the OS, but a well-defined layout enables the OS to achieve:
1. Fast access
2. Easy context switching
11
CPU Switch From Process to Process
12
Process Scheduling
Maximize CPU use, quickly switch processes onto CPU for time sharing
Process scheduler selects among available processes for next execution
on CPU
Maintains scheduling queues of processes
13
Ready Queue And Various I/O Device Queues
Each queue has a head and a tail, managing the insertion and removal of
PCBs.
Processes move between queues as their state changes (e.g., from waiting
for I/O to ready for execution).
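A minimal sketch of such a queue in C, assuming an illustrative node type that refers to a PCB by pid only: entries are inserted at the tail and removed from the head.

struct queue_node {
    int pid;                          /* the PCB this entry refers to */
    struct queue_node *next;
};

struct sched_queue {
    struct queue_node *head;          /* PCBs are removed from the head */
    struct queue_node *tail;          /* and inserted at the tail */
};

void enqueue(struct sched_queue *q, struct queue_node *n)
{
    n->next = 0;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

struct queue_node *dequeue(struct sched_queue *q)
{
    struct queue_node *n = q->head;
    if (n) {
        q->head = n->next;
        if (!q->head) q->tail = 0;
    }
    return n;
}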
14
Representation of Process Scheduling
15
Schedulers
Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
Long-term scheduler (or job scheduler) – selects which processes should
be brought into the ready queue
Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may
be slow)
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
CPU-bound process – spends more time doing computations; few very
long CPU bursts
Long-term scheduler strives for good process mix
16
Medium Term Scheduling
Time-sharing systems can add a medium-term scheduler (MTS) to reduce the degree of multiprogramming.
The MTS is responsible for temporarily removing (swapping out) processes
from memory (RAM) to secondary storage (disk) and bringing them back
when needed (using swapping).
17
Medium Term Scheduling – Real-Time Scenario?
Scenario: Running Multiple Applications on a PC
You have opened Google Chrome, Photoshop, and a Game.
Your RAM is full, and the system is slowing down.
The OS swaps out Photoshop (since it's idle) to disk storage.
This frees up RAM for the game to run smoothly.
When you switch back to Photoshop, the MTS swaps it back into RAM.
18
Context Switch
When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process via a context switch
19
Operations on Processes
20
Process Creation
A parent process creates child processes, which, in turn, create other processes, forming a tree of processes
Generally, a process is identified and managed via a process identifier (pid)
Resource sharing options
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution options
Parent and children execute concurrently
Parent waits until children terminate
21
A Tree of Processes in Linux
22
Process Creation (Cont.)
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork() system call creates new process
exec() system call used after a fork() to replace the process’
memory space with a new program
23
C Program Forking Separate Process
Note:
1. fork() return value in parent → child’s PID (> 0)
2. fork() return value in child → 0
3. Actual PID of child process → Always > 0 (from getpid())
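Because the original slide showed the program only as an image, here is a minimal sketch of such a forking program for a UNIX-like system, following the usual fork()/exec()/wait() pattern:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* create a child process */

    if (pid < 0) {                         /* error: fork failed */
        fprintf(stderr, "Fork failed\n");
        return 1;
    } else if (pid == 0) {                 /* child: fork() returned 0 */
        execlp("/bin/ls", "ls", NULL);     /* replace memory image with ls */
    } else {                               /* parent: fork() returned child's PID */
        wait(NULL);                        /* wait for the child to complete */
        printf("Child %d complete\n", (int)pid);
    }
    return 0;
}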
24
Process Termination
Process executes last statement and then asks the operating system to
delete it using the exit() system call.
Returns status data from child to parent (via wait())
Process’ resources are deallocated by operating system
25
Process Termination
Some operating systems do not allow a child to exist if its parent has terminated: if a parent process terminates, then all its children must also be terminated (cascading termination).
The parent process may wait for the termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process
pid = wait(&status);
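As a small illustration (a sketch, assuming a UNIX-like system), a parent can inspect the status returned by wait() like this:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        exit(7);                           /* child terminates with status 7 */
    } else {
        int status;
        pid_t done = wait(&status);        /* blocks until a child terminates */
        if (WIFEXITED(status))             /* did the child exit normally? */
            printf("Child %d exited with status %d\n",
                   (int)done, WEXITSTATUS(status));
    }
    return 0;
}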
26
Process Termination – Zombie Process?
Note: a zombie is a process that has terminated but whose parent has not yet called wait(); its entry remains in the process table until the parent collects its exit status. If the parent terminates without calling wait(), the child becomes an orphan and, on UNIX-like systems, is adopted (and eventually reaped) by the init/systemd process.
27
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
28
Communication Models
(a) Message passing. (b) Shared memory.
29
Cooperating Processes
Independent processes cannot affect or be affected by the execution of another process
Cooperating processes can affect or be affected by other processes
Advantages of process cooperation:
Information sharing
Computation speedup
Modularity
Convenience
30
Producer-Consumer Problem
Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
unbounded buffer – places no practical limit on the size of the buffer
bounded buffer – assumes that there is a fixed buffer size
31
Bounded-Buffer – Shared-Memory Solution
Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
32
Bounded-Buffer – Producer
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
33
Bounded Buffer – Consumer
item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next_consumed */
}
Note: this shared-memory solution is correct, but it can use at most BUFFER_SIZE-1 of the buffer slots.
34
Interprocess Communication – Shared Memory
Communication is under the control of the user processes, not the operating system.
The major issue is to provide a mechanism that allows the user processes to synchronize their actions when they access shared memory.
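A rough sketch of the producer side of POSIX shared memory; the object name "/ipc_demo" and the 4 KB size are illustrative, and error handling is omitted for brevity (on older Linux systems, link with -lrt).

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/ipc_demo";
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);   /* create the shared-memory object */
    ftruncate(fd, size);                                /* set its size */

    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    snprintf(ptr, size, "Hello from the producer");     /* write into shared memory */

    /* A consumer process would shm_open() the same name, mmap() it,
       read the string, and finally shm_unlink(name). */
    return 0;
}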
35
Interprocess Communication – Message Passing
Mechanism for processes to communicate and to synchronize their actions without sharing the same address space
The IPC facility provides two operations:
send(message)
receive(message)
36
Message Passing (Cont.)
37
Message Passing (Cont.)
Implementation of the communication link
Physical:
Shared memory
Hardware bus
Network
Logical:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
38
Direct Communication
Naming
Processes must name each other explicitly:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
39
Indirect Communication
Messages are sent to and received from mailboxes (also referred to as ports)
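POSIX message queues behave much like such mailboxes. A minimal single-process sketch (the queue name "/demo_mailbox" and the sizes are illustrative; on Linux, link with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0666, &attr);

    const char *msg = "hello via mailbox";
    mq_send(mq, msg, strlen(msg) + 1, 0);       /* send(A, message) */

    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);     /* receive(A, message) */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mailbox");
    return 0;
}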
40
Indirect Communication
Operations on a mailbox:
create a new mailbox (port)
send and receive messages through the mailbox
destroy a mailbox
41
Indirect Communication
Mailbox sharing – if several processes share a mailbox, which of them receives a given message?
Possible solutions:
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select the receiver arbitrarily and notify the sender
42
Synchronization
Synchronisation ensures that multiple processes or threads coordinate access to shared resources without conflicts
Message passing may be either blocking or non-blocking
Blocking is considered synchronous: a blocking send waits until the message is received; a blocking receive waits until a message is available
Non-blocking is considered asynchronous: a non-blocking send posts the message and continues; a non-blocking receive returns either a valid message or null
43
Buffering
Messages exchanged by communicating processes reside in a temporary
queue implemented in one of three ways
Zero capacity – no messages are queued on the link; the sender must wait for the receiver (rendezvous)
Bounded capacity – finite length of n messages; the sender must wait if the link is full
Unbounded capacity – infinite length; the sender never waits
44
Pipes
Acts as a conduit allowing two processes to communicate
Issues:
Is communication unidirectional or bidirectional?
In the case of two-way communication, is it half or full-duplex?
Must there exist a relationship (i.e., parent-child) between the
communicating processes?
Can the pipes be used over a network?
Ordinary pipes – cannot be accessed from outside the process that created them. Typically, a parent process creates a pipe and uses it to communicate with a child process that it created.
Named pipes – can be accessed without a parent-child relationship.
45
Ordinary Pipes
Ordinary Pipes allow communication in standard producer-consumer style
Producer writes to one end (the write-end of the pipe)
Consumer reads from the other end (the read-end of the pipe)
Ordinary pipes are therefore unidirectional
Require parent-child relationship between communicating processes
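A minimal sketch of an ordinary pipe on a UNIX-like system: the parent (producer) writes into the write end and the child (consumer) reads from the read end.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                             /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1)                    /* create the pipe */
        return 1;

    if (fork() == 0) {                     /* child: the consumer */
        close(fd[1]);                      /* close the unused write end */
        read(fd[0], buf, sizeof(buf));
        printf("Child read: %s\n", buf);
        close(fd[0]);
    } else {                               /* parent: the producer */
        close(fd[0]);                      /* close the unused read end */
        const char *msg = "greetings";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);                        /* wait for the child */
    }
    return 0;
}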
46
Named Pipes
Named Pipes are more powerful than ordinary pipes
Communication is bidirectional
No parent-child relationship is necessary between the communicating
processes
Several processes can use the named pipe for communication
Provided on both UNIX and Windows systems
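A minimal UNIX sketch using mkfifo(); the path /tmp/demo_fifo is illustrative. The reader here happens to be a child only to keep the example self-contained; any unrelated process could open the FIFO just as well.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0666);                    /* create the named pipe */

    if (fork() == 0) {                     /* reader */
        char buf[64];
        int fd = open(path, O_RDONLY);     /* blocks until a writer opens it */
        read(fd, buf, sizeof(buf));
        printf("Reader got: %s\n", buf);
        close(fd);
    } else {                               /* writer */
        int fd = open(path, O_WRONLY);
        const char *msg = "hello over the FIFO";
        write(fd, msg, strlen(msg) + 1);
        close(fd);
        wait(NULL);
        unlink(path);                      /* remove the FIFO */
    }
    return 0;
}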
47
Multicore Programming
48
Multicore Programming (Cont.)
Types of parallelism
Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
Task parallelism – distributes threads across cores, each thread performing a unique operation
As number of threads grows, so does architectural support for threading
CPUs have cores as well as hardware threads
Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per
core
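As a small sketch of data parallelism, the program below sums the two halves of the same array on two POSIX threads; the array contents and sizes are illustrative (compile with -pthread).

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct range { int lo, hi; long sum; };

static void *partial_sum(void *arg)        /* same operation on a different subset */
{
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];
    return NULL;
}

int main(void)
{
    struct range left  = {0, N / 2, 0};
    struct range right = {N / 2, N, 0};
    pthread_t t1, t2;

    pthread_create(&t1, NULL, partial_sum, &left);
    pthread_create(&t2, NULL, partial_sum, &right);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %ld\n", left.sum + right.sum);   /* prints 36 */
    return 0;
}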
49
Concurrency vs. Parallelism
Concurrent execution on a single-core system interleaves tasks over time
Concurrency = the ability to make progress on multiple tasks at the same time (but not necessarily executing them simultaneously)
Parallelism = the ability to execute multiple tasks simultaneously, which requires more than one core
50
Concurrency vs. Parallelism
51
Amdahl’s Law
Identifies performance gains from adding additional cores to an application that has both serial and parallel components
If S is the serial portion and N is the number of processing cores, then
speedup ≤ 1 / (S + (1 − S) / N)
That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores gives a speedup of at most 1 / (0.25 + 0.75/2) = 1.6 times
As N approaches infinity, the speedup approaches 1/S, so the serial portion limits the benefit of adding cores
52
Multithreaded Server Architecture
53
Benefits of multithreaded programming
Responsiveness – an interactive application can continue running even if part of it is blocked
Resource sharing – threads share the memory and resources of their process by default
Economy – creating and context-switching threads is cheaper than creating processes
Scalability – threads of one process can run in parallel on multiple cores
54
Single and Multithreaded Processes
A thread is a basic unit of CPU utilization.
If a process has multiple threads of control, it can perform more than one
task at a time.
Each thread shares the code section, data section, and other operating-system resources of its process, such as open files and signals, with the other threads
55
Multithreading Scenario – Playing a Game?
The main game engine acts as the main thread or process that
coordinates everything.
56
User Threads and Kernel Threads
User threads - Created and managed by user-space thread libraries.
The operating system kernel is unaware of these threads, and they are
scheduled in user space by the thread library.
Faster to create and switch (no system calls required).
If one thread blocks (e.g., waiting for I/O), the entire process may get
blocked.
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
57
User Threads and Kernel Threads
Kernel threads - Supported by the Kernel
Each kernel thread is recognised by the OS scheduler.
Can take advantage of multi-core processors (true parallelism).
If one thread blocks, other threads in the process can still run.
Who creates them? The operating system kernel; on Linux, a library call such as pthread_create() ultimately invokes the clone() system call, which creates the kernel-level thread.
58
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
59
Many-to-One
Many user-level threads are mapped to a single kernel thread
If one thread makes a blocking system call, the entire process blocks
Threads cannot run in parallel on a multicore system, because only one thread may be in the kernel at a time
60
One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted
due to overhead
Examples
Windows
Linux
Solaris 9 and later
61
Many-to-Many Model
Allows many user level threads to be
mapped to many kernel threads
Allows the operating system to create a
sufficient number of kernel threads
Solaris prior to version 9
Windows with the ThreadFiber package
62
Two-level Model
Similar to the many-to-many model, except that it also allows a user-level thread to be bound to a kernel thread
63
Thread Cancellation?
Thread cancellation involves terminating a thread before it has completed.
A thread that is to be canceled is often referred to as the target thread.
Main challenge?
What happens to resources that have been allocated to a canceled thread, and what if a thread is canceled while it is in the middle of updating data that it shares with other threads?
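A rough sketch of deferred cancellation with POSIX threads; the worker loop is purely illustrative, and sleep() acts as a cancellation point at which a pending cancel request takes effect.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    while (1)
        sleep(1);          /* a cancellation point for deferred cancellation */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    void *res;

    pthread_create(&tid, NULL, worker, NULL);   /* start the target thread */
    sleep(2);
    pthread_cancel(tid);                        /* request cancellation */
    pthread_join(tid, &res);                    /* reap the target thread */

    if (res == PTHREAD_CANCELED)
        printf("worker was canceled\n");
    return 0;
}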
64
Summary
Process States
Process Schedulers
Process Operations
Process Communication and Synchronisation
Threads and Thread Models
65