Unit 2
SYLLABUS
Processes
• Process Concept
• Process Scheduling
• Operations on Processes
• Interprocess Communication
• Examples of IPC Systems
• Communication in Client-Server Systems
Objectives
• To introduce the notion of a process -- a program in execution, which forms the basis of all
computation
Process in Memory
Process State
As a process executes, it changes state
Threads
So far, process has a single thread of execution
Consider having multiple program counters per process
Multiple locations can execute at once
Multiple threads of control -> threads
Must then have storage for thread details, multiple program counters in PCB.
Process Representation in Linux
Represented by the C structure task_struct
Process Scheduling
Maximize CPU use, quickly switch processes onto CPU for time sharing
Process scheduler selects among available processes for next execution on CPU
Maintains scheduling queues of processes
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
Ready Queue and Various I/O Device Queues
Schedulers
• Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
Sometimes the only scheduler in a system
Short-term scheduler is invoked frequently (milliseconds) (must be fast)
• Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
Long-term scheduler is invoked infrequently (seconds, minutes) (may be slow)
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU
bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
Long-term scheduler strives for a good process mix
Addition of Medium-Term Scheduling
● Medium-term scheduler can be added if degree of multiprogramming needs to decrease
Operations on Processes
● The system must provide mechanisms for:
– process creation,
– process termination,
– and so on as detailed next
Process Creation
• Parent process creates children processes, which, in turn, create other processes, forming a tree of processes
• Address space
– Child duplicate of parent
– Child has a program loaded into it
• UNIX examples
– fork() system call creates new process
– exec() system call used after a fork() to replace the process’ memory space with a new
program
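A minimal C sketch of this fork()/exec() pattern (POSIX; error handling kept brief; /bin/ls is an arbitrary example program):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();          /* create a child process */
    if (pid < 0) {               /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {       /* child: replace memory image with a new program */
        execlp("/bin/ls", "ls", NULL);
        perror("execlp");        /* reached only if exec fails */
        exit(1);
    } else {                     /* parent: wait for the child to terminate */
        wait(NULL);
        printf("Child complete\n");
    }
    return 0;
}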
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including sharing data
• Reasons for cooperating processes:
– Information sharing
– Computation speedup
– Modularity
Cooperating Processes
• Mailbox sharing
– P1, P2, and P3 share mailbox A
– P1 sends; P2 and P3 receive
– Who gets the message?
• Solutions
– Allow a link to be associated with at most two processes
– Allow only one process at a time to execute a receive operation
– Allow the system to select arbitrarily the receiver. Sender is notified who the
receiver was.
Synchronization
● Message passing may be either blocking or non-blocking
o Blocking is considered synchronous
o Non-blocking is considered asynchronous – a non-blocking send or receive returns immediately
Local Procedure Calls in Windows
● Windows uses a local procedure call (LPC) facility
o Only works between processes on the same system
● The server creates two private communication ports and returns the handle to one of them to the client.
● The client and server use the corresponding port handle to send messages or callbacks and to listen for replies.
Communication in Client-Server Systems
• Sockets
• Remote Procedure Calls
• Pipes
• Remote Method Invocation (Java)
Sockets
● A socket is defined as an endpoint for communication
● All ports below 1024 are well known, used for standard services
● Socket Communication
– Connection-oriented (TCP)
– Connectionless (UDP)
– MulticastSocket class – data can be sent to multiple recipients
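A short connection-oriented (TCP) client sketch in C, assuming a POSIX system; the loopback address and port 7 (echo) are placeholder values:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);       /* TCP socket */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                       /* well-known echo port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        write(fd, "hello", 5);                      /* send a request */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf));     /* read the reply */
        if (n > 0) write(STDOUT_FILENO, buf, n);
    }
    close(fd);
    return 0;
}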
Remote Procedure Calls
● Client Sends Request: The client program initiates an RPC request by calling a function or method that is implemented on the remote server. This function call appears as if it were a local call.
● Request Serialization: The parameters of the function call are serialized (converted into
a format that can be transmitted over the network). This includes the function name,
parameters, and any other necessary information.
● Network Transmission: The serialized request is sent over the network to the remote
server where the requested procedure will be executed.
● Server Receives Request: The remote server receives the serialized request and
deserializes it to extract the function name and parameters.
● Procedure Execution: The server looks up the requested function or method and
executes it using the provided parameters.
● Response Serialization: The server serializes the result of the function call (return value
or any output) and prepares to send it back to the client.
● Network Transmission: The serialized response is sent back over the network to the
client.
● Client Receives Response: The client receives the response, deserializes it to extract the
result, and can then continue its execution based on the received information.
● RPC mechanisms and protocols include gRPC, CORBA (Common Object Request Broker Architecture), Java RMI (Remote Method Invocation), and SOAP (Simple Object Access Protocol)
Execution of RPC
Pipes
● Ordinary pipes allow communication in standard producer–consumer style
o Producer writes to one end (the write-end of the pipe)
o Consumer reads from the other end (the read-end of the pipe)
● Named Pipes
o Communication is bidirectional
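A brief C sketch of an ordinary pipe in this producer–consumer style (POSIX):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) return 1;
    if (fork() == 0) {               /* child: consumer */
        close(fd[1]);                /* close unused write end */
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
    } else {                         /* parent: producer */
        close(fd[0]);                /* close unused read end */
        write(fd[1], "greetings", 9);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}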
Multicore Programming
● Types of parallelism
o Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each
o Task parallelism – distributes threads across cores, each thread performing unique
operation
● As # of threads grows, so does architectural support for threading
o CPUs have cores as well as hardware threads
o Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per core
● Concurrency vs. Parallelism
Amdahl’s Law
● Identifies performance gains from adding additional cores to an application that has
both serial and parallel components
● S is the serial portion, N the number of processing cores
● Speedup is bounded: speedup ≤ 1 / (S + (1 − S)/N)
● This formula states that the maximum improvement in speed of a process is limited by
the proportion of the program that can be made parallel.
● That is, if application is 75% parallel / 25% serial, moving from 1 to 2 cores results in
speedup of 1.6 times
● As N approaches infinity, speedup approaches 1 / S
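A small C check of the law using the serial fraction S = 0.25 from the example above (the function name amdahl is just for illustration):

#include <stdio.h>

/* Amdahl's Law: speedup bound for serial fraction S on N cores */
double amdahl(double S, double N) {
    return 1.0 / (S + (1.0 - S) / N);
}

int main(void) {
    printf("N = 2: %.2f\n", amdahl(0.25, 2));     /* prints 1.60 */
    printf("N = 8: %.2f\n", amdahl(0.25, 8));
    printf("N -> inf bound: %.2f\n", 1.0 / 0.25); /* 1/S = 4.00 */
    return 0;
}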
Multithreading Models
● Many-to-One
● One-to-One
● Many-to-Many
Many-to-One
● Many user-level threads mapped to single kernel thread
● Multiple threads may not run in parallel on multicore system because only one may be in kernel at a time
● Few systems currently use this model
● Examples:
o Solaris Green Threads
One-to-One
● Each user-level thread maps to a kernel thread
● Examples:
o Windows
o Linux
o Solaris 9 and later
Many-to-Many Model
● Allows many user-level threads to be mapped to many kernel threads
Pthreads
● A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
● Pthreads Example
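A minimal Pthreads sketch along the lines of the classic summation example (compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                         /* shared by the threads */

void *runner(void *param) {      /* thread entry point */
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[]) {
    pthread_t tid;               /* thread identifier */
    pthread_create(&tid, NULL, runner, argv[1]);  /* create the thread */
    pthread_join(tid, NULL);     /* wait for it to finish */
    printf("sum = %d\n", sum);
    return 0;
}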
Thread Libraries
• Thread library provides programmer with API for creating and managing threads
• Two primary ways of implementing
– Library entirely in user space
– Kernel-level library supported by the OS
Java Threads
OpenMP
• Set of compiler directives and an API for C, C++, FORTRAN
• Provides support for parallel programming in shared-memory environments
• Identifies parallel regions – blocks of code that can run in parallel
#pragma omp parallel
Create as many threads as there are cores

#pragma omp parallel for
for (i = 0; i < N; i++) {
    c[i] = a[i] + b[i];
}
Run the for loop in parallel
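Putting both directives into one self-contained program (compile with, e.g., gcc -fopenmp; the array size N is an arbitrary choice here):

#include <omp.h>
#include <stdio.h>
#define N 1000

int main(void) {
    int a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = i; }

    #pragma omp parallel           /* fork a team of threads, one per core */
    {
        printf("I am a parallel region.\n");   /* printed once per thread */
    }

    #pragma omp parallel for       /* split loop iterations across threads */
    for (int i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
    }

    printf("c[N-1] = %d\n", c[N - 1]);  /* 2 * (N - 1) */
    return 0;
}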
Grand Central Dispatch
• Apple technology for Mac OS X and iOS operating systems
• Extensions to C, C++ languages, API, and run-time library
• Allows identification of parallel sections
• Manages most of the details of threading
• Block is in “^{ }” – ^{ printf("I am a block"); }
• Blocks placed in dispatch queue
– Assigned to available thread in thread pool when removed from queue
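A small sketch of submitting a block to a dispatch queue (Apple platforms, clang with blocks support; the dispatch group is used only so main waits for the block):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    /* obtain a system-managed concurrent queue */
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_group_t group = dispatch_group_create();

    /* submit a block; GCD assigns it to a thread from its pool */
    dispatch_group_async(group, queue, ^{
        printf("I am a block\n");
    });

    /* wait for the block to finish before exiting */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}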
Signal Handling
● A signal may be handled by one of two signal handlers:
1. default
2. user-defined
● Every signal has a default handler that the kernel runs when handling the signal
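A brief POSIX sketch of installing a user-defined handler in place of the default one (handle_sigint is an illustrative name):

#include <signal.h>
#include <unistd.h>

/* user-defined handler replacing the default action for SIGINT */
void handle_sigint(int sig) {
    (void)sig;
    write(STDOUT_FILENO, "caught SIGINT\n", 14);  /* async-signal-safe */
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handle_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);    /* install the user-defined handler */

    pause();                         /* wait until a signal arrives */
    return 0;
}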
• Thread-local storage (TLS) allows each thread to have its own copy of data
• Useful when you do not have control over the thread creation process (i.e., when using a
thread pool)
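A quick sketch of TLS using the __thread storage class supported by GCC and Clang (C11 spells it _Thread_local):

#include <pthread.h>
#include <stdio.h>

__thread int counter = 0;        /* each thread gets its own copy */

void *worker(void *arg) {
    counter++;                   /* touches only this thread's copy */
    printf("thread %ld: counter = %d\n", (long)arg, counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main: counter = %d\n", counter);   /* still 0 here */
    return 0;
}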
• Scheduler activations provide upcalls - a communication mechanism from the kernel to the
upcall handler in the thread library
• This communication allows an application to maintain the correct number of kernel threads
Operating System Examples
• Windows Threads
• Linux Threads
• Windows implements the Windows API – primary API for Win 98, Win NT, Win 2000,
Win XP, and Win 7
– ETHREAD (executive thread block) – includes pointer to process to which thread belongs
and to KTHREAD, in kernel space
– TEB (thread environment block) – thread id, user-mode stack, thread-local storage, in user
space
Windows Threads Data Structures
Process Synchronization
• Background
• The Critical-Section Problem
• Peterson’s Solution
• Synchronization Hardware
• Mutex Locks
• Semaphores
• Classic Problems of Synchronization
• Monitors
• Synchronization Examples
• Alternative Approaches
Objectives
• Two approaches, depending on whether the kernel is preemptive or non-preemptive:
– Preemptive – allows preemption of a process when running in kernel mode
– Non-preemptive – runs until it exits kernel mode, blocks, or voluntarily yields CPU
• A non-preemptive kernel is essentially free of race conditions in kernel mode
Peterson’s Solution
• Two-process solution
• Assume that the load and store machine-language instructions are atomic; that is, they cannot be interrupted
• The two processes share two variables: int turn; boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical section
• The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;                        /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Peterson’s Solution (Cont.)
• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved: Pi enters CS only if either flag[j] = false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
compare_and_swap Instruction
1. Executed atomically
2. Returns the original value of the passed parameter “value”
3. Sets the variable “value” to the value of the passed parameter “new_value”, but only if “value” == “expected”. That is, the swap takes place only under this condition.
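Rendered as C, the definition reads as follows; note that real hardware executes this as one atomic instruction, so this function is only a behavioral sketch:

int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;            /* 2. remember the original value */
    if (*value == expected)       /* 3. swap only if value == expected */
        *value = new_value;
    return temp;                  /* return the original value */
}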
Solution using compare_and_swap
• Shared integer “lock” initialized to 0
• Solution:
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;                        /* do nothing */
    /* critical section */
    lock = 0;
    /* remainder section */
} while (true);
Bounded-waiting Mutual Exclusion with test_and_set
do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
    /* critical section */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i) lock = false;     /* no one waiting: release the lock */
    else waiting[j] = false;      /* pass entry directly to Pj */
    /* remainder section */
} while (true);
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
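For reference, the matching wait() operation in the same textbook pseudocode style (block() suspends the invoking process):

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}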
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an event that can be caused
by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
        P0                 P1
     wait(S);           wait(Q);
     wait(Q);           wait(S);
       ...                ...
     signal(S);         signal(Q);
     signal(Q);         signal(S);
• Starvation – indefinite blocking
– A process may never be removed from the semaphore queue in which it is suspended
• Priority Inversion – Scheduling problem when lower-priority process holds a lock needed
by higher-priority process
– Solved via priority-inheritance protocol
Classical Problems of Synchronization
Readers-Writers Problem Variations
• First variation – no reader kept waiting unless writer has permission to use shared object
• Second variation – once writer is ready, it performs the write ASAP
• Both may have starvation leading to even more variations
• Problem is solved on some systems by kernel providing reader-writer locks
Dining-Philosophers Problem
• Deadlock handling
– Allow at most 4 philosophers to be sitting simultaneously at the table.
– Allow a philosopher to pick up the chopsticks only if both are available (picking must be done in a critical section).
– Use an asymmetric solution – an odd-numbered philosopher picks up first the left chopstick and then the right chopstick; an even-numbered philosopher picks up first the right chopstick and then the left chopstick.
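A compact Pthreads sketch of the asymmetric solution, with one mutex per chopstick (five philosophers assumed, as in the classic statement):

#include <pthread.h>
#include <stdio.h>
#define N 5

pthread_mutex_t chopstick[N];

void *philosopher(void *arg) {
    long i = (long)arg;
    long left = i, right = (i + 1) % N;

    /* asymmetric pickup order breaks the circular wait */
    long first  = (i % 2 == 0) ? right : left;
    long second = (i % 2 == 0) ? left  : right;

    pthread_mutex_lock(&chopstick[first]);
    pthread_mutex_lock(&chopstick[second]);
    printf("philosopher %ld eats\n", i);
    pthread_mutex_unlock(&chopstick[second]);
    pthread_mutex_unlock(&chopstick[first]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (long i = 0; i < N; i++) pthread_mutex_init(&chopstick[i], NULL);
    for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (long i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}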
Problems with Semaphores
Condition Variables
• condition x, y;
• Two operations are allowed on a condition variable:
– x.wait() – a process that invokes the operation is suspended until x.signal()
– x.signal() – resumes one of the processes (if any) that invoked x.wait()
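A minimal Pthreads analogue of x.wait() / x.signal(), where the mutex plays the role of the monitor lock (the flag ready is an illustrative condition):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t x = PTHREAD_COND_INITIALIZER;
int ready = 0;

void *waiter(void *arg) {
    pthread_mutex_lock(&lock);
    while (!ready)                    /* guard against spurious wakeups */
        pthread_cond_wait(&x, &lock); /* x.wait(): suspend until signaled */
    printf("resumed\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&x);          /* x.signal(): resume one waiter */
    pthread_mutex_unlock(&lock);
    pthread_join(t, NULL);
    return 0;
}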
• Allocate a single resource among competing processes using priority numbers that
specify the maximum time a process plans to use the resource
R.acquire(t);
...
access the resource;
...
R.release();