Processes and Their Scheduling Multiprocessor Scheduling Threads Distributed Scheduling/migration
Computer Science
CS677: Distributed OS
Lecture 6, page 1
Processes: Review
Multiprogramming versus multiprocessing
Kernel data structure: process control block (PCB)
Each process has an address space
Contains code, global and local variables, etc.
Process Behavior
Processes: alternate between CPU and I/O
CPU bursts
Most bursts are short, a few are very long (high variance)
Modeled using hyperexponential behavior
If X is an exponential r.v. with rate λ:
  Pr[X <= x] = 1 - e^(-λx)
  E[X] = 1/λ
If X is a hyperexponential r.v. (rate λ1 with prob. p, rate λ2 with prob. 1-p):
  Pr[X <= x] = 1 - p e^(-λ1 x) - (1-p) e^(-λ2 x)
  E[X] = p/λ1 + (1-p)/λ2
Process Scheduling
Priority queues: multiple queues, each with a different priority
Use strict priority scheduling
Example: page swapper, kernel tasks, real-time tasks, user tasks
Threads
Each thread has its own stack, PC, registers
Share address space, open files, and other resources
Why Threads?
Single-threaded process: blocking system calls, no parallelism
Finite-state machine [event-based]: non-blocking calls, with parallelism
Multi-threaded process: blocking system calls with parallelism
Threads retain the idea of sequential processes with blocking system calls, and yet achieve parallelism
Software engineering perspective
Applications are easier to structure as a collection of threads
Each thread performs several [mostly independent] tasks
Thread Management
Creation and deletion of threads
Static versus dynamic
Critical sections
Synchronization primitives: blocking, spin-lock (busy-wait)
Condition variables
Ease of scheduling
Flexibility: many parallel programming models and schedulers
Process blocking a potential problem
User-level Threads
Threads managed by a threads library
Kernel is unaware of presence of threads
Advantages:
No kernel modifications needed to support threads
Efficient: creation/deletion/switches don't need system calls
Flexibility in scheduling: library can use different scheduling algorithms, can be application-dependent
Disadvantages
Need to avoid blocking system calls [all threads block]
Threads compete with one another
Does not take advantage of multiprocessors [no real parallelism]
User-level threads
Kernel-level threads
Kernel aware of the presence of threads
Better scheduling decisions, more expensive
Better for multiprocessors, more overheads for uniprocessors
Light-weight Processes
Several LWPs per heavy-weight process
User-level threads package
Create/destroy threads and synchronization primitives
LWP Example
Thread Packages
POSIX Threads (pthreads)
Java Threads
Native thread support built into the language
Threads are scheduled by the JVM