Threads
Review
• Process Model
– Pseudo-parallelism (Multi-programming, quantum or time slice)
– Context switch (user mode ↔ kernel mode; switch the CPU to another
process by saving/loading the PCB)
– Scheduling algorithm
• PCB
– Id, registers, scheduling information, memory management
information, accounting information, I/O status information, …
– State (New, Running, Ready, Blocked, Terminated)
• CPU Utilization
– Utilization = 1 − p^n, where p is the fraction of time a process waits for I/O
and n is the number of processes in memory (a short numeric sketch follows)
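As a quick illustration of that formula, a small C sketch (the 80% I/O-wait value is an assumption, not from the slides):

#include <stdio.h>
#include <math.h>

/* CPU utilization model from the review: utilization = 1 - p^n,
 * where p is the fraction of time a process waits for I/O and
 * n is the number of processes in memory. */
int main(void)
{
    double p = 0.8;                               /* assumed 80% I/O wait */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilization = %.2f\n", n, 1.0 - pow(p, n));
    return 0;
}

With p = 0.8 this prints roughly 0.20, 0.36, 0.49, … and approaches 1 as n grows, which is the usual argument for multiprogramming.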
Objectives…
• Threads
– Overview
– Models
– Benefits
– Implementing Threads in User Space
– Implementing Threads in the Kernel
– Hybrid Implementations
– Scheduler Activations
– Pop-Up threads
– Making Single Threaded Code Multithreaded
Threads
Context
• Each process has an address space
• The CPU is allocated to only one process at a time
• Context switching
• Problems
– First (a network service)
• We want to search for something on the Web using Google
• Our request is sent to a web server that is busy serving many clients concurrently
• With a single thread of control, the server can serve only one client at a time
– Second (a word processor)
• We use the word processor to type a document
• The word processor supports features such as automatically saving the entire file
every 5 minutes and updating the display while the user keeps reading and typing
on the keyboard
• So, while the automatic save is running, reading the keyboard and updating the
display cannot make progress (see the sketch below)
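A minimal sketch of how threads address the word-processor case (my own illustration with POSIX threads; the "editing" and "autosaving" messages merely stand in for real keyboard and save code): one thread keeps serving the user while a second thread does the periodic save in the same address space.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Background autosave thread: saves every 5 minutes without blocking
 * the thread that reads the keyboard and updates the display. */
static void *autosave(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(300);                          /* 5 minutes */
        printf("autosaving the document...\n");
    }
    return NULL;
}

int main(void)
{
    pthread_t saver;
    pthread_create(&saver, NULL, autosave, NULL);

    for (;;) {                               /* main thread keeps editing */
        printf("editing...\n");
        sleep(1);
    }
}

The same idea applies to the web-server problem: one dispatcher thread accepts requests while worker threads serve many clients concurrently.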
Threads
Overview
• It is desirable to have multiple threads of control in the
same address space running in quasi-parallel, as though
they were separate processes
Threads
Models
• Threads of one process (miniprocesses)
– Each thread describes a sequential flow of execution within the process
– Threads share the address space and resources of their process
– Each thread has its own program counter, registers, and execution stack
– There is no protection between threads of the same process
– Threads are lightweight processes (they carry some properties of processes)
– Multithreading: multiple threads running in the same process
• Having multiple threads running concurrently within a
process is analogous to having multiple processes running
in parallel in one computer (as illustrated in the sketch below)
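A small POSIX-threads sketch of that model (a generic illustration, not from the slides): the global variable is shared by every thread of the process, while each thread's local variable lives on its own private stack.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                     /* one copy, shared by all threads of the process */

static void *worker(void *arg)
{
    int id = *(int *)arg;           /* lives on this thread's private stack */
    shared++;                       /* deliberately unsynchronized: no protection
                                       between threads of the same process */
    printf("thread %d sees shared = %d\n", id, shared);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    int id[3] = {0, 1, 2};
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Compile with gcc -pthread; the unsynchronized increment is there only to illustrate the lack of protection between threads.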
Threads
Scheduler Activations – Example
• At time T3, the I/O completes. Again, the kernel must notify the user-level thread
system of the event, but this notification requires a processor. The kernel preempts
one of the processors running in the address space and uses it to do the upcall. (If
there are no processors assigned to the address space when the I/O completes, the
upcall must wait until the kernel allocates one.) This upcall notifies the user level of
two things: the I/O completion and the preemption. The upcall invokes code in the
user-level thread system that (1) puts the thread that had been blocked on the ready
list and (2) puts the thread that was preempted on the ready list. At this point,
scheduler activations A and B can be discarded.
Threads
Scheduler Activations – Example
• Finally, at time T4, the upcall takes a thread off the ready
list and starts running it (see the sketch below)
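A rough sketch of the user-level side of that upcall sequence. Scheduler activations are not a POSIX facility, so every type and function below (thread_t, ready_list_push, ready_list_pop, switch_to) is a hypothetical piece of the user-level thread library, shown only to make the two steps concrete.

typedef struct thread thread_t;

/* Assumed helpers of the user-level run-time system. */
extern void      ready_list_push(thread_t *t);
extern thread_t *ready_list_pop(void);
extern void      switch_to(thread_t *t);

/* Invoked by the kernel on a fresh scheduler activation (time T3). */
void upcall_io_done_and_preempted(thread_t *unblocked, thread_t *preempted)
{
    ready_list_push(unblocked);     /* (1) thread whose I/O just completed */
    ready_list_push(preempted);     /* (2) thread that lost its processor  */

    /* The old activations can now be discarded; at time T4 a thread is
     * taken off the ready list and resumed. */
    switch_to(ready_list_pop());
}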
Threads
Pop-Up Threads
• Problem
– The sender sends a message in response to the receiver's request
– While waiting for the incoming message, the receiver's process or thread is
blocked until the message arrives and can be processed
→ time is wasted unblocking the thread and reloading its saved state, then
unpacking the message, parsing its contents, and processing it
• Solution: use pop-up threads
– The system handles the incoming message by creating a brand-new thread
– This thread is identical to all the others, but it has no history
(registers, stack, …) that must be restored
– It can be implemented in kernel mode or user mode
• Advantages (Tanenbaum, Fig. 2-18)
– Created quickly (there is no saved thread state that must be restored)
– The latency between message arrival and the start of processing can be made
very short (see the sketch below)
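A minimal pop-up-thread sketch with POSIX threads (my own illustration; the message struct and receive_message are hypothetical): the dispatcher creates one brand-new, detached thread per incoming message, so there is no saved state to restore before processing starts.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct message { char payload[256]; };          /* hypothetical message type */

static void *handle_message(void *arg)
{
    struct message *msg = arg;                  /* brand-new thread: no history
                                                   (registers, stack) to restore */
    printf("processing: %s\n", msg->payload);
    free(msg);
    return NULL;
}

/* Dispatcher: pop up a new thread for every arriving message. */
void dispatch_loop(struct message *(*receive_message)(void))
{
    for (;;) {
        struct message *msg = receive_message();    /* assumed blocking receive */
        pthread_t t;
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&t, &attr, handle_message, msg);
        pthread_attr_destroy(&attr);
    }
}

The detached attribute means no one has to join the handler thread; it simply disappears when the message has been processed.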
Summary
• Threads
Q&A
Next Lecture
• InterProcess Communication