
Threads

This document discusses processes, threads, and how they are implemented in operating systems. It covers:
• Processes rely on context switching for pseudo-parallelism, while threads within a process share its resources and address space.
• Threads improve responsiveness and CPU utilization by overlapping I/O with computation, and carry lower overhead than processes.
• Threads can be implemented in user space with a runtime library, or in the kernel, where the kernel schedules all threads; hybrid models combine the two approaches.
• Scheduler activations let user-space threads block and reschedule without kernel transitions, improving the performance of user-space thread implementations.


Processes & Threads

Threads
Review
• Process Model
– Pseudo-parallelism (Multi-programming, quantum or time slice)
– Context switch (user mode ↔ kernel mode; the CPU is switched to another
process by saving/restoring the PCB)
– Scheduling algorithm
• PCB
– Id, registers, scheduling information, memory management
information, accounting information, I/O status information, …
– State (New, Running, Ready, Blocked, Terminated)
• CPU Utilization
– With n processes in memory, each waiting for I/O a fraction p of the
time, CPU utilization = 1 – p^n
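The utilization formula can be checked with a few lines of code (a sketch; `cpu_utilization` is an illustrative name, not from the slides):

```python
def cpu_utilization(p, n):
    """Fraction of time the CPU is busy: the probability that not all n
    processes are waiting for I/O at once, assuming independent waits."""
    return 1 - p ** n

# With p = 0.8 (80% I/O wait), utilization climbs as multiprogramming deepens.
for n in (1, 2, 4, 8):
    print(n, round(cpu_utilization(0.8, n), 3))
```

With p = 0.8, a single process keeps the CPU busy only 20% of the time, but eight processes push utilization above 80%.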
Objectives…
• Threads
– Overview
– Models
– Benefits
– Implementing threads in User Space
– Implementing threads in the Kernel
– Hybrid Implementations
– Scheduler Activations
– Pop-Up threads
– Making Single Threaded Code Multithreaded
Threads
Context
• Each process has an address space
• The CPU is allocated to only one process at a time
• Context switching
• Problems
– First (a network service)
• We want to search for something using the Google web site
• Our request is sent to a web server that is serving many clients concurrently
• A single-threaded server can serve only one client at a time
– Second (a word processor)
• We use the word processor to type a document
• The word processor automatically saves the entire file every 5 minutes
and displays graphics while the user reads and types
• While the automatic save is running, a single-threaded program cannot
respond to keystrokes or update the display
Threads
Overview
• It is desirable to have multiple threads of control in the
same address space running in quasi-parallel, as though
they were separate processes
Threads
Models
• Threads of one process (miniprocesses)
– Each describes a sequential flow of execution within a process
– Threads share the address space and resources of their process
– Each thread has its own program counter, registers, and execution stack
– There is no protection between threads of the same process
– Also called lightweight processes (they carry some properties of processes)
– Multithreading: multiple threads in the same process
• Having multiple threads running concurrently within a
process is analogous to having multiple processes running
in parallel in one computer
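The shared address space can be seen directly (a minimal sketch using Python's standard `threading` module; the names are illustrative):

```python
import threading

shared = {"value": 0}            # globals/heap live in the one shared address space

def worker():
    # This thread has its own stack and program counter, but it writes
    # into the same address space the main thread reads from.
    shared["value"] = 42

t = threading.Thread(target=worker)
t.start()
t.join()
print(shared["value"])           # → 42: the main thread sees the worker's write
```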
Threads
Model (cont)

[Figure: Tanenbaum, Fig. 2-11 and 2-13 – three processes, each with one thread (multiprogramming), versus one process with three threads (multithreading)]
Threads
Model – Example
Threads
Model – Example
Threads
Benefits
• Responsiveness and better resource sharing
– A program may continue running even if part of it is blocked.
– The application’s performance may improve since we can overlap I/O and
CPU computation.
• Economy
– Allocating memory and resources for process creation is costly
– Thread creation can be up to 100 times faster than process creation
• Useful on systems with multiple CPUs.
• Less time to terminate a thread than a process
• Less time to switch between two threads within the same process
(useful when serving many tasks with the same purpose)
• Since threads within the same process share memory and files, they
can communicate with each other without invoking the kernel
• But, they introduce a number of complications:
– E.g., since they share data, one thread may read and another may write the same
location – care is needed!
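The caution above can be made concrete (a hedged sketch with Python's `threading`; remove the lock and the read-modify-write races can lose updates):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # protects the read-modify-write of `counter`;
            counter += 1         # without it, concurrent updates can be lost

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # → 200000 with the lock held around each update
```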
Threads
Multithreading
• Operating system supports multiple threads of
execution within a single process
• MS-DOS supports a single process with a single thread
• Traditional UNIX supports multiple user processes but only one
thread per process
• Windows 2000, Solaris, Linux, Mach, and OS/2 support multiple
threads per process
• Multithreading is effective on multiprocessors because threads can
execute concurrently on different CPUs
• …
Threads
Implementing Threads in User Space
• The kernel knows nothing about threads
– This approach suits an OS that does not support threads natively
– Threads are implemented by a user-level library (with its own code and data structures)
• The threads run on top of a runtime system (which is a collection of
procedures that manage threads)
• Each process has its own thread table
• Advantages
– Thread switching and scheduling are fast because they happen in user
mode, with no trap to the kernel
– Each process can have its own customized scheduling algorithm
– Scales better (thread table and stack space can be sized flexibly)
• Disadvantages
– Blocking system calls are a problem: when one thread blocks in the
kernel, the kernel blocks the entire process, not just that thread
– Threads must voluntarily give up the CPU; the kernel cannot preempt an
individual thread, so a thread that never yields starves the rest of its process
– Yet developers want threads precisely in applications where threads
block often (i.e., make system calls constantly)
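The user-space model above can be sketched with generators standing in for threads (purely illustrative; `make_thread` and the scheduler loop are invented for this example, not a real runtime system):

```python
from collections import deque

trace = []

def make_thread(name, steps):
    # A "thread" is just a generator; each `yield` is the thread voluntarily
    # giving up the CPU, exactly the cooperation user-level threads require.
    def body():
        for i in range(steps):
            trace.append(f"{name}{i}")
            yield
    return body()

# The per-process thread table / ready queue lives entirely in user space;
# the kernel never sees these switches.
ready = deque([make_thread("A", 2), make_thread("B", 2)])
while ready:
    t = ready.popleft()
    try:
        next(t)                  # dispatch: run until the next voluntary yield
        ready.append(t)          # still runnable: back of the ready queue
    except StopIteration:
        pass                     # thread finished; remove it from the table

print(trace)                     # round-robin interleaving: A0 B0 A1 B1
```

Note what the slide warns about: if a generator never yielded, the loop above could never run any other "thread".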
Threads
Implementing Threads in User Space (cont)

[Figure: Tanenbaum, Fig. 2-16 – implementing threads in user space]
Threads
Implementing Threads in the Kernel
• The kernel knows about the threads and manages them
(no run-time system is needed)
• The kernel schedules all the threads
• The kernel keeps a single thread table (threads are created and
destroyed via kernel calls)
• Advantages
– The kernel can switch between threads belonging to different
processes
– No problem with blocking system calls
– Useful if multiprocessor support is available (multiple CPUs)
• Disadvantages
– Greater cost: creating and terminating threads takes kernel time and
resources → solution: recycle threads instead of destroying them
– Thread creation and switching are slower (each requires a system call)
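Since CPython's `threading` module maps each thread to a kernel thread, it can illustrate the advantage above: a blocking call stalls only its own thread (a sketch with illustrative names):

```python
import threading
import time

done = []

def blocker():
    time.sleep(0.2)              # a blocking call: only this thread waits
    done.append("blocker")

def worker():
    done.append("worker")        # runs while its sibling is blocked

b = threading.Thread(target=blocker)
w = threading.Thread(target=worker)
b.start(); w.start()
b.join(); w.join()
print(done)                      # worker typically finishes first, despite the block
```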
Threads
Implementing Threads in the Kernel (cont)

[Figure: Tanenbaum, Fig. 2-16 – implementing threads in the kernel]
Threads
Libraries
• There are three primary thread libraries
• POSIX Pthreads.
– May be provided as either a user- or kernel-level
library.
• Win32 threads.
– Kernel-level library, available on Windows systems.
• Java threads.
– The JVM runs on top of a host operating system, so the
implementation depends on the host system.
• On Windows systems, Java threads are
implemented using the Win32 API;
• UNIX-based systems often use Pthreads.
Threads
Hybrid Implementations
• Combine the advantages of user-level threads with
kernel-level threads
– Using kernel-level threads and then multiplex user-level
threads onto some or all of the kernel threads (ultimate in
flexibility)
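The multiplexing idea can be sketched as a pool: a few kernel threads service many user-level units of work (an illustrative analogy only; real M:N runtimes schedule thread continuations, not plain callables, and the names here are invented):

```python
import threading
import queue

tasks = queue.Queue()            # user-level work waiting for a kernel thread
results = []
lock = threading.Lock()

def kernel_thread():
    # One of the N kernel threads onto which user-level work is multiplexed.
    while True:
        task = tasks.get()
        if task is None:         # shutdown sentinel
            break
        with lock:
            results.append(task())   # run one user-level unit of work

# Eight "user-level threads" (here: plain callables) share 2 kernel threads.
for i in range(8):
    tasks.put(lambda i=i: i * i)
pool = [threading.Thread(target=kernel_thread) for _ in range(2)]
for t in pool:
    t.start()
for _ in pool:
    tasks.put(None)
for t in pool:
    t.join()
print(sorted(results))           # → [0, 1, 4, 9, 16, 25, 36, 49]
```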

[Figure: Tanenbaum, Fig. 2-17 – multiplexing user-level threads onto kernel-level threads]
Threads
Scheduler Activations
• Context
– Kernel threads are more capable than user-level threads, but they are slower
– When a thread blocks, other threads in the same process should still be able to run
– The goal is to avoid unnecessary transitions between user mode and kernel mode
– The user-mode runtime can block a thread and schedule a new one by itself
→ mimic the functionality of kernel threads with a threads package
implemented in user space (scheduler activations)
• Upcall
– A notification from the kernel, carrying information such as the thread's
ID and a description of the event, used to activate the run-time system
• Scheduler activation mechanism
– When a thread blocks, the kernel makes an upcall to the process's
run-time system (in user mode) to report the event
– The user-mode runtime can then reschedule its threads by:
• Marking the current thread as blocked
• Taking another thread from the ready list, loading it, and restarting it
– Later, when the blocked thread becomes ready to run again, the kernel
makes another upcall
– The run-time system can either restart the blocked thread immediately
or put it on the ready list to be run later
Threads
Scheduler Activations – Example

[Figure: T. E. Anderson et al., Fig. 1 – Example I/O]
• At time T1, the kernel allocates the application two processors. On each
processor, the kernel upcalls to user-level code that takes a thread from
the ready list and starts running it.
Threads
Scheduler Activations – Example

[Figure: T. E. Anderson et al., Fig. 1 – Example I/O]
• At time T2, one of the user-level threads (thread 1) blocks in the kernel.
To notify the user level of this event, the kernel takes the processor that
had been running thread 1 and performs an upcall in the context of a fresh
scheduler activation. The user-level thread scheduler can then use the
processor to take another thread off the ready list and start running it.
Threads
Scheduler Activations – Example

[Figure: T. E. Anderson et al., Fig. 1 – Example I/O]
• At time T3, the I/O completes. Again, the kernel must notify the user-level
thread system of the event, but this notification requires a processor. The
kernel preempts one of the processors running in the address space and uses
it to do the upcall. (If no processors are assigned to the address space when
the I/O completes, the upcall must wait until the kernel allocates one.) This
upcall notifies the user level of two things: the I/O completion and the
preemption. The upcall invokes code in the user-level thread system that
(1) puts the thread that had been blocked on the ready list and (2) puts the
thread that was preempted on the ready list. At this point, scheduler
activations A and B can be discarded.
Threads
Scheduler Activations – Example

[Figure: T. E. Anderson et al., Fig. 1 – Example I/O]
• Finally, at time T4, the upcall takes a thread off the ready
list and starts running it.
Threads
Pop-Up Threads
• Problem
– A sender sends a message in response to the receiver's request
– While the receiver waits for the incoming message, its process or
thread is blocked until the message arrives
→ time is wasted unblocking the thread and reloading its state, then
unpacking the message, parsing its content, and processing it
• Solution: pop-up threads
– The system handles each incoming message by creating a brand-new thread
– The thread is identical to the others, but it has no history
(registers, stack, …) that must be restored
– Pop-up threads can be implemented in kernel or user mode
• Advantages (Tanenbaum, Fig. 2-18)
– Created quickly (there is no stored thread state to restore)
– The latency between message arrival and the start of processing can be
made very short
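The pop-up pattern above can be sketched as a dispatcher that spawns a fresh thread per message (all names here are invented for illustration):

```python
import threading
import queue

inbox = queue.Queue()
handled = []

def handle(msg):
    # Body of a pop-up thread: it starts brand new, with no saved
    # registers or stack to restore, and processes exactly one message.
    handled.append(f"processed {msg}")

def dispatcher():
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel for the demo
            break
        t = threading.Thread(target=handle, args=(msg,))  # pop-up thread
        t.start()
        t.join()                 # joined only to keep this demo deterministic

inbox.put("req-1")
inbox.put("req-2")
inbox.put(None)
dispatcher()
print(handled)                   # → ['processed req-1', 'processed req-2']
```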
Summary
• Threads

Q&A
Next Lecture
• InterProcess Communication
