Operating System (Lab) : Project Report: Threads
Date: _____________
Operating system
(Lab)
PROJECT REPORT : THREADS
Submitted To:
Mr. Muhammad Naveed
Student Name:
Kamran Amin
Mashal ud din
Ilyas Darwesh
Muhammad Ali
Reg. Number:
1880199
1880200
1880159
1880180
In this project we present the topic of Threads in operating systems, which we consider one of the most important topics in OS. We begin with the types of threads.
Types of Thread
There are two types of threads: user-level threads and kernel-level threads.
User-Level Threads
User-level threads are managed entirely by a thread library in user space, without direct kernel support.
Advantages of User-Level Threads
Some of the advantages of user-level threads are as follows −
User-level threads are easier and faster to create than kernel-level threads. They can also be more easily managed.
User-level threads can be run on any operating system.
No kernel-mode privileges are required for thread switching in user-level threads.
Disadvantages of User-Level Threads
Some of the disadvantages of user-level threads are as follows −
If one user-level thread performs a blocking operation, the entire process is blocked.
User-level threads cannot take advantage of multiprocessing, since the kernel schedules the process as a single unit.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly and the thread management
is done by the kernel. The context information for the process as well as the process threads
is all managed by the kernel. Because of this, kernel-level threads are slower than user-level
threads.
Advantages of Kernel-Level Threads
Some of the advantages of kernel-level threads are as follows −
If one kernel-level thread performs a blocking operation, another thread of the same process can continue execution.
Kernel-level threads of the same process can be scheduled on multiple processors.
Disadvantages of Kernel-Level Threads
Some of the disadvantages of kernel-level threads are as follows −
A mode switch to kernel mode is required to transfer control from one thread to another in a process.
Kernel-level threads are slower to create and manage than user-level threads.
User-Level Threads | Kernel-Level Threads
User threads are implemented by users. | Kernel threads are implemented by the OS.
The OS does not recognize user-level threads. | Kernel threads are recognized by the OS.
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution.
User-level threads are designed as dependent threads. | Kernel-level threads are designed as independent threads.
There exists a strong relationship between user-level threads and kernel-level threads.
Dependencies between ULT and KLT :
1. Synchronization :
The subtasks (functions) within each task (process) can be executed concurrently or in parallel, depending on the application. In that case, a single-threaded process is not suitable, and a multithreaded process is needed. A unique subtask is allocated to every thread within the process. These threads may use the same data section or different data sections. Typically, threads within the same process share the code section, data section, address space, open files, etc.
When subtasks are performed concurrently while sharing the code section, the result may be data inconsistency. Suitable synchronization techniques are therefore required to control access to the shared data (the critical section).
In a multithreaded process, synchronization is adopted using four different models :
1. Mutex Locks – Allow only one thread at a time to access the shared resource.
2. Read/Write Locks – Allow exclusive writes and concurrent reads of a shared resource.
3. Counting Semaphores – The count refers to the number of instances of a shared resource that can be accessed simultaneously. Once the count limit is reached, the remaining threads are blocked.
4. Condition Variables – Block the thread until the condition is satisfied, without busy waiting.
All these synchronization models are carried out within each process using the thread library. The memory space for the lock variables is allocated in the user address space, so no kernel intervention is required.
2. Scheduling :
During thread creation, the application developer sets the priority and scheduling policy of each ULT using the thread library. When the program executes, the thread library schedules the threads based on the defined attributes. In this case, the system scheduler has no control over thread scheduling, as the kernel is unaware of the ULTs.
3. Context Switching :
Switching from one ULT to another within the same process is fast: each thread has its own thread control block, registers, and stack, so only the registers need to be saved and restored, and no change of address space is required. The entire switch takes place within the user address space under the control of the thread library.
For example, consider a program that copies (reads) the content of one file and pastes (writes) it into another file, with an additional pop-up that displays the percentage of progress completed. This process contains three subtasks, each allocated to a ULT:
Thread A – Read the content from source file. Store in a global variable X within the
process address space.
Thread B – Read the global variable X. Write in the destination file.
Thread C – Display the percentage of progress done in a graphical representation.
Here, the application developer schedules the multiple flows of control within the program using the thread library.
Order of execution: begins with Thread A, then Thread B, and then Thread C.
Thread A and Thread B share the global variable X. Thread B can read X only after Thread A has written to it, so synchronization must be adopted on the shared variable to prevent Thread B from reading stale data. Context switching from Thread A to Thread B and then to Thread C takes place within the process address space. Each thread saves and restores its registers in its own thread control block (TCB). Thread C remains in the blocked state until Thread B starts its first write operation on the destination file. This is why the graphical indication of 100% pops up a few seconds after process completion.
4. Dependency between ULT and KLT :
The one and only major dependency between KLT and ULT arises when a ULT needs kernel resources. Every ULT is associated with a virtual processor called a light-weight process (LWP), which is created and bound to the ULT by the thread library according to the application's needs. Whenever a system call is invoked, a kernel-level thread is created and scheduled onto the LWP by the system scheduler. These KLTs are scheduled to access kernel resources by the system scheduler, which is unaware of the ULTs, whereas each KLT is aware of the ULTs associated with it via the LWPs.
Multithreading Models
The user threads must be mapped to kernel threads, by one of the following strategies:
In the many to one model, many user-level threads are all mapped onto a single
kernel thread.
Thread management is handled by the thread library in user space, which is efficient.
The one to one model creates a separate kernel thread to handle each and every user
thread.
Most implementations of this model place a limit on how many threads can be
created.
Linux and Windows from 95 to XP implement the one-to-one model for threads.
3. Java threads: Since Java generally runs on a Java Virtual Machine, the implementation of threads is based upon whatever operating system and hardware the JVM is running on; the JVM maps Java threads onto the threads of the underlying OS.
Benefits of Multithreading
1. Responsiveness
2. Resource sharing, hence allowing better utilization of resources.
3. Economy. Creating and managing threads is more economical than creating and managing processes.
4. Scalability. One thread runs on one CPU. In Multithreaded processes, threads can be
distributed over a series of processors to scale.
5. Context Switching is smooth. Context switching refers to the procedure followed by
CPU to change from one task to another.
Multithreading Issues
Below we mention a few issues related to multithreading. As the old saying goes, all good things come at a price.
Thread Cancellation
Thread cancellation means terminating a thread before it has finished its work. There are two approaches: asynchronous cancellation, which terminates the target thread immediately, and deferred cancellation, which allows the target thread to periodically check whether it should be cancelled.
Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred. When a multithreaded process receives a signal, to which thread should it be delivered? It can be delivered to all threads, or to a single thread.
fork() System Call
fork() is a system call executed in the kernel through which a process creates a copy of itself. The problem in a multithreaded process is: if one thread calls fork(), should the entire process be copied, or only the calling thread?
Security Issues
Yes, there can be security issues because of extensive sharing of resources between multiple
threads.
There are many other issues you might face in a multithreaded process, but appropriate solutions are available for them. Pointing out some issues here was just to study both sides of the coin.
PROCESS | THREAD
A process takes more time to terminate. | A thread takes less time to terminate.
It takes more time for creation. | It takes less time for creation.
It also takes more time for context switching. | It takes less time for context switching.
A process is called a heavy-weight process. | A thread is called a light-weight process.
Process switching uses an interface in the operating system. | Thread switching does not require calling the operating system or interrupting the kernel.
If one server process is blocked, no other server process can execute until the first process is unblocked. | While one server thread is blocked, a second thread in the same task can run.
A process has its own Process Control Block, stack, and address space. | A thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.
Applications –
Threading is used widely in almost every field. It is most widely seen over the internet nowadays, in transaction processing of every type: recharges, online transfers, banking, and so on. Threading divides the code into small parts that are very lightweight and place less burden on CPU and memory, so that the work can be carried out easily and the goal achieved in the desired field. The concept of threading arose in response to fast and regular changes in technology. As the saying goes, "necessity is the mother of invention", and following this approach the concept of the thread was developed to enhance the capability of programming.
Linux has a unique implementation of threads. To the Linux kernel, there is no concept of a
thread. Linux implements all threads as standard processes. The Linux kernel does not
provide any special scheduling semantics or data structures to represent threads. Instead, a
thread is merely a process that shares certain resources with other processes. Each thread has
a unique task_struct and appears to the kernel as a normal process (which just happens to
share resources, such as an address space, with other processes).
This approach to threads contrasts greatly with operating systems such as Microsoft
Windows or Sun Solaris, which have explicit kernel support for threads (and sometimes call
threads lightweight processes). The name "lightweight process" sums up the difference in
philosophies between Linux and other systems. To these other operating systems, threads are
an abstraction to provide a lighter, quicker execution unit than the heavy process. To Linux,
threads are simply a manner of sharing resources between processes (which are already quite
lightweight). For example, assume you have a process that consists of four threads. On
systems with explicit thread support, there might exist one process descriptor that in turn
points to the four different threads. The process descriptor describes the shared resources,
such as an address space or open files. The threads then describe the resources they alone
possess. Conversely, in Linux, there are simply four processes and thus four
normal task_struct structures. The four processes are set up to share certain resources.
Threads are created like normal tasks, with the exception that the clone() system call is
passed flags corresponding to specific resources to be shared:
clone(CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0);
The previous code results in behavior identical to a normal fork(), except that the address
space, filesystem resources, file descriptors, and signal handlers are shared. In other words,
the new task and its parent are what are popularly called threads.
In contrast, a normal fork() can be implemented as
clone(SIGCHLD, 0);
And vfork() is implemented as
clone(CLONE_VFORK | CLONE_VM | SIGCHLD, 0);
The flags provided to clone() help specify the behavior of the new process and detail what
resources the parent and child will share. Table 3.1 lists the clone flags, which are defined
in <linux/sched.h>, and their effect.
Flag | Meaning
CLONE_SIGHAND | Parent and child share signal handlers and blocked signals.
CLONE_VFORK | vfork() was used and the parent will sleep until the child wakes it.
Kernel Threads
It is often useful for the kernel to perform some operations in the background. The kernel
accomplishes this via kernel threads—standard processes that exist solely in kernel-space.
The significant difference between kernel threads and normal processes is that kernel threads
do not have an address space (in fact, their mm pointer is NULL). They operate only in
kernel-space and do not context switch into user-space. Kernel threads are, however,
schedulable and preemptable as normal processes.
Linux delegates several tasks to kernel threads, most notably the pdflush task and
the ksoftirqd task. These threads are created on system boot by other kernel threads. Indeed, a
kernel thread can be created only by another kernel thread. The interface for spawning a new
kernel thread from an existing one is
int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
The new task is created via the usual clone() system call with the specified flags argument.
On return, the parent kernel thread exits with a pointer to the child's task_struct. The child
executes the function specified by fn with the given argument arg. A special clone
flag, CLONE_KERNEL, specifies the usual flags for kernel
threads: CLONE_FS, CLONE_FILES, and CLONE_SIGHAND. Most kernel threads pass
this for their flags parameter.
Typically, a kernel thread continues executing its initial function forever (or at least until the
system reboots, but with Linux you never know). The initial function usually implements a
loop in which the kernel thread wakes up as needed, performs its duties, and then returns to
sleep.