OS Chapter 4 Final
THREADS
A thread is a path of execution within a process.
Note: if one thread makes a change in a shared segment (such as the code or data
segment), all the other threads of the process can see that change.
Multithreaded Programming
Each thread belongs to exactly one process; no thread exists outside a process, and
each thread represents a separate flow of execution.
A thread shares some information with its peer threads, such as the code segment, the
data segment, and open files. When one thread alters a memory item in a shared
segment, all the other threads see that change.
If a process has multiple threads of control, it can perform more than one task at a
time.
Threads have been used successfully in implementing network servers and web servers.
They also provide a suitable foundation for parallel execution of applications on shared
memory multiprocessors. The following figure shows the working of a single-threaded and
a multithreaded process.
Difference between Process and Thread
5. Process: Blocking one process does not block another; processes are independent.
   Thread: Blocking a user-level thread blocks the entire process, because threads are
   interdependent.
6. Process: Multiple processes without threads use more resources.
   Thread: Multithreaded processes use fewer resources.
7. Process: Each process operates independently of the others.
   Thread: One thread can read, write, or change another thread's data.
In a non-multithreaded environment, a server listens on a port for a request; when a
request arrives, it processes that request and only then resumes listening for the next
one. The time spent processing a request makes other users wait unnecessarily. A better
approach is to pass each request to a worker thread and continue listening on the port.
For example, a multithreaded web browser allows user interaction in one thread while a
video is being loaded in another thread. So instead of waiting for the whole web page to
load, the user can continue viewing part of the page.
Benefits of Multithreading in Operating System
Resource Sharing –
Processes may share resources only through techniques such as:
Message Passing
Shared Memory
Such techniques must be organized explicitly by the programmer. However, threads
share the memory and the resources of the process to which they belong by default.
The benefit of sharing code and data is that it allows an application to have several
threads of activity within the same address space.
Economy –
Allocating memory and resources for process creation is costly in terms of time and
space. Since threads share the memory of the process to which they belong, it is more
economical to create and context-switch threads. In general, much more time is
consumed in creating and managing processes than threads.
In Solaris, for example, creating a process is about 30 times slower than creating a
thread, and context switching is about 5 times slower.
Scalability –
A single-threaded process can run on only one processor, regardless of how many
processors are available. In a multithreaded process, the threads can be distributed
across multiple cores, so several tasks execute in parallel; systems built this way
through multicore programming exhibit parallel execution.
Concurrency
Concurrency means that an application is making progress on more than one
task at the same time (concurrently). Well, if the computer only has one CPU the
application may not make progress on more than one task at exactly the same
time, but more than one task is being processed at a time inside the
application. It does not completely finish one task before it begins the next.
Instead, the CPU switches between the different tasks until the tasks are complete.
Parallelism means that an application splits its tasks up into smaller subtasks which can be
processed in parallel, for instance on multiple CPUs at the exact same time.
Data Parallelism
Data parallelism means distributing subsets of the same data across multiple computing
cores and performing the same operation on each core.
Let's take an example: summing the contents of an array of size N. On a single-core
system, one thread would simply sum the elements [0] . . . [N − 1]. On a dual-core system,
however, thread A, running on core 0, could sum the elements [0] . . . [N/2 − 1], while
thread B, running on core 1, could sum the elements [N/2] . . . [N − 1]. The two threads
would run in parallel on separate computing cores.
Task Parallelism
Task parallelism means distributing different tasks across multiple computing cores and
executing them concurrently.
Consider again the example above: an instance of task parallelism would involve two
threads, each performing a unique statistical operation on the array of elements. The
threads again operate in parallel on separate computing cores, but each performs a
different operation.
Programming Challenges
For application programmers, the challenge is to modify existing programs as well as
design new programs to be multithreaded. In general, there are five areas of challenge
in programming for multicore systems −
Identifying tasks − This involves examining applications to find areas that can be
divided into separate, concurrent tasks. Ideally, tasks are independent of one another
and can therefore run in parallel on individual cores.
Balance − While identifying tasks that can run in parallel, programmers must also
ensure that the tasks perform equal work of equal value. In some instances, a certain
task may not contribute as much value to the overall process as other tasks do; using a
separate execution core to run that task may not be worth the cost.
Data splitting − Just as applications are divided into separate tasks, the data accessed
and manipulated by the tasks must be divided to run on separate cores.
Data dependency − The data accessed by the tasks must be examined for
dependencies between two or more tasks. When one task depends on data from
another, programmers must ensure that the execution of the tasks is synchronized to
accommodate the data dependency.
Testing and debugging − When a program runs in parallel on multiple cores, many
different execution paths are possible, so testing and debugging concurrent programs
is inherently more difficult than testing and debugging single-threaded applications.
Kernel-Level Threads
Kernel-level threads are managed directly by the operating system: the kernel performs
thread creation, scheduling, and management, and it maintains the context information
for the process as well as for its threads. Because this management involves the kernel,
kernel-level threads are generally slower to create and manage than user-level threads.
Difference between User-Level & Kernel-Level Thread
Some operating systems provide a combined user-level thread and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors,
and a blocking system call need not block the entire process. There are three
multithreading models.
Many to Many Model
The many-to-many model multiplexes many user-level threads onto a smaller or equal
number of kernel-level threads. The following diagram shows the many-to-many
threading model, where 6 user-level threads are multiplexed onto 6 kernel-level threads.
In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor machine. This
model provides the best level of concurrency: when a thread performs a blocking
system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space by the thread library.
When a thread makes a blocking system call, the entire process is blocked. Only one
thread can access the kernel at a time, so multiple threads are unable to run in parallel
on multiprocessors.
If the operating system does not support kernel threads, user-level thread libraries
implemented on that system use the many-to-one relationship model.
One to One Model
In the one-to-one model there is a one-to-one relationship between each user-level
thread and a kernel-level thread. This model provides more concurrency than the
many-to-one model. It also allows another thread to run when a thread makes a
blocking system call, and it supports multiple threads executing in parallel on
multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the
one-to-one relationship model.
#include<iostream>
#include<pthread.h>
#include<unistd.h>
using namespace std;

void* routine(void *args)
{
    cout<<"Hello from thread\n";
    cout<<"Process Id: "<<getpid()<<"\n";
    return NULL;
}

int main()
{
    int r1,r2;
    pthread_t t1,t2;
    cout<<"Process id: "<<getpid();
    cout<<"\nBefore creation of thread\n";
    r1=pthread_create(&t1,NULL,routine,NULL);
    r2=pthread_create(&t2,NULL,routine,NULL);
    if (r1==0)
        cout<<"Thread1 created successfully\n";
    else
        cout<<"Problem in creating thread1\n";
    if (r2==0)
        cout<<"Thread2 created successfully\n";
    else
        cout<<"Problem in creating thread2\n";
    pthread_join(t1,NULL);
    pthread_join(t2,NULL);
    cout<<"Threads terminated\n";
    return 0;
}
#include<iostream>
#include<pthread.h>
#include<unistd.h>
using namespace std;

void* routine(void *args)
{
    cout<<"Hello from thread\n";
    return NULL;
}

int main()
{
    int r;
    pthread_t t;
    cout<<"Process id: "<<getpid();
    cout<<"\nBefore creation of thread\n";
    r=pthread_create(&t,NULL,routine,NULL);
    if (r==0)
        cout<<"Thread created successfully\n";
    else
        cout<<"Problem in creating thread\n";
    pthread_join(t,NULL);
    cout<<"Thread terminated\n";
    return 0;
}
#include<iostream>
#include<pthread.h>
using namespace std;

int n;

void* fibonacci(void* arg)
{
    int c, first = 0, second = 1, next;
    for ( c = 0 ; c < n; c++ )
    {
        if ( c <= 1 )
            next = c;
        else
        {
            next = first + second;
            first = second;
            second = next;
        }
        cout << next << endl;
    }
    return NULL;
}

int main()
{
    pthread_t t;
    cout << "Enter the number of terms of Fibonacci series you want" << endl;
    cin >> n;
    cout << "First " << n << " terms of Fibonacci series are :- " << endl;
    pthread_create(&t, NULL, fibonacci, NULL);
    pthread_join(t, NULL);
    return 0;
}
1. What is a Thread?
A thread is a single sequence stream within a process. Because threads have some of
the properties of processes, they are sometimes called lightweight processes.
Threads are a popular way to improve an application through parallelism. For
example, in a browser, multiple tabs can be different threads; MS Word uses multiple
threads, one thread to format the text, another thread to process inputs, and so on.
Threads are not fully independent of one another: they share with other threads their
code section, data section, and OS resources such as open files and signals.
Multithreading makes the system more responsive and enables resource sharing
within a process. It is more economical than a multiprocess architecture and is
therefore often preferred.