Thread
A thread is a single sequential flow of
activities executed within a process; it is also
known as the thread of execution or the thread of
control. Thread execution is possible within
any OS process, and a process can
have several threads. Each thread of the same
process has its own program counter, its own stack of
activation records, and its own control block.
A thread is frequently described as a lightweight
process.
A process can easily be broken down into
numerous different threads. Multiple tabs in a
browser, for example, can be considered threads.
MS Word employs many threads: one thread to format
the text, another to receive input, and so on.
Why Do We Need Thread?
● Creating a new thread in a current process requires significantly less time than creating a new process.
● Threads of the same process share common data directly, without needing inter-process communication.
● When working with threads, context switching is faster.
● Terminating a thread requires less time than terminating a process.
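The data-sharing point above can be illustrated with a minimal Python sketch (the four-way chunk split is an arbitrary choice, purely for illustration):

```python
import threading

total = 0
total_lock = threading.Lock()

def add_chunk(numbers):
    """Sum one chunk, then fold it into the shared total."""
    global total
    chunk_sum = sum(numbers)
    with total_lock:        # threads share 'total' directly; no IPC is needed
        total += chunk_sum

data = list(range(1, 101))
# Four threads of the same process, each summing every 4th element
threads = [threading.Thread(target=add_chunk, args=(data[i::4],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # 5050, the sum of 1..100
```

All four threads write to the same `total` variable; the lock only serialises the update itself, which is exactly the kind of lightweight sharing the bullets describe.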
Types of Threads
1. User-Level Thread
User-level threads are implemented and managed
in user space and are ignored by the operating
system. They are simple to implement. If one
user-level thread performs a blocking operation,
the entire process is blocked, because the kernel
is completely unaware of the user-level threads
and manages the process as a single-threaded
process. Java threads and POSIX threads (Pthreads)
are examples.
2. Kernel-Level Thread
Kernel-level threads are recognised and managed by the operating
system. Each thread and each process has its own thread
control block and process control block in the system. The
operating system implements the kernel-level thread: the
kernel is aware of all threads and controls them, and it
provides system calls for creating and managing threads from
user space. Kernel threads are more complex to build than user
threads, and the kernel thread's context-switch time is
longer. However, if one kernel thread performs a blocking
operation, the execution of another thread of the same process
can continue. Solaris is an example.
Multithreading Model
Multithreading allows an application to
divide its task into individual threads. In
multithreading, the same process or task
can be carried out by a number of threads;
in other words, more than one thread
performs the work. Multitasking can be
achieved with the use of multithreading.
The main drawback of single-threaded systems is that only one task can be performed at a time.
Multithreading overcomes this drawback by allowing multiple tasks to be performed concurrently.
For example, in a multithreaded web server, client1, client2, and client3 can access the web server
without any waiting, because each request is handled by its own thread. In multithreading, several
tasks can run at the same time.
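The web-server scenario can be sketched with one thread per client (the `handle_client` function and client names here are hypothetical, invented for illustration):

```python
import threading
import time

results = {}

def handle_client(name):
    """Simulate serving one client's request."""
    time.sleep(0.01)            # stand-in for real request processing
    results[name] = "served"

clients = ["client1", "client2", "client3"]
threads = [threading.Thread(target=handle_client, args=(c,)) for c in clients]
for t in threads:
    t.start()                   # all three requests proceed concurrently
for t in threads:
    t.join()

print(results)
```

No client waits for another to finish; the sleeps overlap, so total time is roughly one request's worth, not three.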
In an operating system, threads are divided into user-level threads and kernel-level threads. User-level
threads are handled independently, above the kernel, and are therefore managed without any kernel support. The
operating system, on the other hand, directly manages the kernel-level threads. Nevertheless, there must be some
form of relationship between user-level and kernel-level threads.
Three established multithreading models classify this relationship:
o Many to one multithreading model
o One to one multithreading model
o Many to many multithreading model
Many to one multithreading model:
The many to one model maps many user-level threads to one kernel thread.
This type of relationship facilitates an effective context-switching
environment and is easily implemented even on a simple kernel with no thread
support.
The disadvantage of this model is that since only one kernel-level
thread can be scheduled at any given time, it cannot take advantage of the
parallelism offered by multiprocessor systems. All thread management is done
in user space, and if one thread makes a blocking system call, the whole
process blocks.
In the figure, the many to one model associates all user-level threads with a
single kernel-level thread.
One to one multithreading model
The one-to-one model maps each user-level thread to a single kernel-level thread.
This type of relationship facilitates the running of multiple threads in parallel.
However, this benefit comes with a drawback: creating every new user
thread requires creating a corresponding kernel thread, an overhead
that can hinder the performance of the parent process. The Windows and Linux
operating systems tackle this problem by limiting the growth of the thread
count. In the figure, the one-to-one model associates each user-level thread with a single
kernel-level thread.
Many to Many Model multithreading model
In this type of model, there are several user-level threads and several
kernel-level threads, and the number of kernel threads created depends
on the particular application. The developer can create threads at both
levels, and the counts need not be the same. The many to many
model is a compromise between the other two models. In this
model, if any thread makes a blocking system call, the kernel can
schedule another thread for execution, and the drawbacks of the
previous two models are largely avoided. Although this model allows
the creation of multiple kernel threads, true concurrency for a single
process is still limited, because the kernel can schedule only one of
its threads on a given processor at a time.
In the figure, the many to many model associates several user-level
threads with an equal or smaller number of kernel-level threads.
Operating System Structure
Operating systems are implemented using many types of structures, as will be discussed below:
SIMPLE STRUCTURE
It is the most straightforward operating system structure, but it lacks definition and is only appropriate for
usage with tiny and restricted systems. Because the interfaces and levels of functionality in this structure are
not well separated, application programs are able to access I/O routines directly, which may result in
unauthorized access to I/O procedures.
This organizational structure is used by the MS-DOS operating system:
o There are four layers that make up the MS-DOS operating system, and each has its own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application programs, and
system programs.
o The MS-DOS operating system benefits from layering because each level can be defined independently
and, when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update. Because of this, simple
structures can be used to build constrained systems that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures are visible to
end users, giving them the potential for unwanted access.
The following figure illustrates layering in simple structure:
Advantages of Simple Structure:
o Because there are only a few interfaces and levels, it is
simple to develop.
o Because there are fewer layers between the hardware and
the applications, it offers superior performance.
Disadvantages of Simple Structure:
o The entire operating system breaks if just one user
program malfunctions.
o Since the layers are interconnected, and in communication
with one another, there is no abstraction or data hiding.
o The operating system's internal operations are exposed across layers,
which can result in data tampering and system failure.
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation, including file
management, memory management, device management, and process management.
The kernel is the core of a computer's operating system (OS). It provides fundamental services to all
other system components and serves as the main interface between the operating system and the
hardware. Because the entire operating system runs as a single program in kernel mode, the
kernel can directly access all of the system's resources.
The monolithic operating system is often referred to as the monolithic kernel. Multiprogramming
techniques such as batch processing and time-sharing increase a processor's utilisation. Working on top of the
hardware and in complete command of it, the monolithic kernel performs the role of a
virtual machine. This is an old style of operating system that was used in banks to carry out simple tasks like batch
processing and time-sharing, which allows numerous users at different terminals to access the operating
system.
The following diagram represents the monolithic structure:
Advantages of Monolithic Structure:
o Because layering is unnecessary and the kernel
alone is responsible for managing all operations, it is
easy to design and execute.
o Because functions like memory
management, file management, process scheduling,
etc., are implemented in the same address space, the
monolithic kernel runs rather quickly compared
with other structures. Using a single address space
speeds up execution and reduces the time required
for address allocation for new processes.
Disadvantages of Monolithic Structure:
o The monolithic kernel's services are interconnected in address space and have an impact on one
another, so if any of them malfunctions, the entire system does as well.
o It is not adaptable. Therefore, launching a new service is difficult.
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) contains the
hardware, and layer N (the highest layer) contains the user interface. These layers are organized
hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.
The functionalities of each layer are separated in this method, and abstraction is also available. Because
layered structures are hierarchical, debugging is simpler: all lower-level layers are debugged before
the upper layer is examined, so only the current layer needs to be reviewed, since all the lower layers
have already been checked.
The image below shows how OS is organized into layers:
Advantages of Layered Structure:
o Work duties are separated since each layer has its own
functionality, and there is some amount of abstraction.
o Debugging is simpler because the lower layers are examined
first, followed by the top layers.
Disadvantages of Layered Structure:
o Performance is compromised in layered structures due to
layering.
o Construction of the layers requires careful design because upper
layers only make use of lower layers' capabilities.
MICRO-KERNEL STRUCTURE
In the micro-kernel structure, the kernel is stripped of all nonessential
components, which are instead implemented as system and user-level programs. Systems
developed this way are called micro-kernel systems.
Each service is created separately and kept apart from the others. As a result, the system is more
trustworthy and secure: if one service malfunctions, the remaining operating system is unaffected and
continues to function normally.
The image below shows Micro-Kernel Operating System Structure:
Advantages of Micro-Kernel Structure:
o It enables portability of the operating
system across platforms.
o Due to the isolation of each
Micro-Kernel, it is reliable and secure.
o The reduced size of Micro-Kernels
allows for successful testing.
o The remaining operating system
remains unaffected and keeps running
properly even if a component or Micro-Kernel
fails.
Disadvantages of Micro-Kernel Structure:
o The performance of the system is decreased by increased inter-module communication.
o The construction of a system is complicated.
Multiprocessing Operating System
Multiprocessor operating systems are used to boost the performance of systems with multiple CPUs
in a single computer.
Multiple CPUs are linked together so that a job can be divided and executed more quickly. When a job is
completed, the results from all CPUs are compiled to produce the final output. The jobs share
main memory, and they often share other system resources as well. Multiple CPUs can be used to run multiple
tasks at the same time, as in UNIX, for example.
One of the most extensively used operating systems is the multiprocessing operating system. The following
diagram depicts the basic organisation of a typical multiprocessing system.
To use a multiprocessing operating system
efficiently, the computer system should have the
following features:
● A motherboard that can hold multiple
processors.
● Processors that can be used as part of a
multiprocessing system.
Pros of Multiprocessing OS
Increased reliability: Processing tasks can be
spread among numerous processors in the multiprocessing system. This promotes reliability because if one
processor fails, the task can be passed on to another.
Increased throughput: More work can be done in less time as the number of processors increases.
The economy of scale: Multiprocessor systems are less expensive than single-processor computers because
they share peripherals, additional storage devices, and power sources.
Cons of Multiprocessing OS
Multiprocessing operating systems are more complex and advanced since they manage many CPUs at the
same time.
Types of Multiprocessing OS
Symmetrical
Each processor in a symmetric multiprocessing system
runs the same copy of the OS, makes its own decisions,
and collaborates with the other processors to keep the
system running smoothly. CPU scheduling policies are
straightforward: any new job submitted by a user
can be assigned to the least burdened processor, which also
means that at any given time, all processors are roughly equally
loaded. Since the processors share memory along with the
I/O bus or data channel, the symmetric multiprocessing
OS is sometimes known as a “shared everything” system.
The number of processors in this system is normally limited to 16.
Characteristics
● Any processor in this system can run any process or job.
● Any processor can initiate an input/output operation.
Asymmetric
The processors in an asymmetric system have a master-slave relationship: one processor
serves as the master or supervisor processor, while the rest are treated as slaves, as illustrated below.
In the asymmetric processing system represented
above, CPU n1 serves as a supervisor, controlling
the subsequent CPUs. Each processor in such a
system is assigned a specific task, and the actions
of the other processors are overseen by a master
processor.
For example, a maths coprocessor can handle
mathematical tasks better than the main CPU; a
processor with multimedia extensions (such as MMX)
is designed to handle multimedia-related tasks; and
a graphics processor handles graphics-related tasks more efficiently than the main processor.
Whenever a user submits a new job, the operating system must choose which processor is best suited for the
task, and that processor is then assigned to the newly arrived job. This processor is the system's
master and controller. All other processors look to the master for instructions or have
predetermined jobs. The master is responsible for allocating work to the other processors.
Critical Section
The critical section problem is one of the classic problems in Operating
Systems. In operating systems, cooperating processes share and
access a common resource, and in these kinds of processes the problem
of synchronization occurs. The critical section problem deals with this
synchronization. The critical section is a code segment where
shared variables can be accessed. Mutual exclusion is required in a
critical section, i.e., only one process can execute in its critical
section at a time; all the other processes have to wait to execute in
their critical sections.
What is the Critical Section in OS?
● Critical Section refers to the segment of code or the program that
tries to access or modify the value of the variables in a shared
resource.
● The section above the critical section is called the Entry Section.
A process that wants to enter the critical section must first pass through the entry section.
● The section below the critical section is called the Exit Section.
● The section below the exit section is called the Remainder Section, which contains the remaining
code that runs after the critical section.
In the above diagram, the entry section handles the entry into the critical section. It acquires the resources
needed for execution by the process. The exit section handles the exit from the critical section. It releases the
resources and also informs the other processes that the critical section is free.
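The entry/exit structure maps directly onto a lock. A minimal Python sketch (the bank-balance variable is a hypothetical shared resource, chosen only for illustration):

```python
import threading

balance = 0                   # shared variable, modified in the critical section
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        lock.acquire()        # entry section: wait until the resource is free
        balance += amount     # critical section: one thread at a time
        lock.release()        # exit section: signal that the section is free
        # remainder section: code not touching shared state would go here

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 40000; without the lock, updates could be lost
```

The `acquire` call plays the role of the entry section (blocking until the section is free) and `release` plays the exit section (informing waiting threads that the section is available).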
Inter Process Communication
Processes in an operating system need to communicate with each other; this is called inter-process
communication (IPC). IPC is used for exchanging data between multiple threads in
one or more processes or programs. The processes may be running on a single computer or on multiple
computers connected by a network.
IPC is a set of programming interfaces that allow a programmer to coordinate activities among program
processes that can run concurrently in an operating system. This allows a specific program to handle many
user requests at the same time. Since every single user request may result in multiple processes running in the
operating system, those processes may need to communicate with each other. Each IPC approach has
its own advantages and limitations, so it is not unusual for a single program to use several IPC methods.
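One classic IPC mechanism is the pipe. A minimal POSIX-only sketch (it assumes a Unix-like system, since `os.fork` is unavailable on Windows):

```python
import os

r, w = os.pipe()              # kernel-provided channel: read end, write end
pid = os.fork()               # create a child process (POSIX only)

if pid == 0:                  # child: writes a message into the pipe
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:                         # parent: reads the child's message
    os.close(w)
    msg = os.read(r, 64).decode()
    os.waitpid(pid, 0)        # reap the child process
    print(msg)
```

The two processes have separate address spaces, so unlike threads they cannot share variables directly; the kernel-managed pipe carries the data between them.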
Thrashing
A state in which the CPU performs less “productive” work and more “swapping” is known as thrashing. It
occurs when there are too many processes competing for too few memory frames, so pages are constantly
swapped in and out. The CPU is busy swapping, and hence its utilization falls.
What are the causes of thrashing?
The process scheduling mechanism tries to load many processes into the system at a time, and hence the degree
of multiprogramming is increased. In this scenario, there are far more processes than the number of frames
available. The memory soon fills up, and each process starts spending a lot of time waiting for its required
pages to be swapped in, causing CPU utilization to fall, since every process has to wait for pages. Thrashing
therefore causes severe performance problems in the operating system.
When CPU utilization is low, the process scheduling mechanism tries to load even more processes into
memory to raise the degree of multiprogramming. Now there are even more processes in memory compared with
the available number of frames, and each process is allocated only a limited number of frames. Whenever a
high-priority process arrives and no free frame is available, pages occupied by another process are moved to
secondary storage, and the freed frames are allocated to the higher-priority process.
In other words, as soon as memory fills up, processes spend most of their time waiting for the required
pages to be swapped in, and CPU utilization drops again because most of the processes are
waiting for pages. Thus a high degree of multiprogramming and a lack of frames are the two main causes of
thrashing in the operating system.
Segmentation
Segmentation divides a process into smaller subparts known as modules or segments. The divided segments need not be
placed in contiguous memory. Because each segment is allocated exactly the size it needs, internal fragmentation does
not take place. The length of each segment is decided by the purpose of the
segment in the user program.
We can say that the logical address space is a collection of segments.
Segmentation came into existence
because of problems with the paging
technique. In paging, a function or
piece of code is divided into pages
without considering that related parts
of the code can also get divided. Hence,
for a process in execution, the CPU must
load more than one page into frames so
that all of the related code is available
for execution. Paging therefore required more pages of a process to be loaded into main memory.
Segmentation was introduced so that code is divided into modules and related code can be kept together
in one single block.
Paging does not take the user's view of the process into account: it may divide the same function across
different pages, and those pages may or may not be loaded into memory at the same time, which decreases
the efficiency of the system.
It is better to use segmentation, which divides the process into segments. Each segment contains one
type of content; for example, the main function can be placed in one segment and the library functions
in another segment.
Translation of Logical address into physical address by segment table
CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
For Example:
Suppose a 16-bit address is used, with 4 bits
for the segment number and 12 bits for the
segment offset. Then the maximum segment size
is 4096 and the maximum number of
segments that can be referenced is 16.
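Splitting such a 16-bit logical address can be done with shifts and masks (the address value here is arbitrary, chosen only to illustrate the 4/12 split):

```python
address = 0x2035           # a hypothetical 16-bit logical address
segment = address >> 12    # top 4 bits: segment number, 0..15
offset = address & 0x0FFF  # low 12 bits: offset within the segment, 0..4095

print(segment, offset)     # 2 53
```

The hardware performs the same split: the high bits index the segment table and the low bits are the offset within that segment.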
When a program is loaded into memory, the
segmentation system tries to locate space
that is large enough to hold the first segment
of the process; space information is obtained
from the free list maintained by the memory
manager. It then tries to locate space for the
other segments. Once adequate space is
located for all the segments, it loads them
into their respective areas. The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system can easily translate a
logical address into a physical address when a program executes.
The segment number is used as an index into the segment table. The limit of the respective segment is
compared with the offset: if the offset is less than the limit, the address is valid; otherwise an
invalid-address error is raised.
For a valid address, the base address of the segment is added to the offset to get the physical
address of the actual word in the main memory.
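The limit check and base-plus-offset calculation can be sketched as follows (the segment table values are hypothetical, invented for illustration):

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = [
    (1400, 1000),   # segment 0: starts at physical address 1400, 1000 bytes long
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(segment_number, offset):
    """Translate a (segment, offset) logical address into a physical address."""
    base, limit = segment_table[segment_number]
    if offset >= limit:        # offset must be less than the segment's limit
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset       # physical address = base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(0, 999))   # 1400 + 999 = 2399
```

An offset at or beyond the limit (say, offset 400 in segment 1) fails the check, which is the error case described above.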
The above figure shows how address translation is done in case of segmentation.
Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the average page size, so fewer table entries are needed.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than the page table used in paging.
Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Memory management algorithms are costly.
Difference between Paging and Segmentation
1. Paging: non-contiguous memory allocation. Segmentation: non-contiguous memory allocation.
2. Paging: divides the program into fixed-size pages. Segmentation: divides the program into variable-size segments.
3. Paging: the OS is responsible. Segmentation: the compiler is responsible.
4. Paging: faster than segmentation. Segmentation: slower than paging.
5. Paging: closer to the operating system. Segmentation: closer to the user.
6. Paging: suffers from internal fragmentation. Segmentation: suffers from external fragmentation.
7. Paging: no external fragmentation. Segmentation: no internal fragmentation.
8. Paging: the logical address is divided into page number and page offset. Segmentation: the logical address is divided into segment number and segment offset.
9. Paging: a page table is used to maintain the page information. Segmentation: a segment table maintains the segment information.
10. Paging: a page table entry has the frame number and some flag bits representing details about the page. Segmentation: a segment table entry has the base address of the segment and some protection bits for the segment.