
M.Sc Sem 1 ADVANCED OPERATING SYSTEM

UNIT- II Processes and Process Control Strategy

(1) Process States


Process States
State Diagram

The process, from its creation to completion, passes through various states. The minimum number of
states is five.

The names of the states are not standardized, although a process may be in one of the following states
during execution.

1. New
A program that the OS is about to pick up and load into main memory is called a new process.

2. Ready
Whenever a process is created, it directly enters the ready state, where it waits for the CPU to be
assigned. The OS picks new processes from secondary memory and puts them in main memory.

Processes that are ready for execution and reside in main memory are called ready-state processes.
There can be many processes in the ready state at a time.

3. Running
The OS chooses one of the processes from the ready state according to the scheduling algorithm.
Hence, if the system has only one CPU, there is at most one running process at any given time;
with n processors, up to n processes can run simultaneously.

4. Block or wait
From the running state, a process can transition to the block or wait state, depending on the
scheduling algorithm or the intrinsic behavior of the process.

When a process waits for a certain resource to be assigned, or for input from the user, the OS moves
it to the block or wait state and assigns the CPU to other processes.

5. Completion or termination
When a process finishes its execution, it enters the termination state. The entire context of the
process (its Process Control Block) is deleted, and the process is terminated by the operating system.

6. Suspend ready
A process in the ready state that is moved from main memory to secondary memory due to a lack of
resources (mainly primary memory) is said to be in the suspend ready state.

If main memory is full and a higher-priority process arrives for execution, the OS has to make room
for it in main memory by swapping a lower-priority process out to secondary memory. Suspend-ready
processes remain in secondary memory until main memory becomes available.

7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked process that is
waiting for some resource in main memory. Since it is already waiting for a resource to become
available, it may as well wait in secondary memory and make room for a higher-priority process.
These processes resume and complete their execution once main memory becomes available and their
wait is finished.
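
The states and legal transitions above can be sketched as a small state machine. The following is an illustrative Python sketch only, not OS code; the `State` enum and the `move` helper are names invented for this example.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()          # block or wait
    TERMINATED = auto()
    SUSPEND_READY = auto()
    SUSPEND_WAIT = auto()

# Legal transitions of the five-state model plus the two suspend states.
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING, State.SUSPEND_READY},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY, State.SUSPEND_WAIT},
    State.SUSPEND_READY: {State.READY},
    State.SUSPEND_WAIT: {State.WAITING, State.SUSPEND_READY},
    State.TERMINATED: set(),
}

def move(current: State, target: State) -> State:
    """Perform a transition, rejecting any move the model does not allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

For instance, `move(State.NEW, State.READY)` succeeds, while trying to leave `TERMINATED` raises an error, matching the diagram.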

Operations on the Process


1. Creation
Once a process is created, it enters the ready queue (in main memory) and is ready for execution.

2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses one process and
starts executing it. Selecting the process to be executed next is known as scheduling.

3. Execution
Once a process is scheduled for execution, the processor starts executing it. The process may enter
the blocked or wait state during execution, in which case the processor starts executing other
processes.

4. Deletion/killing
Once the purpose of the process is served, the OS kills it. The context of the process (its PCB) is
deleted, and the process is terminated by the operating system.
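
The create/execute/terminate lifecycle can be observed from user space. The sketch below uses Python's `subprocess` module, which asks the OS to create a child process, lets it execute, and waits for it, collecting its exit status when it terminates; the helper name `run_child` is invented for this example.

```python
import subprocess
import sys

def run_child(code: str) -> int:
    """Create a child process (creation), let it run (execution),
    and wait for it to finish, collecting its exit status (deletion)."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True)
    return result.returncode
```

A child that exits cleanly returns status 0; a child that calls `sys.exit(3)` returns 3 to its parent.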

(2) Process Scheduling in OS (Operating System)


1. Long term scheduler
The long-term scheduler is also known as the job scheduler. It chooses processes from the pool
(secondary memory) and places them in the ready queue maintained in primary memory.

The long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose a
good mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.

If the job scheduler chooses mostly I/O-bound processes, all of the jobs may sit in the blocked state
most of the time, leaving the CPU idle and reducing the degree of multiprogramming. The job of the
long-term scheduler is therefore very critical and may affect the system for a long time.

2. Short term scheduler


The short-term scheduler is also known as the CPU scheduler. It selects one of the jobs from the
ready queue and dispatches it to the CPU for execution.

A scheduling algorithm is used to select which job will be dispatched next. The job of the short-term
scheduler can be critical in the sense that if it selects a job whose CPU burst time is very long,
all the jobs after it will have to wait in the ready queue for a very long time.

This problem is called starvation, and it can arise if the short-term scheduler makes poor choices
when selecting jobs.

3. Medium term scheduler


The medium-term scheduler takes care of swapped-out processes. If a running process needs some I/O
time to complete, its state must change from running to waiting.

The medium-term scheduler handles this. It removes a process from the running state to make room for
other processes. Such processes are the swapped-out processes, and this procedure is called swapping.
The medium-term scheduler is responsible for suspending and resuming processes.

Swapping reduces the degree of multiprogramming, but it is necessary in order to keep a good mix of
processes in the ready queue.
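
As a toy illustration of short-term scheduling, the sketch below picks the next job by the shortest CPU burst (Shortest Job First), one of many possible scheduling algorithms; `pick_next` and the dictionary keys are invented for this example.

```python
def pick_next(ready_queue):
    """Choose the job with the smallest CPU burst (Shortest Job First),
    one way to keep a single long burst from delaying every other job."""
    return min(ready_queue, key=lambda job: job["burst"])
```

Given jobs with bursts of 8, 3, and 5 time units, `pick_next` selects the job with burst 3.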

Process Queues
The operating system maintains a queue for each process state, and the PCB of a process is stored in
the queue corresponding to its current state. When a process moves from one state to another, its PCB
is unlinked from the old state's queue and added to the queue of the new state.

There are the following queues maintained by the Operating system.

1. Job Queue
Initially, all processes are stored in the job queue, which is maintained in secondary memory. The
long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.

2. Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready
queue and dispatches it to the CPU for execution.

3. Waiting Queue
When a process needs an I/O operation to complete its execution, the OS changes its state from
running to waiting. The context (PCB) associated with the process is stored on the waiting queue and
is used by the processor once the process finishes its I/O.
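
The three queues and the scheduler hand-offs between them can be mimicked with plain Python deques. All names here (`admit`, `dispatch`, `block`) are invented for illustration; a real OS links PCB structures, not strings.

```python
from collections import deque

job_queue = deque(["P1", "P2", "P3"])   # secondary memory
ready_queue = deque()                    # primary memory
waiting_queue = deque()                  # processes blocked on I/O

def admit():
    """Long-term scheduler: move a job from the job queue to the ready queue."""
    ready_queue.append(job_queue.popleft())

def dispatch():
    """Short-term scheduler: hand the next ready process to the CPU."""
    return ready_queue.popleft()

def block(pid):
    """Running process requests I/O: park it on the waiting queue."""
    waiting_queue.append(pid)
```

Admitting two jobs, dispatching one, and blocking it on I/O walks a process through job, ready, and waiting queues in turn.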

(3) Process Control Block



The need for Context switching


Context switching allows a single CPU to be shared across all processes so that each can complete its
execution, by storing the status of the system's tasks. When a process is reloaded, its execution
resumes at the same point where it was interrupted.

Following are the reasons that describe the need for context switching in the Operating system.

1. The system cannot switch directly from one process to another. Context switching lets the operating
system switch between multiple processes using the CPU's resources, storing each process's context so
that its service can be resumed later at the same point. If the currently running process's data or
context were not stored, it would be lost while switching between processes.
2. If a higher-priority process enters the ready queue, the currently running process is stopped so
that the higher-priority process can complete its task in the system.
3. If a running process requires I/O resources, it is switched out so that another process can use the
CPU. When the I/O requirement is met, the old process goes into the ready state to wait for its turn
on the CPU. Context switching stores the state of the process so it can resume its task; otherwise,
the process would need to restart its execution from the beginning.
4. If an interrupt occurs while a process is running, its status is saved as registers using a context
switch. After the interrupt is handled, the process moves from the wait state back to the ready state
and later resumes execution from the point where the interruption occurred.
5. Context switching allows a single CPU to handle multiple process requests concurrently without the
need for additional processors.

Example of Context Switching


Suppose multiple processes, each described by a Process Control Block (PCB), are present in the
system. One process is in the running state, executing its task on the CPU. While it runs, another
process arrives in the ready queue with a higher priority. A context switch is then used to replace
the current process with the new one requiring the CPU. While switching, the context switch saves the
status of the old process (its registers and program counter) into its PCB. When the old process is
later reloaded onto the CPU, it resumes execution from the point at which it was stopped. If the
state of the process were not saved, it would have to restart execution from the beginning. In this
way, context switching helps the operating system switch between processes and store or reload a
process whenever it needs to execute its task.

Context switching triggers



Following are the three types of context switching triggers as follows.

1. Interrupts
2. Multitasking
3. Kernel/User switch

Interrupts: When the CPU has requested data from a disk and an interrupt occurs, a context switch
automatically transfers control to the part of the system that handles the interrupt, so it can be
serviced in as little time as possible.

Multitasking: Context switching is the characteristic of multitasking that allows a process to be
switched off the CPU so that another process can run. When switching processes, the old state is
saved so that the process can resume execution at the same point later.

Kernel/User Switch: This occurs in operating systems when switching between user mode and kernel
mode.

What is the PCB?


A PCB (Process Control Block) is a data structure used by the operating system to store all
information related to a process. For example, when a process is created, updated, switched, or
terminated, the corresponding information is recorded in its PCB.

Steps for Context Switching


There are several steps involved in context switching between processes. The following diagram
represents the context switching of two processes, P1 to P2, when an interrupt, an I/O request, or a
higher-priority process arrives in the ready queue.

As we can see in the diagram, initially, process P1 is running on the CPU to execute its task, and at
the same time, another process, P2, is in the ready state. If an error or interrupt occurs, or the
process requires input/output, P1 switches from the running state to the waiting state. Before the
state of P1 is changed, the context switch saves the context of P1, in the form of registers and the
program counter, to PCB1. It then loads the state of process P2 from PCB2 into the running state.

The following steps are taken when switching process P1 to process P2:

1. First, the context switch saves the state of process P1, in the form of the program counter and
the registers, to its PCB (Process Control Block) while P1 is still in the running state.
2. Next, PCB1 is updated and process P1 is moved to the appropriate queue, such as the ready queue,
an I/O queue, or the waiting queue.
3. After that, another process enters the running state: a new process is selected from the ready
state, typically one with a high priority that needs to execute its task.
4. Now, the PCB (Process Control Block) of the selected process P2 is updated. This includes
switching its state from ready to running, or from another state such as blocked, exit, or suspend.
5. Once the CPU begins executing process P2, the saved status of P2 is used so that it resumes
execution at the same point where it was previously interrupted.

Similarly, process P2 is later switched off the CPU so that process P1 can resume execution. P1 is
reloaded from PCB1 into the running state and resumes its task at the same point. Otherwise, the
information would be lost, and when the process executed again, it would start from the beginning.
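
A context switch can be sketched as "save the old context, load the new context". In the sketch below, the CPU and the PCBs are ordinary dictionaries, an assumption made purely for illustration.

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Switch the CPU from the process owning old_pcb to the one owning new_pcb."""
    # 1. Save the outgoing process's hardware context into its PCB.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    # 2. Load the incoming process's saved context onto the CPU.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]
```

After the call, the CPU holds P2's registers and program counter, while PCB1 preserves exactly where P1 stopped, so P1 can later be resumed from that point.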

(4) Execution of the Operating System


The kernel is a computer program at the core of a computer's operating system that has complete
control over everything in the system. It is the "portion of the operating system code that is always
resident in memory", and facilitates interactions between hardware and software components. On most
systems, the kernel is one of the first programs loaded on startup (after the bootloader). It handles the
rest of startup as well as memory, peripherals, and input/output (I/O) requests from software,
translating them into data-processing instructions for the central processing unit.

Introduction to System Call

In computing, a system call is the way a program requests a service from the kernel of the operating
system it is executed on. A system call is how programs interact with the operating system.
Application programs are NOT allowed to perform certain tasks directly, such as opening a file or
creating a new process. System calls provide the services of the operating system to application
programs via the Application Program Interface (API). They provide an interface between a process and
the operating system, allowing user-level processes, that is, the applications that users run on the
system, to request services of the operating system. System calls are the only entry points into the
kernel. All programs needing resources must use system calls.

Services Provided by System Calls:

1. Process creation and management



2. Main memory management


3. File Access, Directory and File system management
4. Device handling (I/O)
5. Protection
6. Networking, etc.

Types of System Calls: There are five different categories of system calls:

1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read file etc.
3. Device management
4. Information maintenance
5. Communication

The following are some examples of system calls in Windows and Linux. So, if a user is running a word
processing tool, and wants to save the document - the word processor asks the operating system to
create a file, or open a file, to save the current set of changes. If the application has permission to write
to the requested file then the operating system performs the task. Otherwise, the operating system
returns a status telling the user they do not have permission to write to the requested file. This concept
of user versus kernel allows the operating system to maintain a certain level of control.

Category                   Windows                           Linux

Process Control            CreateProcess()                   fork()
                           ExitProcess()                     exit()
                           WaitForSingleObject()             wait()

File Manipulation          CreateFile()                      open()
                           ReadFile()                        read()
                           WriteFile()                       write()
                           CloseHandle()                     close()

Device Manipulation        SetConsoleMode()                  ioctl()
                           ReadConsole()                     read()
                           WriteConsole()                    write()

Information Maintenance    GetCurrentProcessID()             getpid()
                           SetTimer()                        alarm()
                           Sleep()                           sleep()

Communication              CreatePipe()                      pipe()
                           CreateFileMapping()               shmget()
                           MapViewOfFile()                   mmap()

Protection                 SetFileSecurity()
                           InitializeSecurityDescriptor()
                           SetSecurityDescriptorGroup()
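
Several of the Linux calls listed above (open, read, write, close, getpid) are exposed almost one-to-one by Python's `os` module, so they can be exercised directly from a small script. The helper name `file_syscall_demo` is invented for this sketch.

```python
import os
import tempfile

def file_syscall_demo(data: bytes) -> bytes:
    """Write then read a file through os.* wrappers, which map almost
    directly onto the open(), write(), read(), and close() system calls."""
    fd, path = tempfile.mkstemp()          # file creation (open)
    try:
        os.write(fd, data)                 # write()
        os.close(fd)                       # close()
        fd = os.open(path, os.O_RDONLY)    # open() for reading
        data_back = os.read(fd, len(data)) # read()
        os.close(fd)
    finally:
        os.unlink(path)                    # delete the file afterwards
    return data_back
```

Each `os.*` call traps into the kernel, which performs the privileged work (allocating a file descriptor, touching the disk) on the program's behalf.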

(5) Security Issues


Operating System Security
Every computer system and software design must handle all security risks and implement the necessary
measures to enforce security policies. At the same time, it is critical to strike a balance, because
strong security measures can increase costs while also limiting the system's usability, utility, and
smooth operation. As a result, system designers must ensure efficient performance without
compromising security.

What is Operating System Security?


The process of ensuring OS availability, confidentiality, and integrity is known as operating system
security. OS security refers to the processes or measures taken to protect the operating system from
threats, including viruses, worms, malware, and remote hacker intrusions. It comprises all
preventive-control procedures that protect any system assets that could be stolen, modified, or
deleted if OS security is breached.

Security means providing safety for computer system resources such as software, CPU, memory, and
disks. It protects against threats, including viruses and unauthorized access, and it is enforced by
assuring the operating system's integrity, confidentiality, and availability. If an illegitimate user
runs a computer application, the computer or the data stored on it may be seriously damaged.

System security may be threatened through two violations, and these are as follows:

1. Threat

A program that has the potential to harm the system seriously.

2. Attack

A breach of security that allows unauthorized access to a resource.

There are two types of security breaches that can harm the system: malicious and accidental.
Malicious threats are destructive computer code or web scripts designed to create system
vulnerabilities that lead to back doors and security breaches. Accidental threats, on the other hand,
are comparatively easier to protect against.

Security may be compromised through several kinds of breaches. Some of them are as follows:

1. Breach of integrity

This violation involves unauthorized modification of data.

2. Theft of service

It involves the unauthorized use of resources.

3. Breach of confidentiality

It involves the unauthorized reading of data.

4. Breach of availability

It involves the unauthorized destruction of data.

5. Denial of service

It includes preventing legitimate use of the system. Some attacks may be accidental.

Goals of the Security System


There are several goals of system security. Some of them are as follows:

1. Integrity

Unauthorized users must not be allowed to access the system's objects, and users with insufficient
rights should not be able to modify the system's critical files and resources.

2. Secrecy

The system's objects must only be available to a small number of authorized users. The system files
should not be accessible to everyone.

3. Availability

All system resources must be accessible to all authorized users; i.e., no single user or process
should be able to consume all system resources. If such a situation arises, denial of service may
occur, in which malware restricts system resources and prevents legitimate processes from accessing
them.

Types of Threats
There are mainly two types of threats that occur. These are as follows:

Program threats
The operating system's processes and kernel carry out their specified tasks as directed. A program
threat occurs when a user program causes these processes to perform malicious operations. A common
example of a program threat is a program that, once installed on a computer, stores and transfers
user credentials to a hacker. There are various program threats. Some of them are as follows:

1. Virus

A virus can replicate itself on the system. Viruses are extremely dangerous: they can modify or
delete user files and crash computers. A virus is a small piece of code embedded in a system program.
As the user interacts with the program, the virus becomes embedded in other files and programs,
potentially rendering the system inoperable.

2. Trojan Horse

This type of application captures user login credentials. It stores them to transfer them to a malicious user
who can then log in to the computer and access system resources.

3. Logic Bomb

A logic bomb is a situation in which software only misbehaves when particular criteria are met; otherwise,
it functions normally.

4. Trap Door

A trap door is when a program that is supposed to work as expected has a security weakness in its code
that allows it to do illegal actions without the user's knowledge.

System Threats
System threats are described as the misuse of system services and network connections to cause user
problems. These threats may be used to trigger the program threats over an entire network, known as
program attacks. System threats make an environment in which OS resources and user files may be
misused. There are various system threats. Some of them are as follows:

1. Port Scanning

It is a method by which a cracker determines a system's vulnerabilities before an attack. Port
scanning is a fully automated process that involves connecting to specific ports via TCP/IP. To
protect the attacker's identity, port scanning attacks are often launched through zombie systems:
previously independent systems that now secretly serve the attacker while appearing to function
normally for their owners.

2. Worm

The worm is a process that can choke a system's performance by exhausting all system resources. A
Worm process makes several clones, each consuming system resources and preventing all other
processes from getting essential resources. Worm processes can even bring a network to a halt.

3. Denial of Service

Denial-of-service attacks prevent users from making legitimate use of the system. For example, if a
denial-of-service attack is executed against the browser's content settings, a user may be unable to
access the internet.

Threats to Operating System


There are various threats to the operating system. Some of them are as follows:

Malware
It contains viruses, worms, trojan horses, and other dangerous software. These are generally short code
snippets that may corrupt files, delete the data, replicate to propagate further, and even crash a system.
The malware frequently goes unnoticed by the victim user while criminals silently extract important data.

Network Intrusion
Network intruders are classified as masqueraders, misfeasors, and clandestine users. A masquerader is
an unauthorized person who gains access to a system and uses an authorized person's account. A
misfeasor is a legitimate user who gains unauthorized access to, and misuses, programs, data, or
resources. A clandestine user seizes supervisory authority and tries to evade access controls and
audit collection.

Buffer Overflow
It is also known as buffer overrun. It is one of the most common and dangerous security issues in
operating systems. A buffer overflow is a condition at an interface under which more input can be
placed into a buffer or data-holding area than its allocated capacity, overwriting other information.
Attackers exploit such a condition to crash a system or to insert specially crafted code that allows
them to take control of the system.

How to ensure Operating System Security?


There are various ways to ensure operating system security. These are as follows:

Authentication
The process of identifying every system user and associating the executing programs with those users
is known as authentication. The operating system is responsible for implementing a security system
that ensures the authenticity of a user running a specific program. In general, operating systems
identify and authenticate users in three ways.

1. Username/Password

Every user has a unique username and password that must be entered correctly before accessing the
system.

2. User Attribution

These techniques usually include biometric verification, such as fingerprints, retina scans, etc. This
authentication is based on user uniqueness and is compared to database samples already in the system.
Users can only allow access if there is a match.

3. User card and Key

To login into the system, the user must punch a card into a card slot or enter a key produced by a key
generator into an option provided by the operating system.

One Time passwords


Along with standard authentication, one-time passwords give an extra layer of security. Every time a user
attempts to log into the One-Time Password system, a unique password is needed. Once a one-time
password has been used, it cannot be reused. One-time passwords may be implemented in several ways.

1. Secret Key

The user is given a hardware device that can generate a secret id that is linked to the user's id. The
system prompts for such a secret id, which must be generated each time you log in.

2. Random numbers

Users are given cards that have alphabets and numbers printed on them. The system requests numbers
that correspond to a few alphabets chosen at random.

3. Network password

Some commercial applications issue one-time passwords to registered mobile/email addresses, which
must be input before logging in.
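
The secret-key variant above can be sketched as an HMAC of a per-login counter, loosely following the HOTP idea; `one_time_password` below is a simplified illustration, not a production implementation.

```python
import hashlib
import hmac

def one_time_password(secret: bytes, counter: int) -> str:
    """Derive a 6-digit code from a shared secret and a login counter.

    The counter increments on every login attempt, so each generated
    code is valid only once -- the defining property of an OTP."""
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    return str(int.from_bytes(mac[-4:], "big") % 1_000_000).zfill(6)
```

Both the device and the server compute the same code from the shared secret and counter; after a successful login, both advance the counter, so a captured code cannot be replayed.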

Firewalls
Firewalls are essential for monitoring all incoming and outgoing traffic. A firewall imposes local
security, defining which traffic may travel through it. Firewalls are an efficient way of protecting
network systems or local systems from network-based security threats.

Physical Security
The most important method of maintaining operating system security is physical security. An attacker
with physical access to a system may edit, remove, or steal important files since operating system code
and configuration files are stored on the hard drive.

Operating System Security Policies and Procedures


Various operating system security policies may be implemented based on the organization that you are
working in. In general, an OS security policy is a document that specifies the procedures for ensuring that
the operating system maintains a specific level of integrity, confidentiality, and availability.

OS security protects systems and data from worms, malware, threats, ransomware, backdoor intrusions,
viruses, and so on. Security policies cover all preventive activities and procedures that ensure an
operating system's protection, including protection against the theft, modification, or deletion of
data.

As OS security policies and procedures cover a large area, there are various techniques for
addressing them. Some of them are as follows:

1. Installing and updating anti-virus software


2. Ensure the systems are patched or updated regularly
3. Implementing user management policies to protect user accounts and privileges.
4. Installing a firewall and ensuring that it is properly set to monitor all incoming and outgoing traffic.

To develop and implement OS security policies and procedures, you must first determine which assets,
systems, hardware, and data are the most vital to your organization. Once that is done, policies can
be developed to secure and safeguard them properly.

(6)Processes and Threads


Process Vs. Thread | Difference Between
Process and Thread
"Difference between process and thread" is one of the most widely asked questions in technical
interviews. Processes and threads are related and very similar, which often creates confusion about
the differences between them. A process and a thread are each an independent sequence of execution,
but they differ in that processes execute in separate memory spaces, whereas threads of the same
process execute in a shared memory space.

In this topic, we will understand the brief introduction of processes and threads and what are other
differences between both of them.

What is Process?
A process is an instance of a program that is being executed. When we run a program, it does not
execute directly; the system follows a series of steps to execute it, and this sequence of execution
steps is known as a process.

A process can create other processes to perform multiple tasks at a time; the created processes are
known as clone or child processes, and the main process is known as the parent process. Each process
contains its own memory space and does not share it with other processes. A process is known as an
active entity. A typical process is laid out in memory as shown below.

A process in OS can remain in any of the following states:

o NEW: A new process is being created.


o READY: A process is ready and waiting to be allocated to a processor.
o RUNNING: The program is being executed.
o WAITING: Waiting for some event to happen or occur.
o TERMINATED: Execution finished.

How do Processes work?


When we start executing a program, the processor begins to process it. It takes the following steps:

o First, the program is loaded into the computer's memory in binary code, after translation.
o A program requires memory and other OS resources to run. These resources, such as registers, a
program counter, and a stack, are provided by the OS.
o A register can hold an instruction, a storage address, or other data required by the process.
o The program counter keeps track of the program sequence.
o The stack holds information about the active subroutines of the program.
o A program may have several instances running at once, and each instance of the running program is
known as an individual process.

Features of Process

o Each time we create a process, a separate system call must be made to the OS for it. The fork()
function creates the process.
o Each process exists within its own address (memory) space.
o Each process is independent and is treated as an isolated process by the OS.
o Processes need IPC (inter-process communication) in order to communicate with each other.
o Because processes are isolated, synchronization between processes is generally not required.

What is Thread?
A thread is a subset of a process and is also known as a lightweight process. A process can have
more than one thread, and these threads are managed independently by the scheduler. All the threads
within one process are interrelated. Threads share common information, such as the data segment,
code segment, and open files, with their peer threads, but each thread has its own registers,
stack, and program counter.

How does thread work?


As discussed, a thread is a subprocess or an execution unit within a process. A process can
contain anywhere from a single thread to many threads. A thread works as follows:

o When a process starts, the OS assigns memory and resources to it. Each thread within the process shares the
memory and resources of that process only.
o Threads are mainly used to improve the responsiveness of an application. On a single processor only one
thread executes at a time, but fast context switching between threads gives the illusion that the threads are
running in parallel.
o If a single thread executes in a process, it is known as a single-threaded process, and if multiple threads
execute simultaneously, it is known as multithreading.

Types of Threads
There are two types of threads, which are:

1. User Level Thread

As the name suggests, user-level threads are managed entirely in user space, and the kernel has no
information about them.

These are faster, easy to create and manage.

The kernel takes all these threads as a single process and handles them as one process only.

The user-level threads are implemented by user-level libraries, not by the system calls.

2. Kernel-Level Thread

The kernel-level threads are handled by the Operating system and managed by its kernel. These threads
are slower than user-level threads because context information is managed by the kernel. To create and
implement a kernel-level thread, we need to make a system call.

Features of Thread

o Threads share data, memory, resources, files, etc., with their peer threads within a process.
o One system call is capable of creating more than one thread.
o Each thread has its own stack and register.
o Threads can directly communicate with each other as they share the same address space.
o Threads need to be synchronized in order to avoid unexpected scenarios.

Key Differences Between Process and Thread


o A process is independent and is not contained within another process, whereas all threads are logically
contained within a process.
o Processes are heavyweight, whereas threads are lightweight.
o A process can exist individually as it contains its own memory and other resources, whereas a thread cannot
have an individual existence.
o Proper synchronization between processes is not required. In contrast, threads need to be synchronized
in order to avoid unexpected scenarios.
o Processes can communicate with each other only through inter-process communication; in contrast, threads
can communicate directly with each other as they share the same address space.

Difference Table Between Process and Thread


Process vs. Thread

o Definition. Process: an instance of a program that is being executed or processed. Thread: a segment of a
process, a lightweight process managed independently by the scheduler.
o Sharing. Processes are independent of each other and hence do not share memory or other resources.
Threads are interdependent and share memory.
o OS view. Each process is treated as a distinct process by the operating system. The operating system treats
all the user-level threads of a process as a single process.
o Blocking. If one process gets blocked by the operating system, other processes can continue execution.
If any user-level thread gets blocked, all of its peer threads also get blocked, because the OS treats them
as a single process.
o Context switching. Switching between two processes takes much time, as processes are heavy compared to
threads. Switching between threads is fast, because they are very lightweight.
o Segments. The data segment and code segment of each process are independent of other processes. Threads
share the data segment and code segment with their peer threads.
o Termination. The operating system takes more time to terminate a process. Threads can be terminated in
very little time.
o Creation. New process creation takes more time, as each new process needs all its resources. A thread
needs less time for creation.

(7) Thread functionality

Now, we will speak about threads. The threads' functionalities are very close to the
fork ones, but with some important differences. A thread will create a new stream to
your running process. Its starting point is a function that is specified as a
parameter. A thread will also be executed in the same context as its parent. The
main implication is that the memory is the same, but it's not the only one. If the
parent process dies, all its threads will die too.

These two points can be a problem if you don't know how to deal with them. Let's
take an example of the concurrent memory access.

Let's say that you have a global variable in your program named var. The main
process will then create a thread. This thread will then write into var and at the
same time, the main process can write in it too. This will result in an undefined
behavior. There are different solutions to avoid this behavior and the common one is
to lock the access to this variable with a mutex.


Adding multithreading to our games

We will now modify our Gravitris to parallelize the physics calculations from the rest
of the program. We will need to change only two files: Game.hpp and Game.cpp.
In the header file, we will not only need to add the required header, but also change
the prototype of the update_physics() function and finally add some attributes to the
class. So here are the different steps to follow:
1. Add #include <SFML/System.hpp>, this will allow us to have access to all the classes
needed.

2. Then, change the following code snippet:

void updatePhysics(const sf::Time& deltaTime,const sf::Time& timePerFrame)

to:

void updatePhysics()

The reason is that a thread is not able to pass any parameters to its wrapped
function so we will use another solution: member variables.

3. Add the following variables into the Game class as private:

   sf::Thread _physicsThread;
   sf::Mutex _mutex;
   bool _isRunning;
   int _physicsFramePerSeconds;

All these variables will be used by the physics thread...

Summary

In this chapter, we covered the use of multithreading and applied it to our existing
Gravitris project. We have learned the reason for this, the different possible uses,
and the protection of the shared variables.

In our actual game, multithreading is a bit overkill, but in a bigger game, for instance one
with hundreds of players, networking, and real-time strategy, it becomes a must-have.

In the next chapter, we will build an entire new game and introduce new things such
as the isometric view, component system, path finding, and more.

(8) Windows Thread and SMP Management


Windows 2000 Threads and SMP Management
Windows 2000 supports processes, jobs, threads, and fibres. A process has a virtual address space and is a
container for resources. Threads are the unit of execution, and they are scheduled by the operating system. Fibres
are lightweight threads that are scheduled entirely in user space. A job is a collection of processes, a process
is a collection of threads, and a thread may contain a collection of fibres.

Characteristics of Windows 2000 Process and Thread:


1. Processes are implemented as objects in windows 2000.
2. An executable process may contain one or more threads.
3. Both process and thread objects have built-in synchronization capabilities.

Thread States in Windows 2000:



A thread in Windows 2000 is in one of the following six states:

1. Ready: The thread is ready to be scheduled for execution.

2. Standby: A standby thread has been selected to run next on a processor. If the standby thread's priority is
higher than that of the currently executing thread, the executing thread is preempted; otherwise, the standby
thread has to wait.
3. Running: A thread shifts from the standby state to the running state when it is dispatched for
execution.
4. Waiting: A thread enters the waiting state when it waits for an event to occur or voluntarily suspends
its own execution.
5. Transition: When a thread needs a resource, it enters the waiting state; if the resource remains unavailable
for a long time, it enters the transition state. Whenever the resource becomes available, the thread moves from
the transition state to the ready state.
6. Terminated: A thread generally terminates for one of the following reasons:
i. The thread is terminated by itself or by another thread.
ii. The thread is terminated when its parent process terminates.

(9)Linux Process and Thread Management


A process in Linux is a single program running in its own virtual address space on the operating system. To create a
process in Linux, the parent process initiates a fork().
fork() means the process creates a copy of itself. A process in Linux is represented by a data structure
(like a PCB) called task_struct. It contains all the information about the process.

LINUX Task Structure:


1. PID: It is a unique process identification number.
2. State: It specifies one of the 5 states of a process.

3. Scheduling Information: It holds the information the scheduler needs for this process, such as its
scheduling policy and priority.

4. Links: A child process keeps a link to its parent process; a parent process keeps links to all of
its children.
5. File System: It includes pointers to any files opened by this process.
6. Virtual Memory: It defines the virtual memory assigned to this process.
7. Times and Timers: It specifies the process creation time, the CPU time used by the process, etc.
8. Processor-specific context: The registers and stack information that constitute the context of this process.

Process / Thread States in LINUX:


(10) Solaris Threads in Operating System
Solaris Multi-threaded Architecture:
Solaris supports separate thread-related concepts. These are process, user-level threads, light-weight processes and
kernel threads.

1. Process: A user process is a collection of one or more application threads or user-level threads. A process
includes the user’s address space, stack and PCB (Process Control Block).
2. User-Level Threads: These are implemented through a threads library in the address space of a process. They
are invisible to the operating system.
3. Light-Weight Process: It creates the interface between user-level threads and kernel threads. Each Light-Weight
Process is associated with a kernel thread. It is scheduled by the kernel independently and it may execute in parallel
on multi-processors.
4. Kernel Threads: Each user process is associated with at least one kernel thread. Kernel threads are the only
entity to which the kernel has access for scheduling purposes.

Process Structure in Solaris:


1. Process ID: It is the unique identification number of each process.
2. User ID: It specifies the user identification which includes login names.
3. Signal Dispatch Table: It specifies what to do when a signal is sent to the process.
4. File Descriptors: These describe the state of files in use by this process.
5. Memory Map: It defines the memory space for this process.

Process / thread states in Linux:

1. Running: It includes two states:
i. Ready
ii. Executing
2. Interruptible: A suspended state; the process or thread is waiting for an event, and it can be woken
by a signal.
3. Uninterruptible: Another suspended state; here the process or thread is waiting directly on a hardware
condition and does not accept signals.
4. Stopped: The process or thread has been halted and can only resume by a positive action from another process
or thread.
5. Zombie: The process or thread has completed execution, but its table entry remains until the parent collects
its exit status.

(11) Linux Process Memory Usage


System administration often requires detecting the memory usage of processes: which program consumes the
most resources or is responsible for slowing down the system. Tracing process memory usage is essential
in order to determine the load on the server. By parsing usage data, servers can balance the load without
slowing down the system while serving users' requests.

Commands Used to Check the Process Memory


Usage in Linux
There are various commands to check process memory usage in Linux:

1. free
This command shows the amount of memory that is presently available and used by the system, for both
swap and physical memory. The free command collects this data by parsing /proc/meminfo. By default,
the amounts are shown in kilobytes.

If we want to execute the program periodically, then we can use the watch command.

Syntax:

watch -n 7 free -m

In a sample run, there are 3842 MB of RAM and 7628 MB of swap space allotted to the Linux system. Out of
the 3842 MB of RAM, 678 MB is presently used and 2373 MB is free. Correspondingly for swap space, out of
7628 MB, 0 MB is used and 7628 MB is presently free.

2. vmstat
To display the virtual memory statistics of the system, we can use the vmstat command. This command
displays data related to memory, disk, paging, and CPU activity. The first time it is run, it returns
averages of the data since the last reboot; subsequent runs return data for the configured sampling
period.

vmstat -d    // Reports disk statistics



vmstat -s    // Shows the amount of memory that is used and available

3. top
The top command shows all the processes presently running in the system, i.e., the list of threads and
processes currently being managed by the kernel. It can also be used to monitor the total amount of
memory in use.

top -H    // Threads-mode operation



This shows each thread present in the system. If we do not use this option, the summation over every
thread in each process is shown.

4. /proc/meminfo
This file includes all the data related to memory usage. It provides the details of current memory
usage rather than old, stored values.

5. htop
It is an interactive process viewer. The htop command is similar to the top command, except that it
allows scrolling horizontally and vertically, permitting users to view every process running on the
system along with its full command line.
(12) Principles of Concurrency


What is Concurrency?
It refers to the execution of multiple instruction sequences at the same time. It occurs in an operating
system when multiple process threads are executing concurrently. These threads can interact with one
another via shared memory or message passing. Concurrency results in resource sharing, which causes
issues like deadlock and resource scarcity. It requires techniques such as process coordination, memory
allocation, and execution scheduling to maximize throughput.

Principles of Concurrency
Today's technology, like multi-core processors and parallel processing, allows multiple processes and
threads to be executed simultaneously. Multiple processes and threads can access the same memory
space, the same declared variable in code, or even read or write to the same file.

The amount of time a process takes to execute cannot be simply estimated, and you cannot predict
which process will complete first, so you must build techniques to deal with the problems that
concurrency creates.

Interleaved and overlapping processes are two types of concurrent processes with the same problems. It
is impossible to predict the relative speed of execution, and the following factors determine it:

1. The way operating system handles interrupts


2. Other processes' activities
3. The operating system's scheduling policies

Problems in Concurrency
There are various problems in concurrency. Some of them are as follows:

1. Locating the programming errors

It is difficult to spot a programming error because failures are usually not repeatable, owing to the
varying states of the shared components each time the code is executed.

2. Sharing Global Resources

Sharing global resources is difficult. If two processes utilize a global variable and both alter the variable's
value, the order in which the many changes are executed is critical.

3. Locking the channel

It could be inefficient for the OS to lock the resource and prevent other processes from using it.

4. Optimal Allocation of Resources

It is challenging for the OS to handle resource allocation properly.

Issues of Concurrency
Various issues of concurrency are as follows:

1. Non-atomic

Operations that are non-atomic but interruptible by other processes may cause issues. A non-atomic
operation depends on other processes, whereas an atomic operation runs independently of other processes.

2. Deadlock

In concurrent computing, it occurs when one group member waits for another member, including itself,
to send a message and release a lock. Software and hardware locks are commonly used to arbitrate
shared resources and implement process synchronization in parallel computing, distributed systems, and
multiprocessing.

3. Blocking

A blocked process is waiting for some event, like the availability of a resource or completing an I/O
operation. Processes may block waiting for resources, and a process may be blocked for a long time
waiting for terminal input. If the process is needed to update some data periodically, it will be very
undesirable.

4. Race Conditions

A race condition occurs when the output of a software application is determined by the timing or
sequencing of other uncontrollable events. Race conditions can also happen in software that is
multithreaded, runs in a distributed environment, or depends on shared resources.

5. Starvation

Starvation is a problem in concurrent computing where a process is continuously denied the resources it
needs to complete its work. It may be caused by errors in the scheduling or mutual-exclusion algorithm,
but resource leaks may also cause it.

Concurrent system design frequently requires developing dependable strategies for coordinating their
execution, data interchange, memory allocation, and execution schedule to decrease response time and
maximize throughput.

Advantages and Disadvantages of Concurrency in


Operating System
Various advantages and disadvantages of Concurrency in Operating systems are as follows:

Advantages
1. Better Performance

It improves the operating system's performance. When one application only utilizes the processor, and
another only uses the disk drive, the time it takes to perform both apps simultaneously is less than the
time it takes to run them sequentially.

2. Better Resource Utilization

It enables resources that are not being used by one application to be used by another.

3. Running Multiple Applications

It enables you to execute multiple applications simultaneously.

Disadvantages

1. It is necessary to protect multiple applications from each other.


2. It is necessary to use extra techniques to coordinate several applications.
3. Additional performance overheads and complexities in OS are needed for switching between applications.

(13) Mutual Exclusion


Mutual exclusion, from the resource point of view, is the fact that a resource can never be used by more
than one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if
a resource could be used by more than one process at the same time, no process would ever have to wait
for a resource.

However, if we could make resources stop behaving in a mutually exclusive manner, then deadlock could
be prevented.

Spooling
For a device like a printer, spooling can work. There is memory associated with the printer that stores
jobs from each process. Later, the printer collects all the jobs and prints each one of them
according to FCFS. By using this mechanism, a process doesn't have to wait for the printer and can
continue with whatever it was doing. Later, it collects the output when it is produced.

Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds
of problems.

1. This cannot be applied to every resource.


2. After some point of time, there may arise a race condition between the processes to get space in that
spool.

In general, we cannot force a resource to be usable by more than one process at the same time, since it
would not be safe and serious correctness and performance problems may arise. Therefore, we cannot
practically violate mutual exclusion.

2. Hold and Wait


The hold-and-wait condition arises when a process holds one resource while waiting for another resource
to complete its task. Deadlock occurs because there can be several processes, each holding one
resource and waiting for another, in a cyclic order.

To prevent it, we need some mechanism by which a process either doesn't hold any resource or
doesn't wait. That means a process must be assigned all the necessary resources before its execution
starts, and must not wait for any resource once execution has begun.

!(Hold and wait) = !hold or !wait (negation of hold and wait is, either you don't hold or you don't
wait)

This can be implemented if a process declares all the resources it needs initially. However, although
this sounds reasonable, it cannot be done in a real computer system, because a process cannot determine
all of its necessary resources initially.

A process is a set of instructions executed by the CPU. Each instruction may demand multiple resources
at multiple times; the need cannot be determined by the OS in advance.

The problem with the approach is:

1. Practically not possible.


2. The possibility of starvation increases, due to the fact that some process may hold a resource for a
very long time.

3. No Preemption
Deadlock arises partly due to the fact that a resource, once allocated, cannot be taken away from a
process. However, if we take the resource away from the process that is causing the deadlock, then we
can prevent deadlock.

This is not a good approach in general, since if we take away a resource that is being used by a
process, then all the work it has done so far can become inconsistent.

Consider a printer being used by some process. If we take the printer away from that process and assign
it to some other process, then all the data that has been printed can become inconsistent and
ineffective; moreover, the process can't resume printing from where it left off, which causes
performance inefficiency.

4. Circular Wait
To violate circular wait, we can assign a priority number to each resource. A process cannot request a
resource with a lower priority number than one it already holds. This ensures that resources are always
requested in a fixed order, so no cycle of waiting can be formed.

Among all the methods, violating Circular wait is the only approach that can be implemented practically.

(14) Hardware Support


What is Computer Hardware?
Hardware, abbreviated as HW, refers to all the physical components of a computer system, including the
devices connected to it. You cannot create a computer or use software without hardware. The screen on which
you are reading this information is also hardware.

What is a hardware upgrade?


A hardware upgrade refers to new hardware, a replacement for old hardware, or additional hardware developed
to improve the performance of the existing system. A common example is a RAM upgrade, which increases the
computer's total memory, or a video card upgrade, where the old video card is removed and replaced with a
new one.

Computer Hardware Parts


Some of the commonly used hardware in your computer are described below:

1. Motherboard
2. Monitor
3. Keyboard
4. Mouse

1) Motherboard:
The motherboard is generally a thin circuit board that holds together almost all parts of a computer except input and
output devices. All crucial hardware like CPU, memory, hard drive, and ports for input and output devices are
located on the motherboard. It is the biggest circuit board in a computer chassis.

It allocates power to all hardware located on it and enables them to communicate with each other. It is meant to
hold the computer's microprocessor chip and let other components connect to it. Each component that runs the
computer or improves its performance is a part of the motherboard or connected to it through a slot or port.

There can be different types of motherboards based on the type and size of the computers. So, a specific
motherboard can work only with specific types of processors and memory.

Components of a Motherboard:
CPU Slot: It is provided to install the CPU. It is a link between a microprocessor and a motherboard. It facilitates
the use of CPU and prevents the damage when it is installed or removed. Furthermore, it is provided with a lock to
prevent CPU movement and a heat sink to dissipate the extra heat.

RAM Slot: It is a memory slot or socket provided in the motherboard to insert or install the RAM (Random Access
Memory). There can be two or more memory slots in a computer.

Expansion Slot: It is also called the bus slot or expansion port. It is a connection or port on the motherboard, which
provides an installation point to connect a hardware expansion card, for example, you can purchase a video
expansion card and install it into the expansion slot and then can install a new video card in the computer. Some of
the common expansion slots in a computer are AGP, AMR, CNR, PCI, etc.

Capacitor: It is made of two conductive plates, and a thin insulator sandwiched between them. These parts are
wrapped in a plastic container.

Inductor (Coil): It is an electromagnetic coil made of a conducting wire wrapped around an iron core. It acts as an
inductor or electromagnet to store magnetic energy.

Northbridge: It is an integrated circuit that allows communications between the CPU interface, AGP, and memory.
Furthermore, it also allows the southbridge chip to communicate with the RAM, CPU, and graphics controller.

USB Port: It allows you to connect hardware devices like mouse, keyboard to your computer.

PCI Slot: It stands for Peripheral Component Interconnect slot. It allows you to connect the PCI devices like
modems, network hardware, sound, and video cards.

AGP Slot: It stands for Accelerated Graphics Port. It provides the slot to connect graphics cards.

Heat Sink: It absorbs and disperses the heat generated in the computer processor.

Power Connector: It is designed to supply power to the motherboard.

CMOS battery: It stands for complementary metal-oxide-semiconductor. It is a memory that stores the BIOS
settings such as time, date, and hardware settings.

2) Monitor:
A monitor is the display unit of a computer on which processed data, such as text and images, is displayed. It
comprises the screen circuitry and the case that encloses this circuitry. The monitor is also known as a visual
display unit (VDU).

Types of Monitors:

1. CRT Monitor: It has cathode ray tubes which produce images in the form of video signals. Its main components are
electron gun assembly, deflection plate assembly, glass envelope, fluorescent screen, and base.
2. LCD Monitor: It is a flat-panel screen. It uses liquid crystal display technology to produce images on the screen.
Advanced LCDs have thin-film transistors with capacitors and use active-matrix technology, which allows pixels to
retain their charge.
3. LED Monitor: It is an advanced version of the LCD monitor. Unlike an LCD monitor, which uses a cold-cathode
fluorescent lamp to backlight the display, it has LED panels, each containing many LEDs, to provide the backlight.
4. Plasma Monitor: It uses plasma display technology, which allows it to produce resolutions of up to 1920 x 1080,
a wide viewing angle, a high refresh rate, an outstanding contrast ratio, and more.

3) Keyboard:
It is the most important input device of a computer. It is designed to allow you input text, characters, and other
commands into a computer, desktop, tablet, etc. It comes with different sets of keys to enter numbers, characters,
and perform various other functions like copy, paste, delete, enter, etc.

Types of Keyboards:

1. QWERTY Keyboards
2. AZERTY Keyboards
3. DVORAK Keyboards

4) Mouse:
It is a small handheld device designed to control or move the pointer (computer screen's cursor) in a GUI (graphical
user interface). It allows you to point to or select objects on a computer's display screen. It is generally placed on a
flat surface as we need to move it smoothly to control the pointer. Types of Mouse: Trackball mouse, Mechanical
Mouse, Optical Mouse, Wireless Mouse, etc.

Main functions of a mouse:

o Move the cursor: It is the main function of the mouse; to move the cursor on the screen.
o Open or execute a program: It allows you to open a folder or document and execute a program. You are required to
take the cursor on the folder and double click it to open it.
o Select: It allows you to select text, file, or any other object.
o Hovering: Hovering is an act of moving the mouse cursor over a clickable object. During hovering over an object, it
displays information about the object without pressing any button of the mouse.
o Scroll: It allows you to scroll up or down while viewing a long webpage or document.

Parts of a mouse:

o Two buttons: A mouse is provided with two buttons for right click and left click.
o Scroll Wheel: A wheel located between the right and left buttons, which is used to scroll up and down and Zoom in
and Zoom out in some applications like AutoCAD.
o Battery: A battery is required in a wireless mouse.
o Motion Detection Assembly: A mouse can have a trackball or an optical sensor to provide signals to the computer
about the motion and location of the mouse.

(15) Semaphore and Monitor


In this article, you will learn the difference between the semaphore and monitor. But before discussing the
differences, you will need to know about the semaphore and monitor.

What is Semaphore?
A semaphore is an integer variable that allows many processes in a parallel system, such as a multitasking OS,
to manage access to a common resource. The integer variable (S) is initialized with the number of resources in
the system. The wait() and signal() methods are the only methods that may modify the semaphore's value. While
one process is modifying the semaphore's value, no other process can modify it simultaneously.

Furthermore, the operating system categorizes semaphores into two types:

1. Counting Semaphore
2. Binary Semaphore

Counting Semaphore
In Counting Semaphore, the value of semaphore S is initialized to the number of resources in the system. When a
process needs to access shared resources, it calls the wait() method on the semaphore, decreasing its value by one.
When the shared resource is released, it calls the signal() method, increasing the value by 1.

When the semaphore count reaches 0, it implies that the processes have used all resources. Suppose a process needs
to utilize a resource when the semaphore count is 0. In that case, it performs the wait() method, and it is blocked
until another process using the shared resources releases it, and the value of the semaphore increases to 1.
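The counting behaviour described above can be sketched with Python's threading.Semaphore, used here as a stand-in for the OS primitive. The pool size of 3, the worker function, and the counters are illustrative assumptions, not part of any standard API:

```python
import threading
import time

pool = threading.Semaphore(3)   # counting semaphore: 3 identical resources
active = 0                      # how many workers currently hold a resource
peak = 0                        # highest concurrency observed
counter_lock = threading.Lock() # protects the two counters above

def worker():
    global active, peak
    pool.acquire()              # wait(): blocks once all 3 resources are taken
    with counter_lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)            # simulate using the resource
    with counter_lock:
        active -= 1
    pool.release()              # signal(): return the resource to the pool

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 3, however many workers run
```

Even with ten competing workers, the semaphore guarantees that at most three are ever inside the guarded region at once.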

Binary Semaphore
In a binary semaphore, the value of the semaphore is either 0 or 1. It's comparable to a mutex lock, except that a mutex is a
locking mechanism while a semaphore is a signalling mechanism. When a process needs to access a binary semaphore
resource, it uses the wait() method to decrement the semaphore's value from 1 to 0.

When the process releases the resource, it uses the signal() method to increase the semaphore value to 1. When the
semaphore value is 0, and a process needs to use the resource, it uses the wait() method to block until the current
process that is using the resource releases it.

Syntax:
The syntax of the semaphore may be used as:

1. // Wait Operation
2. wait(Semaphore S) {
3. while (S<=0);
4. S--;
5. }
6. // Signal Operation
7. signal(Semaphore S) {
8. S++;
9. }
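The pseudocode above busy-waits in its while loop. A minimal Python sketch of the same wait()/signal() semantics, assuming a blocking implementation built on a condition variable instead of spinning (the class name and test values are illustrative):

```python
import threading

class Semaphore:
    """Sketch of wait()/signal(): same logic as the pseudocode,
    but the waiting process sleeps instead of spinning."""
    def __init__(self, value):
        self.S = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.S <= 0:       # same test as `while (S <= 0);`
                self.cond.wait()     # block until signalled
            self.S -= 1              # S--

    def signal(self):
        with self.cond:
            self.S += 1              # S++
            self.cond.notify()       # wake one waiting process

s = Semaphore(2)
s.wait()
s.wait()
print(s.S)   # 0: both resources are taken
s.signal()
print(s.S)   # 1: one resource returned
```

The condition variable makes the "block until the value becomes positive" step explicit, which is how real kernels avoid wasting CPU time on busy waiting.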

Advantages and Disadvantages of Semaphore


Various advantages and disadvantages of the semaphore are as follows:

Advantages

1. They don't allow multiple processes to enter the critical section simultaneously. Mutual exclusion is achieved in this
manner, making semaphores an efficient synchronization technique.
2. A properly implemented semaphore suspends waiting processes instead of busy waiting, so no process time or
resources are wasted: a process enters the critical section only when the required condition is satisfied.
3. They enable flexible resource management.
4. They are machine-independent because they execute in the microkernel's machine-independent code.

Disadvantages

1. There could be a situation of priority inversion, where processes with low priority get access to the critical section
before those with higher priority.
2. Semaphore programming is complex, and there is a risk that mutual exclusion will not be achieved.
3. The wait() and signal() methods must be conducted correctly to avoid deadlocks.

What is Monitor?
A monitor is a synchronization construct that provides threads with mutual exclusion and the ability to wait for a
given condition to become true. It is an abstract data type consisting of shared variables and a collection of
procedures that operate on those variables. A process cannot access the shared data variables directly; it must go
through the monitor's procedures, which ensure that only one process accesses the shared data at a time.

At any particular time, only one process may be active in a monitor. Other processes that require access to the
shared variables must queue and are only granted access after the previous process releases the shared variables.

Syntax:
The syntax of the monitor may be used as:

1. monitor {
2.
3. //shared variable declarations
4. data variables;
5. Procedure P1() { ... }
6. Procedure P2() { ... }
7. .
8. .
9. .
10. Procedure Pn() { ... }
11. Initialization Code() { ... }
12. }
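A rough Python sketch of this structure, assuming the monitor is emulated with a lock for mutual exclusion plus a condition variable. The BoundedCounterMonitor class and its procedures are hypothetical examples of the pattern, not a standard API:

```python
import threading

class BoundedCounterMonitor:
    """Monitor sketch: shared data plus procedures, with a lock
    guaranteeing only one thread is active inside at a time."""
    def __init__(self, limit):
        self._lock = threading.Lock()                   # monitor entry lock
        self._not_full = threading.Condition(self._lock)  # condition variable
        self.count = 0                                  # shared variable
        self.limit = limit

    def increment(self):                                # Procedure P1
        with self._lock:
            while self.count >= self.limit:             # wait for condition
                self._not_full.wait()
            self.count += 1

    def decrement(self):                                # Procedure P2
        with self._lock:
            self.count -= 1
            self._not_full.notify()                     # condition may now hold

m = BoundedCounterMonitor(limit=5)
for _ in range(5):
    m.increment()
print(m.count)  # 5
```

Every access to `count` goes through a procedure that first takes the monitor lock, which is exactly the "only one process active in the monitor" rule stated above.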

Advantages and Disadvantages of Monitor


Various advantages and disadvantages of the monitor are as follows:

Advantages

1. Mutual exclusion is automatic in monitors.


2. Monitors are less difficult to implement than semaphores.
3. Monitors may overcome the timing errors that occur when semaphores are used.
4. Monitors are a collection of procedures and condition variables that are combined in a special type of module.

Disadvantages

1. Monitors must be implemented into the programming language.


2. The compiler should generate code for them.
3. It places on the compiler the additional burden of knowing which operating system features are available for
controlling access to critical sections in concurrent processes.

Main Differences between the Semaphore and Monitor
Here, you will learn the main differences between the semaphore and monitor. Some of the main differences are as
follows:

1. A semaphore is an integer variable that allows many processes in a parallel system to manage access to a common
resource like a multitasking OS. On the other hand, a monitor is a synchronization technique that enables threads to
mutual exclusion and the wait() for a given condition to become true.
2. When a process uses shared resources in semaphore, it calls the wait() method and blocks the resources. When it
wants to release the resources, it executes the signal() method. In contrast, when a process uses shared resources in the
monitor, it has to access them via procedures.
3. Semaphore is an integer variable, whereas monitor is an abstract data type.
4. In semaphore, an integer variable shows the number of resources available in the system. In contrast, a monitor is an
abstract data type that permits only one process to execute in the critical section at a time.
5. Semaphores have no concept of condition variables, while monitor has condition variables.
6. A semaphore's value can only be changed using the wait() and signal() methods. In contrast, the monitor has shared
variables and the procedures that enable processes to access them.

Head-to-head comparison between the Semaphore and Monitor
Various head-to-head comparisons between the semaphore and monitor are as follows:

Feature: Definition
o Semaphore: A semaphore is an integer variable that allows many processes in a parallel system to manage access to a common resource like a multitasking OS.
o Monitor: It is a synchronization construct that enables threads to have mutual exclusion and to wait for a given condition to become true.

Feature: Syntax
o Semaphore:
// Wait Operation
wait(Semaphore S) {
    while (S <= 0);
    S--;
}
// Signal Operation
signal(Semaphore S) {
    S++;
}
o Monitor:
monitor {
    // shared variable declarations
    data variables;
    Procedure P1() { ... }
    Procedure P2() { ... }
    ...
    Procedure Pn() { ... }
    Initialization Code() { ... }
}

Feature: Basic
o Semaphore: Integer variable.
o Monitor: Abstract data type.

Feature: Access
o Semaphore: When a process uses shared resources, it calls the wait() method on S, and when it releases them, it uses the signal() method on S.
o Monitor: When a process uses shared resources in the monitor, it has to access them via procedures.

Feature: Action
o Semaphore: The semaphore's value shows the number of shared resources available in the system.
o Monitor: The monitor type includes shared variables as well as a set of procedures that operate on them.

Feature: Condition variables
o Semaphore: No condition variables.
o Monitor: It has condition variables.

Conclusion
In summary, semaphore and monitor are two synchronization mechanisms. A semaphore is an integer variable
manipulated through the wait() and signal() methods. In contrast, the monitor is an abstract data type that enables
only one process to use a shared resource at a time. Monitors are simpler to implement than semaphores, and there
are fewer chances of making a mistake with monitors than with semaphores.
(15) What is Message Passing?


In the message passing model, processes communicate with each other by exchanging messages. A
communication link between the processes is required for this purpose, and it must provide at least two
operations: send(message) and receive(message). Messages may be of fixed or variable size.
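A minimal sketch of the send/receive pattern, assuming Python threads with a queue.Queue standing in for the communication link. The message names and the None sentinel are illustrative choices:

```python
import threading
import queue

link = queue.Queue()   # the communication link between the two processes

def producer():
    for i in range(3):
        link.put(f"msg-{i}")   # send(message)
    link.put(None)             # sentinel: no more messages will follow

received = []

def consumer():
    while True:
        msg = link.get()       # receive(message): blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(received)  # ['msg-0', 'msg-1', 'msg-2']
```

Note that the queue also provides the synchronization: the consumer simply blocks in receive until a message exists, with no explicit locks in the user code.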

Key differences between the Shared Memory and Message Passing
Here, you will learn the various key differences between Shared Memory and Message Passing. Various differences
between Shared Memory and Message Passing are as follows:

1. Shared memory is used to communicate between the single processor and multiprocessor systems. The
communication processes are on the same machine and share the same address space. On the other hand, message
passing is most commonly utilized in a distributed setting when communicating processes are spread over multiple
devices linked by a network.
2. Shared memory offers a maximum computation speed because communication is completed via the shared memory,
so the system calls are only required to establish the shared memory. On the other hand, message passing takes time
because it is performed via the kernel (system calls).
3. The shared memory region is mainly used for sharing data between processes. On the other hand, message passing is
mainly used for communication and synchronization between processes.
4. With shared memory, the developer must make sure that processes aren't writing to the same address simultaneously.
On the other hand, message passing is useful for sharing small amounts of data without causing such conflicts.
5. The code for reading and writing the data from the shared memory should be written explicitly by the developer. On
the other hand, no such code is required in this case because the message passing feature offers a method for
communication and synchronization of activities executed by the communicating processes.

Head-to-head comparison between Shared Memory and Message Passing
Here, you will learn the head-to-head comparisons between the Shared Memory and the Message Passing. The
main differences between the Shared Memory and the Message Passing are as follows:

o Shared Memory: It is mainly used for data communication.
  Message Passing: It is mainly used for communication.

o Shared Memory: It offers a maximum speed of computation because communication is completed via the shared memory, so system calls are only required to establish the shared memory.
  Message Passing: It takes more time because it is performed via the kernel (system calls).

o Shared Memory: The code for reading and writing the data from the shared memory must be written explicitly by the developer.
  Message Passing: No such code is required, because the message passing facility provides a mechanism for communication and synchronization of the communicating processes.

o Shared Memory: It is used to communicate between single-processor and multiprocessor systems, in which the communicating processes are on the same machine and share the same address space.
  Message Passing: It is most commonly used in a distributed setting, where the communicating processes are spread over multiple devices linked by a network.

o Shared Memory: It is a faster communication strategy than message passing.
  Message Passing: It is a relatively slower communication strategy than shared memory.

o Shared Memory: The developer must make sure that processes aren't writing to the same address simultaneously.
  Message Passing: It is useful for sharing small amounts of data without causing conflicts.

(16) Principles of Deadlock


What is Deadlock in Operating System (OS)?
Every process needs some resources to complete its execution. However, the resource is granted in a sequential
order.

1. The process requests a resource.

2. The OS grants the resource if it is available; otherwise, the process waits.
3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process waits for a resource that is assigned to some other process. In this
situation, none of the processes gets executed, since the resource it needs is held by another process that is itself
waiting for some other resource to be released.

Let us assume that there are three processes P1, P2 and P3. There are three different resources R1, R2 and R3. R1 is
assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't complete
without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its execution because it can't continue
without R3. P3 then demands R1, which is being held by P1, so P3 stops its execution as well.

In this scenario, a cycle is formed among the three processes. None of the processes is progressing; they are
all waiting. The computer becomes unresponsive since all the processes are blocked.
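The cycle described above can be detected programmatically. A sketch, assuming the system is modelled as a wait-for graph where an edge P -> Q means "P is waiting for a resource held by Q":

```python
# Wait-for graph for the P1/P2/P3 scenario above.
wait_for = {
    "P1": ["P2"],   # P1 waits for R2, held by P2
    "P2": ["P3"],   # P2 waits for R3, held by P3
    "P3": ["P1"],   # P3 waits for R1, held by P1
}

def has_cycle(graph):
    """Depth-first search for a cycle; in a single-instance
    resource system, a cycle means deadlock."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True             # back edge: a cycle exists
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

print(has_cycle(wait_for))  # True: P1 -> P2 -> P3 -> P1 is deadlocked
```

If P3 were not waiting for P1, the graph would be acyclic and the same function would report no deadlock.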
Difference between Starvation and Deadlock

1. Deadlock: Deadlock is a situation in which processes block each other and no process proceeds.
   Starvation: Starvation is a situation in which low-priority processes remain blocked while high-priority processes proceed.

2. Deadlock: Deadlock is an infinite waiting.
   Starvation: Starvation is a long waiting, but not infinite.

3. Deadlock: Every deadlock is always a starvation.
   Starvation: Every starvation need not be a deadlock.

4. Deadlock: The requested resource is blocked by another process.
   Starvation: The requested resource is continuously used by higher-priority processes.

5. Deadlock: Deadlock happens when mutual exclusion, hold and wait, no preemption, and circular wait occur simultaneously.
   Starvation: It occurs due to uncontrolled priority and resource management.

Necessary conditions for Deadlocks


1. Mutual Exclusion

A resource can only be shared in a mutually exclusive manner. It implies that two processes cannot use the same
resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No preemption

A resource, once allocated to a process, cannot be forcibly taken away from it. The resource is released only
voluntarily by the process holding it, after it has finished using the resource.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so that the last process is waiting for
the resource which is being held by the first process.
(17) Deadlock Prevention


If we model deadlock as a table standing on its four legs, then the four legs correspond to the four conditions
which, when they occur simultaneously, cause the deadlock.

However, if we break one of the legs, the table will definitely fall. The same happens with deadlock: if we can
violate one of the four necessary conditions and not let them occur together, then we
can prevent the deadlock.

Let's see how we can prevent each of the conditions.

1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one
process simultaneously, which is fair enough, but it is the main reason behind deadlock. If a resource could
be used by more than one process at the same time, no process would ever be waiting for a
resource.

However, if we can make resources stop behaving in a mutually exclusive manner, then deadlock can
be prevented.

Spooling
For a device like a printer, spooling can work. There is a memory area associated with the printer that stores jobs from
each process. The printer then collects the jobs and prints each one of them in FCFS order. Using
this mechanism, a process doesn't have to wait for the printer; it can continue whatever it was doing and
collect the output later, when it is produced.
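A toy sketch of this idea, assuming the spool is modelled as a FIFO queue drained by a single printer thread. The job names and sentinel are illustrative:

```python
import queue
import threading

spool = queue.Queue()   # the spool: jobs wait here, processes don't
printed = []

def printer():
    """Single printer thread: drains the spool in FCFS order."""
    while True:
        job = spool.get()
        if job is None:          # shutdown sentinel
            break
        printed.append(job)      # "print" the job

def submit(name):
    """A process drops its job in the spool and continues immediately."""
    spool.put(f"{name}'s document")

t = threading.Thread(target=printer)
t.start()
for name in ["P1", "P2", "P3"]:
    submit(name)
spool.put(None)   # no more jobs
t.join()

print(printed)  # ["P1's document", "P2's document", "P3's document"]
```

No process ever holds the printer, so no process can deadlock waiting for it; they only contend for space in the spool, which is the race-condition caveat noted below.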

Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds of
problems.

1. This cannot be applied to every resource.


2. After some point of time, there may arise a race condition between the processes to get space in that spool.

We cannot force a resource to be used by more than one process at the same time, since that would not be correct
and serious performance problems could arise. Therefore, we cannot practically violate mutual exclusion for a
process.

2. Hold and Wait


The hold and wait condition arises when a process holds a resource while waiting for some other resource to complete its
task. Deadlock occurs because there can be more than one process holding one resource while waiting for
another in a cyclic order.

However, we have to find out some mechanism by which a process either doesn't hold any resource or doesn't wait.
That means, a process must be assigned all the necessary resources before the execution starts. A process must not
wait for any resource once the execution has been started.

!(Hold and wait) = !hold or !wait (negation of hold and wait is, either you don't hold or you don't wait)
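One way to approximate "don't hold while waiting" is all-or-nothing acquisition: try to take every needed resource without blocking, and back out completely if any is unavailable. A sketch, assuming resources are modelled as Python locks (the helper name acquire_all is hypothetical):

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()

def acquire_all(*locks):
    """Acquire every lock or none: if any is busy, release what we
    already hold and report failure, so we never hold-and-wait."""
    taken = []
    for lk in locks:
        if lk.acquire(blocking=False):   # try without waiting
            taken.append(lk)
        else:
            for held in taken:           # back out completely
                held.release()
            return False
    return True

print(acquire_all(r1, r2))   # True: both were free, both are now held
print(acquire_all(r1, r2))   # False: already held, so nothing new is kept
```

A failed caller can retry later, which is exactly where the starvation risk mentioned below comes from: an unlucky process may keep failing while others hold the resources.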
This can be implemented if a process declares all its resources initially. However, although this sounds simple, it
can't be done in a real computer system, because a process can't determine its necessary resources in advance.

A process is a set of instructions executed by the CPU, and each instruction may demand multiple
resources at multiple times. The need cannot be fixed by the OS in advance.

The problem with the approach is:

1. It is practically not possible.

2. The possibility of starvation increases, because some process may hold resources for a very long
time.

3. No Preemption
Deadlock arises partly due to the fact that a resource can't be taken away from a process once it has been allocated.
However, if we take the resource away from the process which is causing the deadlock, then we can prevent deadlock.

This is not a good approach at all since if we take a resource away which is being used by the process then all the
work which it has done till now can become inconsistent.

Consider a printer is being used by any process. If we take the printer away from that process and assign it to some
other process then all the data which has been printed can become inconsistent and ineffective and also the fact that
the process can't start printing again from where it has left which causes performance inefficiency.

4. Circular Wait
To violate circular wait, we can assign a number (priority) to each resource and require that a process request
resources only in increasing order of these numbers. A process holding a higher-numbered resource can then never
wait for a lower-numbered one, so no cycle can be formed.

Among all the methods, violating Circular wait is the only approach that can be implemented practically.
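A sketch of resource ordering, assuming resources are modelled as numbered Python locks and every process sorts its requests before acquiring (the helper names acquire_in_order and release are illustrative):

```python
import threading

# Each resource gets a fixed number; requests must follow increasing order.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(ids):
    """Enforce the global ordering: sort first, then acquire.
    No process can hold a high-numbered lock while waiting for a low one."""
    order = sorted(ids)
    for rid in order:
        resources[rid].acquire()
    return order

def release(ids):
    """Release in reverse order (not required for correctness, but tidy)."""
    for rid in sorted(ids, reverse=True):
        resources[rid].release()

# A process that "wants 3 then 1" is still forced to lock 1 before 3,
# so two such processes can never form a cycle.
print(acquire_in_order([3, 1]))  # [1, 3]
release([1, 3])
```

Because every process acquires along the same total order, any waiting chain is strictly increasing in resource number and can never loop back on itself.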
(18) Deadlock avoidance


In deadlock avoidance, the request for any resource will be granted if the resulting state of the system doesn't cause
deadlock in the system. The state of the system will continuously be checked for safe and unsafe states.

In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to
complete its execution.

The simplest and most useful approach states that the process should declare the maximum number of resources of
each type it may ever need. The Deadlock avoidance algorithm examines the resource allocations so that there can
never be a circular wait condition.

Safe and Unsafe States

The resource allocation state of a system can be defined by the instances of available and allocated resources, and
the maximum instance of the resources demanded by the processes.

A state of a system recorded at some random time is shown below.

Resources Assigned
Process Type 1 Type 2 Type 3 Type 4

A 3 0 2 2

B 0 0 1 1

C 1 1 1 0

D 2 1 4 0

Resources still needed


Process Type 1 Type 2 Type 3 Type 4

A 1 1 0 0

B 0 1 1 2

C 1 2 1 0

D 2 1 1 2

1. E = (7 6 8 4)
2. P = (6 2 8 3)
3. A = (1 4 0 1)

The tables above and the vectors E, P and A describe the resource allocation state of a system. There are 4 processes and 4
types of resources in the system. Table 1 shows the instances of each resource assigned to each process.

Table 2 shows the instances of the resources, each process still needs. Vector E is the representation of total
instances of each resource in the system.

Vector P represents the instances of resources that have been assigned to processes. Vector A represents the number
of resources that are not in use.

A state of the system is called safe if the system can allocate all the resources requested by all the processes without
entering into deadlock.

If the system cannot fulfill the request of all processes then the state of the system is called unsafe.

The key to the deadlock avoidance approach is that when a request for resources is made, the request is only
approved if the resulting state is also a safe state.
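The safety check can be run on the exact data from the tables above. A sketch of a Banker's-style safety algorithm (the function name safe_sequence is illustrative):

```python
# Data taken from the tables above: processes A-D, 4 resource types.
allocation = {"A": [3, 0, 2, 2], "B": [0, 0, 1, 1],
              "C": [1, 1, 1, 0], "D": [2, 1, 4, 0]}
need       = {"A": [1, 1, 0, 0], "B": [0, 1, 1, 2],
              "C": [1, 2, 1, 0], "D": [2, 1, 1, 2]}
available  = [1, 4, 0, 1]   # vector A: resources not in use

def safe_sequence(allocation, need, available):
    """Repeatedly run any process whose remaining need fits in the
    available vector, reclaiming its allocation when it finishes."""
    work = list(available)
    remaining = set(allocation)
    sequence = []
    while remaining:
        runnable = next((p for p in sorted(remaining)
                         if all(n <= w for n, w in zip(need[p], work))),
                        None)
        if runnable is None:
            return None   # no process can finish: the state is unsafe
        # The process runs to completion and returns its resources.
        work = [w + a for w, a in zip(work, allocation[runnable])]
        remaining.remove(runnable)
        sequence.append(runnable)
    return sequence

print(safe_sequence(allocation, need, available))  # ['A', 'B', 'C', 'D']
```

Here A's remaining need (1 1 0 0) fits in (1 4 0 1), so A can finish first and return (3 0 2 2); B, C and D then fit one after another, so the recorded state is safe with sequence A, B, C, D.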

(19) Deadlock Detection and Recovery


In this approach, the OS doesn't apply any mechanism to avoid or prevent deadlocks; it assumes that a deadlock
will eventually occur. The OS therefore periodically checks the
system for deadlocks. If it finds one, it recovers the system using a
recovery technique.

The main task of the OS is detecting the deadlocks. The OS can detect the deadlocks with the help of Resource
allocation graph.

In single instanced resource types, if a cycle is being formed in the system then there will definitely be a deadlock.
On the other hand, in multiple instanced resource type graph, detecting a cycle is not just enough. We have to apply
the safety algorithm on the system by converting the resource allocation graph into the allocation matrix and
request matrix.

In order to recover the system from deadlocks, either OS considers resources or processes.
For Resource
Preempt the resource
We can snatch one of the resources from the owner of the resource (process) and give it to the other process with
the expectation that it will complete the execution and will release this resource sooner. Well, choosing a resource
which will be snatched is going to be a bit difficult.

Rollback to a safe state


The system passes through various states before getting into the deadlock state. The operating system can roll back the system to
the previous safe state. For this purpose, the OS needs to implement checkpointing at every state.

The moment we get into deadlock, we roll back all the allocations to get into the previous safe state.

For Process
Kill a process
Killing a process can solve the problem, but the bigger concern is deciding which process to kill. Generally, the
operating system kills the process which has done the least amount of work so far.

Kill all process


This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all processes
leads to inefficiency in the system because all the processes must execute again from the start.

(20) Integrated Deadlock Strategy


The following are the strategies used for Deadlock Handling in Distributed System:
 Deadlock Prevention
 Deadlock Avoidance
 Deadlock Detection and Recovery
1. Deadlock Prevention: As the name implies, this strategy ensures that deadlock can never happen
because system designing is carried out in such a way. If any one of the deadlock-causing conditions is not
met then deadlock can be prevented. Following are the three methods used for preventing deadlocks by
making one of the deadlock conditions to be unsatisfied:
 Collective Requests: In this strategy, all processes declare the resources required for their
execution beforehand and are allowed to execute only if all the required
resources are available. Resources are released only when the process finishes its execution. Hence, the
hold and wait condition of deadlock is prevented.
 But the issue is that a process's initial resource requirements are declared based on an assumption,
not on what will actually be required. So resources may be unnecessarily occupied by a process, and
prior allocation of resources also reduces potential concurrency.
 Ordered Requests: In this strategy, ordering is imposed on the resources and thus, process requests for
resources in increasing order. Hence, the circular wait condition of deadlock can be prevented.
 An ordering strictly indicates that a process never asks for a lower-numbered resource while holding a
higher-numbered one.
 There are two more ways of dealing with global timing and transactions in distributed
systems, both of which are based on the principle of assigning a global timestamp to each
transaction as soon as it begins.
 During the execution of a process, if a process seems to be blocked because of the resource
acquired by another process then the timestamp of the processes must be checked to identify
the larger timestamp process. In this way, cycle waiting can be prevented.
 It is better to give priority to the old processes because of their long existence and might be
holding more resources.
 It also eliminates starvation issues as the younger transaction will eventually be out of the
system.
 Preemption: Resource allocation strategies that reject no-preemption conditions can be used to avoid
deadlocks.
 Wait-die: If an older process requires a resource held by a younger process, the older process
waits. If a younger process requests a resource controlled by an
older process, the younger process is killed (it "dies").
 Wound-wait: If an old process seeks a resource held by a young process, the young process
will be preempted, wounded, and killed, and the old process will resume and wait. If a young
process needs a resource held by an older process, it will have to wait.
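The two timestamp rules above can be sketched as simple decision functions, assuming smaller timestamps mean older transactions (the function names and return strings are illustrative):

```python
def wait_die(requester_ts, holder_ts):
    """Wait-die: older (smaller timestamp) requesters wait;
    younger requesters are killed ('die')."""
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    """Wound-wait: older requesters preempt ('wound') the younger
    holder; younger requesters simply wait."""
    return "wound holder" if requester_ts < holder_ts else "wait"

# Timestamps: smaller = older. P(ts=10) is older than Q(ts=20).
print(wait_die(10, 20))    # 'wait'          old requester waits for young holder
print(wait_die(20, 10))    # 'die'           young requester is killed
print(wound_wait(10, 20))  # 'wound holder'  young holder is preempted
print(wound_wait(20, 10))  # 'wait'          young requester waits
```

In both schemes the decision depends only on the pair of timestamps, and since every waiting chain then runs in one direction of age, no cycle (and hence no deadlock) can form.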
2. Deadlock Avoidance: In this strategy, deadlock can be avoided by examining the state of the system at
every step. The distributed system reviews the allocation of resources and wherever it finds an unsafe state,
the system backtracks one step and again comes to the safe state. For this, resource allocation takes time
whenever requested by a process. Firstly, the system analysis occurs whether the granting of resources will
make the system in a safe state or unsafe state then only allocation will be made.
 A safe state refers to the state when the system is not in deadlocked state and order is there for the
process regarding the granting of requests.
 An unsafe state refers to the state when no safe sequence exists for the system. Safe sequence implies
the ordering of a process in such a way that all the processes run to completion in a safe state.
3. Deadlock Detection and Recovery: In this strategy, deadlock is detected and an attempt is made to
resolve the deadlock state of the system. These approaches rely on a Wait-For-Graph (WFG), which is
generated and evaluated for cycles in some methods. The following two requirements must be met by a
deadlock detection algorithm:
 Progress: In a given period, the algorithm must find all existing deadlocks. There should be no
deadlock existing in the system which is undetected under this condition. To put it another way, after
all, wait-for dependencies for a deadlock have arisen, the algorithm should not wait for any additional
events to detect the deadlock.
 No False Deadlocks: Deadlocks that do not exist should not be reported by the algorithm which is
called phantom or false deadlocks.
There are different types of deadlock detection techniques:
 Centralized Deadlock Detector: The resource graph for the entire system is managed by a central
coordinator. When the coordinator detects a cycle, it terminates one of the processes involved in the
cycle to break the deadlock. Messages must be passed when updating the coordinator’s graph.
Following are the methods:
 A message must be provided to the coordinator whenever an arc is created or removed from
the resource graph.
 Every process can transmit a list of arcs that have been added or removed since the last
update periodically.
 When information is needed, the coordinator asks for it.
 Hierarchical Deadlock Detector: In this approach, deadlock detectors are arranged in a hierarchy.
Here, only those deadlocks can be detected that fall within their range.
 Distributed Deadlock Detector: In this approach, detectors are distributed so that all the sites can
fully participate in resolving the deadlock state. A probe-based scheme can be used for this purpose:
it uses local WFGs to detect local deadlocks and probe messages to detect global deadlocks.
There are four classes for the Distributed Detection Algorithm:
 Path-pushing: In path-pushing algorithms, the detection of distributed deadlocks is carried out by
maintaining an explicit global WFG.
 Edge-chasing: In an edge-chasing algorithm, probe messages are used to detect the presence of a cycle
in a distributed graph structure along the edges of the graph.
 Diffusion computation: Here, the computation for deadlock detection is dispersed throughout the
system’s WFG.
 Global state detection: The detection of Distributed deadlocks can be made by taking a snapshot of
the system and then inspecting it for signs of a deadlock.
To recover from a deadlock, one of the methods can be followed:
 Termination of one or more processes that created the unsafe state.
 Using checkpoints for periodic checking of the processes so that, whenever required, processes that
make the system unsafe can be rolled back, thereby maintaining a safe state of the
system.
 Breaking of existing wait-for relationships between the processes.
 Rollback of one or more blocked processes and allocating their resources to stopped processes,
allowing them to restart operation.
