Unit 2
Depending on the type of code it is running, the processor switches between the
two modes. User mode is used for applications, whereas kernel mode is used for
the essential parts of the operating system. Some drivers may operate in user
mode, though the majority operate in kernel mode.
When you start a user-mode application, Windows creates a process for the
application.
The process provides the application with a private virtual address space and a
private handle table. Because an application's virtual address space is private, one
application can't alter data that belongs to another application.
All kernel-mode code shares a single virtual address space, so a kernel-mode
driver is not isolated from other drivers or from the operating system as a whole.
If a kernel-mode driver unintentionally writes to the wrong virtual address, data
belonging to the operating system or to another driver may be compromised, and
if a kernel-mode driver crashes, the entire operating system fails.
System calls are the programming interface that allows user-level applications to
interact with the operating system (OS) kernel. They provide essential services
such as file manipulation, process management, and communication. System
programs, on the other hand, are utilities or applications that facilitate the
operation of the system, offering user interfaces to execute system calls more
conveniently. They include shells, file management tools, and system
management utilities.
There are mainly five kinds of system calls. These are classified as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Now, you will learn all these different types of system calls one by one.
Process Control
It is responsible for process-related jobs, including creating processes,
terminating processes, loading and executing programs, waiting for events,
allocating and freeing memory, etc.
File Management
It is responsible for file manipulation jobs, including creating files, opening files,
deleting files, closing files, etc.
Device Management
These are responsible for device manipulation, including reading from device
buffers, writing into device buffers, etc.
Information Maintenance
These are used to manage the data and its share between the OS and the user
program. Some common instances of information maintenance are getting time
or date, getting system data, setting time or date, setting system data, etc.
Communication
These are used for inter process communication (IPC). Some examples of IPC
are creating, sending, receiving messages, deleting communication connections,
etc.
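As an illustrative sketch of the communication category (not part of the original text), the following Python snippet uses the pipe-related system calls exposed by the standard os module to pass a message between the two ends of a pipe:

```python
import os

# Create a unidirectional pipe: r is the read end, w is the write end.
r, w = os.pipe()

# Send a message through the pipe, then close the write end.
os.write(w, b"hello")
os.close(w)

# Receive the message on the read end.
msg = os.read(r, 1024)
os.close(r)
print(msg)  # b'hello'
```

Between related processes (e.g. after a fork), one process would write and the other would read, which is a classic form of IPC.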
The system program is a component of the OS that typically sits between
the user interface (UI) and the system calls. The user's view of the system is
defined by the system programs rather than by the system calls, because the user
interacts with the system programs, which are closer to the user interface.
Types of the System Program
There are mainly five types of system programs. These are classified as follows:
1. File Management
2. Status Information
3. File Modification
4. Programming-Language Support
5. Communication
File Management
These system programs create, delete, copy, rename, print, and otherwise
manipulate files and directories.
Status Information
Status information programs report information about input, output, storage, and
CPU utilization time, as well as how a process will be computed and how much
memory is necessary to execute a task.
File Modification
These system programs are utilized to change files on hard drives or other storage
media. Besides modification, these programs are also utilized to search for
content within a file or to change content within a file.
Programming-Language Support
After assembling and compiling, a program must be loaded into memory
for execution. A loader is a component of an operating system responsible for
loading programs and libraries, and loading is one of the essential steps in starting
a program. The system includes linkage editors, absolute loaders, relocatable
loaders, and overlay loaders.
Communication
System program offers virtual links between processes, people, and computer
systems. Users may browse websites, log in remotely, communicate messages to
other users via their screens, send emails, and transfer files from one user to
another.
The OS has various head-to-head comparisons between a system call and a system
program. One such comparison is as follows:

Feature       System Call                            System Program
Definition    A technique by which a computer        It offers an environment for a
              program requests a service from        program to be created and run.
              the OS kernel.
The operating system can be observed from the point of view of the user or the
system. This is known as the user view and the system view respectively. More
details about these are given as follows −
User View
The user view depends on the system interface that is used by the users. The
different types of user view experiences can be explained as follows −
There are some devices that offer little or no user view because there is no
interaction with the users. Examples are embedded computers in home devices,
automobiles, etc.
• Process Abstraction:
• Process Hierarchy:
The creating process is called the Parent Process and the new process is
called Child Process.
There are different ways for creating a new process. These are as follows
−
• Sharing − The parent and child processes may share all resources (such as
memory or files), the child may share a subset of the parent's resources, or
parent and child may share no resources in common.
The reasons a parent process may terminate the execution of one of its
children are as follows −
• The child process has exceeded its allocated resource usage. For this
reason, there should be some mechanism that allows the parent process to
inspect the state of its child processes.
• Example
• Consider a Business process to know about process hierarchy.
• Step 1 − Business processes can become very complicated, making it
difficult to model a large process with a single graphical model.
• Step 2 − It makes no sense to condense an end-to-end mechanism like
"order to cash" into a single graphical model that includes "article
collection to shopping cart," "purchase order request," "money transfer,"
"packaging," and "logistics," among other things.
• Step 3 − To break down large processes into smaller chunks, you'll need a
process hierarchy. A process hierarchy follows the "from abstract to real"
principle.
• Step 4 − This indicates that it includes data on operations at various levels
of granularity. As a result, knowledge about the abstract value chain or very
basic method steps and their logical order can be obtained.
• Step 5 − The levels of a process hierarchy, as well as the details included
in these levels, determine the hierarchy.
• Step 6 − It is critical to have a given knowledge base at each level;
otherwise, process models would not be comparable later.
• The process hierarchy model includes examples for each level; there are
six levels in all, ending with Level 6 − Activity.
A thread is a single string of execution that makes it possible to split a program
into multiple jobs that run concurrently or in parallel. A thread is the
smallest unit of computation and comprises a program counter, a register set, and
a stack space.
For instance, a browser runs multiple tabs in separate threads, and in a text editor,
spell checking and formatting of text occur simultaneously with the typing and
saving of the text, each handled by a different thread.
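As a minimal sketch of these ideas (the function name task is illustrative), the following Python snippet runs several threads of one process concurrently on shared data:

```python
import threading

results = []             # shared by all threads of the process
lock = threading.Lock()  # protects the shared list from concurrent appends

def task(name):
    # Each thread executes this function concurrently with the others.
    with lock:
        results.append(name)

threads = [threading.Thread(target=task, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note that the completion order of the threads is not deterministic, which is why shared structures need a lock.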
Threading Issues in OS
• System Call
• Thread Cancellation
• Signal Handling
• Thread Pool
They are the system calls fork() and exec(). fork() creates an identical copy of
the process that issued the call. The new duplicate process is referred to as the
child process, while the caller is the parent process. Execution continues with
the instruction after fork in both the parent process and the child process.
Consider the fork() system call in a multithreaded program. Suppose one of the
threads of a multithreaded program issues a fork() call, so the new process is a
duplicate of the caller. The question is: will the new process created by fork()
duplicate all the threads of the old process, or will it be single-threaded?
Now, certain UNIX systems have two variants of fork(): one duplicates all
threads of the parent process into the child process, and the other duplicates only
the thread that invoked fork(). The application determines which version of
fork() to use.
The exec() system call, when issued, replaces the whole program, including all
its threads, with the program specified in the exec() system call's parameters.
Ordinarily, exec() is issued immediately after fork(). In that case, duplicating all
the threads of the parent process into the child is superfluous, since exec() will
overwrite the whole process with the program given in its arguments anyway.
In cases like this, a version of fork() that replicates only the invoking thread
will do.
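The fork()/wait behavior described above can be sketched with Python's POSIX-only os.fork(); the exit code 7 is an arbitrary illustrative value:

```python
import os

pid = os.fork()  # duplicate the calling process
if pid == 0:
    # Child process: a real program would often call an exec() variant
    # here to replace the process image with a new program.
    os._exit(7)
else:
    # Parent process: fork() returned the child's PID; wait for it to finish.
    _, status = os.waitpid(pid, 0)
    child_code = os.WEXITSTATUS(status)
    print("child exited with", child_code)
```

In the parent, fork() returns the child's PID; in the child, it returns 0, which is how the two copies distinguish themselves.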
Thread Cancellation
The process of prematurely aborting an active thread during its run is called
‘thread cancellation’. So, let’s take a look at an example to make sense of it.
Suppose, there is a multithreaded program whose several threads have been given
the right to scan a database for some information. The other threads however will
get canceled once one of the threads happens to return with the necessary results.
The thread that we want to cancel is called the target thread. Thread cancellation
can be done in two ways:
1. Asynchronous cancellation − one thread immediately terminates the target
thread.
2. Deferred cancellation − the target thread receives the cancellation message
first and then periodically checks a flag to see whether it should cancel itself
now or later. The points at which a thread can check this flag and terminate
safely are called the cancellation points of the thread.
Signal Handling
In a multithreaded program, a signal may be delivered to the thread to which it
applies, to every thread in the process, or to certain threads; alternatively, you
could give one thread the job of receiving all signals.
So, the way in which a signal is passed to a thread depends on how the
signal was generated. Generated signals can be classified into two types:
synchronous signals and asynchronous signals.
A synchronous signal is delivered to the same thread whose operation caused it.
Asynchronous signals are triggered by events outside of the running process, so
they are received by the running process in an asynchronous manner and may be
delivered to any thread that is not blocking them.
In contrast with UNIX, where a thread can specify which signals it will accept
and which it will block, Windows delivers an asynchronous procedure call (APC)
to a particular thread rather than to the process.
Thread Pool
Another concern is the cost of establishing a fresh thread. Creating a new thread
should not take longer than the time the thread spends handling the request and
exiting afterwards, because otherwise CPU resources are wasted.
Hence, a thread pool can be the remedy for this challenge. The notion is that a
number of threads are established at the start of the process. This collection of
threads is referred to as a thread pool. The threads stay in the pool waiting for an
assigned request to service.
A new thread is spawned from the pool every time an incoming request reaches
the server, which then takes care of the said request. Having performed its duty,
it goes back to the pool and awaits its second order.
Whenever the server receives a request and fails to find a ready thread in the
pool, it simply waits until one of the threads becomes available. This is better
than starting a new thread whenever a request arrives, because it bounds the
number of threads, which works well on machines that cannot support many
threads at once.
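A minimal sketch of a thread pool, using Python's standard ThreadPoolExecutor (handle_request is a hypothetical stand-in for servicing one request):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # Stand-in for the work done to service one incoming request.
    return n * n

# A fixed pool of 4 worker threads services 8 requests; each worker
# returns to the pool after finishing and picks up the next request.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The pool size caps concurrency: at most four requests are in flight at once, and the rest queue until a worker frees up.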
All threads of a process share that process's data. The challenge arises when
every thread in the process must have its own copy of certain data. Any data
uniquely related to a particular thread is referred to as thread-specific data.
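Thread-specific data can be sketched with Python's threading.local(), which gives each thread its own private copy of the attributes stored on it (the names below are illustrative):

```python
import threading

local_data = threading.local()  # each thread sees its own attributes
seen = {}

def worker(value):
    local_data.value = value        # private to this thread
    seen[value] = local_data.value  # record what this thread observed

t1 = threading.Thread(target=worker, args=(1,))
t2 = threading.Thread(target=worker, args=(2,))
t1.start(); t2.start(); t1.join(); t2.join()
print(seen)
```

Even though both threads assign to the same attribute name, each reads back only its own value.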
Hence, these are the threading problems that arise in multithreaded programming
environments. Additionally, we examine possible ways of addressing these
concerns.
Process Scheduling
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
1. Non-preemptive: Here the resource can’t be taken from a process until the
process completes execution. The switching of resources occurs when the running
process terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount
of time. A process may switch from the running state to the ready state, or from
the waiting state to the ready state. This switching occurs because the CPU may
give priority to other processes: the running process is preempted in favor of a
process with higher priority.
Different Algorithms
• First Come First Served (FCFS) Scheduling
• Shortest Job Next (SJN) Scheduling
• Priority Scheduling
• Round Robin Scheduling
Some terms used below:
• Arrival Time: The time at which the process arrives in the ready queue.
• Completion Time: The time at which the process completes its execution.
• Waiting Time (W.T.): The difference between turnaround time and burst
time, i.e., W.T. = Turnaround Time − Burst Time.
Waiting time of each process under FCFS is as follows −
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
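The FCFS waiting times above can be reproduced with a short sketch (the function name is illustrative):

```python
def fcfs_waiting_times(arrival, burst):
    # Processes are served strictly in arrival order.
    t, waits = 0, []
    for a, b in zip(arrival, burst):
        t = max(t, a)        # CPU may sit idle until the process arrives
        waits.append(t - a)  # waiting time = start time - arrival time
        t += b               # run the process to completion
    return waits

print(fcfs_waiting_times([0, 1, 2, 3], [5, 3, 8, 6]))  # [0, 4, 6, 13]
```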
If a process with the very lowest priority is being executed, such as a daily
routine backup process that takes a long time, and all of a sudden some
high-priority process arrives, like an interrupt meant to avoid a system crash, the
high-priority process will have to wait; in this case the system may crash, just
because of improper process scheduling.
Convoy Effect is a situation where many processes that need to use a resource
for a short time are blocked by one process holding that resource for a long time.
• The processor should know in advance how much time the process will take.
Process   Arrival Time   Execution Time
P0        0              5
P1        1              3
P2        2              8
P3        3              6
Waiting time of each process is as follows −
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5
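The SJN waiting times above can be reproduced with a small non-preemptive simulation (an illustrative sketch, not a production scheduler):

```python
def sjn_waiting_times(arrival, burst):
    # Non-preemptive Shortest Job Next: among arrived processes, always
    # run the one with the smallest burst (execution) time to completion.
    n = len(arrival)
    done = [False] * n
    waits = [0] * n
    t = 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:  # CPU idles until the next process arrives
            t = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        i = min(ready, key=lambda j: burst[j])
        waits[i] = t - arrival[i]
        t += burst[i]
        done[i] = True
    return waits

print(sjn_waiting_times([0, 1, 2, 3], [5, 3, 8, 6]))  # [0, 4, 12, 5]
```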
Disadvantages
• Starvation can become a concern when using the SJN algorithm. In this
situation, longer-duration jobs may never have an opportunity to execute if
there is a continuous arrival of shorter jobs. This issue can be particularly
problematic in systems where fairness is a key consideration.
• Predicting the length of jobs accurately can be a challenging task in
practice. When the predicted job lengths turn out to be inaccurate, it can
affect the performance of SJN and result in frequent preemptions.
• In situations where fast responses are crucial, SJN might not be the most
responsive option for interactive systems or real-time environments. When
shorter jobs dominate the CPU, longer jobs may experience lengthy waits,
which is often detrimental.
• Processes with the same priority are executed on a first come first served basis.
Given: a table of processes with their arrival time, execution time, and priority.
Here we consider 1 to be the lowest priority.
Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5
Waiting time of each process is as follows −
P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2
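These priority-scheduling waiting times can be reproduced with a small non-preemptive simulation (a sketch, assuming the highest number wins the CPU, since 1 is the lowest priority in the table above):

```python
def priority_waiting_times(arrival, burst, priority):
    # Non-preemptive priority scheduling; a larger number means a
    # higher priority (1 is the lowest).
    n = len(arrival)
    done = [False] * n
    waits = [0] * n
    t = 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:  # CPU idles until the next process arrives
            t = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        i = max(ready, key=lambda j: priority[j])
        waits[i] = t - arrival[i]
        t += burst[i]
        done[i] = True
    return waits

print(priority_waiting_times([0, 1, 2, 3], [5, 3, 8, 6], [1, 2, 1, 3]))
# [0, 10, 12, 2]
```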
• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
• It is often used in batch environments where short jobs need to give preference.
• Round Robin: each process is executed for a fixed time period (the time
quantum); once the quantum expires, it is preempted and another process
executes for its quantum.
With a time quantum of 3 for the table above, the waiting time of each process
under round robin is as follows −
P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11
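A round-robin simulation reproduces these waiting times, assuming a time quantum of 3, which matches the figures above for the table of arrival and execution times (an illustrative sketch):

```python
from collections import deque

def rr_waiting_times(arrival, burst, quantum):
    # Round-robin simulation; assumes the CPU is never left idle
    # once the first process has arrived.
    n = len(arrival)
    remaining = list(burst)
    last_ready = list(arrival)  # when each process last became ready
    waits = [0] * n
    admitted = [False] * n
    q = deque()
    t = 0

    def admit(now):
        for i in range(n):
            if not admitted[i] and arrival[i] <= now:
                admitted[i] = True
                q.append(i)

    admit(t)
    while q:
        i = q.popleft()
        waits[i] += t - last_ready[i]  # time spent waiting in the queue
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        admit(t)                       # newly arrived processes join first
        if remaining[i] > 0:           # unfinished: back of the queue
            last_ready[i] = t
            q.append(i)
    return waits

print(rr_waiting_times([0, 1, 2, 3], [5, 3, 8, 6], 3))  # [9, 2, 12, 11]
```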
For example, CPU-bound jobs can be scheduled in one queue and all I/O-
bound jobs in another queue. The Process Scheduler then alternately selects
jobs from each queue and assigns them to the CPU based on the algorithm
assigned to the queue.