Unit 2

Class notes prepared by me.

Processor and User Modes in Operating System

The user mode and kernel mode of a CPU on a computer running Windows are two different modes of processor operation, not two different operating systems.

Depending on the type of code it is running, the processor switches between the two modes. User mode is used for applications, whereas kernel mode is used for the essential parts of the operating system. Some drivers may operate in user mode, though the majority operate in kernel mode.

User Mode in Operating System

When you start a user-mode application, Windows creates a process for the
application.

The process provides the application with a private virtual address space and a
private handle table. Because an application's virtual address space is private, one
application can't alter data that belongs to another application.

Each application runs in isolation, and if an application crashes, the crash is


limited to that one application. Other applications and the operating system aren't
affected by the crash.

A user-mode application's virtual address space is constrained in addition to being private. A process running in user mode cannot access virtual addresses that are reserved for the operating system. Restricting a user-mode application's virtual address space prevents it from altering, and possibly damaging, critical operating system data.

Processor / Kernel Mode in Operating System

A single virtual address space is shared by all kernel-mode code, so a kernel-mode driver isn't isolated from other drivers or from the operating system itself. If a kernel-mode driver accidentally writes to the wrong virtual address, data that belongs to the operating system or to another driver may be compromised. If a kernel-mode driver crashes, the entire operating system crashes.

The interaction between kernel-mode and user-mode components can be summarized in the following points.

• User mode − used when executing harmless code in user applications.

• Kernel mode (a.k.a. system mode, supervisor mode, privileged mode) − used when executing potentially dangerous code in the system kernel.

• Certain machine instructions (privileged instructions) can only be executed in kernel mode.

• Kernel mode can only be entered through controlled entry points such as system calls and interrupts; user code cannot flip the mode switch itself.

• Modern computers support dual-mode operation in hardware, and therefore most modern OSes support dual-mode operation.
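User code reaches kernel mode only through the system-call interface. As a rough illustration (this assumes Linux on x86-64, where system call number 39 is getpid; the number is platform-specific), Python's ctypes module can invoke a raw system call directly and get the same answer as the os wrapper:

```python
import ctypes
import os

# Load the C library, which exposes the generic syscall(2) entry point.
libc = ctypes.CDLL(None, use_errno=True)

SYS_getpid = 39  # Linux x86-64 syscall number for getpid (platform-specific!)

# Both calls trap into kernel mode and return the current process ID.
raw_pid = libc.syscall(SYS_getpid)
wrapped_pid = os.getpid()

print(raw_pid == wrapped_pid)  # the same kernel service, two entry styles
```

Either way, the processor switches to kernel mode only for the duration of the call and returns to user mode afterwards.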

System Calls and System Programs

System calls are the programming interface that allows user-level applications to
interact with the operating system (OS) kernel. They provide essential services
such as file manipulation, process management, and communication. System
programs, on the other hand, are utilities or applications that facilitate the
operation of the system, offering user interfaces to execute system calls more
conveniently. They include shells, file management tools, and system
management utilities.

Types of System Calls

There are mainly five kinds of system calls. These are classified as follows:

1. Process Control

2. File Management

3. Device Management

4. Information Maintenance

5. Communication

Now, you will learn all these different types of system calls one by one.

Process Control

It is responsible for process manipulation jobs, including creating processes, terminating processes, loading and executing programs, waiting for events, etc.
File Management

It is responsible for file manipulation jobs, including creating files, opening files,
deleting files, closing files, etc.

Device Management

These are responsible for device manipulation, including reading from device
buffers, writing into device buffers, etc.

Information Maintenance

These are used to transfer information between the OS and the user program. Some common instances of information maintenance are getting the time or date, setting the time or date, getting system data, setting system data, etc.

Communication

These are used for inter-process communication (IPC). Some examples are creating and deleting communication connections, and sending and receiving messages.
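Most of these categories can be seen through Python's os and time modules, whose functions are thin wrappers around the underlying system calls. The sketch below assumes a POSIX system; device-management calls such as ioctl are omitted for brevity, and the file name is illustrative:

```python
import os
import time

# Process control: create a child process (fork) and wait for it (waitpid).
pid = os.fork()
if pid == 0:
    os._exit(7)            # child terminates with status 7 (exit)
_, status = os.waitpid(pid, 0)

# File management: create, write, read, and close a file (open/write/read/close).
fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR, 0o644)
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 5)
os.close(fd)
os.unlink("demo.txt")      # delete the file again (unlink)

# Information maintenance: get process IDs and the current time (getpid/time).
me, parent = os.getpid(), os.getppid()
now = time.time()

# Communication: a pipe is a simple IPC channel (pipe/write/read).
r, w = os.pipe()
os.write(w, b"ping")
msg = os.read(r, 4)
os.close(r); os.close(w)

print(os.waitstatus_to_exitcode(status), data, msg)
```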

What is System Program?

System programming may be defined as the act of creating system software using system programming languages. A system program offers an environment in which programs may be developed and run. In simple terms, system programs serve as a link between the user interface (UI) and system calls. Some system programs are simple user interfaces, while others are complex; a compiler, for instance, is a complex piece of system software.

The system program is a component of the OS, and it typically lies between
the user interface (UI) and system calls. The system user view is defined by the
system programs, not the system call, because the user view interacts with system
programs and is closer to the user interface.
Types of the System Program

There are mainly six types of system programs. These are classified as follows:

1. File Management

2. Status Information

3. File Modification

4. Programming-Language support

5. Program Loading and Execution

6. Communication

File Management

It is a collection of specific information saved in a computer system's memory.


File management is described as manipulating files in a computer system,
including the creation, modification, and deletion of files.

Status Information

Status information programs report information about the system: input and output devices, processes, storage, CPU utilization, and how much memory is necessary to execute a task.

File Modification

These system programs are utilized to change files on hard drives or other storage
media. Besides modification, these programs are also utilized to search for
content within a file or to change content within a file.
Programming-Language Support

The OS includes certain standard system programs that support programming languages such as C, Visual Basic, C++, Java, and Perl. These include compilers, debuggers, assemblers, interpreters, etc.

Program Loading and Execution

After assembling and compiling, a program must be loaded into memory for execution. A loader is the component of an operating system responsible for loading programs and libraries, and loading is one of the essential steps in starting a program. Systems provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders.

Communication

System programs offer virtual links between processes, users, and computer systems. Users may browse websites, log in remotely, send messages to other users' screens, send emails, and transfer files from one user to another.

Key differences between System Call and System Program in Operating


System

There are various head-to-head comparisons between system calls and system programs. Some of them are as follows:
Definition − System call: a technique by which a program requests a service from the OS kernel. System program: offers an environment in which a program can be created and run.

Request − System call: fulfils the low-level requests of the user program. System program: fulfils the high-level requests or requirements of the user program.

Programming languages − System call: usually written in C and C++; assembly language is used where direct hardware access is required. System program: commonly written in high-level programming languages only.

User view − System call: defines the interface between the services provided by the OS and the user process. System program: defines the user interface (UI) of the OS.

Action − System call: the user process requests an OS service using a system call. System program: transforms the user request into the set of system calls needed to fulfil the requirement.

Classification − System call: may be categorized into file manipulation, device manipulation, communication, process control, information maintenance, and protection. System program: may be categorized into file management, program loading and execution, programming-language support, status information, file modification, and communication.
User View vs System View in Operating System

An operating system is a construct that allows user application programs to interact with the system hardware. The operating system by itself does not perform any useful function; rather, it provides an environment in which different applications and programs can do useful work.

The operating system can be observed from the point of view of the user or the
system. This is known as the user view and the system view respectively. More
details about these are given as follows −

User View

The user view depends on the system interface that is used by the users. The
different types of user view experiences can be explained as follows −

• If the user is using a personal computer, the operating system is largely


designed to make the interaction easy. Some attention is also paid to the
performance of the system, but there is no need for the operating system to
worry about resource utilization. This is because the personal computer
uses all the resources available and there is no sharing.

• If the user is using a system connected to a mainframe or a minicomputer, the operating system is largely concerned with resource utilization. This is because there may be multiple terminals connected to the mainframe, and the operating system makes sure that all the resources, such as CPU, memory, and I/O devices, are divided fairly between them.
• If the user is sitting at a workstation connected to other workstations through a network, then the operating system needs to focus on both individual usage of resources and sharing through the network. This happens because the workstation exclusively uses its own resources, but it also needs to share files etc. with other workstations across the network.

• If the user is using a handheld computer such as a mobile, then the


operating system handles the usability of the device including a few remote
operations. The battery level of the device is also taken into account.

There are some devices that involve little or no user view because there is no interaction with users. Examples are embedded computers in home devices, automobiles, etc.

Process abstraction and hierarchy in operating systems:

Process abstraction in an operating system refers to the way the OS provides an


interface for creating, managing, and terminating processes while hiding the
complex details involved. Hierarchy in OS typically involves structuring
processes in a parent-child relationship, where processes can create subprocesses
or threads, leading to a tree-like organization. This enhances resource
management, simplifies process control, and facilitates better multitasking and
scheduling.

• Process Abstraction:

• OS provides process abstraction.

• When you run an executable file, the OS creates a process.

• The OS virtualizes the CPU and timeshares it across multiple


processes.
• The CPU scheduler picks one of the active processes to execute.

• Process Hierarchy:

In an operating system, the process hierarchy is structured in a tree-like


model where processes can spawn child processes. Each process is
assigned a unique Process ID (PID). The parent process can create multiple
child processes, but each child process has only one parent. This hierarchy
allows for organized resource management and process control. The root
process at the top is often referred to as the init process (or systemd in many
Linux systems), from which all other processes descend. This structure
facilitates both the management of processes and communication between
them.

Nowadays, all general-purpose operating systems permit a user to create and destroy processes. A process can create several new processes during its execution.

The creating process is called the Parent Process and the new process is
called Child Process.

There are different ways a parent process and its new child process can proceed −

• Execution − The parent continues to execute concurrently with its child, or it waits until some or all of its children have terminated.

• Sharing − The parent and child share all resources (such as memory or files), the child shares a subset of the parent's resources, or the parent and child share no resources at all.
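The parent-child relationship can be sketched in a few lines (POSIX assumed): the parent forks a child, the child verifies who its parent is, and the parent waits for the child to terminate.

```python
import os

parent_pid = os.getpid()

pid = os.fork()              # create a child process
if pid == 0:
    # Child: its parent is the process that called fork().
    assert os.getppid() == parent_pid
    os._exit(0)              # child terminates normally
else:
    # Parent: fork() returned the child's PID; wait for the child to finish.
    _, status = os.waitpid(pid, 0)
    print("child", pid, "exited with status", os.waitstatus_to_exitcode(status))
```

Every process created this way becomes a node in the system-wide process tree rooted at init (or systemd).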

The reasons a parent process may terminate the execution of one of its children are as follows −

• The child process has exceeded its usage of the resources it has been allocated. For this to be detected, there must be some mechanism that allows the parent process to inspect the state of its children.

• The task that is assigned to the child process is no longer required.

• Example
• Consider a Business process to know about process hierarchy.
• Step 1 − Business processes can become very complicated, making it difficult to model a large process with a single graphical model.
• Step 2 − It makes no sense to condense an end-to-end process like "order to cash" into a single graphical model that includes "add article to shopping cart," "purchase order request," "money transfer," "packaging," and "logistics," among other things.
• Step 3 − To break down large processes into smaller chunks, you need a process hierarchy. A process hierarchy follows the "from abstract to concrete" principle.
• Step 4 − This means that it includes data on operations at various levels of granularity. As a result, information about the abstract value chain as well as very basic process steps and their logical order can be obtained.
• Step 5 − The levels of a process hierarchy, as well as the details included
in these levels, determine the hierarchy.
• Step 6 − It is critical to have a given knowledge base at each level;
otherwise, process models would not be comparable later.
• The model below depicts the process hierarchy model which includes
examples for each level – there are six levels in all.

Level 1 − Business Area

Level 2 − Process group


Level 3 − Business process

Level 4 − Business process variant

Level 5 − Process step

Level 6 − Activity

Thread in an Operating System

A thread is a single sequential flow of control that allows a program to be divided into multiple jobs that execute concurrently, even in parallel. A thread is the smallest unit of computation and comprises a program counter, a register set, and a stack space.

A thread is a single sequence stream within a process. It is the part of a process that executes independently of other parts, and is often called a lightweight process. Threads are used to achieve parallelism by dividing a process's tasks into independent paths of execution.

For instance, a browser uses multiple threads for multiple tabs, and a text editor uses separate threads for spell checking and formatting while other threads handle typing and saving the text.

Threading Issues in OS

• fork() and exec() System Calls

• Thread Cancellation

• Signal Handling

• Thread Pool

• Thread Specific Data



fork() and exec() System Calls

Consider the fork() and exec() system calls. fork() creates an identical copy of the process that invoked it. The duplicate is called the child process, and the invoking process is the parent. After the fork, both the parent and the child continue execution at the instruction following the fork() call.

Now consider fork() in a multithreaded program. Suppose one of the threads belonging to a multithreaded process calls fork(), so the new process is a duplicate of the old one. The question is this: should the new process duplicate all the threads of the old process, or should it contain only the single thread that made the fork() call?

Certain UNIX systems answer this by providing two variants of fork(): one duplicates all threads of the parent process into the child, and the other duplicates only the thread that invoked fork(). Which version is used depends on the application.

The exec() system call, when issued, replaces the entire process, including all its threads, with the program specified in its parameters. Ordinarily, exec() is invoked immediately after fork().

If exec() is going to be called immediately after fork(), however, duplicating all the threads of the parent into the child is superfluous, since exec() will overwrite the whole process with the program given in its arguments. In cases like this, the variant of fork() that replicates only the invoking thread is sufficient.
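The usual fork-then-exec pattern can be sketched as follows (POSIX assumed; the use of echo and the pipe are illustrative). The key point is that nothing the child inherited after the fork, threads included, survives the exec:

```python
import os

r, w = os.pipe()             # a pipe to capture the new program's output

pid = os.fork()
if pid == 0:
    os.dup2(w, 1)            # child: route stdout into the pipe
    os.close(r)
    # exec replaces the whole child process image with /bin/echo;
    # no instruction after execvp runs unless exec itself fails.
    os.execvp("echo", ["echo", "hello from exec"])
    os._exit(1)              # only reached if execvp failed
else:
    os.close(w)
    os.waitpid(pid, 0)       # parent: wait for the child
    output = os.read(r, 100).decode().strip()
    print(output)
```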

Thread Cancellation

The process of prematurely aborting an active thread during its run is called
‘thread cancellation’. So, let’s take a look at an example to make sense of it.
Suppose, there is a multithreaded program whose several threads have been given
the right to scan a database for some information. The other threads however will
get canceled once one of the threads happens to return with the necessary results.

The target thread is now the thread that we want to cancel. Thread cancellation
can be done in two ways:

• Asynchronous Cancellation: One thread immediately cancels the target thread.

• Deferred Cancellation: The target thread periodically checks a flag to see whether it should terminate, giving it the opportunity to cancel itself at a safe point.

The issues related to cancelling a target thread are as follows:

• What happens to resources that have been allocated to a cancelled target thread?

• What if the target thread is cancelled while it is in the middle of updating data that it shares with other threads?

Asynchronous cancellation is problematic in both situations, because the cancelling thread cancels its target regardless of whether the target owns resources or is in the middle of an update.

With deferred cancellation, by contrast, the target thread receives the cancellation request first and then checks its flag to see whether it should cancel itself now or later. The points at which a thread can check this flag and terminate safely are called cancellation points.
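Python's threading library offers no asynchronous cancellation at all, which makes it a convenient way to sketch the deferred style: the target thread polls a flag at safe points (its cancellation points) and exits voluntarily. The flag name and the loop below are illustrative, not a standard API:

```python
import threading
import time

cancel_requested = threading.Event()   # the cancellation flag
progress = []

def worker():
    for record in range(1_000_000):
        if cancel_requested.is_set():  # cancellation point: safe to stop here
            return                     # exit voluntarily, state left consistent
        progress.append(record)        # one simulated unit of work

t = threading.Thread(target=worker)
t.start()
time.sleep(0.01)                       # let the worker run briefly
cancel_requested.set()                 # request deferred cancellation
t.join()                               # the target notices the flag and exits
print("worker stopped after", len(progress), "records")
```

Because the worker only checks the flag between records, it never dies in the middle of updating shared state.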

Signal Handling

A signal is easily directed to the process in single-threaded programs. In multithreaded programs, however, the question is: to which thread of the program should the signal be delivered?

The signal might be delivered to:

• every thread of the process,

• some particular thread of the process, or

• the specific thread to which the signal applies.

Alternatively, one thread could be given the job of receiving all signals for the process.

How a signal is delivered to a thread depends on how the signal was generated. Signals can be classified into two types: synchronous signals and asynchronous signals. Synchronous signals are caused by the operation of the running process itself and are delivered to the same process that caused them. Asynchronous signals are triggered by events external to the running process, so the running process receives them asynchronously.

A synchronous signal, therefore, is sent to the thread that generated it. For an asynchronous signal, it cannot always be determined to which thread of a multithreaded program it should be delivered; an asynchronous signal that tells a process to stop results in all threads of the process receiving the signal.

Many UNIX versions have addressed, to some extent, the problem of asynchronous signals: a thread is given the opportunity to specify which signals it will accept and which it will block. Windows, on the other hand, has no explicit notion of signals; it instead uses asynchronous procedure calls (APCs) as the equivalent of the asynchronous signals found on UNIX platforms. In contrast with UNIX, where a thread specifies which signals it can or cannot receive, an APC is always delivered to a particular thread.
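CPython happens to adopt the "one thread receives all signals" design described above: Python-level signal handlers always run in the main thread, no matter which thread the signal logically came from. A small sketch (POSIX assumed for SIGUSR1):

```python
import signal
import threading
import time

received_in = []

def handler(signum, frame):
    # Record which thread actually ran the handler.
    received_in.append(threading.current_thread().name)

signal.signal(signal.SIGUSR1, handler)

# Raise the signal from a worker thread...
worker = threading.Thread(target=signal.raise_signal, args=(signal.SIGUSR1,))
worker.start()
worker.join()

while not received_in:       # give the main thread a chance to run the handler
    time.sleep(0.001)

# ...but the handler is still executed by the main thread.
print(received_in)
```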

Thread Pool

Consider a server that creates a separate thread every time a client requests a page. This approach has certain problems. First, if there is no limit on the number of active threads in the system, creating a new thread for each request will eventually exhaust the available system resources.

Second, the creation of a fresh thread is itself costly. The time spent creating and then destroying the thread may exceed the time the thread spends actually servicing the request, which is wasted CPU effort.

A thread pool is the remedy for this challenge. The idea is to create a fixed number of threads when the process starts. This collection of threads is referred to as a thread pool, and its threads sit waiting for requests to service.

When an incoming request reaches the server, a thread is taken from the pool to handle it. Having performed its duty, the thread goes back to the pool and awaits its next assignment.

If the server receives a request when no thread is available in the pool, it simply waits until one of the threads becomes free. This is better than starting a new thread for every request, because the pool bounds the number of threads that exist at any one time, which matters on machines that cannot support many threads at once.

Thread Specific Data

Threads of a process share the data of that process, and this sharing is one of the benefits of multithreaded programming. In some circumstances, however, each thread needs its own copy of certain data. Any data uniquely associated with a particular thread is referred to as thread-specific data.

For example, a transaction-processing system may process each transaction in its own thread, with each transaction assigned a unique identifier so the system can distinguish it from every other transaction.

Because each transaction is handled by a separate thread, thread-specific data lets us associate each thread with its particular transaction and transaction ID. Libraries that support threads, such as Win32, Pthreads, and Java, all provide support for thread-specific data (TSD).

Hence, these are the threading problems that arise in multithreaded programming environments, along with possible ways of addressing them.

Multithreading is an integral part of computer programming, enhancing task concurrency and improving system performance. However, it is also associated with the threading issues discussed above: the fork() and exec() system calls, thread cancellation, signal handling, thread pools, and thread-specific data.

Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the CPU cannot be taken away from a process until the process completes execution. The switch occurs only when the running process terminates or moves to a waiting state.

2. Preemptive: Here the OS allocates the CPU to a process for a limited amount of time. The process may be switched from the running state to the ready state, or from the waiting state to the ready state, before it finishes. This switching occurs because the scheduler may preempt the running process in favour of a higher-priority process.

Different Algorithm

Process Scheduler schedules different processes to be assigned to the CPU based


on particular scheduling algorithms. There are six popular process scheduling
algorithms which we are going to discuss in this chapter −

• First-Come, First-Served (FCFS) Scheduling

• Shortest-Job-Next (SJN) Scheduling

• Priority Scheduling

• Shortest Remaining Time

• Round Robin(RR) Scheduling

• Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes, whereas preemptive scheduling is based on priority, where a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
Terminologies Used in CPU Scheduling

• Arrival Time: The time at which the process arrives in the ready queue.

• Burst Time: The CPU time required by the process for its execution.

• Completion Time: The time at which the process completes its execution.

• Turn Around Time: The time difference between completion time and arrival time.

Turn Around Time = Completion Time − Arrival Time

• Waiting Time (W.T.): The time difference between turnaround time and burst time, i.e.

Waiting Time = Turn Around Time − Burst Time
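The two formulas can be captured in a small helper (the function name is illustrative), which is handy for checking the worked examples that follow:

```python
def metrics(arrival, burst, completion):
    """Return (turnaround, waiting) for one process."""
    turnaround = completion - arrival          # total time spent in the system
    waiting = turnaround - burst               # time spent not executing
    return turnaround, waiting

# Example: a process arrives at t=2, needs 4 units of CPU, finishes at t=10.
print(metrics(arrival=2, burst=4, completion=10))   # (8, 4)
```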

First Come First Serve (FCFS)

• Jobs are executed on a first come, first served basis.

• It is a non-preemptive scheduling algorithm.

• Easy to understand and implement.

• Its implementation is based on a FIFO queue.

• Poor in performance, as the average wait time is high.

Using the processes P0–P3 with arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6, the wait time of each process is as follows −

Process | Wait Time : Service Time − Arrival Time
P0 | 0 − 0 = 0
P1 | 5 − 1 = 4
P2 | 8 − 2 = 6
P3 | 16 − 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
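The table can be reproduced with a short FCFS simulation over the same four processes (arrival times 0, 1, 2, 3; burst times 5, 3, 8, 6):

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), already in arrival order."""
    clock, waits = 0, {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)        # CPU may idle until the job arrives
        waits[name] = start - arrival      # service (start) time - arrival time
        clock = start + burst              # runs to completion, no preemption
    return waits

waits = fcfs([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
print(waits)                                # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits))     # 5.75
```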

Problems with FCFS Scheduling

Below we have a few shortcomings or problems with the FCFS scheduling


algorithm:

1. It is a non-preemptive algorithm, which means process priority doesn't matter.

If a process with very low priority is being executed, such as a lengthy daily backup, and all of a sudden some other high-priority process arrives, like an interrupt needed to avoid a system crash, the high-priority process will have to wait; in such a case the system may crash, just because of improper process scheduling.

2. Not optimal Average Waiting Time.

3. Resources utilization in parallel is not possible, which leads to Convoy


Effect, and hence poor resource(CPU, I/O etc) utilization.
What is Convoy Effect?

Convoy Effect is a situation where many processes, which each need a resource for only a short time, are blocked by one process holding that resource for a long time.

Shortest Job Next (SJN)

• This is also known as shortest job first, or SJF.

• This is a non-preemptive scheduling algorithm.

• Best approach to minimize waiting time.

• Easy to implement in batch systems where the required CPU time is known in advance.

• Impossible to implement in interactive systems where the required CPU time is not known.

• The processor should know in advance how much time the process will take.

Given: table of processes with their arrival times and execution times.

Process | Arrival Time | Execution Time
P0 | 0 | 5
P1 | 1 | 3
P2 | 2 | 8
P3 | 3 | 6
Waiting time of each process is as follows −

Process | Waiting Time
P0 | 0 − 0 = 0
P1 | 5 − 1 = 4
P2 | 14 − 2 = 12
P3 | 8 − 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
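A non-preemptive SJN simulation reproduces the table for the same four processes:

```python
def sjn(processes):
    """Non-preemptive shortest-job-next. processes: (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                       # CPU idle: jump to next arrival
            clock = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job
        waits[name] = clock - arrival
        clock += burst                      # runs to completion, no preemption
        pending.remove((name, arrival, burst))
    return waits

waits = sjn([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
print(waits)                                # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12}
print(sum(waits.values()) / len(waits))     # 5.25
```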

Disadvantages

• Starvation can become a concern when using the SJN algorithm. In this
situation, longer-duration jobs may never have an opportunity to execute if
there is a continuous arrival of shorter jobs. This issue can be particularly
problematic in systems where fairness is a key consideration.
• Predicting the length of jobs accurately can be a challenging task in practice. When the predicted job lengths turn out to be inaccurate, the schedule that SJN produces can be far from optimal.

• In situations where fast responses are crucial, SJN might not be the most
responsive option for interactive systems or real-time environments. When
shorter jobs dominate the CPU, longer jobs may experience lengthy waits,
which is often detrimental.

Priority Based Scheduling

• Priority scheduling is a non-preemptive algorithm and one of the most common


scheduling algorithms in batch systems.

• Each process is assigned a priority. Process with highest priority is to be


executed first and so on.

• Processes with same priority are executed on first come first served basis.

• Priority can be decided based on memory requirements, time requirements or


any other resource requirement.

Given: table of processes with their arrival times, execution times, and priorities. Here we are considering that 1 is the lowest priority.

Process | Arrival Time | Execution Time | Priority | Service Time
P0 | 0 | 5 | 1 | 0
P1 | 1 | 3 | 2 | 11
P2 | 2 | 8 | 1 | 14
P3 | 3 | 6 | 3 | 5

Waiting time of each process is as follows −

Process | Waiting Time
P0 | 0 − 0 = 0
P1 | 11 − 1 = 10
P2 | 14 − 2 = 12
P3 | 5 − 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
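A non-preemptive priority scheduler reproduces the table. As in the example, a larger number means higher priority (1 is lowest), and FCFS breaks ties:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling. processes: (name, arrival, burst, prio).
    Larger prio value means higher priority; earlier arrival breaks ties."""
    pending = list(processes)
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                       # CPU idle: jump to next arrival
            clock = min(p[1] for p in pending)
            continue
        job = max(ready, key=lambda p: (p[3], -p[1]))  # highest priority wins
        name, arrival, burst, _ = job
        waits[name] = clock - arrival
        clock += burst                      # runs to completion, no preemption
        pending.remove(job)
    return waits

waits = priority_np([("P0", 0, 5, 1), ("P1", 1, 3, 2),
                     ("P2", 2, 8, 1), ("P3", 3, 6, 3)])
print(waits)                                # {'P0': 0, 'P3': 2, 'P1': 10, 'P2': 12}
print(sum(waits.values()) / len(waits))     # 6.0
```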


Shortest Remaining Time

• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.

• The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.

• Impossible to implement in interactive systems where the required CPU time is not known.

• It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

• Round Robin is a preemptive process scheduling algorithm.

• Each process is provided a fixed time slice to execute, called a quantum.

• Once a process has executed for the given time period, it is preempted and another process executes for its time period.

• Context switching is used to save the states of preempted processes.

Using the same processes (arrival times 0, 1, 2, 3; burst times 5, 3, 8, 6) and a quantum of 3, the wait time of each process is as follows −

Process | Wait Time : Service Time − Arrival Time
P0 | (0 − 0) + (12 − 3) = 9
P1 | (3 − 1) = 2
P2 | (6 − 2) + (14 − 9) + (20 − 17) = 12
P3 | (9 − 3) + (17 − 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
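A round-robin simulation with quantum 3 on the same four processes reproduces the wait times above. One convention matters: a newly arriving process is queued ahead of the process whose quantum just expired (a common textbook choice, assumed here):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst) sorted by arrival time."""
    remaining = {name: burst for name, _, burst in processes}
    arrivals = deque(processes)
    queue, clock = deque(), 0
    waits = {name: 0 for name, _, _ in processes}
    last_ready = {}                       # when each process last became ready

    def admit(upto):                      # move arrivals into the ready queue
        while arrivals and arrivals[0][1] <= upto:
            name, arrival, _ = arrivals.popleft()
            queue.append(name)
            last_ready[name] = arrival

    admit(0)
    while queue or arrivals:
        if not queue:                      # CPU idle until the next arrival
            clock = arrivals[0][1]
            admit(clock)
        name = queue.popleft()
        waits[name] += clock - last_ready[name]
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        admit(clock)                       # arrivals enter before the preemptee
        if remaining[name] > 0:            # quantum expired: back of the queue
            queue.append(name)
            last_ready[name] = clock
    return waits

waits = round_robin([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)], 3)
print(waits)                               # {'P0': 9, 'P1': 2, 'P2': 12, 'P3': 11}
print(sum(waits.values()) / len(waits))    # 8.5
```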

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make


use of other existing algorithms to group and schedule jobs with common
characteristics.

• Multiple queues are maintained for processes with common characteristics.

• Each queue can have its own scheduling algorithms.

• Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-
bound jobs in another queue. The Process Scheduler then alternately selects
jobs from each queue and assigns them to the CPU based on the algorithm
assigned to the queue.
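The idea can be sketched with two queues (the job names and the strict "higher queue first" policy are illustrative; real systems may time-slice between the queues instead):

```python
from collections import deque

# Two permanent queues: interactive jobs outrank batch jobs.
interactive = deque(["edit", "shell"])    # high-priority queue, served first
batch = deque(["backup", "report"])       # low-priority FCFS queue

order = []
while interactive or batch:
    if interactive:
        job = interactive.popleft()       # the higher-priority queue always wins
    else:
        job = batch.popleft()             # batch runs only when no interactive job
    order.append(job)

print(order)   # ['edit', 'shell', 'backup', 'report']
```

Each queue could internally use its own algorithm (RR for the interactive queue, FCFS for batch), which is precisely what makes multilevel queues a combining scheme rather than a standalone algorithm.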
