DOS-Unit 1 Complete Notes

The document discusses different types of operating systems. It provides details on batch operating systems, multi-programming systems, multi-processing systems, multi-tasking systems, time-sharing systems, and distributed operating systems. For each type, it outlines their key advantages and disadvantages. The document also provides an overview of operating system concepts, functions, and components of a computer system.

Uploaded by

FIRZA AFREEN

Distributed Operating System - Unit I

OPERATING SYSTEM CONCEPTS

What is Operating System?

An operating system is a program that manages a computer's hardware. It also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware.

Mainframe operating systems are designed primarily to optimize utilization of hardware. Personal computer (PC) operating systems support complex games, business applications, and everything in between.

Operating systems for mobile computers provide an environment in which users can easily interface with the computer to execute programs. Thus, some operating systems are designed to be convenient, others to be efficient, and others to be some combination of the two.

Abstract view of the components of the computer system

A computer system can be divided roughly into four components: the
hardware, the operating system, the application programs, and the
users.

The hardware, which includes the central processing unit (CPU), the memory, and the input/output devices, provides the basic computing resources for the system.

The application programs, such as word processors, spreadsheets, compilers, and Web browsers, define the ways in which these resources are used to solve users' computing problems.

The operating system controls the hardware and coordinates its use among
the various application programs for the various users. The operating system
provides the means for proper use of these resources in the operation of the
computer system.

Functions of Operating Systems

Security

To safeguard user data, the operating system employs password protection and other related measures.

It also protects programs and user data from illegal access.

Control over System Performance

The operating system monitors the overall health of the system in order to
optimize performance.

To get a thorough picture of the system's health, the OS keeps track of the time between system responses and service requests.

This can aid performance by providing critical information for troubleshooting issues.

Job Accounting

The operating system maintains track of how much time and resources are
consumed by different tasks and users, and this data can be used to measure
resource utilization for a specific user or group of users.

Error Detecting Aids

The OS constantly monitors the system in order to discover faults and prevent the computer system from failing.

Coordination between Users and Other Software

Operating systems also organize and assign interpreters, compilers, assemblers, and other software to computer users.

Memory Management

The operating system is in charge of managing the primary memory, often known as the main memory.

The main memory consists of a vast array of bytes or words, each of which
is allocated an address.

Main memory is rapid storage that the CPU can access directly.

A program must first be loaded into the main memory before it can be
executed.

For memory management, the OS performs the following tasks:

The OS keeps track of primary memory – meaning, which user program can
use which bytes of memory, memory addresses that have already been
assigned, as well as memory addresses yet to be used.

The OS determines the order in which processes are permitted memory access.

It allocates memory to the process when the process asks for it and
deallocates memory when the process exits or performs an I/O activity.

Process Management

The operating system determines which processes have access to the processor and how much processing time each process has in a multiprogramming environment.

For processor management, the OS performs the following tasks:

It keeps track of how processes are progressing.

A traffic controller is a program that accomplishes this duty.

Allocates the CPU to a process. When the process no longer requires it, the processor is deallocated.

TYPES OF OPERATING SYSTEMS

There are several types of operating systems, which are mentioned below:

1. Batch Operating System

2. Multi-Programming System

3. Multi-Processing System

4. Multi-Tasking Operating System

5. Time-Sharing Operating System

6. Distributed Operating System

7. Network Operating System

8. Real-Time Operating System

1. Batch Operating System

This type of operating system does not interact with the computer directly.

There is an operator which takes similar jobs having the same requirements and groups them into batches.

It is the responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System

It is very difficult to guess the time required for any job to complete, but the processors of batch systems know how long a job will take once it is in the queue.

Multiple users can share the batch systems.

The idle time for the batch system is minimal.

It is easy to manage large work in batch systems.

Disadvantages of Batch Operating System

Batch systems are hard to debug.

It is sometimes costly.

The other jobs will have to wait for an unknown time if any job fails.

2. Multi-Programming Operating System

In a multiprogramming operating system, more than one program is present in main memory, and any one of them can be kept in execution. This is basically used for better utilization of resources.
Advantages of Multi-Programming Operating System

Multi Programming increases the throughput of the System.

It helps in reducing the response time.

Disadvantages of Multi-Programming Operating System

No user interaction with any program during execution.

3. Multi-Processing Operating System

A multi-processing operating system is a type of operating system in which more than one CPU is used for the execution of processes.

It improves the throughput of the system.

Advantages of Multi-Processing Operating System

It increases the throughput of the system.

As there are several processors, if one processor fails, we can proceed with another processor.

Disadvantages of Multi-Processing Operating System

Due to the multiple CPUs, it can be more complex and difficult to manage.

4. Multi-Tasking Operating System

A multitasking operating system is simply a multiprogramming operating system with the facility of a round-robin scheduling algorithm. It can run multiple programs.

Advantages of Multi-Tasking Operating System

Multiple programs can be executed simultaneously in a multi-tasking operating system.

It comes with proper memory management.

Disadvantages of Multi-Tasking Operating System

The system can overheat when running heavy programs.

5. Time-Sharing Operating Systems

Each task is given some time to execute so that all the tasks work
smoothly.

Each user gets a share of the CPU's time, as many users share a single system.

The task can be from a single user or different users also.

The time that each task gets to execute is called a quantum.

After this time interval is over, the OS switches over to the next task.

Advantages of Time-Sharing OS

Each task gets an equal opportunity.

CPU idle time can be reduced.

Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the CPU, memory, and peripherals, increasing efficiency.

Disadvantages of Time-Sharing OS

Reliability problem.

High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the need for scheduling, context switching, and the other costs that come with supporting multiple users.

Complexity: Time-sharing systems are complex and require advanced software to manage multiple users. This complexity increases the chance of bugs and errors.

Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of
user access, authentication, and authorization to ensure the security of data
and software.

6. Distributed Operating System

A distributed operating system is one in which several computer systems connect through a single communication channel. Moreover, these systems have their own individual processors and memory.

These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world.

Various autonomous interconnected computers communicate with each other using a shared communication network.
These are referred to as loosely coupled systems or distributed systems.

These systems’ processors differ in size and function.

The major benefit of working with this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system connected to the network, i.e., remote access is enabled within the devices connected to that network.

Advantages of Distributed Operating System

Failure of one will not affect the other network communication, as all
systems are independent of each other.

Since resources are being shared, computation is highly fast and robust.

These systems are easily scalable as many systems can be easily added to
the network.

Delay in data processing reduces.

Disadvantages of Distributed Operating System

Failure of the main network will stop the entire communication.

These types of systems are not readily available as they are very expensive.

The underlying software is highly complex.

7. Network Operating System

These systems run on a server and provide the capability to manage data,
users, applications, and other networking functions.

These types of operating systems allow shared access to files, printers, and other networking functions over a small private network.

One more important aspect of network operating systems is that all users are well aware of the underlying configuration, of all other users within the network, their individual connections, etc., which is why these computers are popularly known as tightly coupled systems.

Advantages of Network Operating System

Highly stable centralized servers.

Security concerns are handled through servers.

New technologies and hardware upgrades are easily integrated into the system.

Server access is possible remotely from different locations.

Disadvantages of Network Operating System

Servers are costly.

Users have to depend on a central location for most operations.

Maintenance and updates are required regularly.

8. Real-Time Operating System

These types of OSs serve real-time systems. The time interval required to
process and respond to inputs is very small. This time interval is
called response time.

Real-time systems are used when there are time requirements that are very
strict like missile systems, air traffic control systems, robots, etc.

Types of Real-Time Operating Systems

Hard Real-Time Systems


Hard real-time OSs are meant for applications where time constraints are very strict and even the shortest possible delay is not acceptable.

Soft Real-Time Systems

These OSs are for applications where the time constraint is less strict.

Advantages of RTOS

Task Shifting: The time needed to shift between tasks in these systems is very small. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.

Focus on Application: These systems focus on running applications and give less importance to applications waiting in the queue.

Real-time operating systems in embedded systems: Since the size of programs is small, an RTOS can also be used in embedded systems, such as in transport and others.

Disadvantages of RTOS

Complex Algorithms: The algorithms are very complex and difficult for a designer to write.

Thread Priority: Setting thread priorities is of limited use, as these systems rarely switch between tasks.

FUNCTIONS OF OPERATING SYSTEMS

Memory Management

Process Management

Device Management

File Management

User Interface or Command Interpreter

Booting the Computer

Control Over System Performance

Memory Management

The operating system manages the Primary Memory or Main Memory.

Main memory is fast storage and it can be accessed directly by the CPU.

For a program to be executed, it should be first loaded in the main memory.

An operating system performs the following activities for memory management:

It keeps track of primary memory: which bytes of memory are used by which user program, which memory addresses have already been allocated, and which have not yet been used.

The OS decides the order in which processes are granted memory access, and for how long.

It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation.
Process Management

An operating system performs the following activities for processor management.

Keeps track of the status of processes.

The program which performs this task is known as a traffic controller.

Allocates the CPU when a process needs a processor.

De-allocates the processor when a process no longer requires it.
Device Management

An OS manages device communication via its respective drivers.

It performs the following activities for device management.

Keeps track of all devices connected to the system and designates a program responsible for every device, known as the Input/Output controller.

Decides which process gets access to a certain device and for how long.

Allocates devices effectively and efficiently, and deallocates them when they are no longer required.

File Management

A file system is organized into directories for efficient or easy navigation and usage.

These directories may contain other directories and other files.

An Operating System carries out the following file management activities.

It keeps track of where information is stored, user access settings, the status of every file, and more.

These facilities are collectively known as the file system.
User Interface and Command Interpreter

The user interacts with the computer system through the operating system.

Hence the OS acts as an interface between the user and the computer hardware.

This user interface is offered through a set of commands or a Graphical User Interface (GUI).

Through this interface, the user interacts with the applications and the machine hardware.
Booting the Computer

The process of starting or restarting the computer is known as booting.

If the computer is switched off completely and then turned on, it is called cold booting.

Warm booting is the process of using the operating system to restart the computer.

Control Over System Performance

Monitors overall system health to help improve performance.

Records the response time between service requests and system responses to have a complete view of the system's health.

This can help improve performance by providing important information needed to troubleshoot problems.
CHARACTERISTICS OF OPERATING SYSTEMS

Virtualization

Operating systems can provide virtualization capabilities, allowing multiple operating systems or instances of an operating system to run on a single physical machine.

This can improve resource utilization and provide isolation between different operating systems or applications.

Networking

Operating systems provide networking capabilities, allowing the computer system to connect to other systems and devices over a network.

This can include features such as network protocols, network interfaces, and network security.

Scheduling

Operating systems provide scheduling algorithms that determine the order in which tasks are executed on the system.

These algorithms prioritize tasks based on their resource requirements and other factors to optimize system performance.

Interprocess Communication

Operating systems provide mechanisms for applications to communicate with each other, allowing them to share data and coordinate their activities.
Performance Monitoring

Operating systems provide tools for monitoring system performance, including CPU usage, memory usage, disk usage, and network activity.

This can help identify performance bottlenecks and optimize system performance.

Backup and Recovery

Operating systems provide backup and recovery mechanisms to protect data in the event of system failure or data loss.

Debugging

Operating systems provide debugging tools that allow developers to identify and fix software bugs and other issues in the system.

SERVICES PROVIDED BY AN OPERATING SYSTEM

Program Execution

The operating system is responsible for the execution of all types of programs, whether they be user programs or system programs.

The Operating System utilizes various resources available for the efficient
running of all types of functionalities.

Handling Input / Output Operations

The Operating System is responsible for handling all sorts of inputs and
outputs, i.e., from the keyboard, mouse, desktop, etc.

For example, even though peripheral devices such as mice and keyboards differ from one another, the operating system is responsible for handling data from all of them.

Manipulation of File System

The operating system is responsible for making decisions regarding the storage of all types of data or files, e.g., on a floppy disk, hard disk, or pen drive.

The Operating System decides how the data should be manipulated and
stored.

Error Detection and Handling

The operating system is responsible for the detection of any type of error or bug that can occur while any task is being performed.

A well-secured OS sometimes also acts as a countermeasure, preventing any sort of breach of the computer system from an external source and handling it.

Resource Allocation

The operating system ensures the proper use of all the resources available by deciding which resource is to be used by whom, and for how long.

All the decisions are taken by the Operating System.

Accounting

The operating system tracks and keeps an account of all the functionalities taking place in the computer system at a time.
All the details such as the types of errors that occurred are recorded by the
Operating System.

SYSTEM CALLS

What is System Call?

Working of a System Call

Need for System Calls

Services provided by System Calls

Features of System Calls

System Calls Advantages

Examples of System Calls

What is System Call?

A system call is a way for programs to interact with the operating system.

A system call is a programmatic way in which a computer program requests a service from the kernel of the operating system.

The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.

System calls are the only entry points into the kernel.

In simpler terms, it is a way for a program to interact with the underlying system.
A system call is initiated by the program executing a specific instruction,
which triggers a switch to kernel mode, allowing the program to request a
service from the OS.

The OS then handles the request, performs the necessary operations, and
returns the result back to the program.

System calls are essential for the proper functioning of an operating system,
as they provide a standardized way for programs to access system resources.

Without system calls, each program would need to implement its own
methods for accessing hardware and system services, leading to inconsistent
and error-prone behavior.

The Applications run in an area of memory known as user space.

A system call connects to the operating system's kernel, which executes in kernel space.
When an application creates a system call, it must first obtain permission
from the kernel.

It achieves this using an interrupt request, which pauses the current process
and transfers control to the kernel.

If the request is permitted, the kernel performs the requested action, like creating or deleting a file.

As input, the application receives the kernel's output.

The application resumes the procedure after the input is received.

When the operation is finished, the kernel returns the results to the
application and then moves data from kernel space to user space in memory.

A simple system call, like retrieving the system date and time, may take only a few nanoseconds to provide the result.

A more complicated system call, such as connecting to a network device, may take a few seconds.

Most operating systems launch a distinct kernel thread for each system call
to avoid bottlenecks.

Modern operating systems are multi-threaded, which means they can handle
various system calls at the same time.

Need for System Calls

Following are the reasons we need system calls:

To read and write from files.

To create or delete files.

To create and manage new processes.

To send and receive packets through network connections.

To access hardware devices.

Services Provided by System Calls

Process Control

File Management

Device Management

Information Maintenance

Communication

Process Control

Process control system calls perform tasks such as process creation and process termination.

Functions of process Control:

End and Abort

Loading and Execution of a process

Creation and termination of a Process

Wait and Signal Event

Allocation of free memory

File Management

File management system calls are used to handle files.

Functions of File Management:

Creation of a file

Deletion of a file

Opening and closing of a file

Reading, writing, and repositioning

Getting and setting file attributes

Device Management

Device management system calls are used to deal with devices.

Functions of Device Management:

Requesting and releasing devices

Attaching and detaching devices logically

Getting and setting device attributes

Information Maintenance

Information maintenance system calls are used to maintain information.

Functions of Information maintenance:

Getting and setting system data, etc.
Getting or setting time and date

Getting process attributes

Communication

Communication system calls are used for communication between devices in the network and also for inter-process communication.

Functions of communication:

Creation and deletion of communications connections

Sending and receiving messages

Helping OS transfer status information

Attaching or detaching remote devices

Features of System Calls

Interface

System calls provide a well-defined interface between user programs and the operating system.

Programs make requests by calling specific functions, and the operating system responds by executing the requested service and returning a result.

Protection

The operating system uses this privilege to protect the system from
malicious or unauthorized access.

Kernel Mode

When a system call is made, the program is switched from user mode to
kernel mode.

In kernel mode, the program has access to all system resources, including hardware, memory, etc.

Context Switching

A system call requires a context switch, which involves saving the state of
the current process and switching to the kernel mode to execute the
requested service.

This can introduce overhead, which can impact system performance.

Error Handling

System calls can return error codes to indicate problems with the
requested service.

Programs must check for these errors and handle them appropriately.

Synchronization

System calls can be used to synchronize access to shared resources, such as files or network connections.

The operating system provides synchronization mechanisms, such as locks, to ensure that multiple programs can access these resources safely.

System Calls Advantages

Access to hardware resources

System calls allow programs to access hardware resources such as disk
drives, printers, and network devices.

Memory management

System calls provide a way for programs to allocate and deallocate memory.

Process management

System calls allow programs to create and terminate processes, as well as manage inter-process communication.

Standardization

System calls provide a standardized interface for programs to interact with the operating system, ensuring consistency and compatibility across different hardware platforms and operating system versions.

Examples of a System Call

open()

Accessing a file on a file system is possible with the open() system call.

It gives the process the file resources it needs.

A file can be opened by multiple processes simultaneously or by just one process.

Everything is based on the requirements of the process.

wait()

In some systems, a process might need to hold off until another process
has finished running before continuing.

When a parent process creates a child process, the execution of the parent
process is halted until the child process is complete.

The parent process is stopped using the wait() system call.

The parent process regains control once the child process has finished
running.

fork()

The fork() system call is used by processes to create copies of themselves.

It is one of the methods used most frequently in operating systems to create processes.

exit()

A system call called exit() is used to terminate a program.

In environments with multiple threads, this call indicates that the thread
execution is finished.

After using the exit() system function, the operating system recovers the
resources used by the process.

OS STRUCTURE

Operating system can be implemented with the help of various structures.

The structure of the OS depends mainly on how the various common
components of the operating system are interconnected and melded into
the kernel.

It is easier to create an operating system in pieces, much as we break down larger issues into smaller, more manageable subproblems.

Every segment is also a part of the operating system.

Depending on this, we have the following structures of the operating system:

Simple structure

Layered structure

Micro-kernel

Modular structure or approach

Simple structure

Such operating systems do not have a well-defined structure and are small, simple, and limited systems.

The interfaces and levels of functionality are not well separated.

MS-DOS is an example of such an operating system.

In MS-DOS application programs are able to access the basic I/O routines.

Such operating systems cause the entire system to crash if one of the user programs fails.
The following figure illustrates layering in simple structure

There are four layers that make up the MS-DOS operating system, and
each has its own set of features.

These layers include ROM BIOS device drivers, MS-DOS device drivers,
application programs, and system programs.

The MS-DOS operating system benefits from layering because each level
can be defined independently and, when necessary, can interact with one
another.

If the system is built in layers, it will be simpler to design, manage, and update.

Because of this, simple structures can be used to build constrained systems that are less complex.

When a user program fails, the operating system as a whole crashes.
Because MS-DOS systems have a low level of abstraction, programs and I/O
procedures are visible to end users, giving them the potential for unwanted
access.

Advantages of Simple structure

It delivers better application performance because of the few interfaces between the application program and the hardware.

Easy for kernel developers to develop such an operating system.

Disadvantages of Simple structure

The structure is very complicated, as no clear boundaries exist between modules.

It does not enforce data hiding in the operating system.

Layered structure

An OS can be broken into pieces while retaining much more control over the system.

In this structure the OS is broken into number of layers (levels).

The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is
the user interface.

These layers are so designed that each layer uses the functions of the
lower level layers only.

This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being tested, since the lower layers are known to be correct.

The main disadvantage of this structure is that at each layer, the data
needs to be modified and passed on which adds overhead to the system.

Moreover careful planning of the layers is necessary as a layer can use only
lower level layers.

UNIX is an example of this structure.

Advantages of Layered structure

Layering makes it easier to enhance the operating system, as the implementation of a layer can be changed easily without affecting the other layers.

It is very easy to perform debugging and system verification.

Disadvantages of Layered structure

In this structure, the application performance is degraded as compared to the simple structure.
It requires careful planning for designing the layers as higher layers use the
functionalities of only the lower layers.

Micro-kernel

This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system and user programs.

This results in a smaller kernel called the micro-kernel.

An advantage of this structure is that new services are added in user space and do not require the kernel to be modified.

Thus it is more secure and reliable: if a service fails, the rest of the operating system remains untouched.

Mac OS is an example of this type of OS.

Advantages of Micro-kernel structure

It makes the operating system portable to various platforms.

As micro-kernels are small, they can be tested effectively.

Disadvantages of Micro-kernel structure

Increased inter-module communication degrades system performance.

Modular structure or approach

It is considered the best approach for an OS.

It involves designing of a modular kernel.

The kernel has only set of core components and other services are added as
dynamically loadable modules to the kernel either during run time or boot
time.

It resembles layered structure due to the fact that kernel has defined and
protected interfaces but it is more flexible than the layered structure as a
module can call any other module.

PROCESS AND THREADS

PROCESS MANAGEMENT

A process is a program in execution.

For example, when we write a program in C or C++ and compile it, the compiler creates binary code.
The original code and binary code are both programs. When we actually
run the binary code, it becomes a process.

A process is an ‘active’ entity instead of a program, which is considered a


‘passive’ entity.

A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple
instances begin (multiple processes are created).

Process Management

If the operating system supports multiple users, then it has to keep
track of all the created processes, schedule them, and dispatch them one
after another.

Some of the system calls in this category are as follows:

Create a child process identical to the parent's

Terminate a process

Wait for a child process to terminate

Change the priority of the process

Block the process

Ready the process

Dispatch a process

Suspend a process

Resume a process

Delay a process

Fork a process
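On POSIX systems, several of the calls in this list map directly onto fork, exit, and wait. A minimal sketch in Python (assuming a POSIX platform; `os.fork` is not available on Windows):

```python
import os

# Hedged sketch of three calls from the list above, on POSIX:
# fork (create a child identical to the parent), exit (terminate a
# process), and wait (parent waits for a child to terminate).
def run_child():
    pid = os.fork()                    # create a child process
    if pid == 0:                       # child branch: an identical copy
        os._exit(7)                    # terminate with status 7
    _, status = os.waitpid(pid, 0)     # parent waits for the child
    return os.waitstatus_to_exitcode(status)
```

Here `run_child()` returns the child's exit status (7). Priority changes would use calls such as `os.nice`, and suspend/resume would be done with signals.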

What does a process look like in memory?

Explanation of Process

Text Section

The Text Section contains the program code; the current activity is
represented by the value of the Program Counter.

Data Section

Contains the global variables.

Heap Section

Memory dynamically allocated to the process during its run time.

Stack

The stack contains temporary data, such as function parameters, return
addresses, and local variables.
Process Control Block

An Operating System handles process creation, scheduling, and
termination with the help of the Process Control Block.

The Process Control Block (PCB), which is part of the Operating System,
aids in managing how processes operate.

Every OS process has a Process Control Block associated with it.

By keeping data on different things, including state, I/O status, and
CPU scheduling information, a PCB keeps track of processes.

A Process Control Block consists of:

1. Process ID

2. Process State

3. Program Counter

4. CPU Registers

5. CPU Scheduling Information

6. Memory Management Information

7. Input Output Status Information

1. Process Id

A unique identifier assigned by the operating system.

2. Process State

i) New State

A process is said to be in the new state when a program present in
secondary memory is initiated for execution.

ii) Ready State

A process is in the ready state when it waits for the CPU to be
assigned.

The operating system pulls new processes from secondary memory and
places them all in main memory.

The term "ready state processes" refers to processes that are in the main
memory and are prepared for execution.

Numerous processes may be in the ready state at the same time.

iii) Running State

The Operating System will select one of the processes from the ready
state based on the scheduling mechanism.

As a result, if our system only has one CPU, there will only ever be one
process running at any given moment.

We can execute n processes concurrently in the system if there are n
processors.

iv) Waiting or Blocking State

Depending on the scheduling mechanism or the inherent behavior of the
process, a process can go from the Running state to the Blocked or
Waiting state.

The OS switches a process to the block or wait state and allots the CPU to
the other processes while it waits for a specific resource to be allocated or
for user input.

v) Terminated State

A process enters the terminated state once it has completed its
execution.

The operating system will end the process and erase the whole context of the
process.

3) Program Counter

The program counter (PC) is a CPU register that stores the address of
the next instruction to be fetched from memory.

It is a digital counter needed for tracking the current stage of
execution.

An instruction counter, instruction pointer, instruction address
register, or sequence control register are other names for the program
counter.

4) CPU Registers

When the process is in the running state, this is where the contents of
the processor's registers are saved.

Accumulators, index and general-purpose registers, and instruction
registers are among the categories of CPU registers.

5) CPU Scheduling Information

A process needs to be scheduled for execution.

This schedule determines when it transitions from ready to running.

Process priority, scheduling queue pointers (to indicate the order of
execution), and several other scheduling parameters are all included in
the CPU scheduling information.

6) Memory Management Information

The Memory Management Information section contains information on the
page and segment tables and the values of the base and limit registers.

It depends on the memory system used by the operating system.

7) Input Output Status Information

This section consists of input- and output-related information, such as
the I/O status of the process and the devices assigned to it.
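The seven fields above can be mirrored in a toy data structure. This is only an illustration: the attribute names are made up for the sketch, not taken from any real kernel.

```python
from dataclasses import dataclass, field

# Toy Process Control Block with one attribute per field listed above.
@dataclass
class PCB:
    pid: int                                        # 1. Process ID
    state: str = "new"                              # 2. Process State
    program_counter: int = 0                        # 3. Program Counter
    registers: dict = field(default_factory=dict)   # 4. CPU Registers
    priority: int = 0                               # 5. CPU Scheduling Information
    base_limit: tuple = (0, 0)                      # 6. Memory Management Information
    open_files: list = field(default_factory=list)  # 7. I/O Status Information

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the state as the process moves between queues
```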

THREADS

What is Thread?

A thread is an execution unit that has its own program counter, stack,
and set of registers: the program counter keeps track of which
instruction to execute next, the registers hold its current working
variables, and the stack contains the history of execution.

A thread is frequently described as a lightweight process.

The process can be easily broken down into numerous different threads.

Why Do We Need Thread?

Creating a new thread in a current process requires significantly less
time than creating a new process.

Threads can share common data directly, without needing interprocess
communication.

When working with threads, context switching is faster.

Terminating a thread requires less time than terminating a process.
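The points above can be sketched with Python's threading module: the threads are cheap to create, and they update a shared variable directly because they live in the same address space (a lock guards the shared counter):

```python
import threading

# Threads of one process share the same address space, so they can
# update a common variable directly; a Lock prevents lost updates.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # mutual exclusion on the shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # wait for every thread to terminate
```

After the joins, `counter` is exactly 4 × 1000, which would not be guaranteed without the lock.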

Types of Threads

1. User-Level Thread

2. Kernel-Level Thread

1. User-Level Thread

The operating system is unaware of user-level threads.

User threads are simple to implement, and they are implemented by the
user.

If a user-level thread performs a blocking operation, the entire process
is blocked.

The kernel-level thread is completely unaware of the user-level thread.

User-level threads are managed as single-threaded processes by the
kernel.

Threads in Java and other languages are examples.

Pros of User-Level Thread

User threads are easier to implement than kernel threads.

It is more effective and efficient.

Context switching takes less time than kernel threads.

The representation of user-level threads is relatively straightforward:
the registers, stack, PC, and mini thread control blocks are all kept in
the user-level process's address space.

Threads may be easily created, switched, and synchronized without the need
for process interaction.

Cons of User-Level Thread

Threads at the user level are not coordinated with the kernel.

The entire process is halted if one thread causes a page fault.

User-Level Thread

2. Kernel-Level Thread

The operating system implements the kernel-level thread.

46
Each thread in the kernel-level thread has its own thread control block
in the system.

The kernel is aware of all threads and controls them.

The kernel-level thread provides a system call for user-space thread
creation and management.

Kernel threads are more complex to build than user threads.

The kernel thread’s context switch time is longer.

Kernel-Level Thread

Pros of Kernel-Level Thread

If one kernel-level thread is blocked, it does not block all the other
threads in the same process.

Several threads of the same process can be scheduled on different CPUs
in kernel-level threading.

The scheduler may decide to allocate extra CPU time to threads with large
numerical values.

Cons of Kernel-Level Thread

All threads are managed and scheduled by the kernel thread.

Kernel threads are more complex to build than user threads.

Kernel-level threads are slower than user-level threads.

User Level Threads vs. Kernel Level Threads

User-level threads are implemented by users; kernel-level threads are
implemented by the operating system.

A context switch between user-level threads requires no hardware
support; kernel-level threads need hardware support.

User-level threads are mainly designed as dependent threads;
kernel-level threads are mainly designed as independent threads.

If one user-level thread performs a blocking operation, the entire
process is blocked; if one kernel-level thread blocks, another thread
can continue execution.

Implementation of user-level threads is done by a thread library and is
easy; implementation of kernel-level threads is done by the operating
system and is complex.
Threading Models

The user threads must be mapped to kernel threads, by one of the following
strategies:

Many to One Model

One to One Model

Many to Many Model

Many to One Model

In the many to one model, many user-level threads are all mapped onto a
single kernel thread.

Thread management is handled by the thread library.

Many to One Model

One to One Model

The one to one model creates a separate kernel thread to handle each
and every user thread.

Most implementations of this model place a limit on how many threads can
be created.

Linux and Windows from 95 to XP implement the one-to-one model for
threads.

This model provides more concurrency than that of many to one Model.

One to One Model

Many to Many Model

The many to many model multiplexes any number of user threads onto
an equal or smaller number of kernel threads.

Users can create any number of threads.

Processes can be split across multiple processors.


Many to Many Model

Benefits of Threads

Enhanced system throughput

The number of jobs completed per unit time increases when the process
is divided into numerous threads, and each thread is viewed as a job.

As a result, the system’s throughput likewise increases.

Effective use of a Multiprocessor system

You can schedule multiple threads in multiple processors when you have
many threads in a single process.

Faster context switch

The thread context switching time is shorter than the process context
switching time.

Communication

Multiple-thread communication is straightforward because the threads use
the same address space, while communication between two processes is
limited to a few exclusive communication mechanisms.

Resource sharing

Code, data, and files, for example, can be shared among all threads in a
process. Note that threads do not share the stack or registers.

Each thread has its own stack and registers.

Process vs. Thread

A process simply means any program in execution, while a thread is a
segment of a process.

A process consumes more resources; a thread consumes fewer resources.

A process requires more time for creation; a thread requires
comparatively less time.

A process is known as a heavyweight process; a thread is known as a
lightweight process.

A process takes more time to terminate; a thread takes less time.

A process takes more time for context switching; a thread takes less
time.

If one process gets blocked, the remaining processes can continue their
execution; but if one user-level thread gets blocked, all of its peer
threads also get blocked.
INTERPROCESS COMMUNICATION

Inter-process communication (IPC) is a mechanism that allows processes
to communicate with each other and synchronize their actions.

The communication between these processes can be seen as a method of
co-operation between them.

A process can be of two types

Independent process

Co-operating process

An independent process is not affected by the execution of other
processes, while a co-operating process can be affected by other
executing processes.

Processes can communicate with each other through both:

Shared Memory

Message passing

An operating system can implement both methods of communication.

First, let's discuss the shared memory method of communication, and then
message passing.

Shared Memory Methods

Communication between processes using shared memory requires the
processes to share some variable, and it completely depends on how the
programmer implements it.

One way of communication using shared memory can be like this:

Suppose process1 and process2 are executing simultaneously, and they share
some resources or use some information from another process.

Process1 generates information about certain computations or resources
being used and keeps it as a record in shared memory.

When process2 needs to use the shared information, it will check in the
record stored in shared memory and take note of the information generated
by process1 and act accordingly.

Processes can use shared memory both for extracting information recorded
by another process and for delivering specific information to other
processes.

Ex: Producer-Consumer problem

There are two processes: Producer and Consumer.

The producer produces some items and the Consumer consumes that item.

The two processes share a common space or memory location known as
a buffer where the item produced by the Producer is stored and from which
the Consumer consumes the item if needed.

There are two versions of this problem.

The first is the unbounded buffer problem, in which the Producer can
keep producing items and there is no limit on the size of the buffer.

The second is the bounded buffer problem, in which the Producer can
produce only up to a certain number of items before it starts waiting
for the Consumer to consume them.
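A minimal bounded-buffer sketch using Python threads, with `queue.Queue(maxsize=3)` standing in for the shared buffer: `put` blocks when the buffer is full and `get` blocks when it is empty, which is exactly the waiting behavior described above.

```python
import threading
import queue

# Bounded buffer of capacity 3 shared by a producer and a consumer.
buffer = queue.Queue(maxsize=3)
consumed = []

def producer():
    for item in range(10):
        buffer.put(item)        # blocks while the buffer is full

def consumer():
    for _ in range(10):
        consumed.append(buffer.get())   # blocks while the buffer is empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The consumer ends up with items 0 through 9 in order, even though the producer is never allowed to run more than three items ahead.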

Message Passing Method

In this method, processes communicate with each other without using any
kind of shared memory.

If two processes p1 and p2 want to communicate with each other, they
proceed as follows:

Establish a communication link

Start exchanging messages using basic primitives.

We need at least two primitives:


– send
– receive
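The two primitives can be sketched in Python, with a queue standing in for the communication link and threads modeling the two processes (an assumption made purely for the illustration):

```python
import threading
import queue

# A queue models the communication link between two "processes".
link = queue.Queue()
received = []

def p1():
    link.put("hello from p1")       # send(p2, message)

def p2():
    received.append(link.get())     # receive(p1, message) - blocks until a message arrives

a = threading.Thread(target=p1)
b = threading.Thread(target=p2)
a.start(); b.start()
a.join(); b.join()
```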

Messages can be of fixed size or of variable size.

If the size is fixed, it is easy for the OS designer but complicated for
the programmer; if it is variable, it is easy for the programmer but
complicated for the OS designer.

A standard message can have two parts: header and body.

The header part is used for storing message type, destination id, source id.

The body part is used for storing control information: what to do if the
process runs out of buffer space, the sequence number, priority, and so
on.

Generally, messages are sent in FIFO order.

A link has some capacity that determines the number of messages that can
reside in it temporarily; for this, every link has a queue associated
with it, which can be of zero capacity.

With a zero-capacity queue, the sender waits until the receiver informs
it that the message has been received.

Message Passing through Communication Link

Direct communication link

Indirect communication link

Direct message passing

The process which wants to communicate must explicitly name the recipient
or sender of the communication.

Eg: send(p1, message) means send the message to p1.


Similarly, receive(p2, message) means to receive the message from p2.

In this method of communication, the link is established automatically;
it can be either unidirectional or bidirectional, but one link is used
between exactly one pair of sender and receiver, and a pair of processes
should not possess more than one link.

Indirect message passing

Processes use mailboxes (also referred to as ports) for sending and
receiving messages.

Suppose two processes want to communicate through indirect message
passing; the required operations are: create a mailbox, use this mailbox
for sending and receiving messages, then destroy the mailbox.
The standard primitives used are: send(A, message), which means send the
message to mailbox A. The primitive for receiving a message works in the
same way, e.g. receive(A, message).

There is a problem with this mailbox implementation.

Suppose there are more than two processes sharing the same mailbox
and suppose the process p1 sends a message to the mailbox, which process
will be the receiver?

This can be solved by either enforcing that only two processes can share a
single mailbox or enforcing that only one process is allowed to execute the
receive at a given time.

A mailbox can be made private to a single sender/receiver pair and can also
be shared between multiple senders and one receiver.

A port is an implementation of such a mailbox that can have multiple
senders and a single receiver.

It is used in client/server applications (in this case the server is the
receiver).

Message Passing through Exchanging the Messages

Synchronous and Asynchronous Message Passing

IPC is possible between processes on the same computer as well as
between processes running on different computers, i.e. in a
networked/distributed system.

Synchronous message passing means that the message is passed directly
between the sender and the receiver, without being buffered in between.

This requires the sender to block until the receiver has received the
message, before continuing to do other things.

Asynchronous message passing involves buffering the message between the
sending and receiving process.

This allows a sender to continue doing other things as soon as the message
has been sent.

Blocking is considered synchronous: a blocking send means the sender is
blocked until the message is received by the receiver.

Similarly, a blocking receive has the receiver block until a message is
available.

Non-blocking is considered asynchronous: with a non-blocking send, the
sender sends the message and continues.

Similarly, with a non-blocking receive, the receiver receives either a
valid message or null.
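The difference between a blocking and a non-blocking receive can be sketched with Python's queue module, where `get(block=False)` returns immediately (raising an exception that we turn into a null result) instead of waiting for a message:

```python
import queue

# An empty mailbox: a blocking get() would wait for a message; the
# non-blocking variant returns a "null" result (None) right away.
mailbox = queue.Queue()

def receive_nonblocking(q):
    try:
        return q.get(block=False)   # non-blocking receive
    except queue.Empty:
        return None                 # no valid message available

first = receive_nonblocking(mailbox)    # mailbox is empty
mailbox.put("msg")                      # non-blocking send (queue is unbounded)
second = receive_nonblocking(mailbox)   # a message is now waiting
```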

SCHEDULING

What is Process Scheduling?

Process scheduling is the activity of the process manager that handles
the removal of an active process from the CPU and the selection of
another process, based on a specific strategy.

There are various algorithms which are used by the Operating System to
schedule the processes on the processor in an efficient way.

Objectives of Process Scheduling Algorithm

Utilization of CPU at maximum level. Keep CPU as busy as possible.

Allocation of CPU should be fair.

Throughput should be maximum, i.e. the number of processes that complete
their execution per unit time should be maximized.

Minimum turnaround time, i.e. the time taken by a process to finish
execution should be the least.

There should be a minimum waiting time and the process should not starve
in the ready queue.

Minimum response time, i.e. the time at which a process produces its
first response should be as early as possible.

Different types of CPU Scheduling Algorithms

There are mainly two types of scheduling methods:

Preemptive Scheduling

Preemptive scheduling is used when a process switches from the running
state to the ready state or from the waiting state to the ready state.

Non-Preemptive Scheduling

Non-preemptive scheduling is used when a process terminates, or when a
process switches from the running state to the waiting state.

Types of CPU Scheduling Algorithms

First Come First Serve

Shortest Job First(SJF)

Round Robin

Priority Scheduling

1.FIRST COME FIRST SERVE SCHEDULING

FCFS is considered to be the simplest of all operating system scheduling
algorithms.

The first come first serve scheduling algorithm states that the process
that requests the CPU first is allocated the CPU first; it is
implemented using a FIFO queue.

Characteristics of FCFS

FCFS is a non-preemptive CPU scheduling algorithm.

Tasks are always executed on a First-come, First-serve concept.

FCFS is easy to implement and use.

This algorithm is not very efficient in performance, and the waiting
time is quite high.

Advantages of FCFS

Easy to implement

First come, first serve method

Disadvantages of FCFS

FCFS suffers from Convoy effect.

The Convoy Effect is a phenomenon in which the entire operating system
slows down owing to a few slower processes in the system.
The average waiting time is much higher than the other algorithms.

FCFS is very simple and easy to implement, and hence not very efficient.

Example

S.No  Process ID  Process Name  Arrival Time  Burst Time
1     P1          A             0             9
2     P2          B             1             3
3     P3          C             1             2
4     P4          D             1             4
5     P5          E             2             3
6     P6          F             3             2

Solution

The Average Completion Time is:

Average CT = ( 9 + 12 + 14 + 18 + 21 + 23 ) / 6

Average CT = 97 / 6

Average CT = 16.16667
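The table above can be checked with a short FCFS simulation (ties in arrival time are broken by the listed order):

```python
# FCFS: run processes in arrival order, each to completion.
procs = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
         ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]   # (name, arrival, burst)

def fcfs(procs):
    time, completion = 0, {}
    # sorted() is stable, so equal arrival times keep the listed order
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst   # wait for arrival, then run to completion
        completion[name] = time
    return completion

ct = fcfs(procs)
avg_ct = sum(ct.values()) / len(ct)    # 97 / 6, about 16.17
```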

2.SHORTEST JOB FIRST(SJF) SCHEDULING

Shortest job first (SJF) is a scheduling process that selects the waiting
process with the smallest execution time to execute next.

It significantly reduces the average waiting time for other processes
waiting to be executed.

The full form of SJF is Shortest Job First.

Characteristics of SJF

Shortest Job First has the advantage of having a minimum average waiting
time among all operating system scheduling algorithms.

Each task is associated with the unit of time it needs to complete.

It may cause starvation if shorter processes keep coming.

Starvation is a resource-management problem in which a process does not
get the resources it needs because they are being used by other
processes.

This problem can be solved using the concept of aging.

Aging is a scheduling technique used in operating systems to prevent
starvation. It involves gradually increasing the priority of processes
that have been waiting for a long time.
Advantages of Shortest Job first

As SJF reduces the average waiting time, it is better than the first
come first serve scheduling algorithm.

SJF is generally used for long term scheduling.

A long-term scheduler is responsible for bringing processes from the JOB
queue (in secondary memory) into the READY queue (in main memory). In
other words, a long-term scheduler determines which programs enter RAM
for processing by the CPU.

Disadvantages of SJF

One demerit of SJF is starvation.

It often becomes complicated to predict the length of the upcoming CPU
request.

Example

PID  Arrival Time  Burst Time  Completion Time
P1   1             7           8
P2   3             3           13
P3   6             2           10
P4   7             10          31
P5   9             8           21
Since no process arrives at time 0, there will be an empty slot in the
Gantt chart from time 0 to 1 (the time at which the first process
arrives).

According to the algorithm, the OS schedules the process which has the
lowest burst time among the available processes in the ready queue.

Until then, there is only one process in the ready queue, so the
scheduler schedules it onto the processor regardless of its burst time.

P1 will be executed until time 8. By then, three more processes have
arrived in the ready queue, so the scheduler will choose the process
with the lowest burst time.

Among the processes given in the table, P3 will be executed next since it is
having the lowest burst time among all the available processes.

That is how the procedure goes on in the shortest job first (SJF)
scheduling algorithm.

Average Completion Time = (8 + 13 + 10 + 31 + 21) / 5

= 83 / 5 = 16.6
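The schedule above can be reproduced with a short non-preemptive SJF simulation: at every decision point, pick the arrived process with the smallest burst time.

```python
# Non-preemptive SJF for the table above.
procs = [("P1", 1, 7), ("P2", 3, 3), ("P3", 6, 2),
         ("P4", 7, 10), ("P5", 9, 8)]      # (name, arrival, burst)

def sjf(procs):
    pending, time, completion = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        time += job[2]                         # run the job to completion
        completion[job[0]] = time
        pending.remove(job)
    return completion

ct = sjf(procs)
```

The resulting completion times match the table: P1=8, P3=10, P2=13, P5=21, P4=31.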

3. ROUND ROBIN SCHEDULING

Round Robin is a CPU scheduling algorithm where each process is
cyclically assigned a fixed time slot.

It is the preemptive version of the First Come First Serve CPU
scheduling algorithm.

The Round Robin CPU algorithm generally focuses on the time-sharing
technique.

Characteristics of Round Robin

It is simple, easy to use, and starvation-free, as all processes get a
balanced CPU allocation.

It is one of the most widely used methods in CPU scheduling.

It is considered preemptive, as each process is given the CPU for only a
very limited time.

Advantages of Round Robin CPU Scheduling Algorithm

There is fairness, since every process gets an equal share of the CPU.

A newly created process is added to the end of the ready queue.

A round-robin scheduler generally employs time-sharing, giving each job
a time slot or quantum.

While performing round-robin scheduling, a particular time quantum is
allotted to each job.

Each process gets a chance to be rescheduled after a particular quantum
of time in this scheduling.

Disadvantages of Round Robin CPU Scheduling Algorithm:

Waiting time and response time are larger.

Throughput is low.

There are frequent context switches.

The Gantt chart becomes very long if the time quantum is small (for
example, 1 ms).

Scheduling is time-consuming for a small quantum.

Example

S.No  Process ID  Arrival Time  Burst Time
1     P1          0             7
2     P2          1             4
3     P3          2             15
4     P4          3             11
5     P5          4             20
6     P6          4             9

Assume Time Quantum TQ=5
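A sketch of the round-robin schedule for this table, assuming the common convention that processes arriving during a time slice enter the ready queue before the preempted process is re-appended:

```python
from collections import deque

# Round robin for the table above with time quantum 5.
procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 15),
         ("P4", 3, 11), ("P5", 4, 20), ("P6", 4, 9)]   # (name, arrival, burst)
TQ = 5

def round_robin(procs, tq):
    arrivals = sorted(procs, key=lambda p: p[1])   # stable: ties keep listed order
    remaining = {name: burst for name, _, burst in procs}
    ready, time, i, completion = deque(), 0, 0, {}
    while i < len(arrivals) or ready:
        if not ready:                              # CPU idle until the next arrival
            time = max(time, arrivals[i][1])
        while i < len(arrivals) and arrivals[i][1] <= time:
            ready.append(arrivals[i][0]); i += 1
        name = ready.popleft()
        run = min(tq, remaining[name])             # run one quantum (or less)
        time += run
        remaining[name] -= run
        while i < len(arrivals) and arrivals[i][1] <= time:
            ready.append(arrivals[i][0]); i += 1   # new arrivals enter first
        if remaining[name]:
            ready.append(name)                     # preempted: back of the queue
        else:
            completion[name] = time
    return completion

ct = round_robin(procs, TQ)
```

Under this convention the completion times come out as P2=9, P1=31, P6=50, P3=55, P4=56, P5=66.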

4. PRIORITY SCHEDULING

The Priority CPU scheduling algorithm is a method of CPU scheduling that
works based on the priority assigned to each process.

In the case of a conflict, that is, when more than one process has an
equal priority value, the algorithm falls back on FCFS (First Come First
Serve) ordering.

Characteristics of Priority Scheduling

Schedules tasks based on priority.

When higher-priority work arrives while a lower-priority task is
executing, the higher-priority work takes the place of the
lower-priority one, and the latter is suspended until the execution
completes.

The lower the number assigned, the higher the priority level of the
process.

Advantages of Priority Scheduling

The average waiting time is less than FCFS.

Less complex

Disadvantages of Priority Scheduling

One of the most common demerits of the preemptive priority CPU
scheduling algorithm is the starvation problem.

This is the condition in which a process has to wait a very long time
before it gets scheduled onto the CPU.
In Priority scheduling, there is a priority number assigned to each
process.

In some systems, the lower the number, the higher the priority. While, in the
others, the higher the number, the higher will be the priority.

The process with the highest priority among the available processes is
given the CPU.

There are two types of priority scheduling algorithms: preemptive
priority scheduling and non-preemptive priority scheduling.

Non Preemptive Priority Scheduling

In non-preemptive priority scheduling, the processes are scheduled
according to the priority number assigned to them.

Once the process gets scheduled, it will run till completion.

Generally, the lower the priority number, the higher is the priority of the
process.

Example

Process ID  Priority  Arrival Time  Burst Time
P1          2         0             3
P2          6         2             5
P3          3         1             4
P4          5         4             2
P5          7         6             9
P6          4         5             4
P7          10        7             10

The Gantt chart according to the Non Preemptive priority scheduling.

Process P1 arrives at time 0 with a burst time of 3 units and priority
number 2. Since no other process has arrived yet, the OS schedules it
immediately.

Meanwhile, the other processes arrive in the ready queue. The process
with the lowest priority number is given precedence.

All the jobs are then executed according to their priorities.

If two jobs have the same priority number, the one with the earlier
arrival time is executed first.

Process ID  Priority  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
P1          2         0             3           3                3                 0
P2          6         2             5           18               16                11
P3          3         1             4           7                6                 2
P4          5         4             2           13               9                 7
P5          7         6             9           27               21                12
P6          4         5             4           11               6                 2
P7          10        7             10          37               30                20

Average Completion Time = (3 + 18 + 7 + 13 + 27 + 11 + 37) / 7

= 116 / 7 ≈ 16.57
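The completion times in the table can be reproduced with a short non-preemptive priority simulation (lower number = higher priority, ties broken by arrival time):

```python
# Non-preemptive priority scheduling for the table above.
procs = [("P1", 2, 0, 3), ("P2", 6, 2, 5), ("P3", 3, 1, 4),
         ("P4", 5, 4, 2), ("P5", 7, 6, 9), ("P6", 4, 5, 4),
         ("P7", 10, 7, 10)]     # (name, priority, arrival, burst)

def priority_np(procs):
    pending, time, completion = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[2] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[2] for p in pending)
            continue
        # best (lowest) priority number first, then earliest arrival
        job = min(ready, key=lambda p: (p[1], p[2]))
        time += job[3]                      # run to completion
        completion[job[0]] = time
        pending.remove(job)
    return completion

ct = priority_np(procs)
avg_ct = sum(ct.values()) / len(ct)    # 116 / 7, about 16.57
```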

CLASSICAL IPC PROBLEMS

Classical IPC (Interprocess Communication) problems refer to a set of
well-known synchronization problems that arise when multiple processes
or threads attempt to access shared resources concurrently.

These problems highlight the challenges in coordinating the execution of
concurrent processes to ensure correctness and prevent race conditions.

Some of the most commonly encountered classical IPC problems:

The Producer-Consumer Problem

The Dining Philosophers Problem

The Readers-Writers Problem

The Sleeping Barber Problem

The Cigarette Smokers Problem

The Producer-Consumer Problem

The Producer-Consumer problem involves two types of processes:
producers, which generate data items, and consumers, which consume those
items.

The goal is to ensure that producers and consumers can work concurrently
without data corruption or deadlock.

Key points of this problem include

Producers must wait if the buffer is full, and consumers must wait if the
buffer is empty.

Mutual exclusion must be maintained to prevent simultaneous access to
the buffer.

Synchronization is required to allow the producer to signal the consumer
when data is available.

The Dining Philosophers Problem

The Dining Philosophers problem involves a group of philosophers sitting
around a circular table, with each philosopher alternately thinking and
eating.

There is a chopstick placed between each pair of adjacent philosophers.

The problem is to devise a scheme that allows the philosophers to dine
without causing deadlock or starvation.

Key points include

Each philosopher needs two chopsticks to eat (left and right).

Deadlock can occur if all philosophers pick up their left chopsticks
simultaneously.

To prevent deadlock, a strategy such as a resource hierarchy
(hierarchical organization of resources), limiting the number of
philosophers at the table, or asymmetric picking can be used.
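A sketch of the resource-hierarchy strategy, with Python locks standing in for chopsticks: every philosopher acquires the lower-numbered chopstick first, which breaks the circular wait that causes deadlock.

```python
import threading

# Five philosophers, five chopsticks (locks), resource-hierarchy order.
N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % N
    # always pick up the lower-numbered chopstick first
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every thread respects the same global ordering of locks, no cycle of waiting can form, and all five philosophers finish their ten meals.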

The Readers-Writers Problem

The Readers-Writers problem involves multiple readers and writers
accessing a shared resource, such as a database.

Readers can access the resource concurrently, but writers require exclusive
access.

Key points include

Readers can access the resource simultaneously, unless a writer is
currently writing.

Writers must have exclusive access to the resource and block any other
readers or writers from accessing it.

The problem is to balance the trade-off between read concurrency and
write exclusivity.
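A sketch of the classic first-readers solution using Python locks: a counter tracks active readers, the first reader locks writers out, and the last reader lets them back in.

```python
import threading

# resource: writers need it exclusively; mutex: protects the counter.
resource = threading.Lock()
mutex = threading.Lock()
readers = 0
log = []

def reader(name):
    global readers
    with mutex:
        readers += 1
        if readers == 1:
            resource.acquire()   # first reader blocks writers
    log.append(name + " reading")  # several readers may be here at once
    with mutex:
        readers -= 1
        if readers == 0:
            resource.release()   # last reader unblocks writers

def writer(name):
    with resource:               # exclusive access
        log.append(name + " writing")

threads = [threading.Thread(target=reader, args=("r%d" % i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("w1",)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This variant favors readers, so a steady stream of readers could starve the writer; writer-preference variants address that at the cost of read concurrency.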

The Sleeping Barber Problem

The Sleeping Barber problem represents a scenario where a barber
operates a single chair in his barbershop for haircutting.

Customers arrive and either wait in a queue or leave if the queue is full.

The problem is to synchronize the arrival of customers with the barber's
activities.

Key points include

The barber sleeps if no customers are available and wakes up when a
customer arrives.

Customers must wait if the barber is busy cutting someone else's hair or
leave if the waiting area is full.

Synchronization mechanisms are needed to coordinate the barber's and the
customers' activities.

The Cigarette Smokers Problem

The Cigarette Smokers problem involves three smokers and an agent.

Each smoker has an infinite supply of one ingredient needed to roll a
cigarette (e.g., tobacco, paper, or matches).

The agent places two different ingredients on the table, and the smoker who
has the missing ingredient can pick up the ingredients and roll a cigarette.

The challenge is to ensure that only one smoker can pick up the
ingredients at a time and that all smokers get a fair chance.

These classical IPC problems are commonly used to study proper
coordination and mutual exclusion between processes or threads.

Solving these problems requires careful design and implementation to
prevent race conditions, deadlocks, and other synchronization issues.
